The big news last week was that a study touted around the world for showing the supposed health benefits of eating chocolate was a hoax. As revealed at io9, the study was done to show how bad things are in science journalism: what can get published and noticed today in diet and medical journals using specious statistical tools.
The way it worked is this: the author, John Bohannon, collected a rather small number of subjects and ran an experiment with three groups, each changing or keeping their normal diets. Data was collected from all groups, and a battery of tests was later run to find any differences. The problem with a study like this is that with such a small sample size and so many different tests, the chance of finding some variable change that is “statistically significant” is rather high. Note that “statistical significance” is not the same as having a result that is large and noticeable; it is instead a measure of how unlikely it is to get that result if there were no correlation between input and output (i.e., between a chocolate diet and weight). In most papers, a result is statistically significant if the chance of getting the correlation when there is none is less than 5% (p < 0.05); but that also means that if you run twenty tests, you can expect one to be statistically significant just by chance. With so many tests and so few subjects to average out statistical fluctuations, any positive result is at best specious, since chance cannot be ruled out. Roll the dice enough and you will get snake-eyes. Heck, it’s expected, and that should have been noticed by any journal reviewer or trained science journalist.
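To see how fast those chance significances stack up, here is a back-of-the-envelope sketch of the multiple-comparisons arithmetic. The test counts below are illustrative, not the study’s actual numbers; the formula simply assumes each test is independent with a 5% false-positive rate:

```python
# Chance of getting at least one "statistically significant" result purely
# by luck, when running n independent tests at the p < 0.05 threshold.
# Each test has a 0.95 chance of NOT being a fluke, so the probability
# that at least one of n tests is a fluke is 1 - 0.95^n.

def false_positive_chance(n_tests, alpha=0.05):
    """Probability that at least one of n independent tests crosses alpha."""
    return 1 - (1 - alpha) ** n_tests

for n in (1, 5, 10, 20):
    print(f"{n:2d} tests -> {false_positive_chance(n):.1%} chance of a fluke")
```

With twenty tests the chance of at least one spurious “significant” finding is already around 64% — better than a coin flip that the study reports *something*, even when nothing is going on.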
So the fact that the study even got published, let alone received wide attention, shows something is wrong with how things are working.
Interestingly, this has prompted not simply reflection on issues in science and journalism, but also questions about the very ethics of doing a fake study like this one. Over at Science News, the study has been called “shameful” and said to have failed to make its point, and PZ Myers has said he would not have done such a study on ethical grounds, nor would he use it as an example for his students of how not to do a study. Some of the arguments seem rather weak to me; others have more merit.
For example, it is said that this hoax “undermines all of science and all of journalism”. That is problematic as a criticism when the point was to show that there is something seriously wrong with the way science journalism is being done. The hoax would not have worked if journalism today did not have this problem of promoting tiny and problematic studies to the world. The small study about vaccines and autism by Andrew Wakefield should not have had wide press for similar reasons: a tiny group of subjects, no significant controls, and a mechanism that was highly speculative and not demonstrated by the evidence.
Also, there hasn’t been the same negative reaction to other hoaxes in the literature meant to expose problems. The Sokal affair decades ago is still hailed as an example of the problems in post-modern critiques of science, and the only people who really complained about it were those who got burned. Similarly, there is James Randi and the various hoaxes he was responsible for when it came to paranormal research, especially Project Alpha. And the very same person behind the chocolate study hoax, John Bohannon, had previously run a sting submitting fake papers to cancer-research journals, demonstrating that many open-access journals do no peer review at all, or do it so poorly it might as well not exist. The only real complaint I heard about that sting (other than from those who got burned by it) was that it failed to apply the same scrutiny to subscription journals (i.e., Science, Nature, Cell, etc.). It seems very odd that the same author can mount much the same sort of demonstration of issues in how scientific literature is produced, and in one case be criticized for not hoaxing more journals while in the other be told not to have hoaxed anyone at all.
On the other hand, there is a more serious issue raised by the fake chocolate study: the effects on the people who heard the news and may have acted on it. Whenever you do any sort of study that affects humans (or animals, for that matter), you have to think about the risks to the subjects. I have to do that for my own research in education, and you can bet the house that you need to do it for medical or nutritional studies on people. Sending out fake information like this may well lead people either to make unhealthy decisions (e.g., eating more chocolate, with whatever negative effects that may have) or to distrust science and the media as sources of useful health information and so carry on with damaging habits.
This is a reasonable ethical problem for anyone who is a consequentialist about morality (a virtue ethicist wouldn’t even allow things to go this far). However, shouldn’t the same be true of the fake cancer studies I mentioned above? For consistency’s sake, we would have to condemn that sting as well, and more so, given that poor cancer-treatment choices are probably more likely to be harmful than choices about consuming chocolate. The only substantial differences I can see are the systematic way the fake cancer sting was done and the level of media attention the chocolate study drew (which was part of the point of the hoax). Really, I see those differences as a result of the intended audience. The first was aimed at scientists, to make them realize the dangers of open-access journals and the lack of integrity many of them have. The second (chocolate) one was for mass consumption (or no consumption), to show how fad-driven the news media is over non-scientific claims about health and diet.
Overall, there is some ethical line-blurring going on, but the real issue is that these sorts of fake studies are given so much air time on TV and so many page clicks on the Internet. The “researchers” never actually lied, nor did they fake data. They presented their spurious results, and those results were accepted as science. That is the problem: one can honestly send data like this to a supposedly peer-reviewed journal and get worldwide attention. The way to mitigate the negative effects this fake study may have had on people is to proclaim widely that everyone needs to be more skeptical of science journalism (a skepticism often lacking even in dedicated science correspondents) and to teach people more about the need for science-based medicine.
The key term here is science-based. Evidence-based medicine, which is a great step forward compared to traditional medical practice, has the problem that it fails to consider on its own the plausibility of the results in question. Science-based medicine is what we need more of, and science journalists need to be more aware of this and convey it to their audiences if they want to remain relevant, useful, and trusted. Otherwise we will all get burned when someone less honest puts out spurious or fake data in order to get attention, money, or gravitas.