The big news last week was that a study touted around the world for showing the supposed health benefits of eating chocolate was a hoax. As revealed at io9, the study was done to show how bad things are in science journalism, and how easily work built on specious statistical tools can get published and noticed in diet and medical journals today.
The way it worked is this: the author, John Bohannon, recruited a rather small number of subjects for an experiment with three groups, each changing or keeping their normal diets. Data was then collected from all groups, and a battery of tests was later run to find any differences. The problem with a study like this is that with the small sample size and the many different tests, the chance of finding some variable change that is "statistically significant" is rather high. Note that "statistical significance" is not the same as having a result that is large and noticeable; it is instead a measure of how unlikely it would be to get that result if there were no correlation between input and output (i.e., between a diet with chocolate and weight). In most papers, a result is statistically significant if the chance of getting the observed correlation when there is none is less than 5% (p < 0.05); but that also means that if you run twenty tests, you can expect one to come out statistically significant just by chance. With so many tests and so few subjects to average out statistical fluctuations, any positive result is at best specious, since chance cannot be ruled out. Roll the dice enough times and you will get snake-eyes. Heck, it's expected, and that should have been noticed by any journal reviewer or trained science journalist.
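The "twenty tests" arithmetic above can be sketched directly. The figure of 18 outcome measures below is illustrative, not taken from the study itself; the point is only that the chance of at least one false positive grows quickly with the number of tests.

```python
# Under the null hypothesis (no real effect), each independent test has a
# 5% chance of coming out "significant" at p < 0.05. With many tests, the
# chance that at least one does so by pure luck becomes large.
alpha = 0.05     # per-test significance threshold
n_tests = 18     # illustrative number of outcome measures

# P(at least one false positive) = 1 - P(no false positives in any test)
p_any_false_positive = 1 - (1 - alpha) ** n_tests
print(f"{p_any_false_positive:.3f}")  # about 0.603
```

So with 18 independent measures, a "significant" finding is more likely than not even when nothing real is going on, which is exactly why a reviewer should have flagged the design.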
So the fact that the study even got published, let alone got wide attention, shows there is something wrong in how things are working.
Interestingly, this has prompted not just reflection on problems in science and journalism, but also questions about the very ethics of running a fake study like this one.