Monday, January 30, 2017

Why So Much Bad Science

Last time I featured the latest news about palm oil, that it might be carcinogenic (or maybe not).  I used it to show how the debate about various cooking oils, along with butter and margarine, has been going round and round for several decades.  First A is bad for you, so use B instead; then we hear that the scientists have changed their minds about B.  We get lists of oils (or of some other category of food), ordered from good to not so good, from different sources giving different information.  Yet each seems to have good reasons for its opinion, based on some scientific study.

Confusion is compounded when the news media pick up the story of a new study just out, announcing a danger or a benefit or contradicting an older study.  The palm oil story reminded me of a paper published in August 2005 by a Stanford statistician, “Why Most Published Research Findings Are False.”

One important reason for his claim is that scientific experimentation is never absolute.  When trying to find a relationship between a behavior or treatment and a result, several problems arise.  Some relationships may appear by chance alone.  The common standard of 95% confidence (a significance threshold of p < 0.05) leaves a 5% chance of declaring an effect where none exists.  In other words, even when there is no real relationship at all, the standard still allows about one test in twenty to come up as a false positive.  And many other factors conspire to push the share of false findings much higher.
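
To make that concrete, here is a minimal sketch of my own (it uses only Python’s standard library and an approximate two-sample z-test; nothing in it comes from the studies discussed in this post).  It runs thousands of simulated experiments in which there is no real effect at all and counts how often a test at the 5% level declares a significant difference anyway.

```python
# A minimal sketch (standard library only): both groups are drawn from the
# SAME population, so any "significant" difference is a false positive.
import random
import statistics

random.seed(1)

def false_positive(n=100):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96              # two-sided test at the 5% level

trials = 10_000
hits = sum(false_positive() for _ in range(trials))
print(f"False positives: {hits / trials:.1%}")   # comes out near 5%, about 1 in 20
```

Roughly one run in twenty comes out “significant” even though nothing is going on, which is exactly the false-positive rate the 5% threshold permits, and that is before any of the other problems below come into play.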

First, correlation does not imply causation.  Even if one thing happens right after another, or the two vary in the same way, there may be no relationship between them.  This site gives several examples, including one where a simple nutrition questionnaire found a strong, statistically significant correlation between eating eggrolls and dog ownership, and between drinking coffee and cat ownership.  Obviously eating an eggroll doesn’t result in an irresistible urge to get a dog.
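
As a toy demonstration of how an impressive-looking correlation can arise between two things that have nothing to do with each other (this sketch is my own illustration, not the questionnaire study above), consider two series of numbers that each just drift up and down at random.  Generated completely independently, they still correlate surprisingly often:

```python
# A toy sketch: two "trends" are generated with no connection whatsoever,
# yet their correlation is often far from zero, purely by chance.
import random

random.seed(7)

def random_walk(steps=200):
    value, path = 0.0, []
    for _ in range(steps):
        value += random.gauss(0, 1)   # drift up or down at random
        path.append(value)
    return path

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

pairs = 1000
rs = [abs(pearson(random_walk(), random_walk())) for _ in range(pairs)]
big = sum(r > 0.5 for r in rs)
print(f"Unrelated pairs with |correlation| above 0.5: {big / pairs:.0%}")
print(f"Strongest correlation found by chance alone: {max(rs):.2f}")
```

Any two quantities that merely trend over time can show this kind of relationship without either one causing the other.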

Some other factors that may lead to errors include: 
  • Using smaller sample sizes – this is often done for efficiency and economy; it’s easier and cheaper when fewer people are tested, but small samples are more easily swayed by a handful of unusual subjects, so flukes can look like findings.
  • Settling for a smaller effect size – the link to cancer or some other outcome may be very slight, but if researchers find any link at all, they are tempted to publish it before someone else does.  Science is very competitive for both funding and credit.
  • Considering a greater number of relationships – the more you look for, the more you will find; a sketch of this appears just after this list.  In June 2015 I gave the example of a paper claiming a positive relationship between eating chocolate and losing weight.  Although the author did not make up any data, he later admitted that he had compared so many variables that one or two were bound to show a relationship just by chance.  It so happened that he found this very counterintuitive and alluring relationship, chocolate and weight loss, and published it as a tongue-in-cheek report.  Unfortunately, before he could correct the record, news agencies around the world ran with it as breaking scientific news.
  • Greater flexibility in experimental designs – a lack of standardization in the design of a test can cause results to vary.
  • Greater financial and other interests and prejudices, especially in a hotter scientific field – scientists have personal interests and biases just like all other humans.  They may unconsciously try harder to make the data fit their theory.  They may not check a supportive result because it is the answer they and their sponsor or employer are looking for.
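
Here is the sketch promised above for the “greater number of relationships” problem.  It is a hypothetical illustration (the subject counts and variable names are mine, not the chocolate study’s): one outcome is tested against twenty unrelated variables, study after study, and on average about one spurious “significant” result per study turns up anyway.

```python
# A hypothetical illustration: one outcome (say, "weight change") is tested
# against many unrelated variables (chocolate, coffee, eggrolls, ...).
# Every variable is pure random noise, yet "significant" links keep appearing.
import random

random.seed(3)

N_SUBJECTS = 100
N_VARIABLES = 20
CUTOFF = 1.96 / N_SUBJECTS ** 0.5     # approximate |r| needed for p < 0.05

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

experiments = 500
total_hits = 0
for _ in range(experiments):
    outcome = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]
    for _ in range(N_VARIABLES):
        variable = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]
        if abs(pearson(outcome, variable)) > CUTOFF:
            total_hits += 1

# With 20 comparisons at the 5% level, expect about one spurious hit per study.
print(f"Average 'significant' results per {N_VARIABLES}-variable study: "
      f"{total_hits / experiments:.2f}")
```

A researcher who reports only the comparison that “worked,” and not the nineteen that didn’t, can publish a finding that is pure noise.
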
Another problem is that scientists are less likely to spend time replicating research to confirm findings.  It’s hard to get funding for merely repeating someone else’s work.  It’s also not as interesting, and you are less likely to get headlines or other recognition. 

Fortunately, this idea of testing the results of others is catching on.  Beginning in 2011, a group of psychologists from around the world tried to replicate the findings of 100 papers published in 2008, and they could reproduce the results of only 39.  Nor is psychology uniquely at fault: “Daniele Fanelli, who studies bias and scientific misconduct at Stanford University in California, says the results suggest that the reproducibility of findings in psychology does not necessarily lag behind that in other sciences.”  The project confirmed the now twelve-year-old assertion that just because something is published in a reputable journal doesn’t necessarily mean it’s true.


So where does that leave critical thinkers who want to rely on scientific evidence instead of social media rumors or the endorsements and anecdotes offered by advertisers?  We can’t jump to conclusions based on every “latest discovery” featured on Dr. Oz or the news.  Be patient.  Look for large sample sizes and for independent studies that confirm the result.  In this fast-paced technological age it is doubly difficult to separate the valid from the bogus, but virtually none of this breaking news requires an instant response.
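
To see why the advice about sample size matters, here is one last hypothetical sketch (again my own, with made-up numbers): a real but modest effect, three tenths of a standard deviation, is simulated for a small study and for a large one.  The small study usually misses it; the large one almost never does.

```python
# A hypothetical sketch: a modest but real effect (0.3 standard deviations)
# is usually missed by a small study and almost always found by a large one.
import random
import statistics

random.seed(11)

def study_finds_effect(n, effect=0.3):
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(effect, 1) for _ in range(n)]   # real, modest effect
    se = (statistics.variance(control) / n + statistics.variance(treated) / n) ** 0.5
    z = (statistics.mean(treated) - statistics.mean(control)) / se
    return abs(z) > 1.96                                    # "significant" at the 5% level

trials = 2000
for n in (20, 500):
    found = sum(study_finds_effect(n) for _ in range(trials))
    print(f"{n:3d} subjects per group: real effect detected in {found / trials:.0%} of studies")
```

Small studies that do happen to cross the significance threshold also tend to exaggerate how big the effect is, which is one more reason to wait for larger, confirming studies.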
