
[–] indiglo_girls 0 points 7 points (+7|-0) ago 

But...but...I thought researchers were above such things? /s.


[–] rwbj 0 points 9 points (+9|-0) ago  (edited ago)

Read the article. He's not discussing people faking research for the sake of their careers, but corporate sponsors trying to pay researchers to fake their results to paint the company's product in a favorable light, or at minimum not to publish if the results paint the product in a negative light. Basically, whenever research comes out that's sponsored by a corporation, you should be incredibly skeptical. Most people aren't and simply think science is science. But an independent or public-sector researcher is in a far different situation than a researcher sponsored by a corporation that intends to use that research to generate profit.

edit: To elaborate, the peer review process does not actually replicate the experiment, so it can't verify the data. All peer review ensures is that, assuming the data is legitimate, the science is legitimate and relevant. Things like corporate medical trials are buried under 50 layers of nondisclosure agreements, so you're simply left to trust the company and their sponsored researcher that the data is what they say it is.


[–] unsweetenedsoymilk 0 points 2 points (+2|-0) ago 

Yup. That's why I think public funding for universities to do independent research is so important. However (and I've noticed this), many university researchers aren't exactly apprehensive when it comes to receiving outside financing/sponsorship/benefits. It's also a bad part of the culture that people don't publish their negative study results more often, since those contain valuable data for other researchers too.

Do you remember the whole Hwang Woo-Suk incident? I think that was the turning point for me to be very skeptical of any studies, no matter how credible they may seem. He had the whole world fooled for quite some time.


[–] tehpatriarchy 0 points 2 points (+2|-0) ago 

On a related topic, the book "The Cult of Statistical Significance" is a good read.

Summary of common errors:

  1. If it is statistically significant it is practically significant.

  2. If it is not statistically significant we showed there is no effect.

  3. Effect sizes don't matter, just 'significance'.
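
Errors 1 and 2 are easy to demonstrate with a quick sketch in Python (the numbers below are made up purely for illustration): with a huge sample, even a trivially small difference produces a tiny p-value, while a moderate real effect in a small sample can come out "not significant". This uses a simple two-sample z-test approximation rather than a proper t-test, which is fine for making the point.

```python
import math
from statistics import NormalDist

def two_sample_z(mean_diff, sd, n_per_group):
    """Two-sided p-value for the difference of two group means
    (equal n, equal sd; z approximation, adequate for illustration)."""
    se = sd * math.sqrt(2.0 / n_per_group)          # standard error of the difference
    z = mean_diff / se
    p = 2.0 * NormalDist().cdf(-abs(z))             # two-sided p-value
    cohens_d = mean_diff / sd                       # standardized effect size
    return p, cohens_d

# Errors 1 & 3: enormous sample, negligible effect -> "statistically significant"
p_big, d_big = two_sample_z(mean_diff=0.02, sd=1.0, n_per_group=200_000)

# Error 2: small sample, moderate real effect -> "not significant"
p_small, d_small = two_sample_z(mean_diff=0.5, sd=1.0, n_per_group=10)

print(f"n=200k per group: p={p_big:.2e} (significant), d={d_big} (negligible effect)")
print(f"n=10 per group:   p={p_small:.3f} (not significant), d={d_small} (moderate effect)")
```

The first result is "significant" only because the sample is huge; the second fails significance only because the sample is tiny. Neither p-value tells you whether the effect matters, which is the book's point.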

The Vioxx case as described in the book will make you think twice about taking any pharmaceutical.