Hacker News
Combating bad science (economist.com)
99 points by mathattack on March 17, 2014 | 18 comments


This is an incredibly important problem that Ioannidis is working on. Not only could it increase our confidence in scientific results, it could also help institutionalize a bunch of practices (reproducibility, publishing negative results, post-publication review) that we know to be good ideas.

But it might be worth taking a step back to recognize what a brilliant process science is. After all, even with such waste, such bias, and such low rates of reproduction, science still gives us amazing insights into the world, enriches our understanding of the universe and our place in it, and of course births incredible technology. Though fraud exists, though scientists are flawed humans, though funding agencies can be so conservative or wrong-headed, still science chugs along. It's so robust, so powerful, and yet so simple!

It's truly amazing to live in our world.


The problem is that science for many scientists is the business of persuading reviewers and editors, publishing positive results on a regular basis and spending the rest of the time writing grant proposals.

This has to do with the publishing process itself, and the poor peer review (closed, sloppy, even for "respectable" journals) of the current publishing system. Science nowadays generates large amounts of knowledge, not all of which is useful, but we lack even rudimentary tools to catalog and analyze that information.

Very few people have suggested ways to catalog science in ways that are useful to inform future research[1,2].

[1] http://www.kurzweilai.net/new-tools-to-manage-information-ov...

[2] http://www.silvalab.com.cnchost.com/silvapapers/S2Neuron2013...


I could not agree more with your description of what life after the PhD seems to be. The only other workday activities you are missing are probably (a) travel, meetings, and conferences [lots] and (b) reading papers [hundreds per year].

However, I fail to see how "research maps" would improve the way we catalog and analyze information. What is shown in your ref [1] is a regulatory network. These models have been used for years to drive research into diseases ranging from cancer to neurodegeneration. And there is (at least in the biomedical sciences) a very vibrant community dedicated to that very job, the "bio-curators" [http://www.biocurator.org/], who directly or indirectly fill (relational) bio-repositories. "Research maps" like STRING, Reactome, PIR, BioCyc, or KEGG, to name a few, have been around for quite a while, too. So I think the claim in [1] that Silva has "invented" "research maps" is quite some PR exaggeration, as this can simply be described as "a Reactome limited to neuroscientists only".


Agreed, it's not a better solution; it's just that there is little talk of it in general. Paywalled science plays a big role here: it basically makes it impossible to gather even paper titles for further analysis.


Amen. In my experience academic research is very different from what the wider public thinks it is.

You sometimes ask yourself: Would the public continue throwing their taxes at scientific research if they knew that most of their cash is just consumed to fuel the egos of professors?


This is true above and beyond just science. I get the need for research. I get the idea of pushing the boundary of knowledge, checked by peer review. The current system just seems very broken. People respond well to incentives, and they're incentivized to publish novel results no matter what. All this on the backs of a lot of grad students, most of whom will struggle to get jobs.


I would propose an even more "meta" categorization. Research should be categorized into 'A' level and 'B' level. B level would mean that the purpose of the research is to explore a hypothesis and suggest directions for further research, while A level would mean an attempt at conclusive (or as close to conclusive as possible) results after enough B-level research has been done in a certain field. B research would encompass what we suspect and/or want to examine; A research would encompass what we claim to know.


It's an incentive problem. People tend to game any system based on how they're measured, and PhDs are generally measured by how much they publish as well as how quickly they publish. Take two academics who publish the same finding three months apart, and it's the one who published first who becomes famous. But there are a lot more PhDs out there than famous ones, so for the most part it's the number of papers that matters most.

Ideally you want a lot of incremental research (easy), using other people's data (cheap), and showing something novel (fame lottery). While most of this is practically useless, it still gets published because there are plenty of journals and they all need to publish something every month.

That's not to say people are not still doing high-quality research, but rather that the signal-to-noise ratio is often low for a reason.


How do you fix this, assuming you're constrained by the same amount of resources?

My sense is the end game should be encouraging long term research where there is a decent enough chance for success relative to the gains from it - the chances are hard to determine but somewhat based on the idea, and the team chasing it. Essentially this is a VC model for ideas that are too far out to generate attractive returns.


#1 is probably: require journals to agree to publish research before the results are known, based on the experimental design, and require scientists to include their raw data, preferably as they collect it. This is to counter: http://xkcd.com/882/
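The xkcd strip linked above is about the multiple-comparisons problem that pre-registration is meant to counter. A minimal simulation (my own illustration, not from the thread) shows why: test enough true-null hypotheses at p < 0.05 and at least one "significant" hit from pure noise becomes more likely than not.

```python
import random

random.seed(42)

ALPHA = 0.05
N_HYPOTHESES = 20     # e.g. the 20 jelly-bean colors in xkcd 882
N_SIMULATIONS = 10_000

# Under a true null hypothesis, a well-calibrated p-value is uniform on
# [0, 1], so each "study" can be modeled as a draw from random.random().
false_positive_runs = 0
for _ in range(N_SIMULATIONS):
    p_values = [random.random() for _ in range(N_HYPOTHESES)]
    if any(p < ALPHA for p in p_values):
        false_positive_runs += 1

rate = false_positive_runs / N_SIMULATIONS
print(f"Chance of >= 1 'significant' result from pure noise: {rate:.2f}")
# Analytically: 1 - (1 - 0.05)**20 ~= 0.64
```

Committing to the hypothesis and analysis before seeing the data is what removes the freedom to keep testing until one of those noise hits appears.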

Require journals to publish the first two studies that try to reproduce a given result. Note: "publish" need not mean including them in the paper copies, just indexed and searchable online. This lets you create a reproducibility index, which might not mean much until a fair bit of research has been done, but it's still something they're going to keep in mind.
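The reproducibility index described above could be as simple as the fraction of indexed replication attempts that succeeded, left undefined until someone has actually tried. A tiny sketch (the record schema and names are purely illustrative, not from the comment):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PaperRecord:
    title: str
    attempts: int = 0    # replication attempts indexed by the journal
    successes: int = 0   # attempts that reproduced the original result

    def reproducibility_index(self) -> Optional[float]:
        # Undefined (None) until at least one replication has been tried,
        # distinguishing "untested" from "failed to replicate".
        if self.attempts == 0:
            return None
        return self.successes / self.attempts

paper = PaperRecord("Example finding", attempts=2, successes=1)
print(paper.reproducibility_index())  # 0.5
```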

Funding agencies would need to set aside a large amount to validate studies. However, as there are a lot of null results, the percentage is probably not all that high. More importantly, since not doing so means the results are practically worthless in the first place, it's still a good idea even if it takes half the overall budget.


>After all, even with such waste, such bias, and such low rates of reproduction, science still gives us amazing insights into the world, enriches our understanding of the universe and our place in it

Not most science, it looks like. Most science gives us false insights into the world, reinforcing our prejudices and the sales of dubious products.

>and of course births incredible technology.

Or at least it once did. We're at the end of riding Moore's law, which confused us into thinking that science was progressing, when it was just transistor density that was progressing.


Public policy based on such science impacts millions of people. The lipid hypothesis is a classic example.


I am happy to have seen Ioannidis' talk on this very matter a while ago - very inspiring indeed.

However, I think such articles as the one in The Economist often lead "outsiders" to the opinion that much of current biomedical science is flawed. They say that "only" 10 out of 13 papers in the article have been shown to be reproducible, and while even three bad teeth are not good, it is not even clear what was wrong in the remaining three. It might be very trivial issues, or it might be fraud; who knows. Furthermore, it does mean that at least 3/4 of those papers were fine. Last, the reproducibility results for the 50 cancer papers are still outstanding.

So while I do agree there is an issue at hand, it is a bit like the Apple vs. Windows situation: consumers of the former brand are used to high quality, so a single bug or issue in Apple soft- or hardware quickly fills news outlets around the world, while nobody gives a damn about even the most critical issues in Windows, because its customers are used to them. While it is important to correct the (ab-)use of biomedical statistics, I think the way this is being presented to outsiders tends a bit towards sensationalism.

I do agree with Ioannidis insofar as our papers should get more in-depth reviews (not only) of the statistics in them, but the problem is that we need to publish 2-3 papers per year, each of which in turn needs 3-5 reviewers, so everybody has to do about 10-15 reviews per year. And by "doing a review" I do not mean just an "intensive" half-day reading of the paper, but actually looking at the data and methods and spending some days on it. I am not sure we have the time for that, and I believe the reason is this madness of having to publish several papers per year just to survive. I.e., the real problem is our "publish or perish" system, nothing else. All this means it can be better to have dozens of junk papers than one good one, at least if you do not manage to get it into the very top (Nature, Science, Cell, etc.).


> what a brilliant process science is.

Compared to what? Religion? Well, duh.

> After all, even with such waste, such bias, and such low rates of reproduction, science still gives us amazing insights into the world, enriches our understanding of the universe and our place in it

The low rates of reproductions are in social sciences, drug research, etc, where the "insight into the world" comes from physic, astrology, etc.


> the "insight into the world" comes from physic, astrology, etc.

then I hope we live in two different worlds - preferring astrology over religion will not get us very far, I guess... (just busting some balls, but it's too funny a typo to resist)


Follow the money. There's no money in drawing conclusions about and publishing on basic physics. Does the Microwave Background Radiation show that the Universe is open or closed? It's really just an academic discussion that has no financial winners or losers either way besides a Nobel prize at some point, so the science tends to be fairly unbiased.

On the other hand, there's a great deal of money in science that affects the human body, like cancer research, drug research, as well as the social sciences. When there's a lot of money and hype around a field of science, we should keep our bs detectors on high alert.


Interesting initiative. For those interested in finding out more information about reproducibility in science, check out David Donoho and Roger D. Peng as well (and of course, Ioannidis has lots of other interesting publications in this area).

One of the outcomes from these efforts that I'm hoping for is that more scientists realize that reproducibility is not just a sideshow that's nice to achieve for some abstract reason. Rather, reproducibility is the core of science and reproducibility is the value that a scientist provides.

Based entirely on my personal experience, it's depressing how little other scientists even understand the problem (much less care about it). It's just not on anybody's radar. I'm often told that it's a waste of time and not worth bothering about because people have more important things to do. (This is at the same time that we [in our specific field] routinely throw out analysis results that took weeks or months to prepare, because they couldn't be reproduced).


NSF/NIH grants? :D



