Christie Aschwanden writes at length on the question of whether Science is Broken in the FiveThirtyEight blog.

Christie points out a lot of good things – science is hard, the incentives are screwy, and even good science can yield different outcomes under a different analysis. Based on these points and lengthy interviews, she declares:

I’ve spent many months asking dozens of scientists this question, and the answer I’ve found is a resounding no.

Christie Aschwanden

There are a few problems with that statement, though, as well as with the reasoning Christie uses throughout the article. The author isn’t incorrect in saying that Science isn’t broken, but that does not imply that Science is doing just fine. While retractions and new results do show the scientific system is working, the rate at which we see them can and should be improved.

My background, if you read the Skeptical Methodologist, is in software. In software, we have a four-pillar system of quality – Testing, Peer Review, Static Analysis, and Design. Not everyone seems to recognize this, though.

Often you’ll run across engineers who believe that testing is the only way to quality – that peer review and other methods are either wastes of time or just evidence that someone didn’t test hard enough. Test, test, test – it’s a monistic theory of quality that can lead to myopia.

From a quality perspective, there are things that are just really hard to test for. We’re lucky, though: these things are often easy to spot in peer review, through static analysis, or through design methods. It’s only when we use all the arrows in our quiver that we get the best results and produce the best software.
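To make that concrete, here’s a small hypothetical Python sketch (the function name is invented for illustration, not from the original post) of a defect class that unit tests frequently sail past but that a reviewer or a linter catches at a glance – the mutable-default-argument bug:

```python
# Hypothetical illustration: a bug that a happy-path unit test misses,
# but which static analysis (e.g. pylint's dangerous-default-value
# warning) or a peer reviewer flags immediately.

def append_item(item, items=[]):  # BUG: the default list is created once
    """Append item to items and return the list."""
    items.append(item)
    return items

# A single-call test passes and the defect goes unnoticed:
first = append_item("a")   # ["a"] -- looks fine in isolation

# But every call without an explicit list shares the SAME default list:
second = append_item("b")  # ["a", "b"], not the ["b"] a caller expects
```

The point isn’t this particular bug; it’s that each pillar of quality sees failure modes the others are blind to.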

Likewise, there’s a bit of a scientific monism running around, and I think when we start talking about this monism – science as the only or even best way to truth – we get in trouble. We become closed-minded about other sources of truth – and we run into the same problems as trying to test for all defects in software. Other sources of truth are valuable not because they compete with science, but because they complement science.

How we set up our scientific models – an issue the article points to – can’t itself really be answered by ‘traditional’ science. It does, however, have guidelines in philosophy, namely hermeneutics. The idea there is that how we choose to see the problem we’re trying to solve affects what answers we might see, and that’s valuable. Double-checking that our hypothesis is coherent and that our conclusions properly draw from our results borrows from the philosophical field of logic and is not necessarily native to empirical science.

Checking our peers’ results – not necessarily looking for fraud, but simply recognizing that another human being can catch things all the rigor in the world might miss – belongs to the social circle of our lives, not to empirical science. Looking at the higher system of science – the systems of incentives, which reporting body reports to whom, what prevalent theories (and their owners) might gain or lose in prestige from newer competing theories – that is all politics. Political scientists and economists may have good things to say here about making the process more effective, but not traditional empirical science.

Some might argue that good peer review, unbiased modelling, and fixed incentive structures are all part of ‘good’ science, and that when you don’t do them, that’s ‘bad’ science. Unfortunately, this runs afoul of the ‘no-true-Scotsman’ fallacy: no matter what a detractor might say about science, the proponent will merely reply, “No, no, no, that’s not TRUE science. TRUE science avoids that.”

Instead, I see science as but one means of discovering truth, and I see the hard sciences in particular as methods using mathematical theories backed with instrumented empirical evidence, gathered at large enough scale to run statistical tests on. I define this kind of science narrowly, and by that definition, this kind of science is incomplete. We need other sources of truth to complement it – pairing testing with peer review, static analysis with design methods – to make it better.

Asking scientists whether science is broken misses the point – anyone with a political mind would tell you that. You need to ask everyone involved in the system, and properly define what system it is you’re trying to talk about. Science, with a capital S, needs more input from more fields – it can’t go it alone. We can’t keep adding new methods and rigor to what we have and claim that NOW we have GOOD science – we need to acknowledge that our path to truth is always an improving process, and not merely because of new evidence.

New procedures, new incentives, new methods, and new models will all be developed, and when combined with good empirical science, they too improve what we know about the world.

When fighting the monster that is our ignorance about the natural world, we need to use every arrow in the quiver.
