When debating some controversial science claim, I’ve often heard people argue that “scientists are always wrong.” Usually it’s from those arguing for some “thing,” medicinal or otherwise, that’s supposed to make your life better, but seems to fly in the face of science, or at least isn’t backed by any reputable study.
For example: people arguing that marijuana (or at least some of its chemical constituents) kills cancer cells, but that “western medicine” wants to keep you sick with things like chemotherapy, so it’s suppressing the evidence. It’s a claim I largely debunked here with just a little critical thinking, so I don’t need to rehash that specific point again.
But what I do want to cover is the notion that scientists are often wrong. If you required a simple “yes” or “no” answer to the question of whether scientists are often wrong, the answer, I suppose, is “yes, yes they are.” But that’s partly by design, and this is an important part few seem to understand.
If you were to ask most people outside the science community what science is, they’d probably conjure up people in a lab with beakers mixing chemicals together, and hoping that by combining bleach, marijuana, gluten-free wheat, and organic apple seeds, somehow, you’ll have a cure for any particular rare condition that ails you.
But what is science really? It’s a method—thus the moniker “the scientific method.” It’s a means by which you can most likely find the truth about something.
This is WAY oversimplified, but it basically goes like this:
1. You observe something in the world, and go “Hmm?” Emphasis on the question mark.
This is how science starts—people have questions.
Non-scientists will often answer them with something complicated and/or supernatural, like gods or aliens, if they’re struggling to find a more natural answer to their question. Others just make a random guess based on what they think is the most likely answer, and go with it, evidence be damned.
That’s because science is hard work, and moving past this phase requires far more than just imagination.
Scientists, however, will assume nothing until there is evidence of something. So if they’re compelled to answer the question, they’ll move to phase 2.
2. You gather as much evidence as you can on the thing you saw.
From this point forward, we separate the scientists (or skeptics like myself, since I’m not a professional scientist) from the non-scientists, because the non-skeptics and non-scientists stopped after phase 1 when they opted for a guess.
If there’s no evidence to gather, sadly your work here is done, and you must accept that you don’t know. Think of cryptozoology, like Bigfoot “experts” or ghost hunters and such. They have no evidence to test (like an actual Bigfoot to observe, alive or dead), yet they make claims anyway, which are always pure speculation.
So whatever they’re doing, trust me, it isn’t science. Using scientific words and scientific equipment doesn’t make one a scientist; following the rules of the scientific method does.
3. If you are able to gather evidence, you form a hypothesis, what a layperson might call an “educated guess,” based on the evidence you’ve gathered.
This forming of a hypothesis is different from a guess, in that it is based on the evidence you’ve gathered so far, and none of the evidence gathered should be contrary to your hypothesis.
A guess is often just what you think is most likely, but isn’t always weighed against the evidence you have. You see this often in political or religious debates, where people have an ideology, and any evidence they’ve gathered so far, if it doesn’t support their ideology, is thrown out as if the evidence must somehow be flawed. It’s a process called confirmation bias, and sadly we all do it. Especially if we’re not even aware it’s a thing, and that we should avoid it.
4. Here’s where those beakers might come in. Time to do some testing.
Now here’s the interesting part. If you’re a scientist, you try to prove yourself wrong. Yeah, I said it—WRONG. It’s a principle called falsification.
If you can’t disprove (falsify) your hypothesis, then you assume you have a potentially true hypothesis. Professionals will try to get such findings published in a peer-reviewed and reputable journal, then hope other scientists in their field will test it.
Know what those others will do?
You guessed it, try to prove the hypothesis wrong as well. Not because they want the first scientist to be wrong, or are their competition, but because that’s just how it works.
So why try to prove it wrong, versus prove it right? Derek Muller from the highly-respected YouTube channel Veritasium made an excellent video explaining why, in a pretty unique presentation. I encourage you to watch it. It will make you think differently, if you don’t already think this way.
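To make the idea of falsification concrete, here’s a tiny Python sketch in the spirit of a number-guessing game. The hidden rule and the test sequences are my own invented toy example, not taken from the video:

```python
def hidden_rule(a, b, c):
    """The rule we're trying to discover: any strictly increasing triple."""
    return a < b < c

# Confirming strategy: only test sequences that fit our hypothesis
# ("each number doubles"). Every test passes, so we never learn
# that our hypothesis is far too narrow.
confirming_tests = [(1, 2, 4), (3, 6, 12), (10, 20, 40)]
print(all(hidden_rule(*t) for t in confirming_tests))  # True

# Falsifying strategy: deliberately test sequences that SHOULD fail
# if "doubling" were the real rule. The first one passes anyway,
# which disproves the doubling hypothesis.
falsifying_tests = [(1, 2, 3), (5, 4, 3)]
print([hidden_rule(*t) for t in falsifying_tests])  # [True, False]
```

Confirming tests can never reveal that a hypothesis is too narrow; only a test designed to fail can do that, which is exactly why scientists attack their own ideas.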
How often are scientists wrong?
Of all the sciences, one of the most rigorously tested is surely biology, and specifically pharmacology, or medicine. As this story reports, approximately 1 in 5,000 drugs actually makes it from concept to FDA approval, which means the other 4,999 were effectively falsified. Those seem like pretty horrible results, for sure.
It’s not all bad, though. One of those drugs, for instance, was sildenafil, the active ingredient in Viagra. It was initially meant for the treatment of high blood pressure, and through clinical testing proved ineffective for that purpose, but highly effective at “pitching tents.” Serendipity at its finest, since Viagra has proven far more profitable for its maker than the blood pressure medicine would likely have been.
But such serendipity is simply an added benefit of rigorous testing, and the proper documentation of all findings. Science is technically always about the unknown. You can ignore things that don’t fit into your desired outcome, or you can follow the data wherever it takes you and learn from it.
But with medicine, obviously lives are at stake in a pretty profound way, so the level of scrutiny there is rightfully going to be higher than in any other field of science.
To a layperson, this might seem like the argument is that scientists are wrong 4,999 times out of 5,000, and this is where the “scientists are always wrong” myth starts to germinate. Not because they are wrong, but because of how science is often reported.
You see, technically, they weren’t wrong. They never made the claim you heard in the headlines. They formed a plausible idea, and then tested it rigorously to see if it stood up to the scientific method. With medicine, the number of phases a drug goes through is staggering.
Again, very oversimplified, but it’s something like this:
- Test it in a lab (say, in a petri dish): take some live diseased cells, put them in a dish, and see if the chemical in question kills them, or otherwise does what you’re hoping it does.
- Test it in animals, like rats.
- Test it in animals that may be genetically closer to humans, like apes.
- Test it on a few healthy humans to make sure they don’t get sick.
- Test it on a very small number of humans to see if it helps.
- Test it on a medium-sized group of humans to see if you can show a statistically significant result.
- Test it on a large group of humans so you have a result strong enough to draw confident conclusions from.
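To make “statistically significant” a bit more concrete, here’s a rough Python sketch of why trial size matters: the same apparent effect can fail to reach significance in a small trial yet clear it easily in a large one. The recovery rates and group sizes are made-up numbers, not from any real trial:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic comparing two proportions (treatment arm vs. placebo arm)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Same apparent effect (60% vs. 50% recovery), different trial sizes.
z_small = two_proportion_z(12, 20, 10, 20)      # 20 patients per arm
z_large = two_proportion_z(300, 500, 250, 500)  # 500 patients per arm

# Only the large trial clears the conventional |z| > 1.96 threshold
# (roughly p < 0.05), even though the observed effect is identical.
print(round(z_small, 2), round(z_large, 2))
```

This is one reason the later, larger phases exist: a promising result in a small group simply isn’t enough evidence on its own.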
Now you can start to understand why it can take 12 years for a drug to get to market. But here’s where the “scientists are always wrong” argument often comes into play. Because after phase 1, the findings are published. After phase 2, the findings are published. After phase 3, again, the findings are published. This is true for all phases.
Now, a reporter, website, or any other type of media who knows nothing about science, picks up the published study from phase 1, and writes a big, attention-grabbing headline that reads “Scientists discover cure for cancer,” and a straw man of the finest quality is born.
Because they don’t understand these results are merely a step along the road to a cure. And with respect to cancer, each type is different anyway: the tests would surely be against one type of cancer, such as lung, breast, or prostate cancer, not cancer as a whole.
A year later, when this substance fails phase 2, another reporter writes that the same substance is now shown to be ineffective at curing cancer. And the public is left thinking scientists screwed up. They didn’t.
People who know nothing about science irresponsibly misrepresented the phase 1 story, the public (who largely aren’t scientists) didn’t know how to decipher the misleading clickbaity headline, and voila: “Scientists are always wrong.”
You can also find this notion with people who are skeptical of larger theories, like the big bang theory, or evolution. They’ll point out that “evolution doesn’t explain how life started” or other things we don’t know yet.
But what such people seem not to understand is that large theories have a couple of important facets they aren’t considering.
First, think of a particular scientific theory as a puzzle depicting Albert Einstein standing in his study. Your puzzle has a thousand pieces, and you’ve so far rightly inserted 950 of them. You can clearly see it’s Einstein in his study at this point, but there are a few small details (missing pieces), maybe a few books on the shelf in the background, that you can’t yet identify. Those unknowns could conceivably change the picture significantly, but it’s far more likely they’ll simply fill in the small blanks.
With evolution, for instance, this might be the fact that we don’t yet understand how non-living organics (carbon-based substances) became living organisms (carbon-based life forms). Just because we don’t understand that facet doesn’t mean the other “950 pieces” we do understand aren’t true, or are suspect.
The other important thing to understand about a theory is how it differs from a law. A law describes a regularity we can observe and measure directly, like the effect of gravity on a falling object; a theory is the broader explanation of how and why, and it often covers events we can’t directly observe.
But with evolution or the big bang, we can’t go back in time and watch it happen. So all we can do is theorize based on data we have, and try to recreate the event in some small way so we can observe it. From there, we can make a fair assumption the theory holds true if replicated.
Since such skeptics are often religious in nature, they’ll refute science with the Bible, Quran, or other religious works, as if we should assume such works are true. But almost all claims made by modern-day scientists that contradict religion have a mountain of evidence supporting them, to the point that people like the pope himself have acquiesced, as reported here.
It’s also important to understand that such religious works aren’t supported by evidence, as far as we know. We can’t go back in time and observe them being written, nor do we have any supporting documents to back up their claims. One delusional person thousands of years ago could literally have written such a text, sold it to a larger group of people as truth, and given birth to a religion, and we’d have no way of knowing. So assuming such religious texts must be right on the subject of gaps in scientific knowledge does not follow any reasonable logic.
So are scientists always wrong? Of course not. Through the course of their methods, they form hypotheses which they often prove wrong, but by the time they get to the point of making a claim, they are demonstrably far more correct than any other group of people on the planet. Be a skeptic and question everything, including science. But proper skepticism should lead you to find that the scientists did their part correctly; the errors came in how that information made its way to you.