"When is a Fact a Fact?": A Conversation with Peter Vickers (Keywords: Certainty; Truth; Science; Expertise; Consensus)
- Peter Vickers
- May 10
- 11 min read

This conversation was taken from our recent book, Science, Anti-Science, Pseudoscience, Truth, edited by Anthony Morgan. If you enjoy reading this, please consider becoming a patron or making a small donation. We are unfunded and your support is greatly appreciated.
Is science getting at the truth? Those who spread doubt about science tend to argue that scientists were “sure” in the past, and then they ended up being wrong. This conversation looks to historical investigation and philosophical-sociological analysis to defend science against this potentially dangerous scepticism. Indeed, as Peter Vickers argues, we can confidently identify many scientific claims that are future-proof: they will last forever, so long as science continues.
***
Jana Bacevic (JB): Could you start by summarising the main argument in your new book, Identifying Future-Proof Science?
Peter Vickers (PV): The debate between realists and anti-realists in philosophy of science is often seen as a debate between those who believe scientific claims and theories to be true and those who do not. But the more I have investigated my own community of philosophers of science, the more I have come to realise that actually all of us – or nearly all of us – agree that there are at least some well-established scientific facts that we can be certain about. For example, hardly anyone (including the anti-realists) doubts that climate change is real or that viruses exist and cause diseases like COVID. When it comes to established scientific facts, all of us are largely in agreement about their truth, even if my own personal list of established scientific facts might not overlap perfectly with another philosopher’s list. So, though the realism and anti-realism debate makes us philosophers look split on the question of science and truth, I show in the book that there is actually a sense in which all of us are pro-science: we take at least some established scientific facts to be indisputably true, even if we disagree about other things. And I thought that, as a community, we should highlight this consensus of opinion.
JB: What do you mean by certainty here? And how certain can we be about the claims of science? I ask this because there have been instances in the past where scientific claims that were taken to be certain, that scientists were extremely confident about, have later turned out to be false. How, then, do we understand certainty when it comes to science?
PV: Many react to such examples by saying that there is no certainty in science; everything is just a theory, and there are more or less good reasons for believing it. There is certainty in geometry and mathematics, but not in science. I am not using certainty in that strong sense, because nothing would then be certain in science. But there is another sense of certainty that is much more apt for science: certainty as that which is far beyond reasonable doubt. For example, we once merely speculated that dinosaurs roamed the earth millions of years ago. This later became a theory. It is now accepted as being beyond reasonable doubt. We can be certain about this, even if this fact does not have mathematical certainty. There is an epistemic space to be explored here, which is short of “strong certainty” but also goes way beyond saying it is just a good theory or the best one we have got so far.
There is a history to why we think about certainty the way we do. Karl Popper famously thought that for a theory to be considered scientific, we must be able to falsify it. According to him, we can never say that a scientific theory is true or is approaching the truth, because every idea or theory is just waiting to be falsified. But if that were the case, we would still be trying to falsify the theory that dinosaurs roamed the earth, and the theory that the earth turns on its axis. To be sure, both started out as mere hypotheses, but, at some point, they became solid theories accepted beyond reasonable doubt by the scientific community. Claims like these are so well established now that it would be absurd to think that we should still be trying to falsify them.
After Popper came Thomas Kuhn. In his classic, The Structure of Scientific Revolutions, he argued that science goes through revolutions and with each revolution comes a new paradigm. We are currently working with one paradigm or framework of doing science and this is bound to change with future revolutions. But at least with some things in science, we should not be expecting revolutions. In the future, we are not going to look back and think that past scientists believed in dinosaurs because that was “their paradigm”. Or that they used to think smoking causes cancer, but not so in our paradigm. It is important to be humble about our scientific claims, but take it too far and it will start to seem absurd.
JB: Who is the primary audience for your book? On the one hand, you are engaging with the realism and anti-realism debate which is considered a narrowly philosophical debate, though of late those arguments can be seen playing out in non-philosophical areas as well. But, on the other, your discussions about certainty or the degree to which we can be confident about scientific knowledge claims have implications for how we should interpret reports from the Intergovernmental Panel on Climate Change (IPCC), for instance. How is the book speaking to these different audiences?
PV: I am hoping that it will have an impact on a wide audience and on our general attitudes towards the relationship between science and truth. For one, it is meant to raise the profile of science as the primary means for getting to the truth – at least sometimes. Of course, there are many cases where we have to say that we just do not know. Similarly, there are other cases where there is a consensus that a theory is the front runner, but no consensus that it is true. But despite all this, there are such things as established scientific facts and even if some of those facts (e.g. that dinosaurs once existed) seem banal, it is important to establish that there are such things.
Second, there are cases where there is no solid scientific consensus, but we still have to decide which scientific claims to trust, which scientists to listen to and how to act, despite the looming uncertainty. And I am hoping to give people tools which they can use to make these kinds of judgements, which are different from the tools we typically rely on. Most of us do not know enough physics, chemistry or biology to competently judge scientific claims on our own. The method I am proposing instead involves learning to judge the strength of community opinion: identifying the relevant scientific community for the issue in question and working out what level of disagreement or consensus exists amongst them. For example, if the question is related to COVID, the opinions of cosmologists are not relevant, but those of epidemiologists are. Once the relevant scientific community is identified, you need to look at whether there is a weak or strong consensus amongst them regarding, say, the efficacy of vaccines. Something like 80% would be weak, but 90% or 95% would indicate a strong consensus. You are not going to get 100% in science. This is why I opt for 95% consensus as a high enough standard. So, instead of trying to learn the science and work out what it says yourself, there is this alternative way of assessing scientific claims: identify who the experts are and see if there is a solid community consensus. This is called “social epistemology”. But we are not taught these things. And that’s why I argue, in the final chapter, that these methods of social epistemology should be part of our schooling.
The third kind of audience is the scientists themselves, because even they often do not know when to call something a fact. An anonymous IPCC report author recently said that when they are writing a climate change report, if something is not yet a fact, they put in brackets after the statement “high confidence” or “very high confidence”. But if it is an established scientific fact, they can just state it without any qualifiers. The author noted that it isn’t clear when we cross the line from “very high confidence” to “established fact”. So, the book is also relevant for scientists writing these and other kinds of reports.
JB: This growing focus on social elements of knowledge production, especially justification and legitimation, has also inadvertently added fuel to the anti-realist fire that has stoked doubt regarding things like climate change and vaccines. Some of these people choose not to trust science or scientists, not because they cannot understand it or lack the background knowledge required to assess the validity or reliability of particular scientific claims, but because they value a kind of epistemic autonomy. So, how should we approach scepticism towards science that is not a result of ignorance or lack of relevant knowledge or even epistemic capability, but rather motivated by a desire for epistemic autonomy or independence, no matter how misdirected?
PV: The philosopher Douglas Allchin has noted that one of the traditional goals of education is “intellectual independence for all”. On this view, by getting everyone more educated about science, we can get them to make better and more informed decisions. He, like me, is making a case for trusting expert consensus, and for understanding when and how to do so. Why is this important? Because every one of us, including the scientists themselves, is in the same boat today. Even trained scientists cannot just look at any area of science and decide what it says anymore. Even trained scientists have to do social epistemology just like the rest of us, because their knowledge or expertise in some areas is not enough to make reliable judgements about the science coming out of other areas.
Different scientists also have different instincts. Some are more conservative than others, while some are mavericks. Take the doctrine of atomism, for example. Some jumped at atomism thinking it was true right away, some remained cautious yet open, while some others were fiercely sceptical and took their time to come around. The idea behind the consensus approach I am advancing here is that once all these individual differences are washed out, it is the community judgement as a whole that tells us the way to go. It is only when the whole community is pulling behind an idea, despite all their differences in terms of political affiliations, personalities, attitudes to risk, and so on, that you can reliably trust the science coming out of that community, because that does not happen easily or often. There really has to be a huge amount of evidence to get a large and diverse group of scientists to pull together in the same direction. This is why it makes sense to trust the consensus opinion of a scientific community.
If it is the community opinion that we should be trusting, then no individual, not even an individual scientist, can claim to be an expert on everything. It is not that laypersons get told what to believe, whereas scientists are well placed to look at the science directly and decide for themselves. First of all, these issues are too complicated, and expertise in one area does not guarantee reliable judgements about another. Second, individual scientists have their own biases which skew their judgements. We have seen this many times in the history of science. Some have been supremely confident about a theory, only for it to be disproved later. This often happens because they have their particular perspective on the evidence while being committed to their own idiosyncratic background assumptions, both of which can bias them one way or another. This is why our individual judgements about science are extremely impoverished compared to the judgement of a whole community. It is very easy for an individual to interpret or judge scientific claims wrongly, but it is much rarer for a whole community in a state of strong consensus to be wrong about the same issues.
The ideal of intellectual independence is attractive to many of us. We would all like to be educated enough to go look at science directly and make up our own minds about climate change or vaccines, without relying on the judgements of others. But these issues are way too complicated. Plus, we have got our own biases. We must therefore come to terms with the fact we cannot go out there and make these judgements for ourselves, certainly not reliably in most cases.
JB: You seem to mostly rely on examples from the natural sciences to make your argument. Would you say your ideas apply equally well to the social sciences?
PV: The social sciences are a difficult case, because they do not have the kind of evidence base that the natural sciences do. In the book, I discuss examples in the natural sciences in which a consensus was formed and technologies were later developed that enabled us to test the consensus theory, eventually showing it to be correct. Continental drift is a case where a strong consensus eventually formed. Then, subsequently, we put satellites in space and saw the drift happening in real time, which proved the consensus (and the theory) to be correct. In fact, there has never been a case where a solid international scientific consensus formed, only for this consensus to be overturned by contrary evidence coming from later technologies. But when it comes to the social sciences, we typically do not find such a strong consensus in an international and diverse scientific community. There may be one or two examples from psychology where there is a strong, solid international scientific consensus: say, that depression is a genuine mental disorder and not just a mood or character trait. But even with depression, you will find serious disputes within the community, including about the very word “disorder”.
It is not easy to find scientific claims around which there is a strong community consensus and almost no disagreement, and this difficulty is exacerbated in the social sciences because claims there tend to be more philosophically contested. Even with a word like “cause”, as in “smoking causes lung cancer”, you are not going to get a 100% consensus, because a significant number of scientists will disagree about what that word means; it could mean very different things depending on the theory of causation you are committed to. You have to be really cautious about the terminology you bring in, even with what seem like innocent claims. You can see why a claim that, say, “science has disproven free will” is going to create immense difficulties for building a consensus on it, given its contested meanings and implications. Sometimes this is even true for strongly held scientific claims about, say, continental drift. It is an established fact that South America split from Africa some 140 million years ago, but the moment you start saying anything about why – plate tectonics – there can be serious disagreements. If a scientific community is serious about arriving at a consensus, they have to be very careful about the statements or terms they use to describe their theories. And, as I said, this is especially true in the social sciences.
JB: What do you see as the main difference between your book and the historian Naomi Oreskes’ Why Trust Science? given that she also focuses on consensus as a metric of reliability? Is the difference simply a philosopher’s versus a historian’s perspective?
PV: I discuss Oreskes’ book quite a bit and I do present my work as building on hers. I think she has done some fantastic work, but there are some differences. She thinks that history shows we cannot be sure about anything in science and that’s directly opposed to what I am arguing. Her work is no doubt pro-science, but historians sometimes take this approach where they argue for extreme modesty when it comes to scientific claims because past instances of overconfidence have turned out to be wrong. They also make a symmetry argument: since there have been many changes in the past 500 years of scientific thought, we should expect similar changes in the next 500 years. But I think there are some things we can be certain about – even looking far into the future.
Further Resources:
Jana Bacevic, “Epistemic Autonomy and the Free Nose Guy Problem”, The Philosopher, 109:2 (2021)
Naomi Oreskes, Why Trust Science? (Princeton University Press, 2019)
Peter Vickers, Identifying Future-Proof Science (Oxford University Press, 2023)