There’s a church across the road from my home with a big poster out front that says, “Theories keep changing. God does not.” Setting aside the fact that human conceptions of gods have certainly changed over time, the point here is that people crave certainty. Having it makes decisions much easier in an overwhelming environment of choices. I personally think that the adaptability of scientific theories gives science a strength you can rely on, but there are plenty of people who, for one reason or another, try to portray this as a weakness. In 2010, the historian of science Naomi Oreskes (now based at Harvard) gave us some vivid and well-researched examples from the nefarious end of this spectrum with a book (co-authored with NASA historian Erik
Conway) called Merchants of Doubt. It uncovered deliberate, organised, and often highly orchestrated disinformation campaigns designed to create public distrust in scientific findings about a range of phenomena such as the dangers of smoking and the anthropogenic causes of global warming. The book became a bestseller and remains an essential text for the environmental movement. Oreskes came to write that book as a result of negative reactions she received to some of her earlier research. In 2004, she published a paper in Science titled “The Scientific Consensus on Climate Change,” which examined a sample of 928 academic publications on climate change. Rather than finding a “controversy,” as the press had widely reported at the time, she found complete agreement that humans were causing global heating. After that, Oreskes found herself at the centre of a different controversy. As she said in an interview with the New York Review of Books in October 2019, “my paper for Science got a lot of attention. And that’s when the attacks began – vicious letters, threats… It hit me like a truck. I was completely blindsided.” That experience made Oreskes realise that scientists like her had been naïve to think it was sufficient to just do scientific work, hand over the facts, and let government and business leaders act accordingly. As she explains in the afterword of her latest book, “When it comes to doubt-mongering, one cannot fight fire with fire. One has to shift the terms of the debate. One way to do so is by exposing the ideological and economic motivations underlying science denial, to demonstrate that the objections are not scientific, but political. Another is by explaining how science works and affirming that, under many circumstances, we have good reason to trust settled scientific claims.” Oreskes tackled the first part of this two-pronged strategy with Merchants of Doubt. Now, she is back to finish the job with Why Trust Science?
The title of this book sets up a herculean philosophical task for Oreskes. To fully answer its question, she’d have to unpack the historically complex meanings of each of the three words used. First, what is science? Is it whatever men in white lab coats do, a definable method, a canon of results, or something else entirely? Second, what does it mean to trust something rather than to distrust it, dismiss it, take it on faith, or just accept it on authority? And finally, there is the question of why. Why do anything? As Hume famously asked, how do you give a reason to get from what is to what you ought to do? In this case, how might observations from science tell us why we should listen to the prescriptions of scientists?
It won’t be a surprise to readers of this journal that Oreskes doesn’t quite give us convincing answers to all of those questions. Authors often lack control over their book titles, though, so we can’t really hold that against her. As a professor of the history of science, however, Oreskes is extremely well poised to deal with the first two questions of what science is and when it has been trustworthy, and that does open some windows on how to think about the final question.
The opening three sections of Why Trust Science? are based on a series of lectures that Oreskes gave at Princeton, so they are not aimed at the general public. They are meant to pick up an ongoing conversation within the scientific community. As such, she doesn’t build up her account of science from the most basic questions. For example, she does not note that the word “science” is derived from the Latin word scientia, meaning knowledge, and that scientific activities can be seen as far back as Ancient Egypt and Mesopotamia before they were somewhat codified in Ancient Greece as natural philosophy. Formal codification of a scientific method for natural philosophers didn’t truly begin, though, until Francis Bacon published Novum Organum (“New Instrument”) in 1620. This is where the familiar outline begins to fall into place: observations are used to develop theories via induction, theories allow one to deductively formulate hypotheses, hypotheses can be tested via experiments, and experiments provide a new set of observations that begin the cycle all over again.
Oreskes jumps into this story two centuries later with Auguste Comte – the father of positivism – and what she calls “The Dream of Positive Knowledge.” If anything could be trusted, it would be a scientific finding that had been absolutely positively proven to be true. Unfortunately, as Oreskes makes clear with her retelling of the history of science, such dreams have proven fruitless. She notes that Comte himself was a fallibilist, recognising that Hume’s “problem of induction” meant that beliefs could not be proved with certainty. Comte still thought science could be reliable, however, by virtue of the nature of its practices. Roughly a century later, beginning in the 1920s, the Vienna Circle’s logical positivists picked up this thread by trying to add the principle of verification to these practices of the scientific method, insisting that only observations verified by the senses could be meaningful. But trust in this principle was short-lived, as A.J. Ayer and other members of the circle soon recognised that it was not enough to overcome the problems raised by Hume. Karl Popper came along and exposed even further shortcomings. He rejected several tenets of logical positivism while introducing falsifiability into the mix of what distinguishes a scientific claim from a non-scientific one. Ironically, by noting that observations can only refute a claim, and that no amount of observations can ever verify a belief, Popper’s critical rationalism opened the door for a form of radical scepticism that he abhorred. Popper’s answer to this was the notion of corroboration, in which he thought we could have good reasons to believe theories that have passed severe tests. The problem, of course, is deciding what constitutes a severe test. Popper relied on the character of individual scientists to address this, but it only takes a few character studies to dismiss that line of argument.
Oreskes tells us that Ludwik Fleck made the next advance by developing the first modern sociological account of the scientific method. Fleck was unambiguously anti-realist about truth, saying it was merely whatever a thought collective settled upon. But since collectives could be democratic and progressive, this was key to understanding how science might advance. Thomas Kuhn soon put forth another major idea along these lines, calling the frameworks of these thought collectives paradigms, and claiming that shifts to new ones were incommensurable shifts in meanings, values, priorities, and even identities of the scientists immersed in them. Kuhn’s emphasis on the importance of communities of scientists who shared a common paradigm helped spur the growth of a new academic discipline – the sociology of science. One influential group of these scholars – known as the Edinburgh school – ran with Kuhn’s ideas and gathered examples of the social elements responsible for scientific conclusions, calling this process the social construction of scientific knowledge. This was a well-supported form of relativism that led Oreskes to note that “no serious scholar of the history or sociology of knowledge can sustain the claim that our knowledge is absolute.” In fact, Oreskes is even forced to agree with Paul Feyerabend, who said that if you pressed him, he would have to say there is no unique method or principle of science, and that if you look at it historically, “anything goes.”
This seems like quite a comedown for a book purporting to tell us why we should trust science. But it’s the kind of admission of weakness that is necessary to sustain trust, since unsupported bravado eventually leads to collapses in confidence. As Oreskes says, “Those of us who wish to defend science from ideologically and economically interested attack must be not only willing and able to explain the basis of our trust in science, but also able to understand and articulate its limits.” Now that the limits have been well and truly set, what is left as the basis for trust? Oreskes’ claim is that it comes from a field that might surprise many scientists – feminism. Feminist philosophers of science, most notably Sandra Harding and Helen Longino, have been at the forefront of what is known as standpoint epistemology, which recognises that all views of the world are subjective, shaped by the viewer’s position in society. This does not mean that no meeting of minds is possible, however.
Oreskes notes that the implication of this subjectivity is that only greater diversity can make the viewpoints of individual scientists collectively strong and as close to objectivity as possible. Subjectivity shared by similar observers (along whatever axis you choose) does little to counteract the relativism observed in science, but, as Oreskes says, “objectivity is likely to be maximised when there are recognised and robust avenues for criticism, such as peer review, when the community is open, non-defensive, and responsive to criticism, and when the community is sufficiently diverse that a broad range of views can be developed, heard, and appropriately considered.” In the second section of her opening salvo, Oreskes draws out five historical examples that illustrate when science has “gone awry” from this ideal. The details come from stories about the limited energy theory, the rejection of continental drift, eugenics, depression associated with hormonal birth control, and dental floss. These stories range from 1873 to today, from fairly contained to very widespread, and from slightly innocuous to genocidally dangerous. They are the kind of stories that sceptics like to cite as evidence for why we cannot trust science, but Oreskes uses them as examples of why her definition of science is trustworthy, since none of these examples measures up to it. After examining them all, Oreskes claims the requirements of “what it takes to produce reliable knowledge” are fivefold: 1) consensus, 2) method, 3) evidence, 4) values, and 5) humility. With this, she has provided her answer to the first two questions raised by her title: what science roughly is, and what characteristics we should look for to know when that science is trustworthy. As for the third question – why we should care about or listen to science at all – Oreskes ends her initial argument with a small coda appealing to shared values. Stated baldly, she says, “let me be clear about my values.
I wish to prevent avoidable human suffering and to protect the beauty and diversity of life on Earth.” If science can help us do this, who can be against it? This isn’t the most rigorously defended argument, nor does it claim to know the values of scientists in general, but if Oreskes can get others onto this seemingly incontrovertible common ground, then discussions there ought to be able to build trust and consensus. In the spirit of continuing the hard work of building consensus on this issue, a few suggestions can now be offered. For a book that is largely about the epistemology of science, its focus on the history of science to the exclusion of the history of epistemology has left a gaping hole. A few points from that story might really help with the case Oreskes is building. To start, there is perhaps the origin of epistemology with Socrates’ dialogue in Plato’s Theaetetus, where a definition for knowledge is settled upon as justified, true belief. Sceptical objections against this so-called JTB theory of knowledge have been raised by many philosophers (Oreskes notes a few of them), but it remained widely accepted in epistemology until 1963, when Edmund Gettier published a short paper showing that a justified, true belief could still fail to count as knowledge. Just as “the dream of positive knowledge” was pursued and abandoned in the philosophy of science, the dream of a fully justified true belief has also been pursued without success in epistemology. Oreskes occasionally uses the JTB phrase in her book, but she never goes into its Platonic origins or its demolition by Gettier. Perhaps this is because the traditional main camps of epistemology – realism, idealism, and scepticism – have offered no help to Oreskes’ project of building trust in scientific knowledge. A lesser-known branch, however, does help. And that is the branch of evolutionary epistemology (EE).
Evolutionary epistemology, just like sociological, historical, and anthropological inquiries about knowledge, de-emphasises the questions around justification and truth, and instead asks, empirically, what beliefs agents actually hold. Oreskes does cover some of these human-centred naturalist approaches, but she neglects EE, which looks for universal mechanisms at work in the world. The psychologist and philosopher Donald Campbell, who coined the term EE, posited that all forms of evolution adhere to a Universal Selection Theory characterised by blind variation and selective retention (BVSR). Although the exact units of selection for knowledge (e.g. memes) are still being hotly debated, the analogies between biological evolution and cultural evolution are strong enough for many to accept EE terminology. Oreskes herself falls into this camp, perhaps unintentionally, when she says, for example, that “Scientific ideas, like evolution itself, may change dramatically over time, but they do so by the accumulation of small transformations and differing interpretations.” Taking this view of ever-evolving knowledge, and combining it with the aforementioned abandonment of absolute knowledge, we can formulate a replacement of the JTB theory of knowledge by positing that knowledge can never be claimed to be true, but can only ever be justified beliefs that are currently surviving our best rational selections. And Oreskes herself has done much of the heavy lifting in defining what those best rational selections are in the scientific realm. Thus, the evolution of epistemological theories – one of the two distinctive programmes within EE – helps to put Oreskes’ definition of science into philosophical context and use it in a potential solution to a problem dating back to Plato. The other major programme of EE – analysing the evolution of epistemological mechanisms – supports Oreskes’ project as well. 
Sifting through the evolutionary history of life as part of his development of EE, Campbell settled on a 10-step outline that showed the broad categories of mechanisms that biological life has used to gain knowledge. This starts with the earliest origins of life, where problems were solved over generations through genetic variance alone, without any aids from motion or the formation of memories. This earliest slow accrual of genetic knowledge eventually led, according to Campbell, to the other mechanisms: movement, habit, instinct, visually supported decisions, memory-supported decisions, observational learning from social interactions, language, cultural transmissions, and finally, scientific accumulations of knowledge. Today, these mechanisms have been further divided or redefined, but nevertheless, Campbell showed clearly how knowledge processes could be distinguished and understood in a developmental, rather than uniform, manner. So, taken altogether, the two programmes of EE show us that Oreskes’ definition of science provides us with the best knowledge we can philosophically hope to get, and this arrives after our long evolutionary history of accruing knowledge and developing knowledge mechanisms in tandem with one another. That makes good science about as trustworthy as we can currently get. I said near the beginning of this article that Oreskes’ work would open up windows on the third question of why we should listen to science, even if we grant that it is trustworthy. Oreskes claimed that ethical common ground would help, but unless a summum bonum or greatest good is agreed upon, very different choices can still be prioritised from within this common ground. Also, the lack of certainty about any knowledge implies that existentially risky choices should be off the table altogether, no matter how trustworthy the science might seem.
Arguments drawn from evolutionary ethics offer definitions for a summum bonum, and scholarship about the precautionary principle helps define where exactly the interactions between irreversibility and uncertainty become an existential problem. These have the potential to give us the reasons why, and the realms where, we ought to use scientific knowledge, but reaching consensus on that would require an article that is entirely too long and off topic for this review. So, for now, you’ll just have to trust me. Or not.

Ed Gibney is a writer and philosopher who tries to bring an evolutionary perspective to both of those pursuits. His work can be found at evphil.com.
From The Philosopher, vol. 108, no. 1 ('The Other Animals').