“Post-truth” seems to be among the few welcome casualties of the Covid-19 pandemic. Pronounced Oxford Dictionaries’ “Word of the Year” in 2016, it has come to symbolize the disappearance or further weakening of rationality and objectivity, as well as the world of Trump, Brexit, and people who have “had enough of experts”. The onset of the pandemic, however, seemed finally to halt, if not reverse, the decline in trust in science that some have come to associate with “postmodernism” or the “crisis of expertise”. Suddenly, scientists featured in the evening news, secondary-school science textbooks were dusted off, and the UK’s Chief Medical Officer, Professor Chris Whitty, developed a cult following. Perhaps we can all finally breathe a collective – post-post-truth – sigh of relief?
There are many reasons to celebrate the return of expertise. The lack of trust in science has, in many cases, been used to stoke doubt with disastrous consequences – “scepticism” about human contribution to climate change being the most obvious. At least as far as the practical order is concerned, the pandemic seems to have turned everyone into ersatz-realists: even the most committed social constructionist probably washes their hands and sanitizes assiduously, knowing, all the while, that germs did not exist before Louis Pasteur discovered them. In this sense, the return of expertise promises to usefully short-circuit the long-overwrought debate between realism and relativism, which, though in most cases usefully transcended, occasionally comes back to haunt public discourse, most recently in the form of the UK Government’s critique of Foucault, critical race theory, and other supposedly “relativist” forms of theorizing.
There are, however, equally good reasons to be cautious about the return of the experts. One is the possibility of “expert capture”: namely, the fact that certain experts, or disciplines, can exert undue influence on public policy or decision-making. For instance, some analyses attribute the 2008 economic crisis to the undue influence of neoclassical economists on decision-making, which circumvented the instruments of accountability that should have been available to the public. In the Covid-19 pandemic, policymakers have used insights from behavioural science to design public interventions, but these interventions primarily targeted people’s behaviour – not their knowledge. In this sense, expertise can be used to make decisions less, rather than more, transparent to the public. On the other hand, not everyone is in an equal position to judge expert knowledge or advice. For instance, reliance on “folk” science or on conspiracy theories led some people to engage in highly risky practices, from injecting Listerine to refusing to get vaccinated. In this sense, it is quite possible that we do not want all kinds of expert knowledge to be open to scrutiny or dispute.
Can we devise a “Goldilocks” ratio of epistemic trust – enough trust for a democracy to function, but enough scepticism to prevent potential abuse of power by experts? In this particular case, the question of whose knowledge we trust is confounded by another problem – whom do we trust to make assessments about that knowledge? This leads us to the relationship between knowledge, authority, and expertise.
The philosopher Linda Zagzebski places authority at the core of questions about knowledge. In her view, we are generally well placed to trust others’ authority. This makes intuitive sense: we do not quiz the train driver about her knowledge each time we board a train, or argue with the nurse about the proper way to insert a needle. Under most regular circumstances, our daily lives involve relying on multiple forms of expertise – and unproblematically so.
What complicates the unproblematic reliance on others’ knowledge, Zagzebski notes, is that trust in expert authority contradicts two fundamental values of liberal democracy:
Epistemic self-reliance. This refers to the value we place on knowledge attained through our own cognitive (and, possibly, other) capacities: for instance, when we teach students critical thinking, this is motivated by the conviction that learning how to assess arguments and evidence for themselves is preferable to telling people they should form beliefs purely on others’ authority;
Epistemic egalitarianism. This is the assumption that all humans have, in principle, equal epistemic capacities. Commitment to epistemic egalitarianism, for instance, prevents us from discriminating on the basis of imagined or demonstrated epistemic capacities, such as limiting the right to vote to those with a certain level of education.
In Zagzebski’s analysis, our investment in both principles stems from the fact that we see their combination – epistemic autonomy – as a correlate of and, possibly, precursor to personal and agential autonomy. In other words: we equate “thinking for oneself” with the possibility of acting independently, for instance, as citizens in a democracy. However, this presents distinct problems when we encounter people whose reasoning seems to lead them to conclusions that are fundamentally different from, and perhaps inimical to, our own. This is what I refer to as the “Free Nose Guy” (FNG) problem.
The Free Nose Guy is a well-known figure of the pandemic. Usually male and White, he wears his mask on his chin or below his nose. What confounds us about the FNG is the fact that he has clearly not chosen to resist public health advice as such. He wears his mask, just not properly. In this sense, the FNG both conforms to and does not conform to expert opinion. It is possible that the FNG wears his mask below his nose because he does not know how viruses are transmitted. In this case, we can try informing him, directing information campaigns, and so on. Alternatively, it is possible the FNG does know how viruses are transmitted, but does not care. Is the FNG ignorant, arrogant, or both? We cannot be sure, so what do we do?
Reactions to the FNG problem seem to conform to the following pattern. On the level of individual judgments, they often take the form of doubting either epistemic self-reliance or epistemic egalitarianism, or both. We may say that FNGs are not, in fact, thinking for themselves: they have been misled – by the media, by political leaders, or both – or that they are just not epistemically capable, i.e., they are stupid or ignorant. Structural solutions to this problem correspondingly address either epistemic self-reliance or the assumption of epistemic equality. For instance, they may seek either to alter the pool of information from which people can draw knowledge – through “fact-checking” or pointing to misinformation/“fake news”, for example – or to highlight the need to minimise space given to those who present opinions that substantially diverge from scientific consensus, e.g. on climate change. The more radical version of these proposals sometimes takes the form of arguing for “epistocracy” – that is, putting the experts in charge of political decisions – or, at the very least, mandating behaviours that adhere to expert judgment, such as making vaccinations obligatory.
I think there are many moral and ethical objections to circumventing or limiting either epistemic self-reliance or epistemic equality. However, this should not detract from the fact that commitment to both leaves us, politically speaking, riding the same bus as the FNGs. In other words, if we grant the FNGs epistemic autonomy, we have to concede that their behaviour leaves us exposed to harm. Thus, we may have to constrain their general or bodily autonomy by, for instance, mandating the use of face masks or making vaccination obligatory. If, however, we want to preserve the FNGs’ right to choose, but also to prevent their choices from endangering everyone else, we may have to constrain their epistemic autonomy: for instance, we may have to block access to sites that promote vaccine hesitancy. Is it possible to solve this problem without endangering either epistemic or liberal democracy?
In a paper from 2011, “Democracy, Public Policy, and Lay Assessments of Scientific Testimony”, political philosopher Elizabeth Anderson addresses this problem. Like me, Anderson believes that developing the capacity of “lay” publics to evaluate scientific knowledge claims is important for preserving the principles of democracy. It is also preferable both to maintaining the status quo (one could argue, even more so at present than it might have been in 2011) and to resorting to “epistocratic” solutions.
One part of Anderson’s solution is to reframe expertise as a question of degree. In this sense, people’s knowledge can be judged in relation to the position they occupy on the scale between complete (presumably ignorant) “layperson” and a recognized scientific leader. In the case of climate change, for instance, the hierarchy of expertise would proceed from “laypersons” to people with a BA in a scientific discipline, to scientists working in a related field, to scientists directly involved in climate change-related research (known as “contributory” expertise), and finally to, say, Nobel Prize laureates in the field. This makes intuitive sense – after all, non-experts should be able to assess scientific testimony, but not all non-experts are in the same position to do this.
The second part consists of establishing a set of procedures that even ordinary, lay members of the public can apply in order to assess expert testimony. These include not only assessing someone’s formal credentials, but also their honesty and epistemic responsibility. Anderson lists a number of tell-tale signs of epistemic dishonesty – including evasion of peer review, dialogic irrationality, or advancing clearly ludicrous theories (one could think of the Flat Earth Society or conspiracy theories linking Covid-19 with 5G towers). Finally, Anderson argues it is relatively easy to establish the existence of consensus among trustworthy experts – when it comes to anthropogenic climate change, for instance, literature reviews and expert statements, including those in the reports of the UN’s Intergovernmental Panel on Climate Change (IPCC), unambiguously confirm human impact on the planet’s climate.
Anderson also proposes some remedies for social conditions that prevent “lay” publics from forming reliable judgments of expert testimony. These conditions include biased media reports, segregation of social networks by partisan affiliation, and “cultural cognition”, defined by Anderson as “a tendency of people to assess risks on the basis of cultural values, and to distrust experts who present testimony inconvenient to those values”. This seems more than pertinent to the present epistemic condition; indeed, Anderson’s more recent work also addresses some of the structural impediments to developing the kind of constructive, respectful discussion that may be expected of citizens in a democracy. Anderson’s solutions address the conditions under which people form beliefs – for instance, exposing them to a diversity of opinions or, as she suggests in her more recent work, forming bipartisan discussion groups where people can find common ground for discussion once they have stepped back from passionate attachment to specific partisan views.
While I think many of these solutions are worth pursuing for, if nothing else, their capacity to generate less polarized debates, I fear that they reflect the philosopher’s tendency to see knowledge and belief primarily as features of individual epistemic agents, and social structures as more-or-less distinct from (though obviously connected to) these. In some cases, this can lead to what social theorist Margaret Archer characterized as the “hydraulic” view of society: lift or press a lever at one “end” – for instance, social structure – and things change at the “other end” (individual behaviour), and vice versa. For instance: “fix” the conditions for debate by making them less partisan or eliminate conspiracy theories from public discourse, and people will be less inclined to support partisan views or believe in conspiracy theories. People who are more open to other political views will, in turn, make for a more democratic or peaceful society, and people who do not believe in Covid-5G-tower conspiracy theories will, for instance, make for a healthier one. In this sense, small “tweaks” to principles of epistemic self-reliance – e.g. removing certain kinds of content from the internet – are meant to create conditions under which epistemic equality does not pose a threat.
I think this shows the difficulty of remaining committed to both epistemic self-reliance and epistemic equality. While the hierarchy of expertise Anderson proposes does not directly violate epistemic equality – as it does not preclude anyone from evaluating expert testimony – its formal version makes for a rather clunky evaluation of epistemic capacity. For instance, it does not account for differences within formal credentials or epistemic positions: in some countries, certain graduate programmes in the social sciences retain a degree of training in the natural sciences, and vice versa. This training is not sufficient to position someone squarely on the “expert” end of the spectrum, but it is far from clear that they should fall into the same “lay” category as, say, someone who has never taken a science class.
Secondly, even formal or procedural assessments of expert testimony are not immune from the manipulative power of “rogue scientists”, who often use their formal credentials to present controversial opinions. Recent cases, such as the appointment of Noah Carl, a proponent of “race science”, to a research fellowship at Cambridge University, show us that expert assessments of expert testimony are susceptible to errors in estimating credibility; in light of this, an average epistemically self-reliant person would be too. Finally, structural remedies – for instance, adjusting the balance of content on news programmes – are always imperfect. The existence of social media and alternative news outlets makes it very difficult to control the kind of information that people can access. In other words, under conditions of epistemic pluralism, we have no way of guaranteeing that exposure to a diverse pool of information will not lead people to conclude that, indeed, they can wear face masks under their chin, or are better off not getting vaccinated. Are we stuck with the FNGs forever?
One of the limitations of this view, in my opinion, is that it does not recognize the multitude of ways in which knowledge or belief can be “social”. This, of course, brings us to the bigger debate in social theory, social ontology, and social epistemology – what do we mean by “social”? Rather than revisit its different aspects, on this occasion I will focus on a facet which I believe we would all agree is irreducibly social: social inequalities. Attending to these, I would like to suggest, is necessary if we are to understand the “lay/expert” distinction: where it comes from, what it depends on, and what it does.
The etymology of both “lay” and “expert” reflects durable assumptions about epistemic hierarchy. “Lay” comes from laos (λαός), the generic ancient Greek term for “people”. Laos, in this sense, precedes demos; it is pre-political. Yet, it is already social. One can only be a “layperson” by virtue of belonging to a community of laypeople – and, importantly, by virtue of not belonging to a different community. This underlines the importance of the social and historical framing of knowledge and expertise, as well as their hierarchies.
In late Antiquity and the early Modern period, “lay” or “laity” became defined by its distinction from clerical orders and their members. While religious traditions vary according to whom they count(ed) as “lay” and whom as clergy, the distinction was important. Although ordained and “lay” members could share ontological, epistemological, as well as moral and ethical assumptions, as a rule lay members devoted less time and attention to these in daily life. “Lay” people, in most contexts, were free to marry, reproduce, consume, accumulate, and occasionally fail to uphold moral or ethical commitments. The role of the clergy, among other things, was to offer advice and guidance (and, sometimes, punishment) on how to remedy these failures, or, at the very least, to set an example of proper belief and conduct through their own lives.
This association between knowledge and conduct persists in the contemporary distinction between “lay” and “expert”. Laypeople are those whose knowledge can be taken as representative of the population at large; a layperson is the epistemic exemplar of “everyman” (or everywoman). They are expected to be ordinarily ignorant (or wrong) about all sorts of things. Experts, on the other hand, are regularly expected to be right, and are supposed to be held accountable when wrong. The slippage between “wrong” (epistemic) and “wrong” (moral) is evident in the contemporary relationship to expertise. It is reflected both in the reverence for experts’ opinions on topics outside their (actual) area of expertise – from Albert Einstein’s views on world politics to Richard Dawkins’ views on other people’s beliefs – and in public reactions to experts’ (supposed) failures to uphold moral commitments (for instance, the outrage with which the UK public greeted media reports that the Imperial College epidemiologist Neil Ferguson, one of the people behind the pandemic modelling that changed the UK Government’s initial approach, met with his lover during lockdown). In these cases, it is quite clear that we are granting authority on moral or ethical matters to people who are, by ordinary standards, as “knowledgeable” or as “ignorant” about these matters as any “lay” person; and conversely, that we are using someone’s position in the epistemic hierarchy in order to argue that they should be held to similarly high standards when it comes to moral conduct.
This association (or confusion) between moral and epistemological judgment has long been the key theoretical question of my work. Why is it, for instance, that we believe critique has both truth-value and normative (or moral) value – that it tells us something about how the world is, and something about how it ought to be? How, and under what conditions, are these two aspects connected? My interest is, in particular, in the political effects of the fact-value distinction. What kind of performative and affective “work” does the distinction between lay and expert knowledges – and our assessment of either – do?
This brings us back to Zagzebski’s link between authority and expertise. Is “trust in science”, then, just the flipside of trust in religious or, for that matter, political authority? In other words, does it relate to the same refusal of epistemic self-reliance and desire for external sources of validation? This would make our Free Nose Guy much closer to those who share Einstein quotes or Neil deGrasse Tyson’s TED talks. In this sense, someone might argue, just as epistemic self-reliance can result in different conclusions and behaviours – from concluding that it makes sense to get vaccinated to the belief that vaccination is risky – so its absence could also take several forms: for instance, I could choose to get vaccinated because a celebrity I admire has done it, and I could refuse to get vaccinated because a religious leader told me not to.
This is not to invite a flat relativism about the truth-value of knowledge claims; it is rather to say that we should be mindful about inferring anything about people’s belief-forming practices on the basis of their behaviour alone. After all, humans are notoriously prone to ambiguity in epistemic matters: we believe that carbon emissions are a problem yet continue to fly, for instance. Literary scholar Steven Connor has recently coined the neologism “epistemopathy” to capture the kinds of affects associated with knowing or not knowing, the “mattering of knowledge rather than the mere matter of its facts”. Relatedly, ignorance, as the social theorist Linsey McGoey has argued, is never simply the absence of knowledge; on the contrary, the capacity to generate and maintain ignorance is a feature of the distribution of social, political and economic power. This is not to reintroduce a different kind of epistemic-moral hierarchy, in which ignorant (but naïve) masses are kept in the dark by the all-seeing (but evil) elites. Instead, I want to draw attention to the role that our judgments of epistemic (in)capacity – our own, and other people’s – play in how we go about living with one another.
Rather than trying to “judge” or assess the FNGs’ epistemic capacities, therefore, I think we could more usefully start by asking what kind of social, political, and interpersonal conditions make it more likely that people would, under certain circumstances, display certain kinds of epistemic capacities, habits, and tendencies. After all, if experts become expert through sustained practice, maybe more time for everyone to spend on broadening their knowledge and learning new things would make everyone a bit more knowledgeable? While not everyone would need to qualify as an expert, neither would people need to remain completely ignorant of everything outside their immediate area of interest. This model, itself inherited from the Enlightenment, exists in the idea of a “Renaissance Man”, as well as some contemporary forms of liberal arts education. Yet this model is not only, as its name suggests, strongly gendered; it is also highly socially stratified.
Contemporary education often entails early specialization, streaming students into “tracks” according to supposed aptitude or expressed preference. Early specialization is usually justified through recourse to employability, but this is itself a classed concern: not everyone, in other words, is under equal pressure to obtain a job right after graduation. Of course, some schools are better at teaching general knowledge, and children and young people from more privileged backgrounds are more likely to acquire it through primary socialization, even if this kind of instruction is lacking in schools. Yet this only reinforces the link between social and epistemic inequality. Rather than deriving a hierarchy of expertise from formal educational credentials, we would be better off designing a free public education system in which basic knowledge about how the world works would truly be accessible to all.
Simply opening up access to knowledge would not suffice, though. After all, as libertarians are fond of arguing, almost all kinds of knowledge are already available on the internet – and yet, many people prefer to use it to post pictures of cats or read conspiracy theories. For this kind of access to truly hold the promise of democratization, two other conditions need to obtain. One is a social context in which people’s knowledge can be subjected to discussion and evaluation, possibly guided by more experienced peers or teachers – with schools and universities, arguably, better positioned to do this than, say, social media. The other is an equitable distribution of labour, one that does not make certain kinds of activities accessible only to those with economic and social capital. After all, people are more likely to take time to learn about things and reflect on their beliefs if time is something they have. It is not equally easy for everyone to go to the Science Museum or read National Geographic: access to activities and materials such as these depends on class, ethnicity, and location, and is heavily skewed in favour of middle- and upper-middle-class individuals and families. A single parent of three working two part-time jobs is less likely to drive an hour to visit an exhibition, even though they may be just as interested, capable, and keen to learn as someone with a permanent, well-paid job and a live-in nanny. In other words, while the conditions for epistemic self-reliance may be there, the societies we live in are in fact very far from egalitarian – epistemically and otherwise.
The Free Nose Guy problem, then, could have just as much to do with the assertion of personal and political autonomy as with epistemic autonomy. FNGs may not be able to choose their working hours, their commute, their kids’ school, or their social background, but they can choose in what – and in whom – they place their epistemic trust. In deeply exploitative societies, where conditions for equality, solidarity, and cooperation are quickly deteriorating, this is often among the few kinds of autonomy they have left.
This essay has argued that the question of why some people find it hard to believe expert opinion, even when it concerns matters of overarching consensus such as climate change or the spread of Covid-19, rests on another big question of liberal democracy: how we assess and evaluate other people’s belief-forming habits and capacities, and, in particular, how this connects to social inequality. Trust in experts, in other words, cannot be discussed without considering the political, economic, and social effects of epistemic hierarchies, including the lay/expert dichotomy itself.
Too big a gap between “lay” and “expert” knowledges creates a public equally susceptible to “expert capture” – that is, over-reliance on expert opinion, especially when it entails highly technical knowledge – and to different kinds of pseudoscience, including conspiracy theories. This is why the phenomenon of FNGs can be compatible with both libertarianism and authoritarianism. Defiance of expert authority is often no more than a consequence of a desire for stricter, and more absolute, forms of authority. Those among us who dream of a return to the epistemic authority of experts should be mindful of the tendency to confuse the two.
Jana Bacevic is assistant professor at Durham University, UK, and member of the editorial board of this journal. She is also a member of the Cambridge Social Ontology Group. Her work is in social theory, philosophy of science, and political economy of knowledge production, with particular emphasis on the relationship between epistemological, moral, and political elements. She tweets (@jana_bacevic) and occasionally blogs at janabacevic.net. She is vaccinated.