
"Trust, Expertise and Hostile Epistemology": A Conversation with C. Thi Nguyen (Keywords: Science; Pseudoscience; Vulnerability; Transparency; Metrics)



This conversation was taken from our recent book, Science, Anti-Science, Pseudoscience, Truth, edited by Anthony Morgan. If you enjoy reading this, please consider becoming a patron or making a small donation. We are unfunded and your support is greatly appreciated.



A key vulnerability for cognitively limited beings such as ourselves arises from trust. Much of the current misinformation crisis seems to derive from misplaced trust – trust in anti-science celebrities, trust in conspiracy theory forums and propagandistic media networks. We rely on each other to navigate the world, but this trust can be exploited even when we have done our due diligence. In this conversation, C. Thi Nguyen discusses his idea of “hostile epistemology”, which examines how environmental factors exploit our cognitive vulnerabilities. As finite beings with limited cognitive resources, we constantly reason in a rush due to overwhelming information, leaving gaps that can be exploited. Given this, how can individuals with limited understanding determine which group to trust?

***

Johnny Brennan (JB): The idea of hostile epistemologies suggests that our reliance on cognitive shortcuts, while essential for daily life, can leave us vulnerable to exploitation and errors. What sparked your interest in this topic? Was it a reaction to readings in social epistemology, or did a personal event or realization lead you to this perspective?


C. Thi Nguyen (CTN): When I teach epistemology, I often start with Descartes’ First Meditation, where he realizes his beliefs are infested with lies and decides to discard everything and start anew. But this radical Cartesian approach of throwing everything away and starting again is something we push back against in social epistemology. During graduate school, I became obsessed with the crucial question of how a non-expert chooses which expert to trust (an issue that actually dates as far back as Socrates). Many in philosophy consider this problem solvable, but truly understanding the difficulty of managing trust in experts outside one’s specialty requires a major shift in thinking.


Two readings fundamentally changed my views on epistemology and philosophy. The first, Elijah Millgram’s 2015 book The Great Endarkenment, argues that hyper-specialization is the essential problem of our era. It claims that no single person can master complex, cross-disciplinary arguments. And that is what life in the world of the sciences is like. For example, explaining why you are taking antibiotics, without simply appealing to trust in your medical practitioner, requires very specific biological expertise.


The other one, a 1986 paper by Annette Baier called “Trust and Antitrust”, is foundational in the trust literature. A feminist epistemologist, Baier critiques the notion that morality begins with free, equal, powerful individuals voluntarily agreeing to enter into association. She literally says that is something that only rich guys in a gentlemen’s club could have ever considered plausible. Instead, for Baier, moral life starts in vulnerability. What it is to trust someone is to make yourself vulnerable by putting something in their power, and in particular putting something in the power of their good will.


What it is to trust someone is to make yourself vulnerable by putting something in their power, and in particular putting something in the power of their good will.

This idea makes many people nervous. Once, a student in my class claimed he never trusted anyone because it makes him vulnerable. I asked him how he got to school, and he said he drove on the highway. I pointed out how many people he trusted with his life in that short drive: other drivers, brake mechanics, scientists who developed the brake fluid formulations, and structural engineers. When you think about it, you realize the countless people you trust daily, which can be dizzying and nauseating.


JB: So, both Descartes and the moral philosophers Annette Baier critiques confront problems individually, believing they can solve them through foundationalism, tearing down and rebuilding knowledge from the ground up alone. In contrast, Baier embraces vulnerability. From a Cartesian perspective, this individual approach is optimistic, while leaning into vulnerability carries a kind of pessimism, since a litany of problems stands in the novice’s way as they seek to navigate the world of trust and expertise.


CTN: It is only pessimistic if you fantasize about being an independent individual who doesn’t need to trust anyone and can be wholly self-contained. And the fact is that we are plunged into this world where we have to trust that people are going to feed us, we have to trust that they are not going to kill us – these are acts laden with trust. This can be seen pessimistically in terms of the loss of some crucial sense of independence or, alternatively, it can be seen as an embrace of deep sociality, but either way, trying to manage our trust is a terrifying predicament.


Whether you are optimistic or pessimistic, Cartesian or Baierian, hinges on a technical question: can you have good reasons for, and confidence in, your selection of which experts to trust? I believe there is a presumption that you can.


For me, it is clear that I could not easily vet the trustworthiness of specialists like immunologists or climate change modellers, since their expertise lies in fields about which I know almost nothing. Faced with this, there are two main approaches: some believe there is a core intellectual virtue that is visible without needing expertise, allowing us to know who to trust. This is a kind of intuitive trust in the authority and legitimacy of an expert. The other approach relies on large-scale institutional credentialing, like trusting publications in the New England Journal of Medicine. This approach shifts trust to a network of institutional decision-making rather than managing it directly.


JB: This process of institutional credentialing serves as a powerful signal of trustworthiness, often standing in for rigorous verification. However, determining what makes an institution good in the first place involves numerous complexities, including assessing the legitimacy of the institution’s accrediting bodies, and so on.


CTN: There is a 2016 paper by Paul Smaldino and Richard McElreath called “The Natural Selection of Bad Science”. It is a computer modelling paper, and the idea is that there is a gap between what produces good science and what gets you published more often. Essentially, if you lower your rigour standards, you will get more positives and in turn more publications, while being more rigorous will result in fewer publications. From this, they note that high-status jobs go to those with more publications, and, in turn, graduate students tend to imitate these high-status individuals. The paper explores how quickly this degrades the quality of science, and the answer is: very quickly.
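To make the dynamic concrete, here is a minimal simulation sketch in the spirit of Smaldino and McElreath’s model – a drastic simplification, with parameter values that are my own illustrative assumptions rather than anything from their paper. Labs that lower their rigour publish more, publication count confers status, and new labs imitate high-status labs, so mean rigour collapses within a few generations:

```python
import random

# Toy sketch in the spirit of Smaldino & McElreath (2016), drastically simplified.
# All parameter values here are illustrative assumptions, not the paper's.

N_LABS = 100
GENERATIONS = 50

def publications(rigour):
    """Low rigour yields many (mostly false) positives, hence more papers."""
    return (1 - rigour) * 10 + rigour * 2

labs = [random.uniform(0.5, 1.0) for _ in range(N_LABS)]  # initial rigour levels

for gen in range(GENERATIONS):
    # Publication count confers status: the most-published half survives...
    labs.sort(key=publications, reverse=True)
    survivors = labs[: N_LABS // 2]
    # ...and incoming labs imitate the survivors, with small copying noise.
    offspring = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
                 for _ in range(N_LABS // 2)]
    labs = survivors + offspring
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mean rigour = {sum(labs) / len(labs):.2f}")
```

Run it and mean rigour drops toward its floor within a handful of generations – no individual lab has to intend anything; selection on the publication signal does all the work.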


In any social institution, certain signals are used to confer social status and power. If there is any gap between the signal of quality and actual quality, we should assume that those who target the signal over the actual quality will gain social power faster.

If we use institutional proxies like citation rates or publication rates to determine which scientists or academics to trust, these metrics can be gamed and don’t always reflect quality. In any social institution, certain signals are used to confer social status and power. If there is any gap between the signal of quality and actual quality, we should assume that those who target the signal over the actual quality will gain social power faster. This freaks me out because it suggests an inevitable force exploiting that gap between the signal and reality.


When discussing institutional trust, transparency is important. In her BBC Reith Lectures on trust, Onora O’Neill suggests that trust and transparency are often seen as aligned, but they are actually in tension. Transparency requires experts to explain themselves to non-experts, which is challenging because expert reasoning isn’t easily transferable. This leads to deception or the invention of fake reasons. In my paper “Transparency is Surveillance”, I argued that the situation might be even worse, as experts might limit their actions to what can be publicly justified, and in so doing, compromise their expertise. I concluded that transparency mechanisms can work to uncover bias and corruption, but they are fundamentally rooted in distrust. They arise from concerns about, say, politicians mishandling public funds, leading to a need for monitoring. However, this essentially conflicts with our need to trust experts. Requesting transparency from experts means asking them to justify their actions in understandable terms, but the whole point of using an expert was to be able to trust someone who understood something you didn’t.


I’m interested in the inherent tension between needing to trust experts to do things we cannot understand, which puts us in this very vulnerable situation, and then cutting them off from their expertise by trying to force them to conform to our understanding. This tension isn’t accidental; it stems from our cognitive limitations. Each of us sees a little patch of the world and then somehow has to link them up correctly, but we have to do the linking from inside our limited patch.


Each of us sees a little patch of the world and then somehow has to link them up correctly, but we have to do the linking from inside our limited patch.

JB: If the demand for transparency is rooted in distrust, could we be mistaking the proper target of that distrust when it comes to expertise? Meena Krishnamurthy’s paper “(White) Tyranny and the Democratic Value of Distrust” discusses how distrust of institutions is a good thing, as it can serve as a check against institutional power, especially in preventing tyranny. However, while this vertical distrust between institutions and citizens might be seen as healthy, applying the same distrust to demand transparency from experts could be a misapplication of it. Do you see them as one and the same?


CTN: I think it is really hard to be sure when you are misapplying or correctly applying distrust because of the complex epistemic puzzle we are involved in. The claim is never simply that we should trust more, or trust without limit: the world is full of bad actors and malicious individuals, and we need to be on guard against them. That said, without trust we couldn’t rely on experts, science, or even basic services. Total distrust is impractical, but total trust invites corruption, bias, and domination by bad actors. The problem is that expertise often operates out of our sight, creating a deep trust issue. We can trust a babysitter because their actions are observable and not a form of arcane expertise. However, trusting the statistical modelling of climate data produced by experts, whose work we cannot directly observe, is different.


Individuals still face the challenge of choosing which communities to trust. And properly vetting a community requires knowledge that is beyond an individual’s capacity.

If the scope of scientific knowledge is so vast and the world is so complex, the only way to generate understanding and knowledge is as a huge collective unit. Some philosophers accept this and conclude that knowledge is social, but individuals still face the challenge of choosing which communities to trust. And properly vetting a community requires knowledge that is beyond an individual’s capacity. We can rely on some signs to help us, but the essential issue is the limitations of our minds. These limitations necessitate trust in others, yet we must manage this trust itself with our limited understanding (and limited time).


JB: Within the hostile epistemology framework, you highlight how bad actors can exploit cognitive vulnerabilities. How much should we focus on these bad actors versus the effects of our cognitive vulnerabilities simply doing what they do, with no malicious intervention? Your approach seems to shift blame from individuals’ shortcomings to external bad actors, but I’m wondering whether we can eliminate blame altogether, acknowledging that these issues arise naturally from our inherent cognitive limitations.


CTN: In a paper I wrote called “Seductions of Clarity”, I explored the idea that something can be too clear. My theory is about the feeling of understanding. Some social epistemologists and philosophers of education and science believe that understanding isn’t just grasping individual facts but seeing a thing as a whole, appreciating its explanatory power, and being able to communicate it.


Imagine someone wanting to game your sense of understanding, wanting to build something that felt like understanding. They would maximize its comprehensiveness, explanatory power, and communicability. This resembles a conspiracy theory – a simple, clear model of the world. Such a model is a powerful engine of seduction, but it doesn’t require intentional manipulation. Oversimplified models can arise for various reasons.


Understanding isn’t just grasping individual facts but seeing a thing as a whole, appreciating its explanatory power, and being able to communicate it.

Picture a complex, nuanced model alongside a simpler one created by someone without the right expertise. The simpler model transmits more easily. This leads to the quick spread of simpler models over more nuanced ones, without any hostile actors. The result is fast propagation of easily comprehensible models and slower propagation of difficult, expertise-dependent models.
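As a toy illustration of that differential propagation (entirely my own sketch, with assumed transmission probabilities rather than anything from the conversation): if a nuanced model survives each retelling only a fraction of the time, while failed retellings degrade into the simple version, the simple model takes over within a few rounds, even when nearly everyone starts with the nuanced one.

```python
import random

# Toy transmission model: nuanced ideas pass on less reliably than simple ones.
# The transmission probabilities are assumptions for illustration only.
P_FAITHFUL = {"nuanced": 0.2, "simple": 0.6}

population = ["nuanced"] * 90 + ["simple"] * 10  # nuanced model starts dominant

for round_num in range(1, 11):
    next_gen = []
    for _ in range(100):
        teller = random.choice(population)
        # A failed retelling degrades into the easy, simple version.
        faithful = random.random() < P_FAITHFUL[teller]
        next_gen.append(teller if faithful else "simple")
    population = next_gen
    print(f"round {round_num:2d}: simple model held by {population.count('simple')}/100")
```

No hostile actor appears anywhere in the loop; the asymmetry in transmissibility is enough.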


JB: What effect does hostile epistemology have on intellectual self-trust, for example, trusting our sensory faculties to be accurate and not deceive us? Should we be concerned about our ability to trust ourselves and our intellectual capacities in that sense?


CTN: In my “Seductions of Clarity” paper, another example of seductive oversimplified clarity is bureaucratic metrics. To provide context, there is a field called science and technology studies (STS), rooted in philosophy of science and now interdisciplinary, that explores this topic.


Theodore Porter’s 1995 book Trust in Numbers, a foundational text in the STS field, explains why institutions and politicians favour quantitative over qualitative justifications, even when the former are flawed. One reason is that politicians use numbers for the pretence of objectivity, so that they can disclaim decision-making and look like they are not responsible. In some cases, this might be true, but Porter offers a deeper explanation. He argues that qualitative information is rich, dynamic, and context-sensitive, but it doesn’t travel well between contexts because it needs shared background knowledge to be understood. Quantitative data, however, is made context-invariant and standardized, making it easily portable between contexts. This is known as the “portability theory of data”. This doesn’t mean that data is false; rather, it means that it is made to be understandable without context. And that gives it a competitive advantage in being taken up and understood.


The concern is that people often trust metrics over their internal sense. The philosopher of food Megan Dean provides a striking example from the psychology of food. Restraint theory explains why lifelong dieters experience a cycle of dieting and bingeing. The theory argues that these dieters focus so much on external nutrition metrics that they lose their inner sense of satiety. Without these metrics, they can’t tell if they are full or not.


JB: By outsourcing trust too much to the portability of data and numbers, we starve our sense of self-trust and our ability to trust our own intuitions.


CTN: Exactly. This can be seen as a loss of self-trust or as extending our mind to external evaluative objects. For instance, using a Fitbit means trusting it as your evaluator. The key issue is the displacement of trust from your own judgment to external, large-scale institutional evaluators, which lack sensitivity to individual values. This shift happens without any hostile actors, driven by the appeal of transparent, portable metrics. This erodes self-trust and reliance on personal evaluation, replacing them with institutional resources.


The key issue is the displacement of trust from your own judgment to external, large-scale institutional evaluators, which lack sensitivity to individual values.

That said, trusting your own ability to evaluate complex issues like climate change over the science is problematic. As philosophers, we find ourselves in the peculiar situation of striving to promote intellectual autonomy while also telling people to trust the science that they cannot fully understand. The challenge is to figure out how to be autonomous and responsible in managing that trust.


JB: I am interested to hear your thoughts on the relationship between hostile epistemology and pseudoscience/anti-science.


CTN: For me, this aligns with the seductions of clarity idea. Hostile epistemology preys on our need for shortcuts due to time pressure. This use of heuristics and shortcuts is perfectly rational, but it can be exploited. And pseudoscience often exploits this vulnerability. The paradox for someone like me, who trusts science, is that I cannot neatly explain most of my reasons for this trust. When I try, it often turns out to be more complex and less clear than expected.


But someone following pseudoscience can often provide a clear, sweeping explanation. In contrast, actual experts offer nuanced, complex answers. An interesting empirical study on jurors found that they tend to trust expert witnesses who make clear, confident, unqualified statements about everything, while the actual experts tend to emphasise complexity, various possible conditions, and so on. In fact, juror trust was found to be inversely correlated with the expertise of the witness! Pseudoscience thrives on this clarity and confidence, making it more appealing despite its inaccuracy.


It is easy to say “trust the science, not the pseudoscience”, but the real issue is when you realize that some of the sciences are not being conducted well. When you become aware of things like the replication crises in the social sciences, or the use of flawed statistical methods to game the publication process, this makes it difficult to know what to trust.


We trust science in order to avoid pseudoscience, but parts of science suffer from bad methodology, profiteering interests, and status-seeking.

Paul Smaldino’s research highlights that many studies in sports medicine use poor statistical methods, leading to numerous false positives. This allows those who produce low-quality studies to dominate the field, as the flawed methods are widely accepted and not well understood. And then there is a huge amount of justifiable suspicion regarding research on drug efficacy, with funding from drug companies being a significant predictor of the reported efficacy of the drug. We trust science in order to avoid pseudoscience, but parts of science suffer from bad methodology, profiteering interests, and status-seeking. And now we need to decide how to live, which medications to take, and so on. It is a situation that can make your head spin!
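A quick back-of-the-envelope calculation shows how weak methods flood a field with false positives. The numbers below are illustrative assumptions, not figures from Smaldino’s research: with underpowered studies and a low base rate of true hypotheses, most “significant” findings come out false.

```python
# Illustrative false-discovery arithmetic; every number here is assumed, not measured.
n_hypotheses = 1000
base_rate = 0.10   # fraction of tested hypotheses that are actually true
power = 0.35       # chance an underpowered study detects a true effect
alpha = 0.05       # significance threshold: false-positive rate per null test

true_effects = n_hypotheses * base_rate          # 100 real effects
null_effects = n_hypotheses - true_effects       # 900 null effects

true_positives = true_effects * power            # 35 genuine discoveries
false_positives = null_effects * alpha           # 45 spurious "discoveries"

fdr = false_positives / (true_positives + false_positives)
print(f"{fdr:.0%} of significant results are false")   # -> 56%
```

Under these assumptions, more than half of the positive results a reader encounters are spurious, and the arithmetic rewards the lab that runs more, weaker tests.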


In many ways, the problem of the modern era is the problem of trust management. When the world feels beyond your control, even defining the problem becomes challenging. I don’t know what to do. I don’t know how to live. When faced with these incredibly complex questions about which institutions to trust, which science to trust, and so on, the sense of security I feel in knowing what to believe often disappears, and I have no replacement. This can leave me feeling insecure and confused.

 

Further Resources

 

  • Annette Baier, “Trust and Antitrust”. Ethics (1986)

  • Johnny Brennan, “Trust and the Plea for Recognition”. The Philosopher, 122:1 (2024)

  • Elijah Millgram, The Great Endarkenment: Philosophy for an Age of Hyperspecialization. Oxford University Press (2015)

  • C. Thi Nguyen, “Hostile Epistemology”. Social Philosophy Today, 39 (2023)

 


