The Weight of Forever



When confronted with a best-selling book by an up-and-coming Oxford philosopher, it’s hard to resist the urge to subject the title to a little linguistic analysis. Building on the success of his previous book, Doing Good Better, which introduced the wider public to the idea of effective altruism – optimising our positive impact on the world through scientific philanthropy – William MacAskill’s new book What We Owe the Future introduces us to the idea of longtermism, which extends the scope of such optimisation to cover our impact upon the far-flung future. But, giving in to our inner pedants, we might be inclined to ask: is “the future” the sort of thing to which one can “owe” anything? We generally take ourselves to have obligations to people, mostly others and sometimes even ourselves. We aren’t typically indebted to things, be they inanimate objects or (vast) stretches of time. I’m not sure I owe the 21st century anything, literally speaking, let alone the long aeons that follow.


It’s thus perhaps unsurprising that the book begins by inviting us to personify the future: we are to imagine that we are everyone who has existed, exists now, or will exist, living each life out in order, from prehistory until the end of time; and from our vantage in the present consider the quality of those lives still to come, the fate of humanity in aggregate now a singular destiny. From this perspective, the overwhelming majority of our life lies before us, and the choices we make now will determine just how good the rest of it will be. What we owe to the future is thereby framed as if it were an obligation to ourselves. Sheer self-interest would dictate that we get our act together, get our priorities straight, and cultivate the capacity for (extremely) delayed gratification. Of course, this is only an allegory, not an argument, but it shades into one of MacAskill’s guiding metaphors: “humanity as an imprudent teenager” reaching the age at which it needs to decide what it’s going to do with itself – getting an education, choosing a career, and perhaps settling down in a nice galactic supercluster to raise a few quadrillion kids as extensions of itself. But that’s skipping ahead to the end. For now, let’s return to the beginning, bearing the problem of obligation in mind.


In essence, the book sets out to do two things: to lay out the philosophical framework that motivates longtermism (principally chapters 1, 2, 8, and 9), and to sketch the political program this framework implies when combined with empirical research into the current state of the world and its likely future trajectories (chapters 3 to 7, and 10). MacAskill’s prose is light and easy to follow. Technical vocabulary is kept to a minimum, and is always explained when introduced, with judicious use of concrete examples. There are, however, a lot of numbers, ranging from complex meta-analyses of social trends with accompanying charts and graphs (e.g., standard of living changes in the US: 1870–2020) to fantastical figures that seem more like convenient back-of-the-envelope calculations (e.g., the moral significance of terrestrial species ranked by aggregate mass of neurons). It’s perhaps easier to feel lost in these sections, though their workings are available to be checked, alongside the other domain-specific research carried out by the team of experts that helped MacAskill compose the book. What ties this all together is a touch of historical narrative. The more empirically-minded chapters begin by taking paradigm cases from the past that illustrate the point in question: ranging from the Chinese Warring States period (the persistence of value changes), through the Islamic Golden Age (the cost of societal stagnation), to the birth of the Abolitionist movement (small groups can effect big changes), and the creation of the Spaceguard defence program (the tractability of x-risks). These episodes set the rhetorical pace of the book, conjuring a sense that the shape of future events is equally legible.


In what follows I’m going to bypass such narratives. Instead, I’ll try to reconstruct the overall argument of the book in as much detail as is reasonable, before turning to the task of assessing it. In the first two sections, I’ll outline the philosophical framework and its corresponding political program. In the third and fourth sections, I’ll summarise the most coherent common objections to the positions these incorporate, and then propose a more original criticism of the overall perspective of longtermism. Finally, I’ll conclude by returning to the opening metaphors.


***


The Longtermist Framework


The philosophical framework of longtermism rests on two seemingly innocuous propositions: that future people count, and that numbers matter. The first claim is that whatever moral concerns apply to people alive in the present should also apply to those who do not yet exist, and that this holds no matter how far into the future they might be. MacAskill motivates this in two ways. On the one hand, he appeals to our intuitions about the moral significance of harm: if we leave broken glass on a mountain trail and this results in a little girl being hurt, we are culpable regardless of whether this injury occurs the next day or in the next century. On the other, he suggests that distance in time is morally equivalent to distance in space: our prima facie obligations to those who live in distant times are more or less the same as our obligations to those who live in distant lands. The second point is that, when weighing competing claims that such obligations make upon us, we often have to turn to numbers (e.g., selecting the course of action that will harm the fewest people), and that whatever moral calculus we choose to apply in these cases must include the claims of the unborn (e.g., selecting an action that may harm some people in the present if it spares more people in the future). MacAskill is willing to countenance that future claims may be discounted relative to present ones for a few reasons, but these discounts are ultimately irrelevant. This is because the number of people who may follow in our footsteps is so vast as to verge on the sublime. MacAskill illustrates this, literally, by filling a few pages with icons representing a billion people each, and explaining that 20,000 pages would be required simply to cover the projected population of earth over the next 500 million years. Even with steep per capita discounts, this leaves a large range of cases in which the claims of the future easily outweigh those of the present. 
It’s precisely these cases with which longtermism is concerned.


The immediate problem posed by all this is how to perform the relevant moral accounting, and for this task MacAskill turns to decision theory: a formal approach to making choices that works by ascribing quantities of value to their intended outcomes, such that the options can be ranked. More specifically, he advocates making choices based on expected value: the product of an outcome’s potential value and the probability of it coming to pass. This inclines us to aim for worse outcomes that are significantly more likely than the better alternatives (e.g., amputating a limb, rather than hoping an infection won’t spread), and unlikely outcomes that are significantly better than the likelier ones (e.g., buying a lottery ticket, rather than a third packet of crisps). Finally, he breaks down the value of every outcome into three components: significance, persistence, and contingency. Significance is simply whatever is good about the outcome (e.g., providing shelter to a homeless person), persistence is how long this good persists (e.g., is the shelter temporary or permanent?), and contingency is roughly how rare this good is, or how unlikely it would be to occur if we don’t choose to realise it right now (e.g., would someone else provide it if we don’t?). Each factor is weighted by expectation separately before they are summed together. This is the abstract framework that MacAskill applies to the concrete political problems posed in the middle of the book.
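The mechanics described above can be made concrete in a few lines of toy code. Everything here – the outcome names, probabilities, and value scores – is my own illustrative invention, not a calculation from the book; it is only meant to show how weighting by expectation makes a likelier-but-worse option beat a better-but-riskier one, and how the three-component decomposition works.

```python
# A toy sketch of expected-value choice in the sense described above.
# All names, probabilities, and scores are invented for illustration.

def expected_value(prob, value):
    """An outcome's value weighted by the probability of it coming to pass."""
    return prob * value

# The amputation example: a worse outcome that is much more likely
# can carry higher expected value than a better but riskier one.
amputate = expected_value(prob=0.95, value=60)       # survival, minus a limb
wait_and_hope = expected_value(prob=0.40, value=100)  # full recovery
assert amputate > wait_and_hope

def spc_value(components):
    """Sum an outcome's components - significance, persistence,
    contingency - each weighted by its own expectation first."""
    return sum(prob * score for prob, score in components)

# The shelter example, scored along the three components.
shelter = spc_value([
    (0.9, 10),  # significance: the good of housing someone
    (0.5, 8),   # persistence: chance the shelter is long-lasting
    (0.7, 5),   # contingency: chance no one else would provide it
])
```

On these made-up numbers, the certain-but-diminished outcome wins, which is exactly the structure of the amputation case in the text.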


You may have noticed that what was originally framed in terms of obligations to act in certain ways (i.e., the rights of future people not to be harmed) has somehow become a matter of the relative worth of states of affairs (i.e., whether a world in which future people’s lives are improved is better, overall). To put this in more technical terms, we seem to have slipped from deontology into axiology. These are to some extent interchangeable: to say that one painting is better than another appears to imply that, all else being equal, if you can only save one from the flames, you ought to choose the former; and it seems natural to say that an obligation to sacrifice one life for the sake of ten implies that the ten matter more than the one. However, this only works up to a point. Some conflicting obligations invite obvious quantitative comparisons (e.g. saving more or fewer lives), but others don’t (e.g., breaking a promise or preventing a harm). There are equally different types of value (e.g., beauty, justice, or utility), which motivate different sorts of actions (e.g., appreciation, coercion, or acquisition), and permit extensive and fine-grained comparisons between states/acts without thereby being mutually comparable. It would theoretically be possible for MacAskill to wield decision theory on a case-by-case basis, making use of local metrics that rank certain types of state or event without committing himself to a global metric that makes every outcome comparable, but this is clearly not what he intends. He wants to consider the future as a whole, in order to chart the best possible path through it, and this means embracing some measure of absolute value, or goodness simpliciter.
Precisely what this measure might be is not addressed in the more concrete sections of the book (chapters 3-7), where a combination of liberal values (egalitarianism and econometrics) play proxy for it, but it eventually comes out into the open (chapters 8 and 9), just in time to motivate the book’s most controversial claims.


MacAskill wants to consider the future as a whole, in order to chart the best possible path through it, and this means embracing some measure of absolute value, or goodness simpliciter.

I am of course speaking of well-being. This is one of those philosophical concepts we seem to have an intuitive grasp of: we’re more or less happy talking about more or less happy lives. But shaping these intuitions into a coherent and precise measure of absolute value is a tricky matter. MacAskill mentions the three main approaches – preference satisfaction (getting what we want), hedonism (getting what we enjoy), and objective lists (getting what is good for us) – but stays agnostic as to which is correct. The only constraints he imposes on well-being are that it is quantifiable, and that it can be negative as well as positive: there may be some lives which are not worth living. However, this doesn’t mean he says nothing about how we might measure well-being. A significant chunk of chapter 9 is devoted to various methods of measurement, all in the name of establishing whether the world is currently in a state of net-positive well-being (it is), and so whether or not we should aim to preserve the core features of our current way of life (we should). This is also where the narrow focus on human well-being gets widened to encompass the lives of animals, with some surprising conclusions: human use of domesticated livestock is an unmistakable negative for well-being, but human eradication of natural animal populations is probably a net-positive, given how miserable their lives must be on average. MacAskill elsewhere considers the possible contributions of aliens, our extinct hominin relatives, and the post-human progeny we may someday give rise to. But beyond these calculations, the key philosophical claim is that whatever else we may take to be valuable in some way (e.g., art, nature, knowledge, or liberty) is only valuable as a means to the end of well-being, whatever it consists in.

The really controversial claims, however, are made in chapter 8, which discusses the field of population ethics: whether, why, and when it is right to bring new people – new bearers of well-being – into the world. Here MacAskill sets out to establish that “all other things being equal, having more happy people makes the world a better place.” (169). He opens by claiming that every consistent position currently defended in the field entails counter-intuitive and potentially unpalatable conclusions, but we must nevertheless choose between them. The first option is the average view: we should raise the average well-being of the population. The downside of this is that it might encourage us to create a lot of people in outright suffering, as long as they suffer less than the current average. The second is the total view: we should raise the overall well-being of the population. The downside of this is what Derek Parfit called the repugnant conclusion: that it might encourage us to create an arbitrarily large number of people with marginally positive well-being rather than any smaller number of people living genuinely good lives. The final option is the critical level view: we should ensure that the well-being of the population’s members exceeds whatever threshold passes for a good life. The downside of this is the sadistic conclusion: that it might encourage us to create a few people in abject misery rather than a larger number of people who live lives just shy of good.


It’s important to see that each of these options depends upon a peculiar moral symmetry, captured in the following inference: if making new people with unhappy lives is bad (obvious), then making new people with happy lives must be good (contentious). It seems that the best way to avoid the unpalatable choice between the above options would thus be to argue that these situations are really asymmetrical: it can be bad to create a miserable person and yet be neither better nor worse to make a happy person than no person at all. MacAskill calls this “the intuition of neutrality”, and presents two distinct arguments against it. His first argument proceeds from the fragility of identity, or the fact that slight changes to the context of conception result in entirely distinct people being born. Because any long-term intervention will affect these contexts, neutrality would imply that all such interventions are equally worthwhile. To put this in more concrete terms, if we enact a climate change action plan to make the future better for those who follow us, the people who will exist in the future will be distinct from the people who would have existed otherwise, and so it could not strictly be better. To retain long-term moral distinctions, we must reject neutrality. His second argument goes as follows: say a couple are considering having a child, but the mother has a vitamin deficiency that will result in a child who suffers from migraines if conceived before taking supplements. They thus have three options: a) no child, b) a child with migraines now, or c) a child without migraines later. Neutrality implies that options (b) and (c) are both as good as option (a), but that in turn implies that they should be equally good. This contradicts the obvious moral imperative to choose (c) over (b), all else being equal. So again, neutrality must be rejected.
If you’re suspicious of these arguments, I think you’re right to be, but I’ll hold off any criticisms for now.


Out of all the available options, MacAskill accepts the total view, repugnant conclusion and all. But that’s only the beginning of the controversy. There would appear to be two major consequences of his positions: in the short-term, that we should have kids; and in the long-term, that we should colonise space. There’s reason to be fruitful and multiply at whatever scale we can manage. The grandest possible scale puts those putative 20,000 pages to shame. We would need massive libraries packed with nothing but weighty tomes just to illustrate the extent to which the moral claims of a future space-faring civilisation outweigh the meagre concerns of the present. However, these two cases are treated somewhat differently. On the one hand, MacAskill doesn’t claim that “we are morally required to bring more happy people into existence, or that we’re blameworthy if we fail to do so – just that, all other things being equal, having more people makes the world a better place” (169). Yet, on the other, he does think that “the practical upshot of this is a moral case for space settlement” (189). What exactly is going on here?


When it comes to the political domain, longtermism takes a more aggressive approach to its conflicts with common sense.

Well, MacAskill seems to be proposing a compromise of sorts between the deontic and the axiological. This is not the only instance of such compromise. Throughout the book he is at pains to insist that longtermism is compatible with dominant liberal values, and that it can’t be used to make cases for violating individual rights, including our right not to procreate, should we so choose. This sometimes comes across as a mere rhetorical compromise, designed to placate the worries of potential sceptics, but there are at least two arguments offered for it. The first isn’t articulated in a single place, but is more of a thread that runs throughout the book. It suggests that liberty is instrumentally important from a consequentialist perspective, both as a component of personal well-being and as a structural feature of societies that optimise for it in the long-term. The second is articulated, and draws on MacAskill’s extant work on moral uncertainty (see, for example, his 2020 book, Moral Uncertainty, co-authored with Krister Bykvist and Toby Ord). It suggests that if we are not completely certain that consequentialism is true, and thus that there is some significant chance that liberty is an intrinsic good, then we should aim to act in ways that are as consistent with this as possible. It is on this basis that he can write: “I see longtermism as a supplement to commonsense morality, not a replacement for it” (241). However, when it comes to the political domain, longtermism takes a more aggressive approach to its conflicts with common sense.


***


The Longtermist Programme


The political programme of longtermism is founded upon a singular speculative thesis: that we are currently at a crux point in history, where our actions may have an outsized influence upon the shape of everything that comes after. We’re in the middle of a period of rapid social and technological change, yet at the same time we’re more interconnected as a species than we’ve ever been before. MacAskill argues that this means our society is in a state like molten glass: easily shaped into new configurations that, once settled, might persist for an extremely long time, influencing the lives of untold trillions across a vast swathe of space. We thus have a responsibility to ensure that it ends up in the right configuration, which first and foremost means avoiding the wrong ones. The worst outcome would be our complete extinction. This would not only eliminate whatever good we currently bring to the world, but would foreclose the possibility of those far greater goods of which we’re capable. Once we account for this loss of potential, extinction events are so much worse than even near-extinction events that tiny probabilities demand decisive action to mitigate them. MacAskill admits that there might technically be even worse states than extinction, such as societies designed to perpetuate endless torture, but he thinks that these are vanishingly unlikely, because they require unsustainably malign intent.


The next worst outcome would be societal collapse. MacAskill defines this as the loss of the capacity to produce industrial and post-industrial technology. This is a fall from grace we might come back from, but we might equally never recover, or slip still further into extinction. The good news is that, according to MacAskill’s experts, civilisation is a good deal more resilient than you might expect: even under very grim assumptions about mass death and resource depletion, agriculture and industrialisation are likely to re-occur in the short- to medium-term. He is even quite upbeat about catastrophic climate change: “even with 15 degrees of warming, the heat would not pass the lethal limit for crops in most regions” (137). The main danger there is unforeseen tipping points (e.g., collapse of cloud formation) that commit us to unsurvivable warming once a certain threshold is passed. The main risk to the rebirth of technological civilisation turns out to be lack of fossil fuels. If we burn through them before society collapses, there may not be enough left to bootstrap a new industrial base. The longtermist prescription is thus that we must stop burning them, not only to prevent climate change, but also to give ourselves a second chance should society ever collapse.


Beyond this, there is societal stagnation. MacAskill analyses this principally in economic terms, focusing on economic growth as a metric of progress, and identifying technological development as the key factor driving growth. Stagnation is inherently bad, insofar as it represents lost potential, though in the cosmic scheme of things this may be little more than a blip on the path to interstellar civilisation. However, there are graver risks associated with it. A given technological level is not always sustainable in the long run (e.g., fossil fuel-based industry), and so stagnation eventually risks regression, which might then lead to collapse and even extinction. MacAskill notes that technological progress has been slowing since the 1970s, with few major innovations outside of IT and telecommunications. His hypothesis for explaining this downturn is that we are gradually exhausting the low-hanging fruit of available technological discovery, and that, even given ways of making that process more efficient, it is in the end a numbers game: specifically, population size, as this determines how many people can be dedicated to research and development.[1] The upshot of this analysis is that the greatest potential driver of stagnation is population decline, which is another trend we can forecast given the drop in rates of reproduction across the board in developed countries. This means that there are also instrumental reasons to encourage procreation on a large scale (at least until discovery can be automated). A final, less obvious danger of stagnation is that in the name of reversing it, society may change its values for the worse (e.g., normalising exploitation or slavery).


The risks and rewards of value change are another key topic for MacAskill. As already suggested, he’s broadly in favour of the liberal values dominant in most Western democracies, by which I mean some mix of deontic and axiological principles that favour a balance between social egalitarianism and individual liberty putatively compatible with a market economy under capitalism. He thinks there’s room for improvement here (e.g., on animal rights) and is willing to countenance uncertainty about where improvements must be made, but he believes that the last two centuries have seen definite moral progress (e.g., abolitionism and feminism). Although he insists that this progress wasn’t guaranteed by any means, he does suggest that it wasn’t entirely accidental: technological progress tends to enable moral progress, as the ensuing economic growth encourages egalitarian attitudes, and the critical mindset required by science spills over into ethics. The pressing problem is how to ensure such moral progress continues, and to prevent premature lock-in to a poor set of values. Here his prescriptions are recognisably liberal: to encourage open political experimentation by structuring relationships between diverse communities in ways that sustain competitive evolution (e.g., charter cities), while emphasising the importance of free speech and free migration. The main problem for this is equally recognisable: a variant of the paradox of tolerance regarding which values must be locked-in now to prevent premature lock-in of others.


Beyond climate change, MacAskill considers three concrete dangers that come with significant risk of collapse or extinction. From least to most severe, these are: world wars, engineered pandemics, and the emergence of artificial general intelligence (AGI). His reasons for ranking engineered pandemics more highly than world wars are persuasive and thoroughly researched. While I can’t really do justice to them here, it’s worth pointing out that MacAskill has form on this issue: he’s been campaigning for increased pandemic preparedness since before the outbreak of COVID-19. However, his reasons for ranking the threat posed by AGI above war, plague, and climate change are less squarely empirical and at least as controversial as his claims about population ethics. Those who have read Superintelligence, a 2014 book by MacAskill’s Oxford colleague and long-standing longtermist Nick Bostrom, will be familiar with the basic idea: the potential for AGI to recursively improve itself will create a truly epochal break, as it quickly becomes powerful enough to obviate human agency entirely. According to MacAskill, the issue is not just the power of AGIs but their constancy: they are effectively immortal and unchanging, so that whichever priorities they begin with – be they our own values, some disastrous approximation thereof, or a spontaneous drive to dominate or destroy us – these are likely to be locked-in forever, our molten destiny flash-freezing into crystalline fate. Figuring out how to ensure AGI is aligned with the correct values is thus not only avoiding a possible extinction risk (the likelihood of which varies wildly with expert opinion), but actively steering the future towards desirable outcomes. This also makes it even more important to cultivate and propagate good values before AGI arrives.


AGIs are effectively immortal and unchanging, so that whichever priorities they begin with, these are likely to be locked-in forever, our molten destiny flash-freezing into crystalline fate.

MacAskill acknowledges that these risk assessments are provisional. In the face of uncertainty about the future, he advocates three rules of thumb: 1) take actions we’re confident are good, 2) maximise our available options, and 3) learn more. The first rule recommends immediate action on climate change, fossil fuel depletion, and pandemic preparedness. The other two recommend a more circumspect approach to AGI and world war. The book closes by considering how we should go about implementing these priorities. At the individual level, MacAskill counsels against focusing exclusively on improving our own behaviour (e.g., engaging in more ethical patterns of consumption). Instead, he recommends three specific things: political activism, ideological evangelism, and having children. At the collective level, he calls for people to work together in groups, establishing an effective division of labour, and ultimately building a wider movement. In many ways this is a natural extension or evolution of the effective altruism movement that he’s been cultivating for the past decade. Nevertheless, there is another compromise here. MacAskill stipulates that the book only makes a case for what he calls weak as opposed to strong longtermism, in which optimising the goodness of the far-flung future is merely one among our most pressing priorities, rather than the most pressing one. This weaker position is perhaps more conducive to the task of movement-building, as it is less likely to alienate potential allies and recruits, but it’s worth noting that he explicitly defends the stronger position elsewhere (see his 2021 working paper, “The Case for Strong Longtermism”, co-written with Hilary Greaves). It’s the seeming inevitability of this slide from weak to strong longtermism, and the ensuing abandonment of his other moral compromises, which most alarms MacAskill’s critics.


The Flaws of Longtermism


One metric by which we might judge the success of What We Owe the Future is the amount of critical engagement it has drawn, and it has drawn quite a lot. The main worry shared by most critics is that MacAskill’s moral accountancy might be used – counter to his stated intent – to rationalise taking questionable risks in the present for intangible rewards in the very far future, e.g., redirecting funding from climate change mitigation to speculative AI safety research, or abandoning attempts to save lives in the third-world because on average lives in the first-world contribute more to the long-run prospects of the species, or even countenancing genocide if it facilitates the creation of a greater space-faring/post-human civilisation. It does not help matters that other notable longtermists are far more open about embracing such fanatical trade-offs (for an account of some of these, see “Against longtermism” by Émile P Torres, published last year by Aeon). The weight of forever threatens to warp the moral landscape beyond all recognition. Yet, though I agree that the compromises designed to prevent MacAskill’s position from sliding in this direction are quite flimsy, I think it best not to dwell on positions he doesn’t explicitly endorse. After all, there are enough criticisms of the positions he does espouse to go around. To begin with, I’ll break these down into ethical, political, and epistemological criticisms.


Beginning with ethics, there are some significant objections to MacAskill’s arguments in favour of increasing the number of people in the universe. In particular, Kieran Setiya has objected to his axiological argument against the intuition of neutrality (see “The New Moral Mathematics”, published by Boston Review). Setiya points out that two options can be on a par, qua actions, without implying that their results will be exactly equal, qua outcomes. For instance, we may think that it is neither better nor worse to become a philosopher than a poet, but we usually don’t take this to imply that being an unemployed philosopher is exactly as good as being an employed one. As we have seen, MacAskill confronts us with two options (have a child, don’t have a child), but three outcomes (no child, child with migraines, child without migraines). We might have asked why the options didn’t branch further (e.g., have no child but adopt, have no child but adopt twins, etc.). The real issue here is that his consequentialism has no place for the value of individual actions as distinct from all-things-considered outcomes, and so simply cannot express the intuition of neutrality. The intuition can be framed much more precisely if we switch from axiological to deontological language, particularly if we permit the use of conditional obligations: you may or may not have a child as you like, but if you do have a child, then you ought to conceive it in a way that avoids life-long migraines (this approach to the morality of procreation has been developed in much more detail by Johann Frick in his 2020 paper, “Conditional Reasons and the Procreation Asymmetry”). There are reasons to prefer consequentialist axiology, as it obviates the difficult work of reasoning about our obligations – identifying distinct principles governing action and mediating conflicts between them – but this comes at the expense of the nuances of agency – recognising choices that are at once both free and meaningful.


The problem might not be that our ethical intuitions are incoherent, but that the longtermist framework is just too simplistic to articulate them.

This ambivalence towards agency is a deeper problem for MacAskill’s longtermism, and becomes obvious if we contrast his arguments for the moral claims of future people with his first argument against the intuition of neutrality. On the one hand, he presents the future as if it were fixed: time being just like distance, our obligations to future people are to those who simply will exist, regardless of our choices. On the other, he presents the future as if it were ephemeral: identity being essentially fragile, our obligations to future people evaporate as our choices change who will exist. The true function of MacAskill’s opening allegory is to reconcile these perspectives by leading us to the idea that we don’t really have distinct obligations to distinct individuals, but only a singular obligation to make the best world we can, like an artisan tasked with perfecting an ornament. Yet once again, deontological language is far richer than he allows. We can make sense not only of de re obligations that pick out specific people who may or may not exist (e.g., we ought not to harm those who will follow us [if we act in this specific way]), but also de dicto obligations whose targets vary depending on which possibilities our actions actualise (e.g., we ought not to harm whoever follows us [regardless of our actions]). This is closely related to the conditional obligations already mentioned (e.g., if there are people who follow us, then we ought not to do them harm). The problem thus might not be that our ethical intuitions are incoherent, but that the longtermist framework is just too simplistic to articulate them.


Moving on to politics, there is a troubling naiveté to some of MacAskill’s analyses. One often gets the impression that he would rather reason with a bad, but tractable metric than with no such metric at all. For example, Alexander Zaitchik objects to MacAskill’s use of economic growth as a proxy for progress and well-being, not simply because it is a poor measure of economic and cultural flourishing, but because the assumption of indefinite growth built into the book’s projections is incompatible with the mid-to-long term habitability of the planet (see “The Heavy Price of Longtermism”, published in the New Republic). Here casual extrapolation of single-variable trends crashes headlong into the “actual biophysical math” on which the stability of complex systems depends. Beyond this, the historical narrative which frames these extrapolations – combining uncut technological determinism and the triumph of liberal values – is reminiscent of the Panglossian optimism of Steven Pinker’s Enlightenment Now. To be clear, What We Owe the Future is not without its criticisms of the status quo, but it still fits rather neatly into that genre of capitalist theodicy. It embodies not just the social but the economic sensibilities of Benthamite liberalism, along with its penchant for technocracy, prescribing a combination of market-based solutions, organised lobbying, and scientific philanthropy that eschews criticism of the dominant institutional arrangements entirely. One might argue that such critique is not the job of a book like this, caught as it is between the philosophical stakes of our attitude towards the future and the empirical dangers this attitude must confront, but the book is equally a call to action, and its ambivalent approach to agency affects even this. 
For while we are encouraged on the one hand to be maximally ambitious (to shape the whole history of humanity going forward etc.), we are also told to restrain our actions (to advocate for incremental change within the system as it is etc.). I think it best to say that the book approaches politics without a working theory of power: what it takes to fight it, to win it, and, perhaps most importantly, to resist its corrupting influences.[2]


Turning at last to epistemology, there are some reasons to think that the calculus of expected value is not the best guide to long-term action. Informally, Regina Rini makes the case that our understanding of the far future is so poor as to make expected value calculations worse than useless: “Our trying to anticipate the needs of star-faring people millions of years hence is no more realistic than Pleistocene chieftains setting aside sharpened flint for twenty-first-century spearheads” (see “An Effective Altruist?” in the TLS). Her point is not simply that our distribution of probabilities is likely to be wrong, but that our grasp of the possible outcomes to which these are assigned is essentially limited. More formally, David Kinney has argued that to consistently implement the sort of decision procedure MacAskill recommends at the scale he suggests would be computationally intractable: “[T]here isn’t an efficient algorithm that tells us whether or not we ought to take steps to prevent or mitigate the effects of any potentially catastrophic event” (see his “Longtermism and Computational Complexity”). I can’t do justice to the technical details of Kinney’s argument here, but there is a semi-technical point I can add, relevant to Rini’s objection.


No matter how good a conceptual argument that the sudden emergence of effectively godlike AGI is impossible may be, it might never lower the probability below the threshold at which it would stop taking priority over far more tangible dangers.

MacAskill’s approach is based on what is known as subjective Bayesianism. This means that the probabilities it deploys are not intended to be objective measures of the frequency of events, but rather degrees of belief that they will occur, or credences, which are iteratively updated as we acquire new information. This explains why, when crunching the numbers that rank AGI as a more pressing risk than pandemics, world war, or climate change, MacAskill relies on estimates distilled from the credences of surveyed experts, rather than conceptual arguments about the underlying causes (91). I don’t mean to suggest that this sort of approach is entirely unjustified. The problem with dealing with uncertainty about the future is that we often lack anything resembling either objective statistics or a solid conceptual framework, and yet we still have to make decisions. Bayesianism is a framework that lets us start somewhere (even just a hunch) and improve the decisions we make over time (depending on how those hunches shake out). But in practice it tends to assign non-zero probabilities in every case, simply lowering them with repeated updates. This is usually not an issue, but when those seemingly tiny probabilities are multiplied by the astronomical quantities of positive or negative value posited by longtermists, their expected value suddenly overrides that of outcomes whose likelihood we have a much better statistical and/or conceptual grasp on. This means that no matter how good a conceptual argument that the sudden emergence of effectively godlike AGI is impossible may be, it might never lower the probability below the threshold at which it would stop taking priority over far more tangible dangers. Paradoxically, this way of updating their beliefs might actually insulate longtermists against having to modify their priorities.[3]
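The structure of this worry can be made concrete with a little arithmetic. The following sketch uses entirely made-up numbers (the probabilities and stakes are illustrative assumptions, not MacAskill’s figures) to show how a credence driven arbitrarily low by repeated updates can still dominate an expected-value comparison, so long as the stakes attached to it are astronomical:

```python
# Illustrative only: all numbers are invented to exhibit the shape of
# the longtermist expected-value argument, not to report real estimates.

def expected_value(probability: float, value: float) -> float:
    """Expected value of an outcome: credence multiplied by value at stake."""
    return probability * value

# A well-understood, tangible risk: moderate credence, large but
# bounded stakes (say, lives affected by a severe pandemic).
tangible = expected_value(probability=0.01, value=1e8)

# A speculative risk: repeated updates have driven the credence very
# low, but the posited stakes are astronomical (say, the welfare of
# 10**30 future digital minds lost to misaligned AGI).
speculative = expected_value(probability=1e-10, value=1e30)

# Despite the minuscule credence, the speculative risk dominates
# by fourteen orders of magnitude.
assert speculative > tangible
print(f"tangible risk EV:    {tangible:.1e}")
print(f"speculative risk EV: {speculative:.1e}")
```

No conceptual argument that merely shrinks the 1e-10 further can change the ranking here; only a zero credence would do that, and subjective Bayesianism never assigns one.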


The Future of Freedom


One thing common to most critiques of What We Owe the Future is a pronounced scepticism of its more science-fictional elements: space colonisation, artificial intelligence, and especially transhumanism. Here the argument that it doesn’t sufficiently account for uncertainty about the future sometimes shades into a charge of Promethean hubris: that the very desire to reshape our species’ destiny is an error in itself, and that we must curtail such speculative ambitions. On this point, I gladly demur. One doesn’t have to agree with the details of MacAskill’s analysis to recognise that our increasing technological capacity to modify our environment and ourselves offers unprecedented opportunities alongside unprecedented dangers, and that, provisional as it may be, this calls for us to cultivate the agency needed to realise the former and evade the latter. My complaint is not with the extent of MacAskill’s ambitions, but with their character. In the final line of the book’s introduction, he explains that his aim is to leave his great-great-grandchildren “a world that is just and beautiful” (7, my emphasis). I think that this unexamined conjunction reveals the quintessential problem with the longtermist vision of the future. My aim will thus be to present an aesthetic objection: where other critics have already argued that this might not be a just world, I will argue that it might not be a beautiful one.


Let’s return to the proposition that our principal obligation is to make the best world we can, like an artisan perfecting an ornament. From the longtermist perspective, this means optimising the configuration of available matter in the universe to maximise the overall quantity of well-being over the entirety of its history. What We Owe the Future only really covers one aspect of this optimisation – population size – though it occasionally gestures at others, such as transitioning from biological to digital substrates capable of supporting greater numbers of people in simulation without decreasing individual well-being. So let’s consider another obvious optimisation strategy: why not design an optimum, standardised bearer of well-being, and then mass-produce it, filling the universe with as many copies as possible? This is a viable strategy regardless of what well-being consists in: it could either be a substance experiencing maximum pleasure (hedonium), a template person maximally satisfied with what they have (eudaimon), or a minimal social unit configured to maximise objective list criteria (eutopia). The advantage of standardisation would be to drastically decrease the risk that variations in configuration would result in overall decreases in well-being. This strategy raises a fundamental question: would a more or less homogeneous and unchanging universe really be more beautiful than a heterogeneous and evolving one? Is there not some essential value to the diversity and novelty that is missing in this purportedly perfect world?


Would a more or less homogeneous and unchanging universe really be more beautiful than a heterogeneous and evolving one? Is there not some essential value to the diversity and novelty that is missing in this purportedly perfect world?

Of course, MacAskill would argue that I’m putting the cart before the horse here: the beauty of the world is but a means to the end of well-being – if the world must be less beautiful in order to be more just, so be it. But I think there’s something more fundamental at stake in this relationship. Let’s approach the issue from the other direction: could we create an optimal ornament, or artwork, in the same way as we might supposedly optimise the universe? Does the idea of a best possible painting, poem, or musical composition even make sense, let alone the idea of a greatest artwork simpliciter? We certainly deploy axiological language in these domains, talking in painstaking detail about what is good, bad, better, and worse, debating the relative merits of different works, accounting for and discounting personal taste. Yet despite this they are not always fully comparable. A painting by Matisse may be neither better nor worse than a poem by Mallarmé. A song by Led Zeppelin may be better than one by Black Sabbath, qua blues, but maybe not qua rock. Even when works are comparable, there is rarely a singular dimension underpinning our rankings – as if simply making a song longer, louder, or lower in pitch would always make it better – but rather a melange of different factors that cohere to make one thing superior, the balance of which isn’t obvious in advance. In this respect, the various arts resemble research programmes, progressively uncovering vectors of improvement, branching into parallel paths, and articulating modes of comparison as they go. It’s not clear that this process of discovery has any intrinsic limit – an optimum at which it might halt – but it does appear to be governed by an intrinsic ideal – a norm which promotes diversity and novelty. Why couldn’t life itself be governed by such an ideal, as if we were striving to make art of ourselves and our societies?
Why not have flesh and blood humans sailing the stars, digital minds living in glorious simulations, and AGIs pursuing their own inscrutable pleasures? Why not let a thousand post-humanities bloom?


From MacAskill’s perspective, such aesthetic judgments are intractably subjective. The production of artworks is better evaluated as a means to the end of well-being than as the proliferation of ends in themselves, and the same is true for people. There are two possible arguments for this. The first claims an inadequacy on the part of beauty: that it lacks an objective criterion by which disputes might be decided. It’s true that there’s no decision procedure here, or an easy empirical index to which beauty might be reduced (e.g., neurological stimulation). But it isn’t for that matter merely subjective. We don’t simply want what we want. Our desires are neither completely transparent to us nor forever fixed. We often appreciate that there are things we should want. Not in the sense that we should all desire the same things, but rather that our peculiar combination of preferences gives us reasons to explore certain options (e.g., a preference for Black Sabbath may suggest you should listen to Earth), or even to refine our tastes in particular ways (e.g., that you learn to appreciate longer, slower riffs). The pursuit of beauty is thus not restricted to some specific experiential qualities (e.g., prettiness, pleasantness, or sublimity), but is more generally concerned with varieties of excellence. From this perspective, aesthetics investigates the evolution of coherent preferences: there are plenty of ways to go wrong, even if there’s no singular way to be right. Such free exploration of possible configurations of freedom might easily push the limits of what is recognisably human.


The second argument claims an advantage on the part of well-being: that it possesses an objective criterion that makes resolving disputes comparatively easy. Though we haven’t yet agreed what it consists in, once we do, all those tricky normative problems can be reduced to comparatively straightforward instrumental ones: the real work lies in figuring out how to achieve well-being, rather than working out what it consists in. Yet what if this notion of well-being is nothing more than a set of formal properties in search of a substance? Just because we can sometimes hack together measures of living-well suited to specific situations, such as when we have a fixed range of resources and a set number of people with pre-determined preferences, doesn’t mean we can generalise them to creating new people with determinable preferences using all available resources. There simply might not be a way to obviate the search for what makes life worth living, as exemplified by the unending quest for excellence in artistic domains. Hedonism and objective list approaches try to short-cut this process by stipulating an answer, thereby restricting the freedom of future folk to define what is good for themselves. By contrast, preference satisfaction approaches index well-being directly to what they want, but in doing so encourage us to optimise their preferences for efficient satisfaction, curtailing freedom in a different way. Either is permissible as long as freedom – both liberty and agency – remains a means to the end of well-being. The alternative is to recognise that well-being is a poor substitute for freedom, insofar as this itself is beautiful. Children enrich the world not as fungible quantities of well-being, but as autonomous creatures as unique as any artwork. They are the most beautiful things that many of us ever create (for some more thoughts on this, see Tom Whyman’s 2021 book, Infinitely Full of Hope). 
But they are equally another open-ended opportunity to experiment with excellence, and the only way to extend our own aesthetic adventures. Herein lies the impetus for a future filled with new and varied forms of post-human life.[4]


The conception of goodness that MacAskill deploys collapses the difference between justice and beauty in a way that prescribes mandatory excellence on a cosmic scale.

To be clear, I’m not suggesting that we should substitute aesthetic concerns for ethical ones. My aim is rather to separate out concerns that MacAskill runs together – one more suited to deontology, the other to axiology. The conception of goodness that he deploys collapses the difference between justice and beauty in a way that prescribes mandatory excellence on a cosmic scale.[5] On the one hand, this tempts us into the sort of absurd mathematical tradeoffs between seemingly incomparable goods that utilitarianism is infamous for. Future people can count, and numbers can matter, even if we don’t aggregate the value of every aspect of their lives, refusing to allow large numbers of trivial gains to outweigh smaller numbers of serious losses. On the other, it invites us to aim for perfection where it makes no sense to do so.[6] Although some potential futures are no doubt worse than others, there need not be a best one, any more than there need be a best cuisine, play, or novel, and the choices between the better ones could hardly be less meaningful or complex (e.g., would you rather live idyllically in J.R.R. Tolkien’s Shire, exuberantly among Iain M. Banks’s Culture, or competitively as one of Hannu Rajaniemi’s Zoku?). Once we properly distinguish these values, we can recognise reasons to pursue grand long-term projects that will change the face of the galaxy, while acknowledging they are essentially independent of and constrained by the ethical obligations we have to those who do and will someday exist. These include obligations not only to prevent foreseeable harms, but to secure their freedom to determine their own manifold destinies, rather than locking them into some singular fate. Crucially, this means that, even if there’s some sense in which the existence of free beings is what is most beautiful, as I’m strongly inclined to think, they are not for that matter obliged to exist. Freedom must be free. Voluntary extinction would be a tragedy, but not a crime.[7]


Conclusion: How We Grow Up


Returning to the beginning, we can now diagnose a deeper problem with MacAskill’s personified future: ethics might make the same demands of each of us, and these might even require that we co-operate in some ways, but it should not for that matter compel us to act as one. Even if universal optimisation doesn’t demand standardisation, it does require rationing individual freedom in the name of a singular project. We’re entitled to a limited range of personal projects that are more or less equal from the perspective of well-being, but only insofar as the choice between them marginally improves it. This balance determines a set share, either of fungible resources or specific opportunities. We will be allowed our little hobbies as long as they aren’t too wasteful. On the one hand, this is a natural argument for populations confined to simulation: there’s no need for the children to see a real forest when a virtual one will do. On the other, it’s an argument for godlike AGI: a paternalistic program aligned to the optimum, rigidly enforcing distributive justice. This may sound utopian or dystopian depending on your inclinations, but it is obvious that this world wouldn’t permit waste for its own sake. There will be no stars given over to art wholesale, when they might support another trillion souls. No megastructures built as monuments to our ideals. No supernovae instigated just because we can. If we are to grow up as a species, we must abandon such silly dreams. There is no abundance so great that it ought not be put to a proper use.[8]

In closing this review, I’d like to consider this second metaphor – “humanity as imprudent teenager” – and question the conception of maturity it embodies. From MacAskill’s perspective, the greatest choice faced by this metaphorical teenager seems to be what career to choose. This is a topic he has thought a lot about, having co-founded an organisation (80,000 Hours) to help young people choose careers that will optimise their positive impact on the world. He once even (somewhat infamously) recommended that young people go work in finance, where they can earn the most money, and so donate it to charitable causes where it might do the most good. His prescription for humanity seems somewhat similar: we’re supposed to do whatever is needed to secure the resources required to achieve what’s most worthwhile. This means giving up on our dreams and joining the family business, which here means the business of family itself: managing the spread of our kith and kin. In case this all sounds too sudden, MacAskill does recommend a gap year of sorts – what he calls the long reflection: “a stable state of the world in which we are safe from calamity and we can reflect on and debate the nature of the good life, working out what the most flourishing society would be” (98). But the point of this period is to definitively determine the values we want to lock in. Once we’ve worked out what’s optimal, it’s time to get paternal.


It’s worth contrasting this metaphor with an older image, couched in similar terms: “Enlightenment is man’s emergence from his self-imposed immaturity”. Kant also believed he lived at a crux point in history, whose significance lay in a willingness to take on responsibility for the principles guiding our thoughts and actions. Yet the Enlightenment wasn’t intended as a limited period of critical reflection upon these principles, but to usher in a more sustained openness to such reflection. This meant acknowledging the authority of future generations to break with prior orthodoxies. Every generation confronts the question of the good life, and rises to the challenge of answering it themselves. Figuring out what makes life worth living is not something separable from the flourishing of individuals or societies, but an integral part of this flourishing. To deny that is to stunt development, rather than encourage it.


With this in mind, Kant adds: “Rules and formulas, those mechanical aids to the rational use, or rather misuse, of his natural gifts, are the shackles of a permanent immaturity”. From this perspective, the real risk of AGI as conceived by MacAskill, Bostrom, and others is not simply locking in bad values, but locking in values simpliciter. This is reflected in the burgeoning literature on AI alignment, much of which reads like a series of increasingly desperate attempts to falsify Goodhart’s law.[9] Just as any given system of codified laws may always be challenged in the name of justice, so any given measure of intrinsic worth might always be challenged in the name of beauty. I don’t mean to suggest that there’s some special sense for value that humans innately possess and computers will forever lack. Rather, I suspect that the epistemic opacity of aesthetic excellence – its gestalt qualitative resistance to predictable quantitative optimisation – is an essential feature of why we value it, and that this holds lessons about value more generally. The idea that we could simply win at life by maxing out our endorphins is, if nothing else, exceptionally inelegant – a way of cheating that seems to miss the point entirely. We might say something similar about tiling the world with copies of an exceedingly content template. So, rather than attempting to align AGI with our values by beginning with utility maximisation, we might instead take the advice of Schiller, and begin with aesthetic education.


Rather than attempting to align AGI with our values by beginning with utility maximisation, we might instead take the advice of Schiller, and begin with aesthetic education.

Perhaps the worth of What We Owe the Future lies in providing a simplistic answer to a complicated question – one that shakes us out of our complacency and forces us to confront its complexities. For my part, I still don’t think we owe the future anything, strictly speaking, and this is why the future can be so exciting. It’s a blank canvas full of wondrous potential; an object of desire, rather than compulsion. Yet we do owe our descendants many things. Not least to prevent foreseeable harms. The future is equally a vast darkness harbouring terrifying possibilities; an object of fear, which compels. MacAskill is to be applauded for trying to account for these harms, even if he equates them with uncountable benefits. But we must refuse the moral bribery that results from this equation, which both suborns our integrity and distorts our sense of worth. The weight of forever must never be allowed to crush the flower of free choice. We must allow our descendants to go their own ways, even if their paths proliferate and diverge in a manner that can’t in principle be circumscribed by any overarching project of optimisation. The resulting diaspora might even be more beautiful, by far.


Peter Wolfendale is an independent philosopher who lives in Newcastle, in the North East of England. You can pre-order his new essay collection, The Revenge of Reason, here.

Twitter: @deontologistics

[1] It’s worth pointing out that there are many competing analyses of this immediate historical trend that locate the problem elsewhere. Cf. Paul Mason’s Postcapitalism; Mariana Mazzucato’s The Entrepreneurial State; J. Storrs Hall’s Where’s My Flying Car?

[2] Despite this, MacAskill already wields more influence and commands more resources than the vast majority of academic philosophers ever have or will through the various organisations he has founded and the partnerships he has brokered. This is a sobering truth that each of his critics must confront. In some sense this only makes it more important to assess the exercise of this influence, and to critique the ideas that motivate it, but it also behoves us to do it in a more serious manner than we may be accustomed to. Nevertheless, the revelations about FTX and the Future Fund that emerged during the writing of this piece do serve to underline my point.

[3] I owe this objection to Beau Sievers.

[4] This should not, under any circumstances, be construed as an endorsement of eugenics.

[5] To put this in technical terms, utilitarianism doesn’t permit supererogation. If an obligatory act is one that it is good to do and bad not to do, a supererogatory act is one that is good to do but isn’t bad not to do.

[6] This gives a new sense to Voltaire’s maxim: ‘the perfect is the enemy of the good’.

[7] I think it worth adding that the position I’m sketching isn’t incompatible with obligations to avoid involuntary extinction, or even limited reproductive obligations. Given that people will almost certainly continue to choose to procreate, we are obliged to secure the future for those who come into existence, including respecting their own right to procreate. In order to perpetuate society in a way compatible with the equal dignity of future people, we might even have reason to incentivise reproduction. This is one way of rationalising political responses to the problem of demographic ageing. The important thing is that this inverts MacAskill’s conception of the means-ends relationship: population management is a means to the end of individual freedom, rather than reproductive liberty being a means to the end of optimising population.

[8] For a very different reflection on the relationship between scarcity and abundance considered at maximal scale, see the general economics developed by Georges Bataille in The Accursed Share, Volume 1.

[9] “When a measure becomes a target, it ceases to be a good measure.”

