
"Predicting the Future of Mind" by Thomas Moynihan (Keywords: History of Ideas; Evolution; Holism)



From The Philosopher, vol. 111, no. 1 ("Where is Philosophy Going?").




“To unravel the archaeology of human knowledge, we must treat former systems of belief as valuable intellectual ‘fossils’, offering insight about the human past, and providing precious access to a wider range of human theorising only partly realised today.” Stephen Jay Gould


A gigasecond is one billion seconds. In approximately 3.16 gigaseconds, it’ll be 2123. If The Philosopher still exists then, I think this will be a good thing, as it will presumably mean human philosophers still exist. Applying what philosophers call a fortiori reasoning, this will also mean humans, in general, remain around.
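
For anyone who wants to double-check the arithmetic, here is a minimal sketch in Python (assuming the astronomical convention of a 365.25-day Julian year; the exact figure shifts slightly under other calendar conventions):

    SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60   # 31,557,600 seconds in a Julian year
    years_until_2123 = 2123 - 2023              # a century from this essay’s vantage of 2023
    gigaseconds = years_until_2123 * SECONDS_PER_YEAR / 1e9
    print(round(gigaseconds, 2))                # prints 3.16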


But what might they be pondering?


***


To ask this is to attempt to predict the future of mind. In what follows, “mind” means something greater than sentience or even selfhood: something, instead, like the capacity for cumulative culture and intergenerational self-correction. In other words, mind is the ability of each generation not only to inherit behaviours or practices unquestioningly, but also to improve them by integrating new ones and jettisoning misguided ones. A form of cooperation between generations rather than merely within one, it means our mental lives are both constituted by history and constitutive of it. Without this, there would be no mind, in the sense of something belonging to us: we would be only the prisoners of inherited instincts, and wouldn’t have the type of mental lives we do, which can be said to be our own because they are, at least partially, self-authored.


Because of this, however, mind cannot begin “fully formed”, as if shot out of a pistol straight into the absolute. It must tarry with its contingencies, hoisting itself progressively above the blinkers of being historically rooted. But it is this process, drawn across the generations, which grants knowledge the status of something wrought by agency and, thus, genuinely earned instead of being tyrannically unamending and arrogated. Without sensitivity to what’s erroneous and arbitrary, there is no genuine knowledge, only dogma or doxa at best. Disagreement with precedent is what makes mind free to learn, but this takes time; this is why learning cannot skip its past, nor leap far ahead of itself.


***


Nonetheless, a tour through prior predictions regarding knowledge’s future gives a sense of how much we err in our prophecies.


In 1726’s Gulliver’s Travels, Jonathan Swift dismissively satirised scientists as naïfs who, instead of influencing worldly affairs, invent flagrantly ineffectual contraptions, like machines for extracting sunbeams from cucumbers. It took a further century for their vocation to become distinct enough for the word “scientist” to require coinage. By the time another century had passed, in the wake of World War I, no one could dismiss science’s worldly efficacies.


Two years after the term “scientist” was first used, in 1835, Auguste Comte declared that humans could never know anything about the “chemical composition” of stars. In 1859, the spectroscope was invented. A decade later, Norman Lockyer named the element “helium” after studying solar spectra.

By this time, countless Victorians were prophesying that, beyond coal, no other power source capable of running industrial societies would ever be found. As late as 1928, the Nobel-winning physicist Robert Millikan declared that the energy available through subatomic tampering “may perhaps be sufficient to keep” the “pop-corn man going” on “a few street corners”, but “that is all”.


My personal favourite, however, is the reaction to Charles Babbage, who went on to design the first mechanical computer. One reviewer, in 1832, dismissed Babbage’s “wild speculations” upon future uses of “his calculating engine”, and, in particular, his proposal for “conveyance of letters through the metropolis and the country, by means of wires suspended from steeple to steeple”. The reviewer couldn’t countenance our displacement as the sole manipulators of abstract symbols on Earth.


***


Why, then, is anticipating mind’s future so slippery? After all, thinking is seemingly the thing we are most fluent in.


One reason rests in holism. Ideas don’t exist as atoms, isolated from one another. They come as bundles – as webs of belief – where altering one doesn’t just invalidate or update neighbours, but has collateral, cascading ramifications for distant attitudes that are not always obviously related. Background, auxiliary assumptions matter.


For example, one can be aware of species extinction as a natural reality, yet not be sensitive to how bad it is without a constellation of other beliefs, which did not arrive all at once. By at least 1800, biologists were drawing on fossil evidence to agree that some prehistoric species had perished, never to return. However, as there was no agreement on how species appear, there could be little agreement on what exactly was lost. Only after 1859 and Darwin did a consensus build that complex lifeforms never appear spontaneously, divorced from immensely long-winding unbroken lines of prior descent. Without this conviction, there was no clear apprehension that when a lifeway is lost, aeons of accumulated adaptation and diversification are lost too, nor that something equally complex cannot simply pop into existence without retracing such labours.


But Darwin himself ingrained an assumption that extinction is invariably the midwife of “progress”, proceeding by reliably weeding out the “unfit”. This precluded apprehension of one of extinction’s immense tragedies: that lineages pushed to perishing may have gone on to survive, thrive, and diversify for considerably longer. The insistence that extinction can have as much to do with lack of luck as with paucity of adaptability – an insight Darwinism itself helped eclipse for more than a century – only became prominent in the later decades of the 20th century, following new evidence regarding the role of previous mass extinctions. All of which is to say that it would be wrong to assume that a naturalist from 1823, though she may have accepted irreversible extinctions, also apprehended, as we so fluently do now, the depth of what’s lost when lifeways perish.


Thus, in the history of ideas, treating statements atomistically distorts by smuggling in our own background beliefs. This serves to conceal the true variability of worldviews over time, occlude the alienness of past belief, and lure us toward expecting attitudes to remain more recognisable into the future than they will.


***


The interrelatedness of knowledge thus makes it hard to anticipate developments in any one area of belief, given that, in order to predict one, you’d have to predict the interactions of all. Furthermore, while few would argue there is no contingency in attitudes, it’s incredibly hard to ascertain just how much is there.



Contingency here simply means that something didn’t have to happen the way it did. This really gains bite when stretched over time: when an event in the past, having gone one way rather than another, shapes all that can possibly come thereafter, in ways that open up hitherto unexplored regions of possibility or forever close off previously accessible ones. Under this guise, contingency is, as Stephen Jay Gould argued, the very “essence of history”.

One exemplar is life’s evolution on Earth. The overwhelming majority of possible lifeforms will never exist, and, of the ones that have existed, the vast majority never will again. The path evolution actually takes through this total space of possibility matters: what came before constrains everything that can possibly come thereafter, in unpredictable ways which could easily have turned out otherwise, and so most potentials remain unrealised. This process is directional, irreversible, contingent.


What truly throws this into relief is the implication that not only life’s future but also its past exhibits fundamental contingency: that what’s next is deeply unpredictable, but also that, if you “replayed the tape” from the beginning, present affairs could, potentially, be unrecognisable.


It seems that, like life, mind might display similar historicity. Cultural history, that is, evidently exhibits directionality, because of cumulative insight. It also exhibits some degree of irreversibility, because all attempts to perfectly resurrect past milieus remain, at best, inauthentic reconstructions, mediated by modern nostalgias. But how much contingency does the history of ideas involve? Given limited access to the total space of possible conceptions – we have only limited “plays of the tape” to go on – it’s difficult to fathom how far this space of unrealised variability truly extends.


Undeniably, truth-seeking (not to mention basic needs and affordances congenital to human embodiment) will provide strong sources of constraint, just as biologists point to adaptiveness as the source of convergent morphologies. Certainly, I don’t think that, upon replaying the tape on inquiry – and, particularly, scientific inquiry – the results would be unrecognisably divergent each time. Similarly, I don’t think that mind’s future will be chaotic drift, loosened from all attractors like veracity or pragmatism.


But are there not ways in which later theories – even if supported generously by evidence and consilience – might still be constrained and coloured by the legacies of earlier idea-formations or master-metaphors, which didn’t have to become ingrained in the way they in fact did? Is it not true that at least some conceptual practices begin as exaptations – seized upon simply because they are available, despite having emerged in other contexts and to serve other functions?



Moreover, holding that, given time, contingency “washes out” – via convergence upon good values or true beliefs – implies we’re confident we’ve somehow surveyed the range and bounds of all alternatives. But what if regions of belief that appear to be the highest peak of all, measured by some such value as accuracy or veracity, only seem so because we haven’t spied the even higher peaks lurking beyond our current range of vision? There may always be unconceived alternatives: capable of explaining available evidence better, but currently inaccessible to us, due simply to thought’s prior pathways through the total space of what’s thinkable – pathways carving inheritances which didn’t need to have happened, but did.


***


What’s undeniable is that people have a track record of being overhasty in announcing learning’s exhaustion of alternate possibilities within any given domain. This is because it’s hard to look beyond what’s already actual or precedential, into regions unrealised.


Around 1900, physicists learned this the hard way. A generation before, many believed science was “wonderfully complete”: that all fundamental laws had been discovered, within which “all physical phenomena must forever fit”; that “future progress” would involve finer-grained quantifications of old theories, not discovering qualitatively new phenomena.


But then Curie and Einstein happened, cleaving open entirely unexplored regions of theoretical and practical possibility. It became clear that Victorian physicists “had taken themselves a little too seriously”, assuming they had discovered an “all-inclusive set of laws”. (These admonishments came from Robert Millikan in 1927, the year before he predicted nuclear power would only ever power popcorn…)


Similarly explosive expansions of possibility have taken place on numerous occasions throughout the past. Indeed, whilst it’s incoherent for there to be more actual than is possible, it’s clearly coherent for there to be more possible than is actual. People have long realised this. However, the acknowledged ratio of unrealised possibilities to actual realisations has steadily grown across time.


This has proceeded, most clearly, within conceptions of our universe. Given belief in eternity or infinity, Ancient Greeks and Romans tended to hold that all that was possible was found somewhere in actual time or space. The books were balanced, with no unspent, surplus possibilities.


During the Middle Ages, having become enamoured with omnipotence, Arabic and European theologians liked to imagine that God could have made the universe radically otherwise. But, due to their concurrent belief in omnibenevolence, they invariably maintained that this world would never have been otherwise than it in fact was.


Later, Renaissance innovations, bringing forth things never seen before, began extending our estimations of what was possible. The sense of history’s unrealised possibilities – that halo, circumscribing time’s actual course, of paths untaken and potentials unprecedented – has thickened ever since.


During the 1600s, the Copernican revolution increased the arena of what is actual, extending it to countless other stars. However, this didn’t immediately provoke a matching expansion in the acknowledged range of possibilities: many simply assumed that all planets repeat Earth’s biography to varying degrees. It took concerted reflection, throughout the 1800s and 1900s, to build an appreciation of the true scope of unrealised or missed possibilities inherent in historical processes. This unfolded through growing engagement with counter-to-fact timelines in human affairs, alongside strengthening conviction regarding chance’s role in shaping life’s macroevolution.


History has thus, steadily, made us more historical – if, that is, you take “being historical” to involve not only acknowledging your ordinal placement within factual chronology, but also recognising all the prior happenstances upon which your existence is counterfactually dependent, alongside just how divergently everything could have gone. Being historical involves apprehending the conglomerated forks required to forge everything you are: all the filtrations of future possibilities which didn’t need to have happened but without which nothing you recognise would necessarily exist. Flowing from this comes an appreciation of how open-ended the future might also be.


Just as our here and now has, over the centuries, been decentred within space and time, so too has what’s actual – be it personal, sociocultural, evolutionary, or beyond – been progressively revealed as just one island within a vaster ocean of alternate possibilities, most of which will remain forever unrealised within this parochial universe. Having acknowledged this, it is our fate to select which archipelagos are manifested next; though, as Marx knew, “people make their history, but not exactly as they please”.


***


Leaping into meta-induction, I believe that this tendency – of being revealed, time and again, to have underestimated the latitude of what is unrealised – will persist into the future. Particularly so when it comes to that domain we call “mind”. In short, I don’t think we yet have any inkling of the true breadth of its possibilities.


Indeed, though it seems the thing we’re most fluent in, I would suggest that we’re actually strangers to mind, insofar as it is a historical – unfinished and ongoing – category. Not inbuilt, nor unquestioningly inherited, but freely innovated myriad generations ago, when some ancestors invented the ingenious theory of “internal states” to explain their peers’ outward behaviour, before turning this inward upon themselves.



We have been elaborating this theory, and its consequences, ever since. Each generation must interrogate it anew. Let’s hope there will be many more to come, tenaciously keeping the adventure going, in whichever direction they see fit. Whatever that may involve, we in 2023 cannot foresee 2123, let alone 3123 – but that’s part of what will have made mind’s unfolding free.

Thomas Moynihan is a UK-based writer and visiting research fellow at Cambridge University’s Centre for the Study of Existential Risk. His research interests include how attitudes to time, history, and contingency have changed over time, as beliefs about deep history and the further future have become more capacious. At present, he’s working on a book exploring how history’s horizons have expanded throughout the past, as people have slowly pieced together the ways in which present action and accident can scar the entire future for life on Earth, in indelible yet avoidable ways. Website: https://thomasmoynihan.xyz Twitter: @nemocentric

 
