AI and the Future of Wisdom

This article was originally written as a submission to the essay competition on ‘Automation of Wisdom and Philosophy’, July 2024.


It’s true that advanced AI will create new situations requiring new and potentially high-stakes decisions to be made. It’s also true that AI-powered automation could help us better understand these situations (which isn’t surprising given that we, i.e. human intelligence, also try to better understand the very situations that we ourselves create). And it’s true that we will want to understand and think about these new situations well enough to make wise choices. But cultivating good thinking about what’s needed to leverage AI-driven automation here requires us to clarify which aspects we want AI to support us with, and to what extent.

It matters whether we want AI to help us think better about novel situations, or to automate high-quality thinking on our behalf; it matters whether we want AI to help humans make better decisions, or to make those decisions on our behalf; and it matters whether we want AI to help us cultivate wisdom, or to cultivate wisdom on our behalf. Should automation be the means or the end, or some combination? And if a combination, where do we draw the line? If we decide that we want AI to engage in ever more thinking on our behalf, to make ever more decisions on our behalf, and to cultivate ever more wisdom on our behalf, the question of what thinking, decision-making, or wisdom cultivation is left for humans is a real one. How we answer such questions would therefore seem to have profound (albeit unknown) implications not only for the nature of our thinking, philosophy, and wisdom, but for our entire experience of life.

Part of figuring out what role we want AI to play in the cultivation of wisdom (including what sorts of good thinking we want it to automate) is figuring out what we don’t want it to do. All else being equal, mitigating negative impacts would seem to be a higher priority than leveraging positive ones, especially given that humanity seems to have done quite a good job of using its extensive history and individual and collective experiences to cultivate wisdom before the dawn of AI. (In fact, this might be an especially important point, as explored below.)

One way of thinking about AI’s potentially negative impacts on wisdom is in terms of its impacts on the process of cultivating wisdom. From a macro perspective, it appears there are two major possibilities here. The first possibility is that AI negatively impacts our pre-existing method of cultivating wisdom. This method seems to involve having human experiences, reflecting on them, and concluding something about the nature of the human condition. It’s a method that has dominated our wisdom cultivation efforts throughout history. The second possibility is that AI negatively impacts the process of cultivating wisdom per se. Before looking at the manner in which AI might have these differential impacts, the two possibilities merit further reflection.

Of the two, the first possibility is the less bad. All else being equal, it would seem that curtailing pre-existing methods of doing something valuable is preferable to curtailing the ability to ever do that thing again. In fact, if only the pre-existing method of wisdom cultivation is threatened by AI, it isn’t obvious this is a net-negative outcome. In many aspects of human life, the process of old making way for new leads to net-positive outcomes. Moreover, if only the pre-existing method of cultivating wisdom is negatively impacted, this scenario still leaves open the possibility that AI could enable alternative methods of cultivating wisdom. These might even be better methods, and they might lead to new forms of wisdom that are even more valuable and might not have otherwise been possible for us to cultivate. So this scenario could actually be a net-positive for our cultivation of wisdom.

But the second possibility is clearly the worse of the two. If wisdom cultivation per se is negatively impacted by AI, the consequences could be disastrous: the loss of wisdom obtained thus far, and even the foreclosure of our ability to cultivate as-yet-untapped future wisdom. That would be catastrophic for the human project of wisdom cultivation.

Assessing which possibility is more likely seems to require an understanding of the nature of the relationship between (a) the process of cultivating wisdom and (b) human experience. We can consider this relationship in terms of three scenarios.

In the first scenario, there is something fundamentally unique and special about the power of human experience in cultivating wisdom. Our interactions with the world and the contents of our conscious experience directly affect our ability to cultivate wisdom. The less we interact with the world and the more limited the scope of our conscious experience, the lower the potential for us to cultivate wisdom.

In the second scenario, human experience is fundamentally necessary for reaching a baseline (however that is defined) of wisdom cultivation, but then it stops being necessary. Once that baseline is achieved, human experience can take a back seat in cultivating more wisdom. Whatever wisdom remains to be cultivated could then be cultivated by, say, AI, perhaps more efficiently. This scenario assumes that human experience is still needed to reach a given baseline, meaning that AI can only ‘take the baton’ (whatever that means in practice) once this baseline is reached. Suffice to say, it isn’t obvious how close we currently are to that baseline, whether we have already reached it, or whether we would even realise it once we had.

The third scenario is that human experience is not necessary at all for cultivating wisdom. This means human experience was never actually required in the first place. Therefore, AI could take the reins whenever it’s functionally able to do so. In this scenario, the fact that the cultivation of wisdom has hitherto been the preserve of human minds is a result of the fact that other advanced intelligences, like AI, weren’t around to do the job earlier.

All this said, I think it’s likely that human experience is actually necessary for cultivating wisdom. Take the example of Stoic philosophy. It seems hard to overstate the impact that human experience has had on the writings of the Stoic philosophers and the development of their ideas; the continued practical relevance of Stoic philosophy is often considered central to its lasting appeal over the past two millennia. Indeed, when reading Marcus Aurelius’ Meditations or Seneca’s Letters from a Stoic, it’s clear that their ideas were deeply grounded in their own personal experiences and those of people they observed or heard about. The Stoics’ perspectives on happiness, fulfilment, tranquillity, virtues, what it means to live a good life, and making the most of our finite existence all seem to have been directly informed by the nature of human experiences. It’s not obvious that Stoic wisdom could have been cultivated in the absence of these experiences.

With this in mind, I’m discounting scenario three and assuming that human experience is indeed necessary for cultivating wisdom. This leaves scenarios one and two. If human experience is fundamental for cultivating wisdom (scenario one), then negative impacts from AI would seem to pose a greater risk. If human experience is required only to reach a certain baseline (scenario two), then negative impacts from AI would seem to pose a lesser risk — although it also matters how far we currently are from reaching that baseline. So the next question concerns whether, and to what extent, AI could affect the scope of human experience in ways that might undermine our ability to cultivate wisdom.

Could AI affect the scope of human experience? Unlike some of the questions posed above, the answer to this one is obviously ‘yes’. The sheer breadth of ways in which AI does this — from automating information and knowledge retrieval, to automating creative expression, to automating navigation — is so apparent on a daily basis as to almost be too trite to talk about. But could AI affect the scope of human experience in ways that might undermine our ability to cultivate wisdom? This depends on whether its effects on human experience are significant enough to undermine our ability to leverage that experience for cultivating wisdom, which remains an open question.

There is, however, some obvious cause for pessimism. For example, it’s a common refrain that AI will help improve humanity’s collective quality of life by automating the kinds of tasks that most humans typically don’t find enjoyable, thereby freeing up more of everyone’s time to engage in things we do find enjoyable, like creative expression and intellectual pursuits. In relation to cultivating wisdom, there are two issues with this. First, the kinds of activities that AI will handle for us could include a significant number of those that constituted much of the human experience upon which we have cultivated wisdom so far. Think of the Stoicism example. It’s hard to quantify what proportion of relevant human experiences will be subsumed by automation. But by the time we can quantify it, it might be too late to change course meaningfully.

Second, advancements in generative AI clearly demonstrate that creative expression can also be automated — and automated well. It’s likely that generative AI will soon — probably within the lifetimes of almost everyone alive today — be able to mass-produce images, videos, and even interactive video games that are functionally indistinguishable from what humans used to create. They might even be better than anything we did, or could, create. Moreover, given the ‘intelligence’ part in Artificial Intelligence, it’s hard to see why humans’ intellectual pursuits won’t end up being automated too.

All of this raises two further, perhaps deeper, questions about the relationship between AI and the cultivation of wisdom.

First is the question of whether wisdom that’s cultivated in the absence of, or without reference to, human experience qualifies as wisdom at all. It’s fair to say (and I assumed above) that for all of human existence, cultivating wisdom has been a direct function of the depth and breadth of human experience. For example, in many cultures an older member of the community might be considered wise by virtue of their age, behind which is assumed to be a wealth of experience of the trials and tribulations of life. (Incidentally, the Stoics cautioned against equating merely having existed for many years with a long life rich in experience.) Similarly, we often refer to someone as being ‘wise beyond their years’ when they show signs of the wisdom that we would only expect an older person to have had the experience necessary to cultivate.

Second is the question of what the purpose of cultivating wisdom is. It’s possible that in a world where human experience is marked by increasingly fewer physical constraints and challenges, the scope for what’s relevant for the cultivation of wisdom might be narrowed (although it could also be broadened). Looking to the very distant future, if humanity reaches the stage where we can upload our consciousness to computers and overcome many of the core limitations that are inherent in the human condition — things like physical constraints, dealing with uncertainty, and perhaps even finitude and mortality — it isn’t obvious what we would need or want to cultivate wisdom for. With ever fewer constraints on human existence and fewer needs to make value judgments around what to do with our limited capacities as finite human beings, wisdom itself would seem to be at risk of redundancy.

There is clearly a lot of uncertainty around how, why, and when AI could impact our cultivation of wisdom, and what we might want to do about it. In that sense, this essay implies some research avenues specifically regarding the automation of wisdom cultivation. Preparatory research could start to triangulate (a) which aspects of thinking, decision-making, and wisdom cultivation we might actually want AI to automate, and (b) how far we might want AI to automate them. The three scenarios discussed earlier could be validated and additional or alternative ones generated. Research could explore how AI will (or could be developed to) expand the scope and scale of human experiences in ways that are net-positive enablers of human wisdom cultivation.

Most importantly, assuming that continued and dramatic advancements in AI are a civilizational fait accompli, it seems crucial to better understand the extent to which human experience really is necessary for wisdom cultivation, including the extent to which different kinds of wisdom draw upon different kinds of human experiences in different ways. This research seems important for scenarios one and two above, but especially so for scenario one. That said, it isn’t obvious how we would undertake such research. If we figure out how, then the research findings could also support the development of mechanisms to help limit AI’s encroachment on whichever human experiences turn out to be the most essential ones for cultivating wisdom.