Agency.
On the Philosophical Stakes of AI
A philosophical dialogue with Reid Hoffman + Greg Beato, ‘Superagency. What Could Possibly Go Right with Our AI Future?,’ New York, 2025.
§1 The Question Concerning Agency
Humanity is in the early stages of learning to live with AI.
The number of humans who use AI daily is steadily increasing.
Indeed, the adoption numbers — we are now at roughly 500 million daily active users — make it entirely plausible to assume that, within a few short years, the majority of humans will have multiple AI agents:
For all we know, half or so of humanity will soon spend more time talking to their individualized agents than we now spend absorbed by the screens of our smartphones.
One of the most fascinating or, if you will, troubling insights that emerges from the reports that study early adopters is that the way we humans ‘use’ AI is quite different from the ways in which we used prior software.
In fact, the term ‘use’ or ‘user’ may no longer be adequate.
Because the majority of humans who adopt AI do not just use it (like one uses, say, Google search). Rather, they talk to it — and expect it to talk back.
They share and discuss the most intimate aspects of their lives with AI, from work to love to mental health.
And, or so the reports say, they overwhelmingly tend to trust its answers.
One of the key questions raised by this emerging pattern of human-AI cohabitation concerns human agency:
Our capacity to think, to gather information, to weigh the pros and cons, and eventually to arrive at a decision and take action.
Will we humans develop a cognitive dependency on AI and gradually outsource — not all but some, perhaps most of — our decision-making to AI?
The question is at the core of a thought-provoking new book by Reid Hoffman and Greg Beato, ‘Superagency. What Could Possibly Go Right With Our AI Future?’
Before I discuss their book, however, I briefly turn to the history of thought — to the history of the concepts and conceptual assumptions that we humans live by.
For I think that the full stakes of the question concerning AI and agency — and the full significance of the book by Hoffman and Beato — will only become visible if one approaches this question from the perspective of the history of the modern concept of agency.
§2 Agency is a Concept
What, actually, is agency?
Strange as it might sound to us today, the concept of agency is of rather recent origin: it first emerged in the 1680s, in the context of the philosophical effort to invent a new, ‘modern’ or ‘enlightened,’ concept of the human, one that would be very different from what the Middle Ages suggested humans are.
Here is one of the earliest known efforts to explicitly pose and answer this question. It appears in John Locke’s ‘An Essay Concerning Human Understanding,’ first published in 1689 and revised and expanded in 1694.
"For, since freedom consists in a power of acting or not acting, according as the mind directs, a man in respect of willing cannot be free if what he wills be determined by something other than his own desire, guided by his own judgment."
Elsewhere in the Essay, Locke refines his answer.
“> "We must consider what person stands for; which, I think, is a thinking intelligent being, that has reason and reflection, and can consider itself as itself, the same thinking thing, in different times and places; which it does only by that consciousness which is inseparable from thinking, and as it seems to me essential to it. … For since consciousness always accompanies thinking and it is that that makes every one to be what he calls self, and thereby distinguishes himself from all other thinking things; in this alone consists personal identity, i.e., the sameness of a rational being."”
What is remarkable about these two passages is that they do not, actually, offer a definition. Rather, what Locke delivers is a vision: a vision of what it could mean to be human, to live a human life. A vision that no one — at least if one goes by the history of the written record in the West — had ever articulated before.
To be human, says Locke, is to have a mind that is capable of reason. It is, furthermore, to have a conscious self that resides in the mind and that exists in the form of thought.
Finally, it is to have the capacity to reflect freely — to reason — in order to arrive at a judgment.
Freedom, Locke roughly says, enabled by reason: This, from now on, shall be the foundation of what it means to be human.
We have inside ourselves an unconstrained openness, and we have the capacity to navigate this openness freely, as self-aware, intentional individuals.
And agency?
Agency is the external manifestation of the internal thought processes, the free deliberation, of the self. It is the place where the inside and the outside coincide; it is the ultimate expression, the highest form, of living life as a free, autonomous human being.
§3 Before Agency
Today, it is easy to take Locke’s conception of the human as ‘an agent’ for granted.
However, for most of human history, the idea that humans are the free authors of their thoughts and actions was largely unknown.
Allow me a single sentence from Homer to illustrate my argument; it is from Odyssey 19.138:
“> "For a god breathed into my mind the thought."”
Consider the distance that separates Locke from Homer (the 17th century CE from the 7th century BCE).
Here the autonomous individual that is the author of his or her own thoughts and actions. There the experience of thoughts as something that comes from outside of oneself, as something sent — or withheld — by the Gods.
Of course, humans were always able to act. They were always doing things. But how they understood their action — how they understood the source of their doings — changed quite significantly over time.
In Homer, actions follow not from internal deliberation but from external inspiration or compulsion.
In Locke, in sharp contrast, thoughts are not gifts or impositions from gods but products of one's own faculties. The Lockean self owns its thoughts, authors its decisions, and claims responsibility for its actions.
It took millennia, and many conceptually discontinuous detours, for our modern concept of agency to emerge, that is, for the idea that reason is an intrinsic human quality that enables humans to be the free originators of their thoughts.
To be more precise, this specific self-comprehension of humans as autonomous agents appears to have surfaced only in the late 17th century.
§4 Agency is a Historical Achievement
Why do I spend so much time historicizing the concept — the experience — of agency?
Because I believe that only if one is fully aware that the concept of agency-as-human-autonomy-and-freedom is not a given, that it has not always existed, can one fully appreciate the weight of the question concerning the relation between AI and agency.
Agency is a world-historical achievement. That is, someone needed to have the idea that humans should be understood — and understand themselves — as free to act. And once this idea was articulated and shared, it had to be upheld and defended against its enemies (which were many). It had to be elaborated on and refined. What is more, people had to be enrolled in the idea.
Allow me to rephrase the above more emphatically:
Those who invented the concept of agency didn't just describe freedom; they helped constitute it by providing a framework through which people could experience themselves as, well, free and autonomous agents.
Locke’s Essay, published more than three centuries ago, was the experimental philosophical frontier of the effort to establish the possibility for humans to understand, and experience, themselves as free and autonomous individuals. Agency, arguably, has been one of the core achievements of the European Enlightenment.
§5 Enter AI
Will AI undermine agency?
If one poses this question in the light of the brief historical sketch I just offered, its full stakes come into view: will AI undermine the Enlightenment notion of the human as a free, autonomous being that is inscribed in — and enacted by — the concept of agency?
It is with these questions in mind that I approached Reid Hoffman’s new book, Superagency, co-written with Greg Beato.
The answer that emerges from their book is a clear ‘no’: AI will NOT undermine our agency. On the contrary, there is a good chance that AI will increase our agency and catapult us into a state they call superagency.
What does superagency mean?
Let me head off an all-too-simple misunderstanding. Superagency does not mean more effectiveness; it does not mean doing more, faster, for less.
Rather, their suggestion is that we understand AI as an opportunity: the opportunity to build a technology that can help us grow the freedom and the autonomy that have defined human agency since at least 1694.
How?
§6 The Philosophical Stakes of AI
In one of the most exciting passages of their book, Hoffman and Beato write:
“AI is increasing your agency … it’s helping you take actions. And either way, something new and transformative is happening. For the first time ever, synthetic intelligence, not just knowledge, is becoming flexibly deployable as synthetic energy has been since the rise of steam power in the 1700s. Intelligence itself is now a tool — a scalable, highly configurable, self-compounding engine for progress.”
I understand these lines to be announcing that we are — or that we could be, if we want it — amidst an epochal shift.
A shift from the age of human agency (Enlightenment 1.0) to the age of human agency grown by AI (Enlightenment 2.0).
Three challenges defined the Enlightenment period in which we lived until just now:
To make people recognize that they are capable of thinking, and acting, for themselves, freely and autonomously; to encourage them to exercise their reason, sapere aude; and to make existing knowledge available to as many people as possible.
If I write that we lived in the age of the Enlightenment ‘until just now,’ it is because I think that the internet was very much continuous with Enlightenment humanism:
The internet as defined by Google (what is the internet? A library of webpages that contain knowledge) was arguably the most significant Enlightenment infrastructure project ever.
But now, with AI, with synthetic intelligence — or so Hoffman and Beato argue — a whole new era has begun:
“For the first time ever … intelligence itself is … a tool — a scalable, highly configurable, self-compounding engine for progress.”
In fact, AI is a tool unlike any prior one — for it is a tool that can learn by itself, with and alongside us; a tool that can actively process, analyze, and apply itself to knowledge and thereby help us increase what Locke was the first to call agency.
Here is Locke one more time:
“It is by the reflection that the mind takes notice of its own operations, and thereby comes to have ideas of its own understanding, will, and other powers.” (Essay, Book II, Chapter I, Section 4)
Can AI — intelligence as a tool — increase both the subtlety and the scale of ‘the reflection’ by way of which ‘the mind takes notice of its own operations?’ And can it, thereby, increase our potential ‘to have ideas of [our] own understanding?’
In fact, could AI increase the number of things I can think qua human? Could it add space(s) to the mind — Locke’s tabula rasa — and, thereby, grow the space of possibility I have available to deliberate? That is, can AI grow the human autonomy and freedom that stand behind — that enable — agency?
Grow it, maybe, beyond the human as we know it?
The biggest challenge of the here and now, say Hoffman and Beato, is threefold: to help people understand that AI can increase their agency, that is, their capacity to think freely and autonomously; to build an intelligence infrastructure that is as ubiquitous and as easily accessible to everyone as the knowledge infrastructure that was, and is, the internet; and to encourage people to actually use AI, to engage with it, and thereby to participate in shaping it.
§7 A New Epoch
At one point, while reading Superagency, I lost my capacity to distinguish between the intent of its authors and what my expertise in the history of thought made me see in their book.
In some sense, I didn’t want to distinguish between reader and authors — because the blurring of perspectives enabled me to see AI in new ways, ways I find exciting and important.
First, the blurring enabled me to clearly see that AI has extraordinary — extraordinarily scary — philosophical stakes:
It runs the risk of undermining the freedom and the autonomy that, thanks to the Enlightenment, we have taken for granted for more than three centuries.
Second, it also enabled me to see that AI has a hugely exciting philosophical potential:
If we manage to build and use AI so that we can grow human autonomy and freedom (the number of things I can think and the space of possibility in which my thinking takes place), then AI can be the next chapter in the complicated human quest for freedom.
Call it agency 2.0, or Enlightenment 2.0.
This is not hyperbole. For our fundamental categories of experience remain open to reinvention.
If agency could emerge, at the turn from the 17th to the 18th century, as a new form of human self-comprehension, then there is no reason why AI cannot occasion another such moment of conceptual (self-)transformation.
The challenge, as I see it, is to build AI with its philosophical stakes and potentials in view:
For this to be possible, we need to combine philosophical research (a careful, experimental study of how AI challenges the conceptual assumptions that have defined the human throughout the modern period) and technology development (the actual building of AI).
We need to learn how to practice AI as an experimental philosophy of what it is — or could be — to be human today.
Hoffman and Beato give me reason to hope that we can do this — that we are not ending the story of human agency but writing its next chapter.
~