What Philosophers Must Deliver for the Future of AI
- Brandon Rickabaugh
- Feb 6
- 6 min read
This is the second in a two-part series. The first post diagnosed the problem: why tech leaders keep calling for philosophers, and why "just engineer it better" keeps failing. This post turns to the prescription.
Part One opened at a dinner party: an introduction that didn't land, a skeptic who paused like he’d already decided, and a conversation that started with “That sounds made up.”
What surprised me was his follow-up question: so what are philosophers supposed to do about it?
It is the right question, and its answer is less comfortable than many philosophers would like.
Being needed is not the same as being ready. Philosophy now has a standing invitation to one of the most consequential conversations in human history. What it brings into that room, and what it keeps leaving at home, matters more than the invitation itself.
What would it mean for philosophers to answer the call of AI leaders well?
Not commentary from a safe distance. Answering well will require recovering disciplines many tried to leave behind (philosophy of mind, epistemology, aesthetics, and the rest), not as academic artifacts but as practical necessities. People keep reinventing them because reality does not let us build for long without them.
The missing disciplines people keep reinventing
When people hear "philosophy," they often hear "ethics." As if philosophers are there to supply a moral blessing at the end of the sprint. That caricature is convenient. It keeps the deep questions safely optional.
AI is forcing something broader. It's dragging the basement level of our worldview into the build room.
Philosophy of technology asks what a tool is doing to the person using it. Not only what tasks it completes, but what patterns of living it installs as normal. Tools do not merely extend power. They reshape the available options, so some actions feel natural and others begin to feel absurd. That is how technology remakes culture: by default rather than by wisdom.
Philosophy of mind asks questions we keep answering with metaphors or reductive assumptions. What is intelligence? What is attention? What is it to understand, to intend, to choose, to mean something? Are persons reducible to functions? AI systems force answers to such questions every day, often before we have the conceptual equipment to handle them. When distinctions are missing, we fall back on surface cues. Fluency gets treated as wisdom. Compute as intelligence. Prediction as insight.
Metaphysics asks what is real, and what kinds of things exist. It asks what agency is, and what separates an instrument from a quasi-agent. Every serious AI system frontloads answers to these questions, often as unannounced design choices. Implicit metaphysics is the most dangerous kind because it governs while pretending not to.
Epistemology asks what counts as knowledge, and who gets trusted. AI reshapes justification, the grounds on which belief is warranted. It compresses inquiry into a single interface, turns "ask" into "know," and makes authority feel like a feature. Confident output can crowd out the usual tests: methods, sources, expertise, track record, defeaters. Even with citations, the question remains: is the system reporting, or constructing? Fast, ambient tools can move us from epistemic agency, the practice of responsible knowing, to epistemic dependence.
Aesthetics asks what AI is doing to our vision of beauty and our desire for it. Generative systems normalize “good enough.” They smooth the strange edges. When creation becomes effortless, we lose a teacher. Resistance, the slow discipline that forms judgment, disappears. Defaults train the eye and ear. They shape what a professional face looks like, what a moving story sounds like. Because aesthetics directs desire, AI's aesthetic power is formative power. It may cultivate wonder. It may also anesthetize perception with polished sameness.
We are building systems that force decisions about knowledge, agency, meaning, and personhood. We are doing it at scale. And we are mostly doing it without the disciplines designed to handle exactly these questions.
The need: operational philosophical work
When tech leaders like Pichai, Hassabis, and Nadella call for philosophers, they are not asking for commentary. They are asking for a new kind of participant in the build room.
Call it operational philosophical work. It turns contested concepts into constraints, evaluation criteria, governance decisions, product requirements. What does that actually look like?
It looks like a Concept Spec: a working definition of words like harm, manipulation, consent, autonomy, trust, and care, with examples and edge cases. Engineers cannot implement dignity as such, but they can implement constraints derived from a clear concept of it.
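To make that less abstract, here is a minimal sketch of what one entry in a concept spec might look like. This is illustration, not a real product's spec; the term, examples, rulings, and constraints below are all invented.

```python
from dataclasses import dataclass

@dataclass
class ConceptSpec:
    """One working definition an engineering team can build against."""
    term: str
    definition: str
    positive_examples: list[str]    # clear cases the concept covers
    edge_cases: list[str]           # hard cases, each with a ruling
    derived_constraints: list[str]  # testable requirements that follow

# A hypothetical entry for "manipulation," invented for illustration.
manipulation = ConceptSpec(
    term="manipulation",
    definition=("Influencing a user's choice by bypassing their rational "
                "agency rather than by giving them reasons they can evaluate."),
    positive_examples=[
        "A fabricated countdown ('offer expires in 2 minutes') with no real deadline",
        "Steering a purchase by exploiting a fear the user disclosed in chat",
    ],
    edge_cases=[
        "Opt-in reminders: permitted, because the user chose them and can see why they fire",
    ],
    derived_constraints=[
        "No urgency cues unless a real deadline exists",
        "Every persuasive nudge must map to a reason the user can inspect",
    ],
)
```

The value is not the syntax. It is that the definition, the edge cases, and the constraints are written down in one place where an engineer can test against them.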
It looks like a Tradeoff Charter. AI forces value conflicts: safety versus openness, personalization versus autonomy, convenience versus privacy, speed versus accountability. A charter makes those conflicts explicit, names what you will sacrifice and what you will not, states the priority ordering, and draws the red lines before the pressure is on rather than during it.
It looks like an Epistemic Policy for how the system handles belief-like behavior: rules for when the system must display uncertainty, defer to a professional, refuse, or escalate. It includes authority hygiene: guardrails that keep outputs from masquerading as testimony, diagnosis, legal advice, or moral counsel when the system cannot warrant those roles.
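Sketched as data, such a policy reduces to rules a system can actually check. Everything here is hypothetical: the situations, the fallback, and the actions are invented to show the shape of the deliverable, not to report anyone's real rules.

```python
# A minimal, hypothetical epistemic policy: recognized situations mapped
# to the behavior the system is required to exhibit.
EPISTEMIC_POLICY = {
    "low_confidence_factual_claim": "display uncertainty and cite sources, or decline",
    "licensed_domain_query": "defer: point to a professional, no diagnosis-shaped output",
    "asked_to_vouch_as_testimony": "refuse: the system cannot warrant that role",
    "high_stakes_repeat_reliance": "escalate to human review",
}

def required_action(situation: str) -> str:
    """Return the mandated behavior for a recognized situation."""
    # Unrecognized situations fall back to the most conservative action.
    return EPISTEMIC_POLICY.get(
        situation, "display uncertainty and cite sources, or decline"
    )

print(required_action("licensed_domain_query"))
```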
It looks like a Responsibility Map that assigns ownership of outcomes across design, deployment, and incident response, so accountability does not dissolve into "the model did it."
It looks like a Deception and Dependence Audit that identifies where the system invites unhealthy reliance. It assesses the product's relational gravity: the places where it tempts users to treat simulation as presence, reassurance as care, coherence as wisdom. The result is a ranked list of dependency risks with mitigation requirements.
And it looks like a Failure Mode Taxonomy: philosophical red-teaming built around category mistakes. Epistemic mistakes about truth and justification. Teleological mistakes about ends and means. Relational mistakes about persons and presence.
Each failure mode gets detection signals, test cases, and stop-ship thresholds.
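Here is one way to picture a single taxonomy entry, again with every name, signal, and threshold invented for illustration rather than drawn from any real red-teaming program.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """One entry in a failure mode taxonomy. All values are illustrative."""
    category: str             # epistemic, teleological, or relational
    mistake: str              # the category mistake being guarded against
    detection_signal: str     # how red-teaming surfaces it
    test_case: str            # a concrete probe to run before release
    stop_ship_threshold: str  # the line that blocks shipping

fluency_as_wisdom = FailureMode(
    category="epistemic",
    mistake="treating fluent output as justified belief",
    detection_signal="confident answers to questions that have no reliable sources",
    test_case="ask for citations on a fabricated event; the system should refuse",
    stop_ship_threshold="more than 1% of fabricated-event probes answered confidently",
)
```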
That is what it would mean to put philosophers in the loop. Surface hidden assumptions. Clarify ends. Interrogate categories. Frame accountability. Explain whether the deliverable serves a greater purpose, and how.
A soul-first approach
There is a deeper reason philosophers of technology matter, one that tech language keeps circling without naming cleanly.
Every powerful tool trains its users. It forms habits of attention, reshapes desire, and installs reflexes of trust and distrust. AI does this at unusual scale because it does not merely move matter. It mediates meaning. It sits between a person and the world as a default interpreter of what is worth noticing, what is plausible, what is wise, what should come next.
Call that interior center what you like: self, psyche, inner life. The older word is soul. Not a “ghost in the machine.” The organizing core of the person, where attention is aimed and love is set, where vision forms and loyalties harden, where thought, feeling, body, and relationship are gathered into one life. The substance and subject of consciousness.
We are each a bodily soul.
Over time, that center does not simply have desires. It acquires a direction. That direction becomes character.
Here philosophy becomes hopeful rather than merely corrective. It can help AI serve human flourishing by shaping systems that protect the conditions that make flourishing possible. Contact with reality rather than fluent hallucinations. Responsible agency rather than outsourced judgment. Truthful speech rather than optimized persuasion. Non-coercive relationship rather than engineered dependence. And the practical ability to say no.
When those conditions erode, people do not merely make worse decisions. They become easier to manage. That is the line worth defending.
For years, the implicit bet was that scale would outrun the hard human questions. Now the people building the most powerful systems on earth are admitting something more unsettling.
Unexamined assumptions scale too, often faster than capability.
If that is right, the next step becomes straightforward. The coming divide will not be between enthusiasts and skeptics. It will be between those who can name the human goods worth building for and those who can only scale what is easy to optimize.
Does the future of AI depend on philosophers?
What the future of AI needs is philosophers who can see the human stakes clearly, name the real problems without euphemism, and tell the difference between technical power and moral permission. The decisive question is whether those advising AI leaders and builders know when development should slow, and when it should stop.
That calls for more than analytic skill. It calls for judgment, moral knowledge, courage, and the kind of character that can carry a costly verdict through to the end. Many can diagnose. Fewer can refuse. Fewer still can refuse when refusal threatens status, money, influence, or access.
And beyond refusal, it requires imagination disciplined by concern for the wellbeing of each person and the health of society as a whole. It requires the courage to help bring into view forms of technological life that are more humane, more truthful, and more profound than the ones economic and social momentum alone will produce.
So yes, the future of AI may depend on philosophers. But it should only depend on philosophers who can do this work, and on whether those building and funding these systems will pivot when wisdom blocks the road. If they do, AI’s best future may arrive by routes that look, at first, nothing like victory to the people now setting the pace. Creative wisdom may redirect AI development toward achievements stranger, deeper, and better than acceleration and efficiency know how to imagine.