
Why Silicon Valley’s Brightest Minds in AI Are Calling on Philosophers

Brandon Rickabaugh, PhD

In Silicon Valley, leaders are rarely short on confidence. They speak the language of inevitability: “disruption,” “exponential,” “world-changing.” They assure us that what they are building is not just profitable but transformative. Yet in recent months, some of the most powerful people in technology have begun saying something few expected: the future of AI depends on philosophers.

This isn’t a nostalgic nod to the humanities. It is an admission of limits. A recognition that technical brilliance alone has delivered staggering capabilities and equally staggering failures. And a recognition that the problems left unsolved are not mathematical. They are philosophical.

 

A Surprising Chorus

The shift was evident in a high-profile moment on 60 Minutes. Sundar Pichai, CEO of Google and Alphabet, was asked who should guide the future of AI. His answer was not more engineers. “How do you develop AI systems that are aligned to human values, including morality?” he asked. “This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers and so on” (CBS News).

For Pichai, AI is “more profound than fire or electricity” (Fortune). Still, he admits that society has no consensus on what human values should guide it, or how fast institutions can adapt. The mismatch between technological acceleration and moral reflection has become his central worry.

 

Demis Hassabis, CEO of Google DeepMind, echoes the call. He has warned that AI’s social transformation could be “ten times bigger and maybe ten times faster than the Industrial Revolution” (Wired). Hassabis argues that society will need an entirely “new political philosophy” to handle the disruptions AGI may bring: shifts in jobs, infrastructure, and even the basic terms of democracy (Time).

 

Steven Johnson, author and now editorial director for Google Labs’ NotebookLM project, has been more candid. “There’s just a whole set of questions around AI that no one was thinking about, except for philosophers, until about two years ago” (Business Insider).

Stephen Wolfram, the mathematician and creator of Mathematica, cuts sharper still. After four decades at the intersection of computation and science, he shrugs off the euphemism of “guardrails.” “When you start talking about guardrails on AI, these are essentially philosophical questions… ‘Well, what is the right thing?’” (TechCrunch).

Mira Murati, former CTO of OpenAI, sees the same need: “There are a lot of ethical and philosophical questions that we need to consider. It’s important that we bring in different voices, like philosophers, social scientists, artists, and people from the humanities” (Time).

Satya Nadella, CEO of Microsoft—the company that has bet billions on OpenAI—was bluntest of all: “We need some moral philosophers to guide us on how to think about this technology and deploying this technology” (Axios).

And beyond individual companies, UNESCO has insisted that philosophy be embedded in governance. Its Recommendation on the Ethics of AI, adopted by 194 countries, declares: “Philosophy, ethics, and human rights must be embedded alongside technical standards in the global governance of AI” (UNESCO). Even MIT Sloan Management Review now advises executives that “generating sustainable business value with AI demands critical thinking about the disparate philosophies determining AI development, training, deployment, and use” (MIT Sloan).

 

Why Now?

Why are these leaders saying this now, after two decades of “move fast and break things”? The answer is simple: the breaks have become impossible to ignore.

The Horizon system in the UK Post Office scandal treated machine logs as courtroom truth. Hundreds of innocent people were prosecuted, families destroyed, reputations ruined—all because data was mistaken for testimony. That was not a software bug. It was an epistemological error.

Facebook’s entanglement with Cambridge Analytica showed what happens when “engagement” is treated as an unquestioned good. Narrow metrics hollowed out the concept of “community,” creating fertile ground for manipulation at scale.

Amazon’s much-hyped “Just Walk Out” stores, heralded as the future of retail, collapsed when it was revealed that human labor—not AI magic—was doing much of the work. The deeper mistake was philosophical: mistaking surveillance for trust, and novelty for genuine service.

Google Glass failed for the same reason. It never convincingly answered what human good it was supposed to serve. Instead it imposed a worldview in which every glance might be recorded.

Microsoft’s Tay chatbot is another warning. Within hours of its launch, Tay was manipulated into producing racist, misogynistic content. The failure was not in the NLP system itself but in the assumption that AI could learn “authentically” from the internet without moral guidance.

More recently, AI companion apps like Replika promised intimacy but often deepened loneliness. AI mental health apps risk something similar: patterned text mistaken for care, fluency mistaken for presence (Washington Post).

The pattern is clear. These failures are not simply bugs. They are conceptual mistakes—about truth, trust, intimacy, and harm. And they reveal why tech leaders are now willing to say, in public, that they cannot proceed without philosophy.

 

What They Need Philosophers to Do

When Pichai, Hassabis, Murati, Nadella, and others call for philosophers, they are not asking for decorative moralizing. They are looking for a new kind of participant in the build room.

  • Surface hidden assumptions. Philosophers are trained to ask: What is this tool for? What vision of the human does it encode? What forms of relationship does it reward or discourage?

  • Clarify ends. Fraud prevention is not trust. Engagement is not community. Philosophers name the difference.

  • Interrogate categories. Defining urgency as “response time” reproduces inequity. Treating logs as witnesses misclassifies data as testimony. Philosophers can catch these errors before they scale.

  • Frame accountability. Who is responsible when a system persuades, manipulates, or harms? Engineers may say “the model.” Executives may say “the market.” Philosophers insist: responsibility belongs to persons.

  • Tell the story. Products need a narrative of purpose that holds up to scrutiny. Philosophy helps answer the hardest question: Why does this deserve to exist?

  • Translate tradition. Bring the philosophy and history of technology to bear on particular technological developments, so that old lessons inform new systems.

 

The Challenge Ahead

What’s most striking is not just that AI leaders are acknowledging the need for philosophers. It’s that they are doing so publicly and urgently. For decades, the prevailing mantra was that scale would sort out the problems. Now, CEOs and CTOs are conceding that unexamined assumptions are problems scale cannot solve.

The profile of the philosopher they need is not decorative. It is insurgent and consultative at once. Someone who can derail billion-dollar roadmaps when they’re built on illusions, and also help design better systems in their place.

The leaders have already spoken: Pichai says philosophers belong at the table. Hassabis warns we will need a new political philosophy within a decade. Murati insists they must be “in the loop.” Nadella calls for moral philosophers. Wolfram admits guardrails are “philosophy all the way down.” Johnson points out no one else was asking these questions. UNESCO demands philosophy embedded in global governance. MIT Sloan says ignoring it risks conceptual drift.

The question now is whether philosophers themselves will step up—not to decorate, but to decide. Not to soften the conversation, but to sharpen it.

 

Because the failures of the past decade have shown what happens when philosophy is absent. And the most powerful people in AI seem to know it.
