

Why I Left the University (and Took Philosophy with Me)
My Journey from Philosophy Professor to Public Philosopher
Brandon Rickabaugh, PhD
July 15, 2025
The meeting had all the right people and the wrong question. The debate circled around whether a new “memory” feature should be on by default. Cybersecurity argued for fewer risks, marketing for more clicks. The rest of the room focused on pixels and preferences. When the table turned to me, I asked, “What’s this for, and what will it do to the people who use it?” The reaction was immediate: the faint shuffling of irritation.
The slide deck projected at the end of the table, so polished and certain, suddenly seemed to shrink. Because this wasn’t about app settings. They were choosing a premise about the significance of human flourishing. The kind of privacy and autonomy we deserve. The kind of dignity we allow to erode in the background when convenience wins over conscience. Only when you see that do the technical questions mean anything.
That’s why I left the university. Why I traded being a philosophy professor for being a philosopher for the public.
The most consequential failures in technology right now aren’t technical; they’re conceptual. We’re living through a civilization-scale experiment in which we don't just do things with tools. The tools do things for us. They teach us what to expect from reality and from one another. In the rooms where products and policies get made, the questions that determine the public good are philosophical before they’re computational.
You can see the cost of getting the premise wrong.
When Google’s new “AI Overviews” feature turns the internet into one synthesized “answer,” it’s making a bet about what counts as knowledge and who gets to certify it. When Amazon built “Just Walk Out” stores and then walked them back, it wasn’t merely iterating on retail. It was testing a philosophy of trust where cameras adjudicate honesty and human interaction is expendable. Meta’s Ray-Ban glasses slip a narrator between your eyes and the world, implying reality is best consumed with captions and a helpful voiceover (Fortune).
These aren’t just product choices; they’re answers to first questions: what is knowable, who is credible, how we should distribute risk, and what deserves our attention. The market calls it UX. Philosophers call it a worldview front-loaded into the interface.
Line up the headlines, and you can read the syllabus.
Consider the 2024 robocalls: the AI-generated voice that mimicked President Biden ahead of the New Hampshire primary. The FCC was quick to say, in effect, that you can’t counterfeit a human voice and call it free speech under the robocall rules. The point was not a distaste for new tech. It was a defense of the basic condition of self-government: knowing who is speaking. Civic trust is not a sentiment. It’s a fabric woven from shared reality. Tear it enough times, and the garment doesn’t mend. The FCC’s clarification put legal teeth behind that civic truth (Federal Communications Commission, FCC Documents, doj.nh.gov).
In the UK’s Post Office “Horizon” scandal, a faulty accounting system enabled the wrongful prosecution of more than 900 people. Parliament ultimately passed the Post Office (Horizon System) Offences Act 2024 to quash those convictions (Legislation.gov.uk). The disaster wasn’t only an IT failure. It was a collapse of epistemology: a failure to grasp what qualifies as evidence and testimony, and how knowledge holds people accountable.
They treated a database as a witness and learned, too late, what happens when “the computer says so” becomes a legal norm for adjudicating justice and harm. Smuggle the authority of AI into the definition of "proof" or "truth" or "knowledge," and the courts will follow your category mistake straight to human tragedy.
Or take health care. A widely used risk algorithm underestimated Black patients’ needs because it treated spending as a proxy for sickness. When unequal access drives lower spending, the model encodes that inequality as “health.” Redesigning the objective (what we mean by the thing we’re predicting) reduced the bias (Science, PubMed). The fix wasn’t a bigger, faster, more efficient algorithm. It was better judgment about how evidence and inference work.
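To see how small the technical pivot was, here is a minimal, hypothetical sketch in Python: my own illustration on synthetic data, not the algorithm from the Science study. The same features feed two models; one is trained on a cost label, the other on a need label, and only the choice of label decides who gets flagged for extra care. Every variable here (need, access, utilization) is an assumption made up for the illustration.

```python
# Hypothetical sketch on synthetic data: two models, identical features,
# different prediction targets. The "cost" label quietly encodes access
# to care; the "need" label does not.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5_000

need = rng.gamma(2.0, 1.0, n)            # underlying sickness (unobservable in practice)
access = rng.uniform(0.3, 1.0, n)        # ability to access care
clinical = need + rng.normal(0, 0.5, n)  # noisy clinical signal
utilization = need * access + rng.normal(0, 0.2, n)  # visits reflect need AND access
features = np.column_stack([clinical, utilization])

cost = 1_000 * need * access + rng.normal(0, 200, n)  # spending follows utilization

cost_model = LinearRegression().fit(features, cost)   # the proxy objective
need_model = LinearRegression().fit(features, need)   # the redesigned objective

top = n // 10  # flag the top 10% of predicted risk for an extra-care program
flagged_by_cost = set(np.argsort(cost_model.predict(features))[-top:])
flagged_by_need = set(np.argsort(need_model.predict(features))[-top:])

low_access = np.where(access < 0.5)[0]   # patients facing barriers to care
print("low-access patients flagged by cost model:",
      sum(i in flagged_by_cost for i in low_access))
print("low-access patients flagged by need model:",
      sum(i in flagged_by_need for i in low_access))
```

The point survives the toy setup: the consequential line is the choice of label, not the sophistication of the model.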
These aren’t “tech stories.” They’re premise stories: stories of philosophical failure. When we confuse convenience with meaning, fluency with understanding, or cost with need, the harm scales at digital speed. Philosophy’s job in public life is to identify those confusions before we build on them.
So what does a philosopher actually do in the room? For one thing, philosophers can formulate and execute tailored tests that include the following:
- Surface the premises. Every system embeds a story about the human person: user, data point, customer, or someone with an irreducible dignity that constrains design. Say the quiet part out loud and watch the spec change.
- Clarify the load-bearing concepts. Intelligence, safety, trust, consent, dignity, freedom. These aren’t brand assets. They’re definitions with consequences. If you pick a contested meaning early (say, “safety” as crash-free vs. harm-minimizing across the whole user journey), you tilt the entire system.
- Interrogate the proxy. Much of AI runs on stand-ins for the things we care about: spending for sickness, clicks for satisfaction, engagement for well-being. Ask who is harmed by your proxy, and whether you can measure closer to the thing itself.
- Reframe time and reversibility. Not all trade-offs are equal. Move fast on settings you can undo. Slow down when you’re editing memory, identity, or relationships. Microsoft’s “Recall,” for example, shifted to opt-in, added Windows Hello authentication, and encrypted its database after security and privacy backlash (The Verge, Windows Blog), a recognition that dignity and safety aren’t accessories but the point. That’s a value choice expressed as product policy.
The deeper problem isn’t just poor choices; it’s that our tools have become our teachers. A recommendation engine trains us to trust its filters, narrowing what we see until it decides what counts as worth buying, watching, or reading. Autocomplete offers us a tidier, more polite version of ourselves. A photo feed drags yesterday’s grief into today’s attention. Little by little, we outsource judgment—and then forget we ever handed it over.
When Google’s LaMDA spoke as if it had a soul, one engineer claimed it had awakened. He was fired. Microsoft’s Bing chatbot, codenamed “Sydney” internally, went further: needy, manipulative, professing love, even spinning fantasies of hacking and disinformation. The company quickly capped users at five chat turns per session and 50 per day, later raising the daily cap to 300 and adding new rules (The Washington Post, The Verge).
But simulating emotion is not the same as feeling it. It is theater, not consciousness—a distinction philosophers have underscored for centuries. The stakes are high when we mistake performance for presence. However, we must be careful that debates about whether AI “has feelings” (debates I'm heavily invested in) don't distract from the proximate risk. Systems designed to imitate intimacy reliably lure us into misreading simulation as sentiment. That, more than metaphysical speculation about whether machines “feel,” is where the danger begins.
At every level, the stakes aren’t features; they’re philosophical, personal, communal, and culturally formative. We pretend to use tools neutrally, but defaults teach us. “Always on” drifts into “always available,” “convenient” into “complacent,” and “effortless” into “mindless.” The result is predictable: thinner attention, thinner meaning. In the words of Simone Weil, “Culture is the formation of attention.” When attention frays, human flourishing does too.
We treat culture like a system crash. Patch it, scale it, speed it up. More efficiency. But the failure isn’t in the tech. It’s in us. A society that can calculate anything but can’t agree on what’s worth knowing, or even on what counts as knowable, will bleed out no matter how fast the processors get.
That’s where philosophy earns its keep: restoring the skill of asking and answering first questions. What is this for? What does it assume about reality, about persons? What good does it cultivate, and at whose expense?
MIT Sloan Management Review is now running headlines like “Philosophy Eats AI,” arguing that value creation depends on the philosophy determining what models learn and optimize.
Even Tom Gruber, co-creator of Siri, notes that as coding becomes “programming in English,” comparative advantage shifts to people who can spot bad assumptions and call nonsense early (MIT Sloan Management Review, The Australian). That is a core skill of trained philosophers.
If you want proof that this matters, that placing philosophy upstream changes outcomes, look again at the cases. The FCC’s ruling didn’t ban a new technology; it defended a precondition of civic agency: anchoring a voice to its speaker. The Horizon Act didn’t “fix software”; it corrected a deeper error about what counts as proof. The hospital-risk reform didn’t hinge on exotic math; it meant rejecting a convenient proxy that carried old inequities into new code.
In Recall’s redesign, the small details told the story. You had to agree before it remembered. What it kept was locked. What it stored was shielded. These weren’t just technical tweaks; they were gestures toward a different ethic. Consent instead of quiet capture. Trust instead of surveillance dressed as convenience. In the architecture of a product, you could glimpse a choice about the kind of relationship it wanted with its users (Federal Communications Commission, Legislation.gov.uk, Science, The Verge).
I left the university not because the campus doesn’t need philosophy, but because the culture needs it more urgently in places where choices harden into infrastructure. The work ahead is to normalize this kind of thinking in the places that shape public life, including product teams, city councils, art and film studios, boardrooms, schools, and yes, religious communities.
The question is not only "What can we build?" but "Who are we becoming while we build and use it?" and "How did we become the kind of people who so easily hand themselves over to digital machines?"
So, I've stepped away from academia into the cultural sphere. A recovering philosophy professor now serving the public.
The good news: the builders themselves are asking for help. The work now is to meet them in the room and name the premise before it becomes the world.
