
Sam Altman
Turned You into a Cost
What happens when the metric rewrites the world it measures
Brandon Rickabaugh, PhD
3.4.2026
The Hook: Brandon Rickabaugh, PhD, argues that treating human development as what Sam Altman called an AI "training cost" is a category error. While AI is optimized for efficiency, humans are defined by answerability—a moral standing that cannot be automated without losing justice.
Table of Contents
Value of a Statistical Life
Altman's Move
Metrics and Moral Frames
Training Equivocation
Where the Drift Goes Next
Reclaiming the Stack
Value of a statistical life.
One of the fastest ways to change how people treat human beings is to change what counts as a fair comparison.

In the 1950s, the RAND Corporation ran into a moral deadlock over pilot safety spending. If a US Air Force pilot was a "God-given soul," the cost to save him was infinite. If he was a "replacement part," his worth collapsed to the price of a new recruit.

Economist Thomas Schelling broke the stalemate by shifting the question. Stop asking about this person, he suggested, and start asking about risk: how much is society willing to pay to reduce the probability of death across a population (AEA)?

That move gave us the “Value of a Statistical Life,” a metric that makes human life legible to cost-benefit analysis. Several regulatory bodies now use it to value a statistical life at roughly $10 million. It succeeded because it rebranded the human as a data point in a cost-benefit analysis.
​
The number itself is not the main story. The story is the shift in gaze: you stop seeing a person and start seeing a variable. Once you pick a ruler, the ruler starts redesigning the moral landscape, limiting how we can measure the value of things and people.
VSL = ΔWTP / ΔRisk ≈ $10 million
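To see the arithmetic behind that figure, here is a minimal worked example. The specific numbers ($100 per person, a 1-in-100,000 risk reduction) are illustrative assumptions, not figures from Schelling or any agency:

```python
# Illustrative VSL arithmetic with hypothetical numbers.
# Suppose each person in a population will pay $100 (delta_wtp)
# for a 1-in-100,000 reduction in their probability of death.
delta_wtp = 100.0          # dollars per person
delta_risk = 1 / 100_000   # reduction in probability of death

vsl = delta_wtp / delta_risk
print(f"${vsl:,.0f}")      # → $10,000,000
```

No individual is ever valued at $10 million; the number emerges from many small trades against small probabilities, which is exactly the shift in gaze the essay describes.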
Sam Altman made a similar move with a line meant to calm anxiety about AI’s energy appetite.

Altman's move.
Speaking at the AI Impact Summit in India, Altman argued that AI critics make an unfair comparison when they set the energy cost of training a large language model next to the cost of a single human response. “But it also takes a lot of energy to train a human,” he said: about twenty years of life and the food consumed during that time, plus the long inheritance of culture and evolution that makes a mind possible. The fair comparison, he suggested, comes after the model is trained, question for question versus a trained human. “Measured that way,” he suggested, AI may already be as energy-efficient as we are.
Altman is reading from a common script. Yuval Noah Harari calls humans “an assemblage of organic algorithms.” Bill Gates predicts that in medicine “the machine will probably be superior to humans,” because AI will outclass the diagnostic capabilities of doctors. The assumption: doctors, insofar as they are useful as doctors, are reducible to diagnostic capabilities. Elon Musk has described us as a “biological boot loader” for digital superintelligence.
​
Different settings, same rhetorical move: they invite you to think of personhood as computation, and computation as the thing that matters most. It is a way of deciding, in advance, what the world looks like. You pick the ruler, and the ruler dictates what gets measured and how. Some things get counted. Other things get ignored.
​
Most reactions to Altman’s remark argue about the numbers. That debate has its place. Data centers consume real power. Model training and inference have real costs. Energy accounting should be honest.
​
The deeper concern is not a disputed spreadsheet. It is the inference hiding in the comparison, the way a clever metric can smuggle in a picture of the human person as the same kind of thing as an LLM.
​
Here is what the mind is tempted to conclude, almost without noticing:
​
If both humans and LLMs can be described as “things” that are “trained” to produce answers, then they are comparable in the relevant way. If they are comparable in the relevant way, then the right standard is efficiency per answer. If efficiency per answer is the right standard, then the norms we use to evaluate tools can govern how we evaluate persons in the domains where answers matter.
​
That is the drift. It is rarely announced as a theory. It arrives as practice.
Metrics and moral frames.
Notice the widening move inside Altman’s remark. He starts with a narrow complaint about bad comparisons. Then he folds into “training” twenty years of human development, the calories that sustained it, and the long civilizational inheritance that formed a mind. The rhetorical strength is plain. Scattered realities become legible under one heading: cost.
​
Think of the debate as a comparison stack with three layers.
Cost layer: How much energy does a process consume? Energy per query. Dollars per case. Seconds saved.
Kind layer: What sort of thing are we talking about? A tool we operate, an organism we manage, or a person we address.
Norms layer: Which standards govern evaluation? Efficiency, truthfulness, justice, responsibility, trust.
The deception of powerful metrics is that they pretend to stay in the first layer. They do the math in the Cost layer, then quietly rearrange the Kind and Norms layers. They make certain questions feel outdated or sentimental, and other questions feel like the only adult questions left.
​
Training equivocation.
In machine learning, “training” is external optimization. Engineers specify an objective, define a loss function, adjust parameters, and deploy. The system is tuned to reduce an error signal relative to a targeted goal. The language fits the thing. A designed artifact gets shaped toward an external aim.
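The paragraph above can be seen in miniature. A sketch of what “training” literally means in machine learning, using a hypothetical one-parameter model (not any production system): an objective fixed from outside, a loss, and a parameter nudged until the error shrinks.

```python
# Minimal sketch of "training" as external optimization:
# a designed artifact tuned toward an externally chosen target.
target = 3.0   # the objective, specified by the engineer
w = 0.0        # the model's single parameter
lr = 0.1       # learning rate

for _ in range(100):
    loss_grad = 2 * (w - target)  # gradient of the loss (w - target)^2
    w -= lr * loss_grad           # adjust the parameter to cut the error

print(round(w, 3))  # the parameter has been pulled to the target: 3.0
```

Everything here is imposed from outside the system: the target, the loss, the update rule. That is the precise, legitimate sense of “training” that the essay argues cannot be stretched over a human life without equivocation.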
​
Human development can include training in the ordinary sense: drills, habits, skill acquisition. You can train for basketball, for surgery, for piano. But the growth of a child into a mature person cannot be captured by optimization language without distortion. Even in sports, education, and art, training is only part of what forms a practitioner.
​
A human being does not merely meet targets. A human being lives inside ethical norms. That means more than behaving predictably. It means being the sort of creature for whom reasons can have authority. For whom truth can be loved. For whom the good can be recognized as binding even when you don't want the constraints.
​
One name for this difference is answerability: responsiveness to reasons and ownership of who you've become and what you bring into the world. It is the standing that makes “Why did you do that?” a serious question. A person can be guilty and irrational, or praiseworthy and virtuous. A tool can only work or malfunction. A person can lie. A tool can only output.
​
This is not pious abstraction. You can watch it happen in education.
​
A student gives the correct answer for a bad reason. A crude grading scheme registers success. The worksheet is satisfied; the algorithm is satisfied. But a serious teacher presses anyway. How does that follow? What did you see? Why did you choose that step? The aim is not output. The aim is formation: becoming someone who cares about grasping the truth, someone who can be corrected because he sees, not merely because he can produce.
​
A generated explanation and an accountable judgment can look similar on the surface while belonging to different orders of life. One can be prompted into existence. The other can be demanded, contested, defended, withdrawn, repented of.
​
That is why answerability matters. In domains where answerability is part of the point, “efficiency per answer” is not the governing norm. It is, at best, a constraint.
​
Where the drift goes next.
It is tempting to treat this as a rhetorical hazard, a philosopher’s complaint about metaphors. But the drift is already built into institutions that reward only what they can count, and it shows up in the details of ordinary life.
​
Consider the difference between an answer and a justification. An answer can be correct and still unowned. A justification is something a person stands behind. Institutions increasingly want the first and quietly downgrade the second.
​
In its 2025 report Governing with Artificial Intelligence, the OECD found that 57% of analyzed government AI use cases support “automating, streamlining or tailoring services.” Throughput wins. Civic trust and answerability drop out because they do not fit procurement columns. The point is not that automation is always wrong. The point is what becomes normal once “measured that way” becomes the default frame.
​
Here is how it looks in practice. A benefits applicant is denied. A family’s paperwork is flagged. A citizen is told they are ineligible, high risk, low priority, noncompliant. When they ask why, they receive an explanation that sounds like a reason but functions like a report: a summary of factors, a confidence score, a policy citation. The surface resembles accountability. The structure does not.
​
A human being, having a mind, emotions, and a will, can be questioned in a way that bites. You can press. Which evidence did you weigh? What were you allowed to consider? What did you ignore? Who is responsible for this judgment?
A person can be embarrassed by a question. A person can be shown to be unjust. A person can be held answerable, accountable.
​
When a model output or an opaque workflow sits in the middle, answerability becomes overhead. It becomes a feature you add if you have time. A compliance document you generate after the fact. A customer service script. What cannot be cleanly counted becomes hard to defend, then easy to drop and ignore.
The reduction does not arrive as a manifesto. It arrives as a set of defaults: faster, cheaper, smoother. Fewer humans in the loop. Fewer opportunities for refusal. Fewer places where a human must give an account.
​
This is how a metric becomes an environment. Really, it is how a metric corrodes moral knowledge.
​
Reclaiming the stack.
None of this requires us to deny that energy matters. Altman is right that honest accounting is worth doing, and cheap moral outrage is not a substitute for measurement. But energy cannot tell us what a person is. Cost cannot adjudicate Kind. Efficiency cannot settle Norms in domains where answerability is constitutive.
​
The point is not “never compare.” The point is compare without letting the Cost layer colonize the rest of reality.
​
RAND learned that once you pick a ruler, the ruler starts redesigning the landscape. We are doing that again. When cost/benefit math drifts into Kind and Norms, institutions stop asking what we owe persons and start asking what we can optimize.
​
If you want a simple diagnostic, watch what happens to the word “why.” In a flourishing moral life, “why” is not a request for output. It is a request for accountability. It assumes a person who can be responsible for speech, a citizen who can be responsible for action, a teacher who can be responsible for judgment. It assumes persons.
​
In the domains where people can be sanctioned, excluded, graded, denied care, denied benefits, denied opportunities, or deprived of rights, answerability is not a luxury. It is part of what justice is. It's part of what mercy is. Trust is not a byproduct of speed. Trust is built when reasons can be demanded and owned, when responsibility can be located, when the question “Why?” has teeth.
​
Measure energy and efficiency. Build cleaner power. Improve systems. Cure diseases. But do not let the language of “training cost” become the language of explaining what a person is. That metaphor is cheap in the way a sedative is cheap. You pay later in a currency you cannot measure.
​
If we want a future worth living in, we will have to learn how to keep the layers distinct: to do accounting without turning persons into instruments, to use tools without adopting tool norms as our moral law, and to insist, stubbornly, that some parts of reality only come into focus when you refuse to measure them “that way.”

About the Author
Brandon Rickabaugh, PhD, is a philosopher and author specializing in the philosophy of mind, consciousness, and digital ethics. He is the founder of the NOVUS Initiative and a former Associate Professor of Philosophy. His work explores the intersection of the soul, human flourishing, and emerging technology. For more on these topics, explore his other Popular Writing or view his full research profile.





