
The Cost of Altman’s Metaphor

  • Writer: Brandon Rickabaugh
  • Mar 4
  • 5 min read

Updated: Mar 13



When we measure humans by their "training cost," the person falls out of the frame.



One of the fastest ways to change how people treat human beings is to change what counts as a fair comparison.

 

In the 1950s, the RAND Corporation faced a moral deadlock over pilot safety spending. If a US Air Force pilot was a “God-given soul,” the cost to save them was infinite; seen as a “replacement part,” their worth plummeted to the price of training a new recruit. Economist Thomas Schelling broke the stalemate by shifting the focus from the person to the probability of death.



This shift was expressed by what Schelling called the “Value of a Statistical Life.” That metric is now used by several regulatory bodies to value a human person at roughly $10 million. It succeeded because it rebranded the human as a data point in a cost-benefit analysis.

 

Sam Altman made a similar move with a line meant to calm anxiety about AI’s energy appetite.


Speaking at the AI Impact Summit in India, Altman argued that AI critics make an unfair comparison when they set the energy cost of training a large language model next to the cost of a single human response. “But it also takes a lot of energy to train a human,” he said: about twenty years of life and the food consumed during that time, plus the long inheritance of culture and evolution. The fair comparison, he suggested, comes after the model is trained: question for question, model versus trained human.


“Measured that way,” he concluded, AI may already be as energy efficient as we are.
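For concreteness, here is what the “measured that way” arithmetic amounts to: a one-time training cost amortized over lifetime answers. A minimal sketch follows; every number in it is an illustrative placeholder of my choosing, not a measurement of any real person or model.

```python
# Back-of-envelope sketch of Altman-style amortized energy accounting.
# All figures below are illustrative placeholders, not measurements.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

# "Training" a human, on Altman's framing: ~20 years at a rough
# whole-body metabolic average of ~100 watts (assumed figure).
human_training_joules = 100 * 20 * SECONDS_PER_YEAR

# Hypothetical model training budget: 10 GWh (a placeholder, not a
# figure for any actual model). 1 kWh = 3.6e6 joules.
model_training_joules = 10e9 * 3600

def amortized_joules_per_query(training_joules, queries_answered):
    """Spread the one-time training cost over every answer given."""
    return training_joules / queries_answered

# The amortized cost falls with lifetime query volume, so the framing
# favors whichever system answers more questions over its lifetime.
human_cost = amortized_joules_per_query(human_training_joules, 1e6)
model_cost = amortized_joules_per_query(model_training_joules, 1e12)

print(f"human: {human_cost:.0f} J/query, model: {model_cost:.0f} J/query")
```

The sketch makes the essay's point visible: the conclusion is driven entirely by which denominators you grant each side, which is a Cost-layer choice, not a discovery about what a person is.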


Most reactions to Altman’s line argue about whether the numbers add up. I’m more interested in the illicit inference hiding in the comparison: the way a clever metric can slip in a picture of the human person as the same kind of thing a model is, an input-output box with a fuel bill.

 

Altman is reading from a common script.



Yuval Noah Harari calls humans “an assemblage of organic algorithms.”






Elon Musk has described us as a “biological boot loader” for digital superintelligence.



Different settings, same maneuver. A metaphor shows up as a convenience, then gets promoted into a category, and soon the category starts doing moral work. It is a way of deciding, in advance, what the world looks like. You pick the ruler, and the ruler decides what can be measured and how.


Some things get counted. Others get ignored.


 

Metrics and Moral Frame

Notice the widening move inside Altman’s remark. He starts with a narrow complaint about bad comparisons. Then he folds into “training” twenty years of human development, the calories that sustained it, and the long civilizational inheritance that made your mind possible. The rhetorical strength is plain. Scattered realities become legible under one heading: cost. Human growth becomes an energetic pipeline. History becomes an amortized training run.

 

At one level, this is unobjectionable. Metrics do not lie. They simply leave whole realities off the invoice. The trouble begins when a valid comparison in the energy domain is used to overwrite questions that do not belong to it.

 

Think of the debate as a comparison stack, with three layers:

 

Cost layer: How much energy does a process consume? This is an engineering and infrastructure question.


Kind layer: What sort of thing are we talking about: a tool, an organism, or a person? Cost data cannot settle this.


Norms layer: Which standards govern evaluation: efficiency, truthfulness, justice, responsibility? Energy math does not choose these standards.

 

A cost metric can tell you what something consumes. It cannot tell you what it owes.

Here is the failure mode in one line. Equivocation on “training” enables an illicit inference, and the illicit inference is a layer violation from Cost to Kind and Norms. Even if AI is more energy efficient per query, that fact does not imply parity in kind, nor does it settle what we owe each other in domains where answerability is central.


 

Culture as Raw Material

Altman places the long inheritance of culture on the same ledger as a massive dataset scraped from the internet. A poem becomes a sentiment vector. Beauty becomes engagement. Understanding becomes a test score. Awe becomes a dopamine spike. Love becomes an attachment algorithm with measurable outputs. Holiness becomes an aesthetic preference.

 

When the Cost layer invades the Kind and Norms layers, the world stops being a field of meanings and becomes a field of manipulable variables. Altman is effectively rebranding human history as a massive, unformatted R&D project.

 


Training Equivocation

In machine learning, “training” is external optimization. Engineers specify an objective, define a loss function, adjust parameters, and deploy. The language fits the thing: a designed artifact tuned toward a target.

 

Human development can include training in the familiar sense: drills, habits, skill acquisition. But the growth of a child into a mature person cannot be captured by optimization language. Human formation produces a being who can live inside norms rather than merely meet targets, who can be addressed with reasons and held to them.

 

The test is simple. What can a person enter into that a tool cannot?

 

One answer is answerability. This is a cluster of obligations: responsiveness to reasons, ownership of speech, and the standing that makes “Why did you say that?” a serious question. A person can justify, retract, apologize, and confess.

 

A person can be guilty or irrational. A tool can only malfunction.

 

You see this in the classroom. A student gives the correct answer for a bad reason. A crude grading scheme, an algorithmic shorthand, registers success. But a serious teacher presses anyway: How does that follow? What did you see? The aim is not optimized output. It is the formation of someone who cares about grasping the truth. Updating weights is not contrition. A generated explanation and an accountable judgment can look similar on the surface while belonging to different orders of life.


 

Institutional Drift

This is not a problem confined to rhetoric. It is already being coded into institutions that reward what they can count.

 

In its 2025 report Governing with Artificial Intelligence, the OECD found that 57% of analyzed government AI use cases support “automating, streamlining or tailoring services.” That is the dashboard metric: throughput and administrative speed. The displaced good is civic trust and answerability, which do not fit neatly into procurement columns. The consequence is predictable. Systems get purchased for what they quantify, and human judgment becomes invisible to the spreadsheet that decides what counts, even when everyone insists it still matters.

 

The pattern generalizes. Once “measured that way” becomes the default frame, institutions start treating answerability as overhead. What cannot be cleanly counted becomes hard to defend, then easy to drop. The reduction does not arrive as a theory. It arrives as practice.


 

Reclaiming the Stack

Altman is right that energy accounting should be honest. But a responsible response has to name what the comparison cannot tell us. It cannot tell us what a person is. It cannot tell us which goods govern education, medicine, or civic life. Those are Kind and Norms layer questions, and they do not become engineering questions just because we can attach numbers to them.

 

Once you see the comparison stack, the problem looks less like a reason to hesitate and more like a constraint worth honoring. AI’s real upside is in domains where it extends human capability without pretending to replace it, not in the ones where optimization vocabulary has redefined the task to fit the tool.


Keeping the Cost layer honest with lifecycle accounting builds the trust that scale requires, because scale runs on trust and trust runs on honesty.


Keeping the Kind layer clear, refusing to let “training” collapse the distinction between artifact and person, keeps AI deployed where it actually excels. Making the Norms layer explicit, treating answerability and trust as design constraints rather than residual concerns, is how you build systems people can rely on rather than merely tolerate.


The ambition does not need to shrink. The specification needs to grow.

 

 
 