
MLK's Test for Our Soul in Living with Technology

How “thing-orientation” explains our misuse of new machines

Brandon Rickabaugh, PhD

1.21.2026


This essay is a draft in progress and part of a larger project on MLK and the philosophy of technology. I will upload revisions as I make progress on this project.

MLK

& Machines that Think?

A few years ago, it would have sounded like satire: people confiding in software, asking it to “stay with me,” letting it name their fears, receiving its reassurance as something like care. Now it is commonplace. The interface is warm; the replies are fluent; the tone is patient. From the inside, it doesn’t feel like an illusion. The more the system adapts, the more it seems to “perceive” you as a singular “you.”

This is the new pressure point in the philosophy of technology: not simply what machines can do, but what they solicit from us and do on our behalf; the kinds of attention, trust, dependence, and self-understanding they normalize; the kinds of political and moral applications that bring more harm than we should be comfortable with.

How has our development and deployment of technological power brought us to this point?

Martin Luther King Jr. is not a figure most readers associate with this problem. Yet King spoke directly about “machines that think,” “computer minds,” and the temptation to treat persons as things—inputs to process, resources to manage, obstacles to route around. He was not doing futurism. He was naming a moral and spiritual logic that becomes especially tempting when systems scale: the shift from person-orientation to thing-orientation. Once that shift takes root, technological progress can masquerade as moral progress. The machine improves; the human shrinks.

What makes King unexpectedly central to contemporary philosophy of technology is that he supplies a unifying criterion that many later theorists imply but rarely foreground: What conception of the person is silently governing our tools? Ivan Illich helps us see thresholds where tools cease to serve and begin to rule. Jacques Ellul helps us see the autonomy of efficiency as a social force. Albert Borgmann helps us see how devices reform daily life and attention. David Nye and Bronislaw Szerszynski help us see how awe and enchantment recruit our imagination into consent. King ties these together into a moral anthropology with teeth: technological evaluation is inseparable from judgments about the human good, and judgments about the human good are inseparable from the kind of people our systems are forming.

King’s framework:

not anti-technology, but anti-idolatry

King’s critique of technology is often misunderstood as nostalgia or generalized suspicion. It is neither. His target is not the existence of tools but the spiritual bargain we make with them. In one of his most incisive lines, he can stand before modern technical power—“mammoth productive facilities with computer minds… planes that almost outrace time”—and still insist that these can be “awesome” without being “spiritually inspiring.” The point is not that awe is bad. The point is that awe is dangerously easy to misread as formation.

A culture can be impressed without being transformed. It can be dazzled into compliance. It can learn to equate speed with salvation, scale with significance, novelty with meaning. When that happens, technology becomes what King feared: not merely a set of instruments but a rival liturgy, training admiration rather than moral perception. The crowd comes to feel “blessed” by power—by frictionless convenience, by optimized control—while the deeper questions become harder to ask without seeming ungrateful or anti-progress.

This is where King’s “thing-orientation” becomes more than a slogan. It is a diagnosis of the moral psychology that allows depersonalizing systems to expand under the banner of improvement. Thing-orientation is the habit of treating persons as objects to administer—units of labor, profiles to monetize, behaviors to nudge, risks to score. Person-orientation, by contrast, insists that persons remain subjects: beings whose agency, dignity, and interior formation are not accidental, not disposable, and not reducible to their outputs. In King’s vision, this is not merely an ethical preference. It is a claim about reality: the person is not the kind of thing you can correctly understand by optimizing around him.

Once you see this, much of contemporary philosophy of technology reads less like a collection of competing theories and more like a set of converging witnesses. They describe different mechanisms by which thing-orientation is installed—often with our permission, and sometimes in the name of care.

Illich:

the threshold where “help” becomes rule

Ivan Illich’s Tools for Conviviality is one of the clearest accounts of a pattern King names morally: beyond a threshold, tools that once served human goods begin to sabotage them. Illich’s language of conviviality is deliberately human-scale. A convivial tool expands the competence of ordinary people. It invites participation rather than dependence. It remains answerable to the community using it.

But scale changes the moral geometry. When systems exceed human scale—when they become too large, too technical, too centralized—they can become manipulative rather than empowering. Illich’s “radical monopolies” name what happens when a tool doesn’t merely offer a service but reorganizes society so that the service becomes mandatory, and alternatives become unintelligible or inaccessible. This is not just about efficiency; it is about agency. The system grows by quietly transferring competence away from persons and into institutions.

King’s critique of “gargantuan industry and government, woven into an intricate computerized mechanism,” makes the same point from a different angle: the person is “left outside.” That is not a poetic flourish. It is a political anthropology. A society can build systems that “work” while treating the human subject as peripheral—served, managed, measured, and displaced.

Illich supplies the grammar of thresholds; King supplies the moral diagnosis of why modern societies keep crossing them. Thing-orientation makes the transfer of agency feel like progress. It allows institutional expansion to masquerade as moral advance: more schooling becomes “more learning,” more medicalization becomes “more care,” more technical mediation becomes “more connection.” But the sabotage is subtle. The system does not merely deliver a good; it redefines the good in terms the system can provide. Conviviality becomes consumption.

This is why the question of AI “assistance” is not merely a question of usefulness. When a system offers companionship, counsel, education, therapy, or moral guidance at scale, we should ask Illich’s question through King’s lens: does this tool expand the competence of persons and communities—or does it replace competence with dependence while calling the replacement “care”?

Ellul:

technique as a moral alibi

If Illich helps us see the threshold where tools begin to rule their users, Jacques Ellul helps us see why that rule so often feels inevitable. Ellul’s concept of technique is not “technology” in the narrow sense. It is the total logic of method—procedures rationally optimized for maximum efficiency in every domain. Technique is seductive because it offers a consistent promise: whatever the problem, there is a method that can manage it—faster, cleaner, more reliably than the messy unpredictability of persons.

This is exactly where thing-orientation becomes socially durable. The more technique prevails, the more moral questions are translated into technical questions: What interventions will change behavior? What model will reduce risk? What metric will improve outcomes? The person is re-described as a system to optimize. Agency becomes noise in the data.

Ellul often emphasizes the near-inescapability of technique’s logic—its tendency toward autonomy. But King adds a crucial dimension that keeps moral agency in view without naïveté: the autonomy of technique is also a human capitulation, an acquiescence of will, imagination, and courage. “The system made me do it” is rarely a literal description. It is often a moral alibi.

This matters because the most dangerous depersonalization is the one nobody feels responsible for. It arrives without villains, under the banner of “best practices,” “innovation,” “responsible AI,” “safer systems.” King’s insistence on responsibility is not optimistic. It is bracing. It says: if the system is dehumanizing, our resignation is part of the mechanism. The point is not to deny systemic inertia; it is to refuse the idea that inertia absolves us.

In the age of machine mediation, this refusal becomes urgent. When efficiency becomes the default moral argument, personhood becomes negotiable. King’s language gives you a way to name the bargain: we are trading ends for means, dignity for throughput, moral formation for management.

Borgmann:

devices and the slow hollowing of the interior

If Ellul diagnoses the systemic logic that presses on us from above, Albert Borgmann helps us see how the same logic settles into the habits of ordinary life. Borgmann’s device paradigm describes a shift in how modern technology delivers goods: it increasingly separates commodities from the practices that once disclosed their meaning. We get warmth without tending a fire, food without cooking, music without learning an instrument, navigation without knowing a place. The device is not merely a tool; it is a pattern of life that makes engagement optional.

King’s internal/external distinction—the worry that societies can perfect the external while neglecting the internal—fits Borgmann’s insight with a sharper moral edge. The erosion of focal practices is not just cultural loss; it is civic and spiritual diminishment. Certain goods—patience, skill, mutual dependence, gratitude, courage—are not simply possessed. They are cultivated. They require practices that form the interior.

This is where the question of “AI companionship” becomes more than a curiosity. A system that delivers the feeling of being known without the labor of knowing—without risk, vulnerability, mutual obligation—does not merely add a new product to the marketplace. It proposes a substitute for a practice. It offers intimacy without the disciplines of attention, fidelity, and reconciliation.

Borgmann helps us see why that matters phenomenologically; King helps us see why it matters morally. Beloved community is not built by optimized experiences. It is built by formed persons—persons capable of truth-telling, endurance, forgiveness, and concrete solidarity. If devices steadily externalize life—outsourcing attention, memory, judgment, and even companionship—then the interior capacities required for justice are weakened. A society can become hyper-connected and relationally thin. It can become hyper-informed and morally confused.

King’s insight here is quietly radical: technology is not only an external power; it is a formation regime. It trains the kind of people who will later either accept injustice as “the way things are” or resist it as a violation of the real.

When wonder becomes consent:

Nye, Szerszynski, and Illich

David Nye’s account of the technological sublime helps explain why these bargains are so hard to see. Technology can feel quasi-religious: overwhelming, awe-inspiring, a spectacle of power that invites gratitude and silence. King’s warning that the awesome may fail to be spiritually inspiring is not prudishness; it is discernment. Awe can be anesthetic. It can discipline the imagination toward admiration and away from interrogation.

Bronislaw Szerszynski adds a complementary insight: modernity does not only disenchant; it also re-enchants—often through technology. Tech can become a new wonder, a substitute transcendence. King saw this with unnerving clarity when he described a “new man-centered religion” that points to spectacular advances as justification for its faith. The danger is not that technology feels meaningful. The danger is that meaning is offered on terms that train appetite rather than deepen wisdom.

Here King’s criterion becomes practical: judge an enchantment by what it produces. Does it yield deeper love, justice, patience, and courage—or intensified desire for novelty, control, and frictionless power? If wonder recruits us into domination, then wonder has become counterfeit. The question is not whether we feel awe again, but whether awe is ordered toward the good.

The test: does this tool treat persons as subjects—or as inputs?

If King belongs in contemporary philosophy of technology, it is not because he can be footnoted beside Illich, Ellul, Borgmann, and others—though he can. It is because he supplies the governing question their best insights demand: What picture of the person is this system presupposing and producing?

Illich gives us thresholds: the moment when help becomes rule and competence becomes dependence. Ellul gives us autonomy: the way efficiency hardens into fate. Borgmann gives us daily formation: the way devices deliver goods while thinning the practices that once formed virtue. Nye and Szerszynski show how awe and enchantment recruit our imagination into consent. King integrates these into a single criterion with moral urgency: the revolution from thing-orientation to person-orientation.

That revolution is not anti-technology. It is anti-idolatry. It does not demand that we abandon tools. It demands that we refuse to let tools silently decide what a person is. The hopeful claim in King’s vision is not that technology will save us, or that we can engineer a frictionless moral world. It is that technology can serve beloved community—if we recover the spiritual and moral imperatives adequate to technological power.

The practical upshot is a test that is both simple and severe:

  • Does this system expand agency or replace it with dependence?

  • Does it cultivate competence, responsibility, and mutual obligation, or does it offer managed substitutes while calling them goods?

  • Does it require and strengthen the interior capacities needed for justice—attention, truthfulness, patience, courage—or does it outsource them?

  • Does it treat persons as subjects to be formed or as inputs to be optimized?

King’s genius is to insist that these are not peripheral questions. They are the real questions. The danger is not merely that our machines grow smarter. The danger is that we become satisfied with being treated as things—efficiently served, endlessly impressed, quietly managed—while the work of becoming persons is left outside the system. The beloved community will not be built by awesome devices. It will be built by people who refuse the bargain, and who demand tools worthy of persons.
