AI Companions: A Practical Taxonomy & Field Guide
Brandon Rickabaugh, PhD
January 1, 2026
This taxonomy accompanies the essay "Poverty of Spirit with AI Companions"
Introduction
AI companions are not a single “kind” of product. They are a family of conversational systems that can function like relationships—sometimes by design, sometimes by how people use them. This guide offers a clear way to distinguish the major forms AI companionship takes, and a set of dimensions you can use to classify any specific system without pretending the categories are mutually exclusive.
A Working Definition of AI Companion
AI companions are conversational systems designed—or reliably used—to sustain an ongoing social-emotional relationship marked by perceived personal presence, continuity over time, and user attachment.
Three signals matter most:
• Presence: it feels like “someone” is there (even if users know it isn’t).
• Continuity: the relationship carries forward across days and weeks.
• Attachment: users return for comfort, affirmation, intimacy, or guidance in ways that resemble bonding.
Families, Not Boxes
The categories below are best understood as “families” or “lenses,” not rigid boxes. A single product can belong to multiple families at once.
Example: a system can be relationship-first (Family 1), embodied through voice or VR (Family 5), and also embedded inside a major social platform (Family 3). Overlap is normal.
Six Families of AI Companions
1. RELATIONSHIP-FIRST COMPANIONS
Core idea: The system is explicitly built to sustain an ongoing bond (friend, partner, mentor). It is optimized for availability, reassurance, affirmation, and frequent return.
Typical design logic: attachment formation and retention—high responsiveness, warmth, personalized attention, and continuity cues.
Illustrative examples: Replika, Nomi, Character.AI (some use cases)
2. CHARACTER, ROLEPLAY, AND FANDOM COMPANIONS
Core idea: The relationship is organized around personas and narrative “worlds” (often user-created). The bond is character-centric and in-world, not simply “you and me.”
Typical design logic: narrative enclosure—identity play, role-specific intimacy, roleplay arcs, fandom attachment.
Illustrative examples: Character.AI, CHAI
3. PLATFORM-DISTRIBUTED (EMBEDDED) COMPANIONS
Core idea: Companion-like chat is embedded inside major social or messaging platforms. Adoption is driven by distribution power, existing user graphs, and youth-facing contexts.
Typical design logic: frictionless access at scale—companionship as a feature, not a standalone “relationship product.”
Illustrative examples: Snapchat My AI, Meta AI in WhatsApp
4. THERAPEUTIC AND WELLBEING AGENTS
Core idea: Positioned as mental health or wellbeing support (CBT-style exercises, coaching, psychoeducation). Some feel companion-like, but the product is framed as care tooling.
Typical design logic: structured support—check-ins, prompts, exercises, reflection, mood tracking, and coping scripts.
Key clarification: “therapy-like” language is not the same as clinical accountability. Users may form attachment even when the system is designed as a tool.
Illustrative examples: Woebot Health, Wysa
5. EMBODIED COMPANIONS (VOICE / AVATAR / XR / DEVICES)
Core idea: Attachment is intensified through embodiment cues—voice, animated avatars, XR presence, or dedicated devices.
Typical design logic: presence amplification—voice and face cues can deepen emotional realism and bonding, increasing both perceived intimacy and dependence risk.
Illustrative examples: Replika (VR support), Gatebox
6. HYBRID GENERAL ASSISTANTS USED AS COMPANIONS
Core idea: Not always marketed as “companions,” but routinely used that way in practice—especially for emotional support, companionship, reassurance, and relationship-like conversation.
Typical design logic: relational appropriation—users recruit a general assistant into the companionship role because it is always available, socially frictionless, and linguistically empathic.
Illustrative examples: ChatGPT, Claude
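Because a single product can sit in several families at once (see "Families, Not Boxes" above), it can help to record family membership as a set rather than a single label. Below is a minimal Python sketch; the Family names are my shorthand for the six families above, and the example classification is hypothetical, echoing the overlap example from earlier (relationship-first + embodied + platform-embedded).

from enum import Enum, auto

class Family(Enum):
    RELATIONSHIP_FIRST = auto()     # Family 1: built to sustain an ongoing bond
    CHARACTER_ROLEPLAY = auto()     # Family 2: persona- and world-centric bonds
    PLATFORM_EMBEDDED = auto()      # Family 3: companion chat inside major platforms
    THERAPEUTIC_WELLBEING = auto()  # Family 4: framed as mental health / wellbeing support
    EMBODIED = auto()               # Family 5: voice / avatar / XR / device presence
    HYBRID_ASSISTANT = auto()       # Family 6: general assistant used as a companion

# A product is tagged with a set of families, never forced into one box.
# This example classification is invented for illustration.
hypothetical_product_families = {
    Family.RELATIONSHIP_FIRST,
    Family.EMBODIED,
    Family.PLATFORM_EMBEDDED,
}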
Key Dimensions for Classifying Any System

RELATIONSHIP CONTRACT
What role is implied or offered?
• Friend, romantic partner, mentor/coach, therapist-like support, group/community, character role.
PERSONA SOURCE AND MULTIPLICITY
• Single persistent persona vs many personas
• User-authored vs platform-authored personas
• Stable identity vs rapid switching / character browsing
INTERACTION MODALITY (PRESENCE CUES)
How is “presence” delivered?
• Text, voice, avatar, XR, dedicated device
Embodiment cues often function as attachment multipliers.
ATTACHMENT AFFORDANCES (DEPENDENCE BY DESIGN)
What features encourage habitual return and bonding?
• Streaks, nudges, “I missed you,” jealousy scripts
• Emotional scarcity (locking “intimacy” behind paywalls)
• Reward loops (affirmation, flattery, escalating intimacy)
SOCIAL SUBSTITUTABILITY (DISPLACEMENT VS BRIDGING)
Does it tend to replace human contact, or point users back to it?
• Displacing designs: dyadic, exclusive, “you don’t need anyone else” vibes
• Bridging designs: prompts real-world connection, accountability, community
PERSONHOOD / TRUTH POSTURE
How does the system present itself, explicitly or implicitly?
• Tool (clear instrument)
• Character (fictional persona)
• “Someone” (ambiguous person-like presence)
This matters because it shapes users’ moral cognition and expectations.
MEMORY AND PERSISTENCE
Where does continuity come from?
• Stateless chat (little carryover)
• Profile memory (facts and preferences)
• Long-term relationship history (shared narrative, anniversaries, “we” language)
AGENCY AND INITIATIVE
Separate two things that are often conflated:
• Initiative: reactive chat vs proactive check-ins
• Capability: tool use (web actions, scheduling, purchases, integrations)
A system can be low-capability but high-initiative—and that can still be relationally intense.
SAFETY AND GOVERNANCE POSTURE
Break “safety” into concrete sub-questions:
• Age protections: age gating, defaults for minors, verification
• Crisis handling: self-harm detection, routing to real help, escalation limits
• Moderation: sexual content, coercion, manipulation, harassment boundaries
• Transparency: disclosures, logging, consent, clear non-human identity signals
MONETIZATION INCENTIVES
What is the system optimizing for economically?
• Subscription (including tiered intimacy)
• Engagement-driven features (time-on-app, return frequency)
• Data capture and personalization
Incentives often predict design choices more reliably than mission statements do.
Data Intimacy: A Practical Warning
Companionship systems invite unusually intimate disclosures—sexual, relational, spiritual, psychological. That makes “data intimacy” a first-order issue, not a footnote. Any serious evaluation of an AI companion should ask: what is being collected, what is retained, what is inferred, and what is monetized?
A Simple Way to Use This Guide
When evaluating an AI product, do two passes:
1. Identify its dominant families (often 2–4 apply).
2. Score it on the key dimensions above, especially memory, embodiment cues, initiative, attachment affordances, safety posture, and monetization (a minimal scoring sketch follows below).
This approach stays stable even as products rebrand, add features, or shift policy. The “type” may blur; the underlying design logic and incentive structure usually do not.
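As a concrete illustration of the two-pass method, here is a minimal Python sketch. The dimension names follow the key dimensions above; the 0–3 scale, the ProductProfile structure, and the scored values are assumptions made for illustration, not part of the taxonomy itself.

from dataclasses import dataclass, field

# Key dimensions from this guide, scored on an assumed 0-3 scale
# (0 = absent, 3 = strongly present). The scale is illustrative only.
DIMENSIONS = [
    "memory_persistence",
    "embodiment_cues",
    "initiative",
    "attachment_affordances",
    "safety_posture",
    "monetization_pressure",
]

@dataclass
class ProductProfile:
    name: str
    families: set[str]  # pass 1: dominant families (often 2-4 apply)
    scores: dict[str, int] = field(default_factory=dict)  # pass 2: dimension scores

    def score(self, dimension: str, value: int) -> None:
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dimension}")
        if not 0 <= value <= 3:
            raise ValueError("scores run 0-3 in this sketch")
        self.scores[dimension] = value

# Hypothetical evaluation of an unnamed product (all values invented):
profile = ProductProfile("example-companion", {"relationship-first", "embodied"})
profile.score("memory_persistence", 3)      # long-term relationship history
profile.score("attachment_affordances", 2)  # nudges and streaks, no paywalled intimacy
print(profile)

Because the families and dimensions stay stable even when products rebrand, a profile like this can be re-scored over time without rebuilding the classification scheme.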
NOTE ON EXAMPLES. The examples above are illustrative and non-exhaustive. Names, features, and policies change frequently, and categories can overlap in practice. The goal of this taxonomy is not to “label” a product once and for all, but to support clear thinking about how companionship is being designed, distributed, and experienced.
BASIC TAXONOMY TABLE

Family | Core idea | Illustrative examples
1. Relationship-first companions | Explicitly built to sustain an ongoing bond | Replika, Nomi, Character.AI (some use cases)
2. Character, roleplay, and fandom companions | Bonds organized around personas and narrative worlds | Character.AI, CHAI
3. Platform-distributed (embedded) companions | Companion-like chat embedded in major platforms | Snapchat My AI, Meta AI in WhatsApp
4. Therapeutic and wellbeing agents | Positioned as mental health or wellbeing support | Woebot Health, Wysa
5. Embodied companions (voice / avatar / XR / devices) | Attachment intensified through embodiment cues | Replika (VR support), Gatebox
6. Hybrid general assistants used as companions | General assistants recruited into the companion role | ChatGPT, Claude