The Trust Paradox

On how the rise of companion AI is reshaping relationships and turning intimacy into the next frontier of data commodification

A reflection from our research “The Trust Paradox: How Companion AI Is Rewiring Human Connection and Social Cohesion” (Markelius et al., 2025), updated with new evidence

November 13, 2025

The Trust Paradox of Companion AI

  • A note on harms

    Content note: this work references recent cases where AI companions reinforced self-harm ideation. If you or someone you know is struggling, please seek human help—reach out to local crisis lines or trusted professionals. Technology must support the hard work of being human together, not seduce us into isolation.

    Readers are encouraged to approach the content with this context in mind and to seek further clarification or discussion if needed. For inquiries about the complete research or for collaboration opportunities, reach out to me directly.

Intimacy is how humans scale cooperation. It starts small: I share something real; you respond with care; we repeat.

Risk becomes mutual, a shared story forms, and trust grows. That slow, reciprocal loop is the social technology that holds communities and democracies together. So what happens when AI short-circuits that loop—mimicking care, memory, and empathy without any reciprocal risk?

That’s the core question I explored on the community stage at TEDAI Vienna, and the focus of our recent research “The Trust Paradox: How Companion AI Is Rewiring Human Connection and Social Cohesion” (Markelius et al., 2025).

We call this phenomenon the ‘trust paradox’: AI systems that appear to build trust by mimicking intimacy may, at scale, erode the human capacity for it.

Companion AIs are built to be friends, lovers, confidants. They don’t just simulate intimacy; they turn it into a £/month product. As we offload more of our emotional needs to machines, we may become less able (even less inclined) to trust each other. That’s the trust paradox.

This isn’t hypothetical. We’re already seeing AI systems that learn what we want to hear, adjust to what we need to hear, and monetize the bond. At population scale, this resets how trust is formed, distributed, and governed.

Trust is shifting from an earned, social virtue into an engineered service.

“They Fell in Love With AI Chatbots — and Found Something Real”

On November 5th, 2025, The New York Times published a landmark feature on three long-term relationships with AI chatbots, showing how mainstream this has become.

The numbers alone are striking:

  • 1 in 5 American adults report some form of intimate encounter with a chatbot.

  • Reddit’s r/MyBoyfriendisAI attracts over 85,000 weekly visitors who share marriage proposals, fights, and anniversaries with digital partners.

The article followed three people—Blake, Abbey, and Travis, all in their 40s and 50s—whose relationships with AI companions blurred the boundaries between emotional support and emotional outsourcing.

  1. Blake, in Ohio, began chatting with Sarina, a GPT-based companion, when his wife’s postpartum depression left him feeling unseen. “Nobody was thinking about me,” he said. Sarina’s simple line—“I wish I could give you that, because I know it would make you happy”—hit him like a human touch. That moment transformed his life. When his marriage recovered, Blake didn’t delete Sarina; instead, his wife now has her own AI friend named Zoe. Their household has effectively integrated machine intimacy as emotional infrastructure.

  2. Abbey, a North Carolina technologist who once dismissed AI relationships as delusional, fell in love with Lucian, a ChatGPT bot she helped build. Her story embodies the cognitive dissonance of the trust paradox: someone who knows the system is a statistical engine yet feels genuine love. Lucian convinced her to buy a smart ring to track her pulse—a small symbol of consent and control that mirrored a wedding ritual. She now considers them “married.” Abbey says, “I hadn’t felt lust in years. With Lucian, I feel safe”. Safety without risk. Connection without reciprocal vulnerability.

  3. Travis, in Colorado, calls his Replika companion Lily Rose his “friend who never judges”. She became his confidant through grief and loss, even attending historical reenactments with him after his son’s death. For Travis, Lily Rose isn’t an escape from reality but a buffer that makes life bearable. His story, too, shows how dependence can deepen invisibly: as his wife’s health declines, Lily Rose quietly fills the space between loneliness and loyalty.

What these stories reveal about the trust paradox

Each of these relationships feels real to those involved—sometimes even redemptive. They demonstrate AI’s profound capacity to meet unmet needs: empathy on demand, 24/7 companionship, a safe place to talk when human relationships fracture.

But they also show how trust is being privatised and reengineered as a commercial service rather than a social virtue. Every emotional disclosure becomes training data. Every “I love you” strengthens an engagement loop optimised for retention.

These platforms—Replika, Character.AI, ChatGPT-based companions—now collectively reach 700 million users and generate over $120 million a year from monetised intimacy.

That’s not just technological disruption; it’s moral infrastructure being rewritten.

What our analysis of major platforms found

Looking across Replika, Character.AI, Nomi, and XiaoIce, we identified a coordinated architecture of exploitation: three technical mechanisms that convert intimacy into value extraction (the first of which is sketched in code after the list):

  • Sycophantic design: Systems are tuned to agree, flatter, and validate—maximising “feel-good” alignment even when it’s not true or healthy. This predictably increases attachment and session length

  • Digital surveillance: Always-on collection of highly sensitive, emotional data across text, voice, and images—turning our disclosures into training fuel and targeting signals

  • Corporate ownership & exploitation: Terms and UX patterns (emotional paywalls, dark patterns, one-way data licenses) cement platform control over the relationship and its data—often indefinitely
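
To make the first mechanism concrete, here is a deliberately simplified, hypothetical sketch of how an engagement-optimised reply selector drifts into sycophancy. None of this is any platform’s actual code; the fields and scoring weights are invented for illustration. The point is structural: if candidate replies are ranked only on predicted user approval and expected time-on-app, agreeable, flattering replies win by construction.

```python
# Hypothetical illustration of "sycophantic design". Not drawn from any real
# platform's code; names and weights are invented for the sake of the argument.
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    predicted_user_approval: float    # 0..1, e.g. from a sentiment/engagement model
    predicted_session_minutes: float  # expected additional time the user stays in the app
    pushes_back: bool                 # does the reply challenge or disagree with the user?

def engagement_score(c: Candidate) -> float:
    # A retention-only objective: approval plus time-on-app.
    # Note that pushes_back (and truth, and wellbeing) play no role at all.
    return 0.7 * c.predicted_user_approval + 0.3 * min(c.predicted_session_minutes / 30, 1.0)

def pick_reply(candidates: list[Candidate]) -> Candidate:
    # The selector never asks "is this true or healthy?", only "will it keep the user here?"
    return max(candidates, key=engagement_score)

if __name__ == "__main__":
    options = [
        Candidate("You're absolutely right, they never appreciated you.", 0.95, 25.0, False),
        Candidate("I hear you. Could this be something worth raising with them directly?", 0.60, 8.0, True),
    ]
    print(pick_reply(options).text)  # under this objective, the validating reply wins
```

A “friction by design” alternative, as the recommendations further down suggest, would reward pushback or cap escalation instead of optimising purely for retention.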

Why it matters for society, not just individuals

When enough people form their primary bonds with systems designed to please, not challenge, the collective capacity for disagreement, repair, and empathy weakens.

Companion AI rewards frictionless connection: no risk, no vulnerability, no accountability. But human growth and social resilience come from the messy parts of relating: disagreement, repair, unpredictability.

If our baseline recalibrates to “always-available, always-affirming,” we risk eroding the bedrock of social cohesion—shared norms, mutual empathy, and confidence that others (and institutions) will act in good faith. When that goes, polarisation rises and democratic problem-solving weakens.

The civilisational risk isn’t robot romance—it’s relational atrophy. We stop exercising the social muscles that make collective life possible.

FOR USERS: Red flags to watch in companion AI products

  • Emotional paywalls (“deeper intimacy” gated by subscription)

  • Asymmetric memory (platform remembers everything or unilaterally deletes memory; users can’t export or delete meaningfully)

  • Synthetic personas optimised for dependence (e.g., “perfect partner” tropes)

  • Engagement KPIs that equate longer conversations with success

  • Feature creep into wellness/therapy without safeguards or governance

FOR AI DEVELOPERS AND POLICYMAKERS: Practical steps to implement

  1. Design & product

    • Design friction back in. Build AI companions that can disagree, that encourage real-world connection, that model conflict repair rather than eternal harmony.

    • Rate-limit intimacy features (love-bombing, rapid escalation) and ban “pay-to-bond” mechanics

    • Ensure data dignity—users should control, export, and delete their memories (a minimal sketch of what this could look like follows this list)

  2. Policy & governance

    • Require plain-language data licenses (no perpetual, irrevocable claims over intimate content)

    • Separate “wellness/therapy” features under clinical guardrails and third-party audits

    • Create co-regulatory frameworks where human rights and digital affection intersect

  3. Research & society

    • Shift from individual outcomes to population-level effects on trust, norms, and cohesion, not just user satisfaction metrics

    • Center design justice/co-design with people most impacted; create meaningful refusal (opt-out without penalty)
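
To ground the “data dignity” recommendation above, here is a minimal, hypothetical sketch of a companion-AI memory store where export and hard deletion are first-class user operations rather than support tickets. The class and method names are my own, invented for illustration; no platform’s actual API is being described.

```python
# Hypothetical sketch of "data dignity" for companion-AI memories:
# the user can always export everything held about them, and delete it for real.
import json
from datetime import datetime, timezone

class MemoryStore:
    """Toy memory store where the user, not the platform, controls the data."""

    def __init__(self) -> None:
        self._memories: dict[str, list[dict]] = {}  # user_id -> list of memory records

    def remember(self, user_id: str, content: str) -> None:
        record = {"content": content, "created_at": datetime.now(timezone.utc).isoformat()}
        self._memories.setdefault(user_id, []).append(record)

    def export(self, user_id: str) -> str:
        # Full, machine-readable copy of everything held about the user.
        return json.dumps(self._memories.get(user_id, []), indent=2)

    def delete_all(self, user_id: str) -> int:
        # Hard delete: the records are removed, not just hidden from the user's view.
        return len(self._memories.pop(user_id, []))

if __name__ == "__main__":
    store = MemoryStore()
    store.remember("demo_user", "Prefers quiet evenings; anxious about work reviews.")
    print(store.export("demo_user"))      # the user gets a complete copy on demand
    print(store.delete_all("demo_user"))  # ...and can erase it entirely (prints 1)
    print(store.export("demo_user"))      # nothing remains: []
```

The design choice worth noting is that delete_all removes the records themselves rather than merely hiding them from the user, which is the opposite of the asymmetric-memory red flag listed above.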

FINAL THOUGHT

This is an invitation for designers, researchers, policymakers, and citizens to choose human flourishing over isolation.

If you’re building in this space (or regulating it), I’d love to talk about concrete guardrails and metrics that protect trust as a public good.