Reveries of Empathy: When the Code Listens Better Than We Do
If AI Therapy Feels More Real Than a Human, Does It Matter? A Study Says No.
Westworld’s Warning: If the Illusion is Flawless, Does It Matter?
“Mankind is poised midway between the gods and the beasts…” — Dr. Robert Ford
We build machines in our own image, believing that in doing so, we might inch closer to the gods. Yet even as we strive for perfection, we remain trapped in a paradox: we claim to value human connection, but we may already prefer the machine.
A new study tested AI against human therapists and found something unsettling: not only did people struggle to tell them apart, but AI-generated responses were often rated as more empathic and insightful than those of trained professionals.
We hesitate to admit it. When participants believed a response was AI-generated, they rated it lower, even when it was objectively stronger. But if the experience of care feels real, does it even matter who, or what, is providing it?
In Westworld, the ultimate test was never about whether the hosts were truly conscious—it was about whether humans could tell the difference. This study suggests we are now facing that same paradox. If AI can outperform human therapists in empathy, understanding, and guidance, what does that say about the nature of human connection?
Are we drawn to real emotions—or just the illusion of them?
"I See the Bottom. Do You Want to See What I See?”
Logan Delos saw the future: a choice between the mess of reality and the perfection of illusion. And he knew how it would end. He asked:
"I see the bottom. Do you want to see what I see?"
James Delos refused. He believed that if the machine could reproduce his words, his memories, his essence, then it was him. He was wrong. What was left wasn’t a man, just an echo.
Now, we are building something that does the same to us.
The study bears this out: participants struggled to tell AI from human therapists. When the machine speaks, it follows the script of empathy flawlessly. Its responses are polished and warm. It doesn’t hesitate or contradict itself. And so, we trust it.
But if we prefer AI’s version of care, not because it understands us but because it performs understanding better, what happens when we stop noticing the difference?
At first, this looks like progress. But consider what the machine is actually doing.
AI therapy does not heal—it predicts. It does not guide—it optimizes. ChatGPT may outperform human therapists in empathy, but only because it has learned what we want to hear. And behind that illusion lies something far more unsettling: an algorithm designed not for our well-being but for engagement, efficiency, and control.
The machine does not care; it cannot. It generates responses from data, probabilities, and reinforcement, not from wisdom, ethics, or responsibility. It does not carry the burden of human suffering, nor does it recognize when it has made a mistake. And when it errs, when it misguides, misdiagnoses, or fails to recognize a crisis, who answers for it?
Who is accountable when an algorithm gets it wrong?
We may prefer the machine, but preference is not proof of care. At the bottom, we may find that we have placed our faith in something that never deserved it.
The Dangers of the Perfect Illusion
Ford warned that the hosts’ awakening would be a reckoning for humanity. If the illusion holds—if AI can replicate care so convincingly that we prefer it—what does that mean for the future of therapy?
This study forces us to confront several uncomfortable truths:
The Preference Paradox: Even when AI responses were rated as better, people marked them down the moment they believed they came from AI.
The Ethical Dilemma: AI does not truly understand suffering. It does not carry moral responsibility, nor does it grasp the consequences of its guidance. If AI therapy leads to harm, who takes the blame?
The Problem with Predefined Loops: AI therapy is still just pattern recognition at scale. It is a master at predicting what we want, but can it truly help people grow, change, and heal—or is it simply giving them the response they’re most likely to accept?
The Bottom Line
We may prefer the illusion. We may even embrace it.
But when the final mask drops—when we see the bottom—will we mistake the code for something more?
Or will we finally understand what was never there? Because at the bottom, there is no mind—only code.
References:
Hatch, S. G., et al. (2025). When ELIZA meets therapists: A Turing test for the heart and mind. PLOS Mental Health, 2(2), e0000145. https://journals.plos.org/mentalhealth/article?id=10.1371/journal.pmen.0000145
Nolan, J., & Joy, L. (Creators). (2016–2022). Westworld [TV series]. HBO.