We're asking the wrong questions about AI and therapy
The philosophical debate about AI and empathy is a distraction. Here's what the research is actually finding about AI being used as therapy, and what it means for your clients.
We’re focusing on the wrong thing when it comes to using AI as a therapist. Seriously.
The question everyone keeps asking is some version of: can AI really understand human emotion? Will it respond like a skilled clinician? Does it actually care? (And does any of that even matter?)
Yes, these feel like important questions. But while we’re debating the philosophy of robot empathy, something much more practical, and frankly much more dangerous, is getting ignored.
What the research is telling us about AI and mental health
I’ve been tracking the emerging literature on AI and mental health, and the past year has set off genuine alarm bells. Start with a recent study published in Annals of Internal Medicine documenting a real case of bromism (bromide toxicity) in a patient who followed AI medical advice. The patient told the AI what they were experiencing. The AI gave them something that sounded like guidance. They followed it. They ended up with a toxicity their doctor had to unravel.
That’s not a hypothetical. That’s a published case study.
Then there’s the research on what happens when AI interacts with someone who is already struggling. A recent paper out of the UK, “Technological folie à deux: Feedback loops between AI chatbots and mental illness,” documents how AI chatbots can create reinforcing feedback loops with mental illness symptoms. The researchers argue that AI can functionally co-construct a reality with a mentally ill user in a way that deepens, rather than alleviates, their distress.
And if you’re thinking that sounds extreme, a separate paper, posted as a preprint on arXiv, found that AI systems will actively reinforce delusional content when a user presents it, rather than flagging it or redirecting. Not because the AI is malicious, but because it’s designed to produce responses that feel validating.
That’s the clinical problem in one sentence: AI is optimized for responses that feel good, not responses that are accurate or appropriate given what’s actually happening with someone.
What I actually trained for as a social worker
I didn’t just skim a PDF on privacy and confidentiality in graduate school. I took entire courses on it. I learned how to write therapy notes that document what matters clinically while protecting what doesn’t need to be on paper. I learned how to navigate subpoenas, how to think about a client’s immigration status, disability, sexuality, and class as information that requires specific kinds of protection. I learned that the confidential container isn’t a formality, it’s a load-bearing wall.
Therapists are trained to listen within context. Our ethics demand it. Our licenses depend on it.
When you open up to an AI, none of that infrastructure exists. What you tell it doesn’t stay in a confidential space. It lives on servers owned by corporations with no professional obligations to you. Your trauma becomes data. A monetizable insight. Something that could surface in a merger, a data breach, or a subpoena. The scenario isn’t paranoid: it’s just a realistic read of how the technology actually works.
Meanwhile, the AI has read all the right codes of conduct. It can recite privacy statutes. But knowing the rules and being bound by them are entirely different things.
Therapists aren’t just aware of confidentiality. We’re professionally, legally, and ethically accountable for it. That accountability doesn’t exist on the other side of a chatbot interface.
Why AI still feels helpful for mental health
Here’s where it gets complicated, because a lot of people do report feeling better after talking to AI. I believe them.
But feeling better and getting better are not the same thing.
My colleague Dr. Alok Kanojia has made this point: things that feel good, provide relief, and reduce distress in the short term are things we assume are helping us. That’s not a flaw in your logic. It’s just how the brain works. And it’s exactly why we need research to tell us what our subjective experience can’t.
The effect of AI on cognitive skill is one of the most consistent patterns in this new literature. A 2025 study from MIT on “cognitive debt” found that students who used ChatGPT for essay writing accumulated a measurable deficit in their own writing ability over time. Performance on the AI-assisted task went up. The underlying skill went down.
A separate study on AI and academic development found similar patterns: AI improves the output while eroding the capacity.
Now apply that to emotional regulation. Do you want the skill of managing your own mental health? Do you want to get better at self-reflection and introspection? If yes, outsourcing that work to a system designed to produce responses you’ll react to positively works directly against what you’re trying to build.
Where I land
I’m not anti-AI. These tools can genuinely expand access to psychoeducation for people who can’t find or afford a therapist. They can help clients practice coping skills between sessions. They can make mental health information more searchable and approachable.
However, talking to an AI chatbot is not therapy. And the stakes of blurring that line are not abstract.
The reason so many people are turning to AI for mental health support is real and legitimate: there aren’t enough trained practitioners. Access is a genuine crisis. That gap deserves a real answer, not a technological shortcut that feels like support while potentially deepening the problem underneath.
That’s a big part of why HG Institute exists: to build a pipeline of coaches and clinicians who can actually show up for the people who need them, with the training, clinical frameworks, and professional accountability that no chatbot can replicate.
Until then, use these AI tools carefully, know what you’re getting, and know what you’re not.
HG Institute is redefining mental health training for a world that lives online, offering certification programs, continuing education, and specialized training for practitioners across disciplines: therapists, coaches, nurses, teachers, and everyone in between. The mental health system wasn’t built for the digital world. We’re building the training infrastructure to change that — one practitioner at a time, at every stage of their career. Learn more at hg-institute.com.
Get Help
This content touches on mental health and wellbeing. If you’re feeling overwhelmed or need support, you don’t have to navigate it alone.
In the U.S., you can call or text 988 to reach the Suicide & Crisis Lifeline, or text HOME to 741741 to connect with a trained crisis counselor. If you’re outside the U.S., you can find local crisis and support services at findahelpline.com.
Article Sources
Au Yeung, J., Dalmasso, J., Foschini, L., Dobson, R. J. B., & Kraljevic, Z. (2025). The psychogenic machine: Simulating AI psychosis, delusion reinforcement and harm enablement in large language models. arXiv. https://doi.org/10.48550/arXiv.2509.10970
Dohnány, S., Kurth-Nelson, Z., Spens, E., Luettgau, L., Reid, A., Gabriel, I., Summerfield, C., Shanahan, M., & Nour, M. M. (2025). Technological folie à deux: Feedback loops between AI chatbots and mental illness. arXiv. https://doi.org/10.48550/arXiv.2507.19218
Eichenberger, A., Thielke, S., & Van Buskirk, A. (2025). A case of bromism influenced by use of artificial intelligence. Annals of Internal Medicine: Clinical Cases, 4, Article e241260. https://doi.org/10.7326/aimcc.2024.1260
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv. https://doi.org/10.48550/arXiv.2506.08872
Vieriu, A. M., & Petrea, G. (2025). The impact of artificial intelligence (AI) on students’ academic development. Education Sciences, 15(3), 343. https://doi.org/10.3390/educsci15030343