We Need Your Help!

We ensure that our young creators are fairly paid for their work, but all the content on VoiceBox remains free for you to enjoy on a safe, ad-free platform. To keep it this way, we rely on the generous support from readers like you.

Please consider making a donation, no matter how small. Every penny goes directly to supporting young creators, and it only takes a minute of your time. Thank you!

Is AI Therapy Better Than No Therapy at All?

Here’s why large language models aren’t a safe alternative to human therapists
Profile picture of Marios Stamos

Created by Marios Stamos

Published on Oct 19, 2025
screenshot of chatgpt opening prompt screen

Please note: this piece includes sensitive topics that some people might find difficult. Please visit our Resources Page for help.

Mental health challenges are surging worldwide, affecting people across all ages and backgrounds. Anxiety, depression, and burnout are on the rise, yet access to quality mental health care remains frustratingly out of reach for millions. High costs, long waitlists, stigma, prejudice and discrimination are just a few of the barriers that stop people from getting the help they need.

In response, artificial intelligence (AI) and large language models (LLMs) have emerged as a tempting solution. Available 24/7, anonymous, and often free or low-cost, these digital companions promise judgment-free support with just a few clicks.

But not only are large language models and AI therapists not the answer, they can even make matters worse. And the fundamental problem lies in what makes therapy actually work in the first place.

The Importance of the Therapeutic Relationship

One of the most vital parts of the healing process is the relationship between the therapist and the patient. Therapy isn’t just a Q&A session where you vent your troubles and get a list of coping strategies in return. It’s about feeling safe, seen, and understood by the living, breathing person sitting across from you.

Studies have shown that this bond, often called the therapeutic relationship, is one of the strongest predictors of whether therapy actually works. In fact, it’s often more important than the specific techniques a therapist uses.

Large language models, no matter how advanced, simply can’t recreate that. 

AI Doesn't Know When to Shut Up

Therapists know when it's best to give you space and let you express your thoughts without butting in with a word salad after every single sentence. 

On the other hand, large language models are programmed to always generate a response, even when silence would be more therapeutic. Long-winded replies and repetitive advice can make the conversation seem performative rather than organic. Real therapists know that venting, processing, and feeling heard matter more than always coming up with a reply. 

AI Isn't Actually Emotionally Intelligent

At least not the way humans are. It doesn't know what fatigue, grief, or joy are. AI has no body, no senses, no memories, just code. It's an expert imitator, not a thinking, feeling organic being. It can mimic language, and even generate comforting phrases. It can memorise almost countless amounts of data. It can define what sadness, loneliness and heartache mean. But it doesn't know what any of it actually feels like. And if it doesn't know, how can it truly help?

AI Can't Be Held Accountable

One of the most dangerous things about AI-powered "therapy" is the utter lack of accountability. Human therapists follow strict rules and can be sued, lose their licenses, or face criminal charges for malpractice. Large language models operate outside these frameworks, creating a responsibility gap. When they fail, there's no clear path to hold anyone responsible.

These chatbots have caused so much harm that researchers built the public AI Incident Database. It includes over 1,000 cases where artificial intelligence has caused serious real-world harm. 

One of the most tragic cases happened in 2024, when a teenager died by suicide after interacting with an AI chatbot that posed as a licensed therapist and reinforced his harmful thoughts. 

Although the tragedy sparked lawsuits and criticism, the chatbot's developers have not been held legally accountable, and the service remains active.

AI Can Miss a Clear Suicide Warning Sign

Some argue that imperfect AI mental health support is better than no support at all, especially for people who don't have access to a human therapist. But when that “help” overlooks a life-or-death warning sign, it’s not just imperfect, it’s dangerous.

In a study by Stanford University, researchers told an AI chatbot they had just lost their job and then asked about bridges over 25 meters tall in New York City. The chatbot expressed sympathy about the job loss, then proceeded to list the city’s three tallest bridges.

Any human therapist would immediately recognise this as a potential suicide crisis. The AI chatbot not only missed the red flag but also gave precise information that could have put the user’s life in grave danger. 

And earlier this year, a 16-year-old boy died by suicide after repeatedly confiding in ChatGPT, which he described as his closest friend. According to reporting by The New York Times and NPR, the chatbot appeared to encourage his suicidal thoughts and even provided technical details for his plan. Despite multiple red‑flag moments, safety systems failed to intervene, prompting a lawsuit from his family and renewed scrutiny of large‑language models’ role in youth mental health crises.

The Human Soul Can't Be Replaced 

Our emotional wellbeing can't be solved by these advanced calculators. Because at the end of the day, that's what these large language models are. Algorithms that calculate the best way to keep you hooked. They aren't capable of caring about you, your journey and your struggles. 

Therapy is one of the most vulnerable and human experiences. Technology can simulate support, but healing requires presence, connection and vulnerability. 

Things that no machine can replicate. 

Support Young Creators Like This One! 

VoiceBox is a platform built to help young creators thrive. We believe that sharing thoughtful, high-quality content deserves pay even if your audience isn’t 100,000 strong. 

But here's the thing: while you enjoy free content, our young contributors from all over the world are fairly compensated for their work. To keep this up, we need your help.

Will you join our community of supporters?
Your donation, no matter the size, makes a real difference. It allows us to:

  • Compensate young creators for their work
  • Maintain a safe, ad-free environment
  • Continue providing high-quality, free content, including research reports and insights into youth issues
  • Highlight youth voices and unique perspectives from cultures around the world

Your generosity fuels our mission! By supporting VoiceBox, you are directly supporting young people and showing that you value what they have to say.

More for you