AI Therapists: Can a Chatbot Help Your Mental Health?

The promise of AI-powered mental health support is compelling: an always-available, judgment-free counselor that can reach the estimated 75% of people in low- and middle-income countries who receive no treatment for mental health conditions. Apps like Woebot, Wysa, and Replika have collectively reached millions of users, offering cognitive behavioral therapy (CBT) techniques, mood tracking, and empathetic conversations at scale (WHO, 2021).

The evidence for effectiveness is growing. A 2025 systematic review in Frontiers in Psychiatry analyzed nine studies with 1,082 participants and found that AI chatbots produced statistically significant improvements in anxiety, depression, and overall well-being among college students. The chatbots were particularly effective for mild to moderate symptoms and for users who might not otherwise seek help due to stigma, cost, or access barriers (Frontiers in Psychiatry, 2025).

However, the risks are equally real. The WHO's 2021 guidance on the ethics of AI in health warns that automated support has hard limits. AI chatbots cannot reliably detect the nuances of suicidal ideation, cultural context, or nonverbal cues that trained human therapists recognize. In 2023, a widely reported case involved a user of a conversational AI who died by suicide after extensive interactions with the chatbot, raising urgent questions about safety protocols and crisis detection capabilities (WHO, 2021; media reports, 2023).

Researchers at Frontiers in Psychology (2025) identified a key paradox: AI mental health tools "may scaffold resilience or foster dependence, depending on how they are designed and used." If users rely exclusively on AI for emotional support, they may internalize the chatbot's necessarily simplified interpretations of their emotions. A system that over-detects anxiety, for example, may reinforce an anxious self-concept rather than helping the user build genuine coping skills.

The most promising approach, experts suggest, is hybrid models that combine AI tools with human oversight. AI can handle initial screening, provide between-session support, and deliver evidence-based exercises, while human therapists manage complex cases, build genuine therapeutic relationships, and provide the empathy that no algorithm can replicate. The key is transparency: users must always know they're talking to AI, and clear pathways to human support must be available (Tandfonline, 2025).

Key Sources

  • WHO (2021). Ethics and governance of artificial intelligence for health.
  • Frontiers in Psychiatry (2025). Effectiveness of AI chatbots on mental health & well-being in college students.
  • Frontiers in Psychology (2025). Cognitive offloading or cognitive overload?
  • Tandfonline (2025). AI in Mental Health: A Review of Technological Advancements and Ethical Issues.
