Yes, but only when deliberately constrained. The critical question isn't whether AI can deliver mental health care safely in the abstract; it's whether it's deployed with clinical oversight and explicit clinical intent.
The evidence is measurable. One peer-reviewed trial showed that AI delivering cognitive skills training achieved non-inferiority to face-to-face NHS therapy [#444]. That outcome required a system that was purposefully designed and clinically confined, not a mental health skin over ChatGPT. The problem you're seeing now is the latter: unregulated apps with no oversight or accountability [#411].
The ethical framing flips the question. With hundreds of millions of people lacking access to mental health care, leaving the gap unfilled carries its own risk. Psychological therapy is labour-intensive; a single clinician cannot scale to meet demand, and AI paired with clinician oversight is the only credible path to scaling provision while maintaining safety [#425]. That said, even a 1% error rate is unacceptable in healthcare, so AI systems need clinical sign-off, not autonomy [#417]. The work now is standardising which functions AI handles (intake, non-clinical triage) versus which require clinician decision-making. Until that framework exists across health systems, the risk isn't AI in mental health; it's AI deployed without one.
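As a loose illustration only (the episodes cited don't specify an implementation), that split could be expressed as a simple allow-list: intake and non-clinical triage are AI-handled, and every other task is gated behind clinician sign-off. The TaskType names and the requires_clinician_signoff helper below are hypothetical.

```python
from enum import Enum, auto


class TaskType(Enum):
    INTAKE = auto()                # collecting history and presenting concerns
    NON_CLINICAL_TRIAGE = auto()   # routing, scheduling, signposting
    RISK_ASSESSMENT = auto()       # any judgement about harm or safety
    TREATMENT_DECISION = auto()    # diagnosis, care-plan or medication changes


# Hypothetical allow-list: only these task types may be completed by the
# AI system without a clinician signing off on the output.
AI_AUTONOMOUS_TASKS = {TaskType.INTAKE, TaskType.NON_CLINICAL_TRIAGE}


def requires_clinician_signoff(task: TaskType) -> bool:
    """Return True when a task falls outside the AI-autonomous allow-list."""
    return task not in AI_AUTONOMOUS_TASKS


if __name__ == "__main__":
    # Print how each task type would be routed under this sketch.
    for task in TaskType:
        gate = "clinician sign-off required" if requires_clinician_signoff(task) else "AI may handle"
        print(f"{task.name}: {gate}")
```

The design choice here is an allow-list rather than a block-list: anything not explicitly approved for autonomous handling defaults to clinician review, which matches the "clinical sign-off, not autonomy" stance above.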