In a recent Harvard Business Review study, therapy emerged as the number one use case for generative AI. Not productivity, not content creation, but therapy. This finding may come as a surprise to some HR leaders, but it reflects something many employees already know: when support is hard to access or stigmatised, people turn to what’s immediately available. Increasingly, that means ChatGPT.
A surprising new trend in AI use
As generative AI tools have become more conversational and accessible, employees are beginning to use them as a first port of call for emotional support. The appeal is understandable: AI is always available, non-judgemental and anonymous. For someone navigating anxiety, stress or loneliness, it can feel easier to confide in a chatbot than in a line manager or a formal support service.
This trend should prompt concern. Not because people are seeking help, but because they’re seeking it from a tool never intended to provide it. ChatGPT is a general-purpose AI trained to predict language patterns. It is not trained to manage mental health risk, detect clinical red flags, or respond safely to someone in distress. It does not have the oversight, traceability, or clinical safeguards needed to ensure that conversations do more good than harm.
What the regulators are saying
Recent guidance from professional bodies supports this caution. The American Psychological Association has issued warnings about the use of generative AI in mental health contexts, citing the absence of safety frameworks and the real potential for harm, and has raised explicit concerns about AI tools being used for therapy without clinical validation.
In the UK, the Medicines and Healthcare products Regulatory Agency (MHRA) is moving toward reclassifying even gentle wellbeing interactions as medical-grade interventions. That includes metaphorical exercises such as “write your thoughts on a leaf and watch it float away.” Under new guidance, these will soon be regulated as Class IIa medical devices, a shift that reflects growing recognition that mental health tools, even digital ones, carry real risk if poorly designed.
What these developments point to is a critical distinction between AI that is trained to talk and AI that is trained to care. Between models optimised for engagement and those designed for safety. Between content generation and clinical support.
What should employers take from this?
First, if therapy is the top reported use of generative AI, it’s reasonable to assume that employees in your organisation are experimenting with it in this way. It’s not just tech-savvy Gen Z employees, either: comfort with digital tools now spans generations and geographies. This behaviour doesn’t signal recklessness; it signals unmet need. People are looking for support that’s easy to access, stigma-free and available in the moment they need it.
Second, the response shouldn’t be to restrict access to general AI tools outright, but to offer safer, clinically governed alternatives that meet employees where they are. This means looking for mental health tools that are:
- Clinically validated: backed by research, with evidence of efficacy across diverse populations.
- Designed with clinician oversight: involving psychologists and therapists in both development and risk governance.
- Transparent in how they work: with explainable AI models, not just opaque deep learning systems.
- Built for privacy and compliance: adhering to recognised security and data protection standards such as ISO 27001 and HIPAA.
- Ethically governed: with no data monetisation and no third-party sharing without informed consent.
- User-centred: offering full control over personal data, including the ability to delete it easily and at any time.
An opportunity to meet employees where they are
There is also an opportunity here. When safe, scalable tools are introduced, they don’t just reduce the risk of harmful AI use. They increase uptake of support overall. We know that a large proportion of employees never engage with traditional mental health services due to stigma, cost, or time. Digital tools that feel informal, anonymous and immediate can help bridge that gap, provided they are purpose-built for the task.
This is not a case of being anti-AI. On the contrary, the right kind of AI can play a powerful role in workplace wellbeing. It can triage needs, offer self-guided support and connect people to human care when needed. It can scale in ways human services cannot, extending reach without compromising quality, but only when safety and ethics are the starting point.
As generative AI becomes more embedded in daily life, the challenge for employers will be to distinguish between what is available and what is appropriate. Employees will continue to explore new tools in moments of vulnerability. The best response is not to judge or block that behaviour, but to ensure that when someone reaches for help, they find something designed to hold that moment safely.
Look for tools that adhere to clinical standards, involve mental health professionals in their design and are transparent about their use of AI. These are the indicators that the technology is not only engaging, but trustworthy. In a world where emotional support can be a chat window away, that difference matters.
A final thought
HR leaders are not expected to become AI experts, but in this rapidly shifting landscape, asking the right questions helps. As AI becomes part of the wellbeing toolkit, now is the time to shape policies and partnerships that put employee safety first. Choosing tools that are built for care, not just conversation, can help ensure that your organisation supports employees in the way they deserve: with empathy, evidence and trust.
About the author:
Sarah Baldry is chief marketing officer at Wysa, the global leader in AI-driven mental health support, offering services through employers, insurers and healthcare providers. Its emotionally intelligent conversational agent uses evidence-based cognitive behavioural therapy (CBT) techniques and soft-skills training to enhance mental resilience. With over 6 million users across 95 countries, Wysa works with corporate clients including Vitality Insurance, the NHS, L’Oreal, Bosch and Colgate-Palmolive. For more details, visit www.wysa.com.