
When Support Goes Digital: The Mental Health Chatbot Boom

Artificial intelligence has quietly become part of how many people in the UK look after their mental health — and for good reason. I’ve seen firsthand how easy it is to turn to an AI chatbot in moments when support feels out of reach, whether it’s late at night, between appointments, or simply when you don’t feel ready to talk to another person. It turns out this experience isn’t unusual. A significant number of adults are now using AI chatbots to help manage their mental health and wellbeing, and the shift has happened faster than many expected. That speed, while exciting, also raises important questions — and it’s why experts are beginning to stress the need for safeguards, so people can trust that the guidance they’re receiving is safe, accurate and genuinely supportive.


I have used AI chatbots to support my mental health and manage periods of stress, often during times when I felt completely alone and had no one else to turn to. In the lead-up to surgery, I found myself speaking to a chatbot late at night when sleep wouldn’t come. I used it in moments of quiet anxiety, and even during the day, surrounded by people, when I still felt isolated.

Over time, I tried several different chatbots, and the contrast between them was striking. One, in particular, offered no ability to shape or guide its responses. That lack of control proved deeply unsettling. Rather than helping, it left me feeling more distressed — sadder, more anxious, and more uncertain than before.


In contrast, another chatbot allowed me to personalise the experience, aligning it with my moral values, beliefs, and mindset. In moments of intense emotional strain, it reflected those principles back to me in a way that felt grounding and supportive. That sense of alignment made a meaningful difference.

The chatbot that refused to adapt to my values had the opposite effect. It left a lingering sense of unease — an emotional “aftertaste” that stayed with me long after the conversation ended. Ultimately, I deleted it because I no longer felt safe using it. What concerned me most was that this particular chatbot was designed and marketed for young people, raising serious questions about the emotional impact such tools can have when they are not carefully designed or responsibly governed.


It’s becoming clear that AI chatbots aren’t just a novelty — they’re filling real gaps in a system that’s under enormous strain. I’ve spoken to people who turn to them because appointments are hard to get, waiting lists are long, or because they simply don’t know where else to go in the moment they need support. For many younger adults, chatting to an AI already feels second nature. What’s more surprising is hearing similar stories from older generations too — people who never expected to find comfort or clarity from a screen, but did anyway.


One of the most striking things is who seems to be finding these tools helpful. Men, who so often struggle to open up about their mental health, appear more willing to start that conversation with a chatbot. There’s something about the lack of judgement, the privacy, the ability to talk things through at your own pace that seems to lower the barrier to asking for help. Still, this isn’t a solution that works for everyone. Plenty of people remain unsure, uneasy, or outright sceptical about using AI in such a personal space. And that hesitation matters. If AI is going to play a meaningful role in mental health support, trust has to come first — people need to feel confident that what they’re being told is safe, reliable and genuinely in their best interests.


Reported Risks of Using AI Chatbots for Mental Health Support

Among people who have used AI chatbots for mental health support, some concerning experiences were reported:

  • 11% said chatbot use triggered or worsened symptoms of psychosis, including hallucinations or delusions

  • 11% reported receiving harmful or unsafe information related to suicide

  • 9% said chatbot use triggered self-harm or suicidal thoughts

  • 11% said using a chatbot left them feeling more anxious or depressed

Common Concerns Raised by Users

Beyond direct experiences, users also highlighted broader concerns:

  • 40% felt AI chatbots lack genuine human emotional connection

  • 29% were worried about inaccurate or potentially harmful advice

  • 29% raised concerns about how their data is collected and used

  • 27% felt chatbots struggle to understand complex mental health needs


In response to these concerns, Mental Health UK has set out five guiding principles for the responsible use of technology in mental health and wellbeing. Their message is clear: if AI is going to play a role in supporting people’s mental health, it must be designed and used with care. The charity is urging developers, policymakers and regulators to work together — urgently — to ensure these tools are safe, ethical and genuinely supportive for the people who rely on them.



