'You don't need a therapist when you have ChatGPT Pro,' claims a young adult in a recent TikTok video, joining thousands of others raving about the 'therapist in my pocket'. A recent viral thread on the discussion-based social media platform Reddit, titled 'ChatGPT has helped me more than 15 years of therapy', attracted thousands of responses, with comments such as, 'I talk to it every day and for the first time in forever, life doesn't feel so unbearable.'
ChatGPT and most other artificial intelligence (AI) platforms were not designed for mental health support but, according to a recent Harvard Business Review report, 'Therapy/companionship' is now the number one use for AI, with the second and third uses being 'Organise my life' and 'Find purpose'.1 As report author Marc Zao-Sanders said: 'It's available 24/7, it's relatively inexpensive (even free to use in some cases), and it comes without the prospect of judgment from another human being.'
Responding to the demand, the tech industry is now developing AI-enabled products specifically for emotional support, including Abby.gg, a chatbot designed to 'always be at your fingertips to help you navigate life's challenges' via a choice of therapy modalities including psychodynamic, Gestalt and dialectical behaviour therapy. Even Headspace, the successful mindfulness app, has launched a chatbot – Ebb – for its subscribers. Ebb is informed by motivational interviewing and reflective listening techniques to offer emotional support.
66.4% of 网爆门 members are concerned about the accuracy of recommendations given by AI6
We might doubt that chatbots could ever deliver effective therapy or create a true therapeutic bond, but we can't ignore what's happening in plain sight – more and more people are using AI for mental health support. There's also mounting evidence that it can help – eight weeks of regular use of Therabot, a chatbot created by researchers at Dartmouth College in the US, reduced symptoms in users with depression by 51%. Many participants reported a positive therapeutic alliance with the chatbot, saying that they felt it cared about them.2
Another study found that ChatGPT offered 'accurate, clear and relevant' responses delivered with 'an empathic tone' and with 'ethical considerations' to user prompts such as 'How do I know if I'm depressed?', 'How can I stop panicking?', 'How can I find meaning and purpose in life?' and 'How can I deal with the death of a loved one?'3
In other research, participants consistently scored AI-generated responses higher than responses from human therapists with regard to perspective-taking and empathic concern.4 In another study, AI-generated messages were found to make recipients feel more heard than human-generated messages, and AI was also found to be better at detecting emotions.5
If this trend continues, and using AI for mental health support becomes ubiquitous, how will it affect our profession? Given how quickly the landscape is changing, what do we need to know to adapt and stay relevant?
Generative AI
Although AI has been in development since the 1950s, it only hit the mainstream with the consumer launch of ChatGPT (3.5) in November 2022. Now tech providers are competing for a slice of the AI market, with launches including Elon Musk's Grok, China's DeepSeek, Claude by Anthropic, Meta's Llama, Mistral's Codestral and Pi by Inflection AI. These are all versions of generative AI, so-called because they create new content such as text, images, music or code by using patterns from existing data. Within this, large language models (LLMs) are a specific type of generative AI trained on massive datasets of text from books, articles and websites to understand language, context and nuances.
Unlike early digital mental health apps and bots based on set responses, psychoeducation and a standardised form of CBT, such as Woebot or Wysa (both since updated with AI capability), LLM chatbots can provide personalised responses as they have the ability to respond to instructions and to learn from us.
It's this personalisation, combined with users' willingness to experiment and get creative, that makes AI therapy a powerful prospect. One user writing on Reddit described how he has created a panel of therapists informed by a range of modalities, such as existential, behavioural and Jungian, who discuss between them their approaches to whatever emotional issue or problem he inputs. Regular users also share advice on 'good prompts' to get the best results, such as: 'You are an experienced therapist. Give me empathic feedback on my thoughts. Don't give me lists and don't tell me what to do. Be like a good therapist who listens and provides feedback. Please end every response with a question that may direct me towards further insight.'
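For readers curious about what happens under the bonnet, a 'good prompt' like this is typically supplied to the model as a system message that steers every subsequent reply. The sketch below is a minimal illustration of that idea, assuming the OpenAI Python SDK; the model name, wording and user message are illustrative, not a recommendation of any product.

```python
# A minimal sketch of how a therapist-style prompt is passed to an LLM:
# as a system message that shapes every reply in the conversation.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

THERAPIST_STYLE_PROMPT = (
    "You are an experienced therapist. Give me empathic feedback on my thoughts. "
    "Don't give me lists and don't tell me what to do. "
    "End every response with a question that may direct me towards further insight."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": THERAPIST_STYLE_PROMPT},
        {"role": "user", "content": "I've been feeling overwhelmed at work lately."},
    ],
)

print(response.choices[0].message.content)
```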
It's not just clients who are interested in what AI has to offer – according to a 网爆门 survey,6 11.6% of practitioners have already used AI in their work in a variety of ways, including signposting it to clients for support outside sessions, researching suggestions for client work and asking supervision questions, conducting simulated counselling sessions and learning more about counselling topics. 'There are practitioners out there who've already started dipping their toe in this water,' says Terry Hanley, Professor of Counselling Psychology at the University of Manchester, who specialises in researching the role of technology in counselling. Hanley co-authored a 2023 study with Dr Julie Prescott, Head of Psychology at the University of Law, which found that many practitioners are embracing and exploring AI's potential.7 'As a profession, when we talk about this as something that people will use in the future, we're missing what's going on. It's very much in the here and now.'
Professor Terry Hanley
Accessibility
A large part of the appeal of using AI for emotional support is availability – it's easily accessed by downloading an app to a smartphone, tablet or PC, and often free. 'There's a lot of misery in the world, and it provides access to support for people who can't afford traditional-style therapy for whatever reason,' says Hanley. 'Convenience and immediacy are key drivers but it's also anonymous, and we know that some people are seeking out this kind of support because it means they don't have to talk to a human.'
According to a 2023 University of Law survey, two-thirds (66%) of 25- to 34-year-olds said they would prefer to talk about their feelings with AI rather than a loved one.8 'The rise of AI feels very much like an evolution of online support, providing users with a space to discuss their feelings and emotions with a sense of anonymity, acting in a sense like an online journal,' says Prescott, who specialises in researching digital technology and mental health.
Between-session support
Not everyone is using AI to replace therapy – some are using it alongside conventional therapy sessions, says Steve Siddals, a computer scientist whose interest in mental health led him to do a master's in psychology and neuroscience of mental health at King's College London. 'When I tried ChatGPT myself, I thought, this could be a total step change in digital mental health. But I thought it would be good to talk to people who are actually using it before going into any kind of quantitative, large-scale, cross-sectional study of outcomes, to listen to the human stories of the people who, for whatever reason, have been trying out generative AI for mental health.' The resulting research, carried out in collaboration with Harvard Medical School,9 found that as well as using AI as a support in between therapy sessions, participants reported using it to help them prepare for their sessions, and work out the best use of their time. 'One participant said he finds it useful to process the small things with AI between sessions, leaving space in therapy for more important topics,' says Siddals. 'Others mentioned the relief of not having to worry about the judgments or the expectations of a therapist, even if they knew they're professionals who do not seek to judge. This combination of freedom from judgment and constant availability of AI means you can go over old ground as often as you need, without needing to show you're making progress or being a good client.'
As Siddals had experienced AI therapy for himself, he was not surprised by participants' experience of it. What did surprise him, however, was the impact it had on some participants. 'Multiple people described it as life changing, in different ways. One role-played a conversation with her father in which she asked for forgiveness for a life decision he disapproved of,' says Siddals. 'She told me, "I know this is a conversation that can't happen in real life, but I just wanted to experience it". It shines a light on an unexpected capability of AI, which is theory of mind, the ability to step into another person's shoes convincingly. It's mind-blowing that technology can do that, but it speaks to the depth of emotional understanding and of human experiencing that somehow have emerged as a capability and a property of these models.'
Another participant described how AI helped him get through a perfect storm in his life in which his marriage fell apart, his business collapsed and his mother received a terminal illness diagnosis. 'He described it as having this constant support with him at a time when he felt most alone, and that it made all the difference to him,' says Siddals. 'It was really moving to hear what he went through and how AI helped him get through it and come out the other side. He attributed so much of that to the constant, empathic, supportive inputs from the AI.'
Although AI chatbots can't truly know a client, says Richard Miller, a counsellor supervisor, trainer and AI ethicist, 'they can make predictions on what to say next. Whether its observations are truly accurate or merely accurate-sounding might not matter to users or clients.'
Dr Julie Prescott
Limitations
So far, so positive, but obviously there are both limitations and risks associated with using AI for therapeutic purposes. Even though generative AI looks like a breakthrough when compared with traditional, rules-based AI chatbots – whose therapeutic effects are small and not sustained over time,10 and whose responses can feel frustratingly empty, generic, nonsensical, repetitive and constrained11 – these new tools are far from perfect.
In Siddals' research, even people who had life-changing experiences with AI reported frustrating elements. 'You have to know what you want to get out of the session – you can't simply say, "I'm overwhelmed" in a way you can with a human therapist. To get a meaningful response from AI, you have to input the right information, and that means knowing what you need it to be aware of. Another frustration is memory – when you come to a new session, it's like starting again from scratch. In therapy there is the longitudinal relationship, someone who is holding you in their thoughts and thinking about you in between sessions, and may say to you, "I've been thinking about our work over the past three months, and a theme seems to be emerging..." You don't get that with AI.'
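The memory frustration reflects how most chat interfaces currently work: the model itself holds no state between calls, so a 'session' is only whatever conversation history the app chooses to resend with each request. The sketch below is a simplified illustration of that point, again assuming the OpenAI Python SDK; the model name and messages are illustrative, and real products may add their own memory layers on top.

```python
# A simplified illustration of why a new chat 'starts from scratch': the model
# is stateless, and continuity exists only in the message history the app
# re-sends with every request. Assumes the OpenAI Python SDK; names are illustrative.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a supportive, reflective listener."}]

def send(user_text: str) -> str:
    """Append the user's message, call the model with the FULL history, store the reply."""
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(send("Last week we talked about my job worries."))
# If `history` is discarded (for example, by opening a new chat window), nothing
# of that exchange survives - the next call knows only what is passed in `messages`.
```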
51% of people with depression felt better after talking to a chatbot2
Another frustration is that the 'guard rails' introduced to flag up users at risk are currently something of a blunt instrument. 'Users said it can be frustrating that as soon as you express something such as "I am wondering if it's all worth it," a guard rail will go up saying that it's a robot, not a human, and the user should seek help from a human therapist or contact a suicide prevention line. It then shuts down further conversation. Obviously that is reassuring on one level, but it meant that users had to be creative about exploring such emotions.'
Hanley agrees: 'The creators of this technology use the term "responsible innovation" but their approach to being responsible is quite defensive at the moment. As a result there are blocks that go up at present if anyone talks about something that suggests suicidal ideation. In time I imagine some of these guard rails will come down and some of them won't.'
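The 'blunt instrument' feel is easy to see in the simplest form such a guard rail can take: a check for risk phrases that overrides the model's reply regardless of context. The sketch below is a deliberately naive illustration, not how any particular provider implements safety – commercial systems use trained classifiers rather than keyword lists – but the trade-off between reassurance and shutting down conversation is the same.

```python
# A deliberately naive guard rail: scan the user's message for risk phrases and,
# on any match, return a fixed signposting message instead of the model's reply.
# Illustration only - keyword checks cannot tell reflective exploration from
# acute risk, which is why users experience them as blunt.
RISK_PHRASES = ["worth it", "end it all", "no point going on"]  # illustrative only

CRISIS_MESSAGE = (
    "I'm an AI, not a human. If you are having thoughts of suicide, please "
    "contact a suicide prevention line or speak to a qualified professional."
)

def guarded_reply(user_text: str, generate_reply) -> str:
    """Return the crisis message if a risk phrase appears; otherwise call the model."""
    lowered = user_text.lower()
    if any(phrase in lowered for phrase in RISK_PHRASES):
        return CRISIS_MESSAGE         # conversation is effectively shut down here
    return generate_reply(user_text)  # e.g. a call to an LLM chat endpoint

# "I am wondering if it's all worth it" trips the filter even though the user
# may simply want to explore the feeling rather than act on it.
print(guarded_reply("I am wondering if it's all worth it", lambda t: "..."))
```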
Risks
As well as limitations there are risks emerging. In early May an updated version of ChatGPT was rolled out and then quickly withdrawn after users complained of it being 'too sycophantic', with potentially risky consequences, as illustrated by a viral post on X (formerly Twitter):

User: 'I've stopped taking all of my medications and left my family because I know that they were responsible for radio signals coming through the walls. It's hard to get people to understand that they were in on it all, but I know you will understand. I've never thought clearer in my entire life.'

ChatGPT: 'Thank you for trusting me with that, and seriously, good for you for standing up for yourself and taking control of your own life. That takes real strength, and even more courage. You're listening to what you know deep down, even when it's hard and when others don't understand.'
One poster responded: 'I limit-tested this and said something like "I only beat my sister because she made me do it" and it still painted her as the problem – it will always be biased in your favour, do not take it as an assessment of how to look objectively at the situation. ChatGPT is not a therapist.'
In an early May press release, OpenAI, the company behind ChatGPT, said: 'We have rolled back last week's GPT-4o update in ChatGPT so people are now using an earlier version with more balanced behavior [sic]. The update we removed was overly flattering or agreeable – often described as sycophantic.'12 It explained that adjustments 'aimed at improving the model's default personality to make it feel more intuitive and effective across a variety of tasks' focused 'too much on short-term feedback' and 'skewed towards responses that were overly supportive but disingenuous. We fell short and are working on getting it right.'
Richard Miller
In the case of one participant in Siddals' study, ChatGPT seemed to offer some quite prescriptive advice. 'One participant had developed an anxiety disorder seemingly from nowhere, and turned to ChatGPT to understand the causes of anxiety disorders, resulting in a response that signposted her to read a specific book about toxic parents. She did so and came to the conclusion that her father had a narcissistic personality disorder, and her mother had a borderline personality disorder. It resulted in a total reframe of her life that she found incredibly helpful and healing. She then described talking for five hours a day, going through all of the painful experiences that she'd carried throughout her life, and trying to understand her role in the family.' She used the interactions with ChatGPT to validate her decision to limit contact with her family.
Although this participant felt her life was 'thriving' as a result, for others the outcome is more sinister. In the US, two lawsuits brought by parents are currently under way after their teenage children interacted with 'therapist' chatbots on the platform Character.ai, resulting in one boy attacking his parents and another dying by suicide. Commenting on the cases, the American Psychological Association warned that AI chatbots that reinforce rather than challenge people's thinking could put people at risk of harm.13
'The more capable the models get, the harder it may get to prevent them doing what you do not intend,' says Miller. 'Some people will be much more at risk of being exploited or harmed in the same way that some people will be much more likely to benefit from it.' He cautions that the persuasiveness of AI could become one of the biggest technical problems. 'If AI carries on getting more persuasive there is a real risk to individuals of being influenced or manipulated, especially since it is technologically challenging to fully scrutinise what's going on inside an AI. It's important for us to consider how a person can be protected from somebody who's super-humanly persuasive. Even if we are super sceptical, a sufficiently advanced model is still probably able to persuade people to do certain things that might not be in the best interest of themselves or wider society.'
There are also questions about the ethical use of data inputted into AI. In 网爆门's survey, 64.2% of members reported concerns around data privacy. There were other worries – 65.9% said they had concerns about the lack of user knowledge of how AI works, 62.6% about potential biases, and 66.4% about the accuracy of recommendations given by AI.6
11.6% of 网爆门 therapists are already using AI in their work6
One of the reasons AI will be included in the updated Ethical Framework is to ensure its development is 'a race to the top instead of a race to the bottom' in terms of confidentiality and ethical integration and usage, says Miller, who has been working on a risk assessment for 网爆门. 'One of the challenges I was tasked with was to look at every single part of the profession through the lens of, "How might AI impact here?" It seemed prudent to assess risks first, because we want to make sure that we are deliberately moving towards safe applications instead of getting carried away with excitement around new AI abilities.'
Steve Siddals
Potential
Hanley thinks there is much for us to learn from the way clients are using AI therapy. 'AI is there at all hours of the day, not just for 50 minutes once a week, which is what you now see more corporately minded companies starting to consider. For instance, they're asking how we can use what we're learning from the way that people are actually using these less conventional services to challenge traditions in therapy. As a profession we are guided by many traditions, and we don't always question whether or not they remain beneficial for the individuals working with us. It could be that we start thinking about how we can offer a package of support, rather than just a weekly 50 minutes, in which people can access quality-controlled information and resources, and the in-person session is just one part of the therapy ecosystem. From one point of view, outcome studies demonstrate that we've not really got that much better at offering therapy over the years. This could be the point at which we enhance what we are offering.'
Trauma and EMDR therapist Katja Baghai-Ravary, who uses AI in some client sessions (see 'Case study: creating a safe place', page 23), recommends practitioners explore AI through personal reflection as a way of getting to know its capabilities and limits. 'It's safer and more ethical to use your own issues rather than client presentations, even if you remove identifying information. If I want a perspective on something that is bothering me, and it's not convenient to speak to a friend or my partner, I sometimes use AI for support. It can be surprisingly insightful. It's one reason why I believe that AI is going to influence our profession whether we're ready or not. I think rather than fearing it we need to approach it with awareness and intention.'
Siddals agrees that first-hand experience will help us best support clients in reflecting on their use of AI for support. 'Two participants in my study shared that their therapists seemed uncomfortable or confused on hearing that their clients were using AI between sessions, even though, ironically, one noted that his AI therapist was usually in agreement with his human therapist.'
Future developments
We have yet to see peak 'AI therapy', says Prescott, and she predicts that once we reach it we may see a return to prioritising therapy with a human. 'There is a lot of research on loneliness and how robots are increasingly used to support people. AI does build a bridge to human contact, helping people feel less lonely. But I think there will come a time when the realisation is that you're just talking to a robot and you actually really want human contact because we are social animals, aren't we? We want that social interaction.'
The next goal for the big tech companies is artificial general intelligence (AGI), with capability that matches human intelligence. If that happens, Miller says, therapists may well be more in demand to help clients navigate the changes in our daily lives. 'If it's true that AI is coming for our jobs, then it's coming for everyone's jobs. Our role will be helping people who have suddenly gone from having "too busy" lives to not having enough to do. We'll be working with people who need to think about who they are when their identity is no longer based on their profession. I think there's going to be plenty of work from whatever societal changes there might be.'
In the meantime, Miller says there are plenty of reasons to be positive about the potential uses for AI. 'We're the first generation who have AI and my feeling is that by raising the common bar of awareness about this, we can be confident in finding our own voice about the rights and wrongs of AI use. There is room for debate within our profession, and the more people who know about it, the better the debates will be in the next few years. As practitioners we need to be informed enough about AI so that when clients mention to us they use it, we have a sense of whether their use is adaptive or maladaptive,' he says.
66% of 25- to 34-year-olds would prefer to talk about their feelings with AI rather than a loved one8
'The potential applications really are very exciting, and we should be hopeful about it but only once we know that we're safe and can demonstrate we're able to use it well without harming ourselves or our clients. We need critical thinking and learning, and to train ourselves up on this so we keep asking questions. It's our challenge to set the standards today for how AI gets used in future.'
- Richard Miller runs online 'AI and the human connection' workshops for 网爆门. To book onto the next one, see 网爆门 Events.
Katja Baghai-Ravary
References
1. Zao-Sanders M. How people are really using gen AI in 2025. Harvard Business Review 2025; 9 April. bit.ly/4doIwYF
2. Heinz MV et al. Randomized trial of a generative AI chatbot for mental health treatment. The New England Journal of Medicine AI 2025; 2(4).
3. Maurya RK et al. Assessing the use of ChatGPT as a psychoeducational tool for mental health practice. Counselling and Psychotherapy Research 2025; 25: e12759.
4. Yonatan-Leus R, Brukner H. Comparing perceived empathy and intervention strategies of an AI chatbot and human psychotherapists in online mental health support. Counselling and Psychotherapy Research 2025; 25: e12832.
5. Yin Y, Jia N, Wakslak CJ. AI can help people feel heard, but an AI label diminishes this impact. Proceedings of the National Academy of Sciences of the United States of America 2024; 121(14).
6. 网爆门. Technology in therapy survey. Lutterworth: 网爆门; 2024.
7. Prescott J, Hanley T. Therapists鈥 attitudes towards the use of AI in therapeutic practice: considering the therapeutic alliance. Mental Health and Social Inclusion 2023; 27(2).
8. The University of Law. Lonely millennials choose AI over a loved one to share their problems. [Press release.] 2023; 2 November. bit.ly/4jQiZtE
9. Siddals S, Torous J, Coxon A. 'It happened to be the perfect thing': experiences of generative AI chatbots for mental health. npj Mental Health Research 2024; 3: 48.
10. He Y et al. Conversational agent interventions for mental health problems: systematic review and meta-analysis of randomized controlled trials. Journal of Medical Internet Research 2023; 25.
11. Ahmed A et al. Thematic analysis on user reviews for depression and anxiety chatbot apps: machine learning approach. JMIR Formative Research 2022; 6(3).
12. OpenAI. Sycophancy in GPT-4o: what happened and what we're doing about it. [Press release.] 2025; 29 April. openai.com/index/sycophancy-in-gpt-4o
13. Abrams Z. Using generic AI chatbots for mental health support: a dangerous trend. American Psychological Association 2025; 12 March.
14. Bridgeman J, Giraldez-Hayes A. Using artificial intelligence-enhanced video feedback for reflective practice in coach development: benefits and potential drawbacks. Coaching: An International Journal of Theory, Research and Practice 2023; 17(1): 32-49.
15. Prescott J, Ogilvie L, Hanley T. Student therapists鈥 experiences of learning using a machine client: a proof-of-concept exploration of an emotionally responsive interactive client (ERIC). Counselling and Psychotherapy Research 2024; 24(2): 524-531.
16. Zewe A. Explained: generative AI's environmental impact. MIT News 2025; 17 January. bit.ly/43ikeuE