Artificial Intelligence (AI) for Mental Health: Why the Human Therapist Remains Essential
Over recent years there has been an explosion of Artificial Intelligence (AI) tools such as Gemini, DeepSeek, Copilot and ChatGPT, and they are rapidly gaining popularity as accessible platforms for quick information. In 2024, “ChatGPT saw an estimated 37.5 million search-like prompts per day, giving it a 0.25% market share” (Goodwin, 2025). These platforms are also being used for mental health support, because their conversational style can mimic empathy, reflection and understanding.
Whilst AI technology is demonstrating some potential in mental health services, it comes with significant limitations when used in place of qualified therapeutic professionals such as Counsellors, Psychotherapists and Psychologists.
This blog explores the disadvantages of relying on AI for mental health concerns and presents a case for why a human therapist, either online or in person, remains indispensable in delivering safe, ethical and effective care.
Lack of empathy and relational depth
AI is a technology platform trained on vast datasets. It uses large-scale language modelling to simulate human conversation. Essentially, it is a far more advanced version of predictive text: it has been taught to respond in the way we expect it to. However, it fundamentally lacks the capacity for genuine empathy and relational depth in a conversation. While AI may appear to respond with sensitivity and “understanding”, this is a computational prediction of what you are expecting to hear, rather than an emotional response to your words.
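To make the “advanced predictive text” comparison a little more concrete, here is a deliberately simple sketch in Python. It is my own illustration of the general principle only, not how any particular chatbot is built: real chat tools use enormously larger neural language models, but, like this toy, they produce a statistically likely continuation of your words rather than a felt response to them.

```python
from collections import Counter, defaultdict
import random

# Tiny "training" text: the model only ever sees word pairs, never meaning.
corpus = (
    "i feel anxious today . i hear how hard that is . "
    "i feel sad today . i hear how heavy that feels ."
).split()

# Count which word tends to follow each word.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Pick a statistically likely next word; no emotion, just counts."""
    options = followers.get(word)
    if not options:
        return "."
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

# String together a short, plausible-sounding reply from a seed word.
word, reply = "i", []
for _ in range(6):
    reply.append(word)
    word = predict_next(word)
print(" ".join(reply))  # e.g. "i hear how hard that is"
```

The output can sound caring, yet nothing in the process involves feeling; it is pattern-matching on past text, which is the point being made here about empathy.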
As Gelbart et al. (2023) argue, “AI lacks the contextual awareness and emotional attunement that forms the basis of therapeutic rapport”. The therapeutic alliance, that is, the relational bond between the client and the therapist, is a robust predictor of positive outcomes (Norcross & Lambert, 2019).
Therapy is a nuanced process of emotional co-regulation, and AI cannot reach the depth needed to build the trust required to work with complex emotions.
Risk of inappropriate or unsafe responses
As human therapists, we abide by various codes of ethics, be that of the BACP (British Association for Counselling and Psychotherapy), the UKCP (United Kingdom Council for Psychotherapy) or the NCPS (National Counselling and Psychotherapy Society), where the primary tenet, in summary, is “do no harm”.
AI does not conform to any such codes. These platforms possess neither the clinical judgement nor the ethical reasoning to do so, and they are prone to generating responses that may be misleading, invalidating or risky for a client in times of distress.
AI creators could argue that there are safety measures in place, yet these tools can misinterpret risk, fail to identify crisis situations, or provide inaccurate information due to biases or gaps in their training data.
Studies have raised concerns about the “hallucination” phenomenon, where AI generates plausible-sounding but false information (Ji et al., 2023). In a therapeutic relationship, such misinformation can erode trust, reinforce maladaptive beliefs or lead to the neglect of critical interventions.
In addition to this, AI lacks the ability to conduct a thorough risk assessment or to coordinate with emergency services, both essential components of ethical clinical care when clients present with suicidality or self-harm (Blease et al., 2022).
Limited personalisation and lack of theoretical orientation
AI is an information-gathering and generating process. While it can tailor responses based on your prior inputs, it lacks the theoretical grounding and case formulation skills of a trained therapist. Psychotherapeutic interventions are grounded in models of human development, psychopathology and change, whether psychodynamic, cognitive-behavioural, systemic or integrative. These models guide us as clinicians in understanding the unique history, needs and goals of each client we see.
Unlike a human therapist, AI works through pattern recognition rather than clinical reasoning. It cannot construct meaning from the client’s narrative in the same way, nor can it adapt interventions in response to the evolving therapeutic dynamic (Hasson et al., 2020).
Ethical and Confidentiality Concerns
Data protection is a paramount concern in digital mental health. In the UK we must follow GDPR (General Data Protection Regulation) processes, which were created to protect individuals from harm arising from the disclosure of sensitive information. The codes we human therapists are bound by also include the principles of confidentiality and informed consent. The same guarantees cannot be reliably extended to AI tools, where there are still unanswered questions about how conversational data is stored, processed and used, especially when commercial interests are involved (data being a financial commodity in itself).
As highlighted by Luxton (2022), “users of AI-based mental health tools may unknowingly forfeit control over sensitive personal information.” This may deter open disclosure, limit therapeutic effectiveness, and expose users to risks associated with data breaches or misuse.
AI Undermining Help-Seeking Behaviour
These platforms offer convenience and anonymity, which are attractive to those seeking support. However, overuse of and reliance on such tools can delay engagement with professional help.
There is a risk that users may come to accept substandard support as the norm, or perceive mental health issues as something solvable through quick, surface-level conversations.
Professional therapy involves a collaborative process of exploration, insight and change that unfolds over time. The aim of therapy is to develop emotional literacy, resilience and autonomy in a way that no automated tool can replicate.
AI as a Supplement, Not a Substitute
There is a growing understanding that AI can be useful as a supplement to human-led mental health care, not as a substitute for the process of therapy. AI could have a place, to some extent, in providing interim support or basic psychoeducation. However, AI tools cannot address the complexity of human beings, the risks they face, or the relational depth needed to sustain a relationship capable of bringing about change, which is what characterises counselling and psychotherapy.
Ultimately, healing occurs in the presence of another, through attunement, empathy and co-created meaning. Human therapists bring not only clinical expertise but also the profound capacity to bear witness to suffering, hold hope, and facilitate enduring change.
If you are ready to make change that is sustainable, why not get in touch to see how we can work together to build the future you want to live in? Contact me at Ayna Therapies (click here) by filling out my contact form.
References
Blease, C. R., Kaptchuk, T. J., Bernstein, M. H., Mandl, K. D., & Halamka, J. D. (2022). Artificial intelligence and the future of psychiatry: Insights from a global physician survey. NPJ Digital Medicine, 5(1), 48. https://doi.org/10.1038/s41746-022-00627-2
Gelbart, R. A., Cecchi, G. A., & Reece, A. G. (2023). Large language models in mental health: Applications, concerns, and ethical considerations. Journal of Medical Internet Research, 25, e46641. https://doi.org/10.2196/46641
Goodwin, D. (2025, March 11). Google Search is 373x bigger than ChatGPT search. Search Engine Land. https://searchengineland.com/google-search-bigger-chatgpt-search-453142
Hasson, U., Nastase, S. A., & Goldstein, A. (2020). Direct fit to nature: An evolutionary perspective on biological and artificial neural networks. Neuron, 105(3), 416–434. https://doi.org/10.1016/j.neuron.2019.12.002
Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., ... & Ren, X. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1-38. https://doi.org/10.1145/3571730
Luxton, D. D. (2022). Ethics in the age of intelligent machines: A call for establishing guidelines to design ethical artificial intelligence in mental health care. Ethics and Information Technology, 24(2), 121–131. https://doi.org/10.1007/s10676-021-09612-z
Norcross, J. C., & Lambert, M. J. (2019). Psychotherapy relationships that work III. Psychotherapy, 56(4), 421–423. https://doi.org/10.1037/pst0000254