AI's Impact on Mental Health: An In-Depth Analysis
[picture reference: www.wysa.com]
The field of mental healthcare is undergoing a profound transformation, with artificial intelligence (AI) playing a pivotal role. Beyond the surface level of virtual therapists and robots, AI is stepping into the realm traditionally reserved for highly trained professionals. While these advancements sound promising, they also raise crucial ethical and societal questions that demand our attention.
Here's the situation: Initial research indicates remarkable potential. For example, Pennisi et al. (2016) found that children with autism demonstrated improved outcomes when interacting with RoboTherapy (an AI-driven robotic companion) compared to human therapists. These AI innovations are facilitating increased engagement and enhanced language development during therapy sessions. Additionally, AI chatbots such as Wysa and Woebot are providing accessible and effective support for individuals grappling with depression and anxiety (Fiske, Henningsen & Buyx, 2019). The integration of AI into mental health care is happening, but is it truly helpful to the emotional well-being of professionals and clients?
As an ex-therapist, I can appreciate the dedication and commitment required to become a mental health professional: the countless clinical hours and constant supervision, often for a very small pay package. It can be disheartening to contemplate that a large language model can replicate over five years of training in a matter of seconds. However, for those seeking quick and accessible support, it could be a more effective and practical way to avoid the NHS's never-ending waiting lists.
It’s very easy to get excited about a new craze that promises a solution for all, but it is essential to address the existing challenges faced by overworked mental health professionals within the NHS. The BMA* Mental Health Workforce report 2022 states that approximately 17,000 professionals departed from the NHS mental health sector in 2021-2022, with 52% citing overwhelming workloads as a primary reason. Data like this suggests that perhaps we need to (once again) focus on getting the basics right and get our front-door service in order before getting carried away with AI (BMA, Mental Health Workforce report 2022).
Then of course there are the cultural nuances. According to the 2023 Equality, Diversity and Inclusion data report for practitioner psychologists by the HCPC**, 84% of the psychological workforce in the UK is white (22,605), only 12% are from ethnic minority backgrounds, and, if we look even deeper, only 2% are Black (510). *pretends to be shocked*...
Once again, if we are not displaying diversity within our current therapeutic workforce in the offline world, how accurately will it be reflected in our online therapeutic approaches? Maybe this is an opportunity to get it right online…but can a robot provide a safe space to talk about matters such as childhood trauma in the context of an asylum seeker? Or racial bullying in the workplace? Or would someone like myself have to hope that I’m lucky enough to be seen by one of the 510 Black psychologists to get adequate, culturally relevant support?
When we think about the current developments in AI-based mental health care, we must also critically assess the extent to which the clinical and psychological advisors behind these technologies represent the diverse landscape of modern-day Britain. Do they possess an understanding of the multifaceted and intersectional experiences of individuals, including those within the LGBTQ+ community?
A particular concern of mine is the expectations users have of AI-based mental health support. For those in crisis, receiving text-based support instead of a face-to-face crisis assessment instantly rings alarm bells. There is a reason people require formal training to deliver mental health interventions, one of them being the legal obligations governed under the Mental Capacity Act 2005 and the Mental Health Act 2007. How does AI mental health care address these obligations?
Here's the key takeaway: The incorporation of AI in mental health requires the inclusion of experts from a range of backgrounds. We need a psychologically-informed workforce that is not limited to just data scientists and researchers but extends to individuals with genuine clinical expertise gained outside the confines of the research sphere. We need specialists such as safeguarding leads, mental health crisis clinicians, trauma-informed practitioners and most importantly, we need to involve those with lived experience.
*British Medical Association **Health and Care Professions Council (UK based)
#AI #Mentalhealth #diversity #inclusion #Traumainformedcare #Therapy
References:
BMA Mental Health Workforce Report 2022
HCPC Diversity Data: Practitioner Psychologist — July 2023 Report
https://www.hcpc-uk.org/resources/data/2023/diversity-data-practitioner-psychologists-2023/
Fiske, A., Henningsen, P., & Buyx, A. (2019). Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Journal of Medical Internet Research, 21(5), e13216.
Pennisi, P., Tonacci, A., Tartarisco, G., Billeci, L., Ruta, L., Gangemi, S., & Pioggia, G. (2016). Autism and social robotics: A systematic review. Autism Research, 9(2), 165-183.