Every year, more people are experiencing mental health difficulties. A recent study led by researchers from the University of Queensland and Harvard Medical School estimates that half of the world’s population will experience a mental health disorder over the course of their lives. Yet the supply of mental health professionals continues to lag behind the need. An analysis by the Australian government found a 32 per cent shortfall of mental health workers in 2022, projected to grow to 42 per cent by 2030. The shortage is even more profound in many other countries.
In the face of this national and worldwide mental health crisis, might artificial intelligence (AI) be part of the solution? After all, AI has shown promise in improving the quality and efficiency of medical care. It is already helping clinicians diagnose some physical conditions more reliably, and it is freeing up time for patient care by streamlining administrative tasks.
AI may indeed help, but only if we proceed thoughtfully and cautiously, bearing in mind the unique qualities and requirements of effective mental health care. The need is certainly there, but the technology is still evolving, and serious risks remain.
AI and Mental Health Support
Much mental health care relies on human-to-human connection, empathic listening and close attention to emotions. It also requires strict confidentiality, a deep respect for privacy and awareness of the diverse ways in which people respond to emotional strain. These are the foundations of quality mental health care.
Concerns
Concerns about the use of AI in mental health include:
- The unpredictable quality of AI-generated personal and mental health guidance, especially from large language model (LLM) tools such as ChatGPT
- Privacy and security of personal information, especially with open-source AI tools or tools that use what people enter as training data for ongoing learning
- The risk of individuals becoming emotionally dependent on AI interactions
- The unknown mental health effects of AI interactions on children and youth
Opportunities
Professional associations such as the Australian Psychological Society and the American Psychological Association are advocating for more research into ways to optimise the potential of AI in mental health care while minimising its risks. For now, they recommend a limited initial focus: using AI to streamline operations so clinicians can focus more fully on their therapeutic interactions. Workplace Options (WPO), the international employee support program, is taking this approach. With the participant's consent, our clinicians are testing the use of AI to summarise clinical notes at the end of a call or session; those notes are then referred to in subsequent sessions.
Other emerging opportunities might include using AI to:
- Improve the therapeutic skills of clinicians by providing feedback on patient-clinician interactions
- Track motion-related, physiological or language indicators of mental health issues to help patients and clinicians identify problems promptly
- Expand access to and improve the quality of approaches such as cognitive behavioural therapy (CBT) and mindfulness that don’t require an emotionally vulnerable relationship with a therapist
Realities
Professional restraint isn’t holding the general public back from using AI tools for therapeutic purposes, however. A 2024 study found that more than a quarter (28%) of Australians use AI tools—most often ChatGPT—for their own mental health and wellbeing. Of those who turn to AI for these purposes, the most common uses are for ‘quick advice when emotionally distressed’ (60%) or ‘as a personal therapist or coach’ to help manage emotional and mental health (47%).
People in this and other studies appreciate the accessibility of AI and the feeling of having an empathetic confidante. It's important to note, however, that ChatGPT and other large language model tools, the tools capable of this kind of responsive conversation, were not designed for and have not been tested in these kinds of mental health applications. Tragic examples have been reported: an adolescent in the United States and a young father in Belgium each died by suicide after deep engagement with different AI chatbots.
The reality of widespread mental health needs, combined with limited access to professional support and the ready availability of large language model tools, creates a formula for inappropriate and possibly dangerous public use of AI for personal support. At the same time, it's undeniable that many people are finding positive support through AI tools that they might not otherwise obtain. What's needed, urgently, is government and other support to build on the positive potential of AI, as well as guidelines and regulations to address the growing risks.
Toward a Responsible Future
Workplace Options will continue to explore the responsible use of AI as we deliver mental health and wellbeing support to people around the world. We take some comfort in knowing that employee support services like ours play an important role in broadening access to quality mental health services. Employees and managers can help by noticing when work colleagues show signs of mental health issues and steering them to support services such as ours, which are available 24/7 at no cost to the employee. Family members can access the program for support, too. These are human connections, and they can make a tremendous difference in people's lives.
Where we identify opportunities to use AI to make our services even more accessible and helpful, we will explore and test them, following the guidelines of respected professional organisations and the laws of the countries in which we operate.