Artificial intelligence is the latest approach to mental health care, and it is now being developed to detect psychiatric problems such as depression, obsessive-compulsive disorder, and anxiety simply by talking to you via an app or a ‘bot’. Yes, that’s right: these systems aim to diagnose and predict mental illness just by talking and listening to you. A strained health system has opened up a new channel: apps for digital healthcare. This approach allows individuals to talk about the way they’re feeling if, and when, they want, with no human interaction. With reports from the British Medical Association warning that thousands of people with serious mental health issues are waiting up to two years to receive specialist support, an alternative way of dealing with the situation is emerging.

The stuff we see in science fiction films about artificial intelligence (AI) has turned from the fiction of yesterday into the fact of today. Mechanical therapists aren’t as far-fetched as you might think. In the 1960s, Joseph Weizenbaum of the Massachusetts Institute of Technology’s Artificial Intelligence Laboratory developed ELIZA, a chatbot that emulated the conversation of a psychotherapist through a question-and-answer system: a patient would type statements into the system and ELIZA would reflect them back as therapist-like questions. Fast forward to the 1970s, when PARRY was created by the psychiatrist Kenneth Colby. This computer program impersonated a person with paranoia so successfully that expert psychiatrists were unable to tell the computerised version apart from an actual patient!
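
ELIZA’s trick is simple enough to sketch in a few lines of Python. The rules below are invented for illustration and are far cruder than Weizenbaum’s original DOCTOR script, but they show the mechanism: scan the patient’s input for a keyword pattern and reflect it back as a question.

```python
import re

# A toy ELIZA-style exchange: match a keyword pattern in the input
# and reflect it back as a therapist-like question. These rules are
# illustrative; the original DOCTOR script was far richer.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # fallback keeps the conversation moving

print(reply("I feel anxious about work"))
# -> "Why do you feel anxious about work?"
```

The effect is conversational even though nothing is understood: the program simply recycles the patient’s own words, which is exactly what made ELIZA so persuasive.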

Since then, a growing number of sophisticated technologies have been bringing AI into the mental health mainstream. One such app, Woebot, the brainchild of Stanford psychologists and AI experts, combines machine learning and natural language processing (NLP) to analyse a person’s mood and offer them appropriate therapy. The emotionally intelligent chatbot Wysa, developed by the Indian entrepreneurs Jo Aggarwal and Ramakant Vempati, uses AI and NLP techniques to track users’ emotions, acting as a virtual mental wellness coach. And finally, Coignant, developed in Singapore, works alongside face-to-face therapy and aims to prevent relapses by monitoring existing patients and helping them keep to their therapy goals. Pattern recognition in AI also applies to mental health assessment: voice recognition, data entered by the patient, and computer analysis of that data can together point to specific problems such as depression, anxiety, and OCD.
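
To make the pattern-recognition idea concrete, here is a deliberately naive sketch: it counts words from hand-written lexicons and flags the strongest signal. Real apps such as Woebot rely on trained machine-learning models rather than word lists; everything in this example, including the lexicons and scores, is purely illustrative.

```python
# A minimal sketch of pattern recognition for mood screening:
# count words that tend to co-occur with particular conditions
# and report the strongest signal. Hand-written lexicons stand in
# for what a trained NLP model would learn from data.
SIGNALS = {
    "depression": {"hopeless", "empty", "worthless", "tired"},
    "anxiety": {"worried", "panic", "nervous", "restless"},
}

def screen(text: str) -> dict[str, int]:
    words = set(text.lower().split())
    return {condition: len(words & lexicon)
            for condition, lexicon in SIGNALS.items()}

scores = screen("I feel worried and restless and so tired lately")
print(max(scores, key=scores.get), scores)
# -> anxiety {'depression': 1, 'anxiety': 2}
```

A production system would weigh far richer signals, such as tone of voice, typing rhythm, and conversation history, but the principle is the same: language patterns are mapped to clinical indicators.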

Woebot, an app launched last year and free to download, was designed by the American psychologist Dr Alison Darcy to offer advice, based on the principles of Cognitive Behavioural Therapy (CBT), in response to text conversations. According to the Woebot website, one user describes the interface as being “a bit like WhatsApp”, with an automated chatbot asking how the person is feeling and what is going on in their life. In response to these statements, Woebot makes suggestions about what the person could be suffering from and encourages them to take action through self-reflective tasks, such as “writing out lists of what is bothering me”.
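
The shape of such a check-in can be sketched as a simple lookup from a reported feeling to a possible thinking pattern and a suggested task. The mappings below are hypothetical and are not Woebot’s actual rules; they only illustrate the mood-to-suggestion flow described above.

```python
# A hypothetical CBT-style check-in: map a reported feeling to a
# possible thinking pattern and a self-reflective task. The entries
# are invented for illustration, not taken from Woebot.
SUGGESTIONS = {
    "overwhelmed": ("all-or-nothing thinking",
                    "write out a list of what is bothering you"),
    "guilty": ("self-blame",
               "note three things that went well today"),
}

def check_in(feeling: str) -> str:
    pattern, task = SUGGESTIONS.get(
        feeling,
        ("an unhelpful thought pattern",
         "jot down what triggered this feeling"))
    return f"That sounds like {pattern}. Why not try this: {task}?"

print(check_in("overwhelmed"))
# -> "That sounds like all-or-nothing thinking. Why not try this:
#     write out a list of what is bothering you?"
```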

When it comes to allowing others inside our heads, especially on such sensitive matters, most of us only crack open the door for a select few: close family members, perhaps, or a trusted psychologist. But if you were really struggling, would you consider sharing your innermost thoughts with a robot? Could involving AI in something as delicate as mental health pose a threat? Some healthcare organisations distrust AI because they don’t always understand why it reaches the conclusions it does, and because face-to-face support matters so much; a wrong decision based on “bad” information can lead to a serious health outcome. Among the most commonly experienced symptoms of depression and anxiety is feeling alone or not understood by others, yet talking to a real person can leave someone feeling embarrassed or worried about the associated stigma. This is where ‘therapy bots’ come in. Designed to offer accessible, convenient and (often) free help, they aim to remove barriers to treatment. The fact that most bots openly state that they are not human is a definite positive: it can encourage people to open up by eliminating the fear of judgement.

Innovative technology is offering new opportunities to millions of people with mental health conditions, especially to a generation that, at the touch of a button, can get inexpensive counselling. The message is clear, though: people should not rely solely on therapy bots, or use them for more serious or long-term issues. These services can, however, be used in real moments of need. The benefits of these methods need to be carefully weighed against their limitations, as the long-term effects of AI therapy are yet to be thoroughly tested, but the initial results seem promising.