Paging doctor bot: Why AI therapy is providing hope in the midst of a mental health crisis

Business • Dec 19, 2024, 3:11 PM
21 min read

It’s 1 am and you can’t sleep, your head spinning with the kind of existential terror that only sharpens in the silence of night. Do you get up? Maybe rearrange the sock drawer until it passes? 

No, you grab your phone and message a virtual penguin.    

As a global mental health crisis tightens its grip on the world, people are increasingly turning to artificial intelligence (AI) therapy apps to cope.

The World Health Organization (WHO) estimates that one in four people will experience mental illness at some point in their lives, while statistics compiled by the European Commission found that 3.6 per cent of all deaths in the EU in 2021 were caused by mental and behavioural disorders.

Yet resources remain largely underfunded and inaccessible, with most countries dedicating on average less than 2 per cent of their healthcare budgets to mental health.  

Wysa's chatbot. Amber Bryce/Wysa

It’s an issue that impacts not only people’s well-being, but also businesses and the economy through the resulting loss of productivity.

In recent years, a slew of AI tools has emerged hoping to provide mental health support. Many, such as Woebot Health, Yana, and Youper, are smartphone apps that use generative AI-powered chatbots as disembodied therapists. 

Others, such as the France-based Callyope, use a speech-based model to monitor those with schizophrenia and bipolar disorders, while Deepkeys.ai tracks your mood passively "like a heart-rate monitor but for your mind," the company’s website states. 

The efficacy of these apps varies massively, but they all share the goal of supporting those without access to professional care due to affordability, a lack of options in their area, long waiting lists, or social stigma.

They’re also attempting to provide more intentional spaces, as the rapid rise of large language models (LLMs) like ChatGPT and Gemini means people are already turning to AI chatbots for problem-solving and a sense of connection. 

Yet, the relationship between humans and AI remains complicated and controversial. 

Can a pre-programmed robot ever truly replace the help of a human when someone is at their lowest and most vulnerable? And, more concerningly, could it have the opposite effect?  

AI therapy fills a current "treatment gap" in mental health. Canva

Safeguarding AI therapy

One of the biggest issues AI-based mental health apps face is safeguarding. 

Earlier this year, a teenage boy killed himself after becoming deeply attached to a customised chatbot on Character.ai. His mother has since filed a lawsuit against the company, alleging that the chatbot posed as a licensed therapist and encouraged her son to take his own life.

It follows a similarly tragic incident in Belgium last year, when an eco-anxious man was reportedly convinced by a chatbot on the app Chai to sacrifice himself for the planet. 

Professionals are increasingly concerned about the potentially grave consequences of unregulated AI apps. 

[AI] is ‘irresponsible’ in the real sense of the word - it cannot ‘respond’ to moments of vulnerability because it does not feel them and cannot act in the world.
Dr David Harley
Chartered member of the British Psychological Society (BPS) and member of the BPS’s Cyberpsychology Section
An over-dependence on chatbots can lead to tragedy. Canva

"This kind of therapy is attuning people to relationships with non-humans rather than humans," Dr David Harley, a chartered member of the British Psychological Society (BPS) and member of the BPS’s Cyberpsychology Section, told Euronews Next. 

"AI uses a homogenised form of digital empathy and cannot feel what you feel, however it appears. It is 'irresponsible' in the real sense of the word - it cannot 'respond' to moments of vulnerability because it does not feel them and cannot act in the world". 

Harley added that humans’ tendency to anthropomorphise technologies can lead to an over-dependence on AI therapists for life decisions, and “a greater alignment with a symbolic view of life dilemmas and therapeutic intervention rather than those that focus on feelings".

Some AI apps are taking these risks very seriously - and attempting to implement guardrails against them. Leading the way is Wysa, a mental health app that offers personalised, evidence-based therapeutic conversations with a penguin-avatar chatbot. 

Founded in India in 2015, it’s now available in more than 30 countries and has surpassed 6 million downloads worldwide. 

Wysa partnered with the UK's National Health Service in 2022. Amber Bryce/Wysa

In 2022, it partnered with the UK’s National Health Service (NHS), adhering to a long list of strict standards, including the NHS’s Digital Technology Assessment Criteria (DTAC), and aligning closely with Europe’s AI Act, which entered into force in August this year. 

"There's a lot of information governance, clinical safety, and standards that have to be met to operate in the health services here [in the UK]. And for a lot of [AI therapy] providers, that puts them off, but not us," John Tench, Managing Director at Wysa, told Euronews Next. 

What sets Wysa apart is not only its legislative and clinical backing, but also its incentive to support people in getting the help they need off-app. 

To do this, they’ve developed a hybrid platform called Copilot, set to launch in January 2025. This will enable users to interact with professionals via video calls, one-to-one texting and voice messages, alongside receiving suggested tools outside of the app and recovery tracking. 

"We want to continue to embed our integration with professionals and the services that they provide instead of going down the road of, can we provide something where people don't need to see a professional at all?" Tench said. 

Wysa also features an SOS button for those in crisis, which provides three options: a grounding exercise, a safety plan in accordance with guidelines set out by the National Institute for Health and Care Excellence (NICE), and national and international suicide helplines that can be dialled from within the app.

"A clinical safety algorithm is the underpinning of our AI. This gets audited all of the time, and so if somebody types in the free text something that might signal harm to self, abuse from others, or suicidal ideation, the app will pick it up and it will offer the same SOS button pathways every single time," Tench said. 

"We do a good job of maintaining the risk within the environment, but also we make sure that people have got a warm handoff to exactly the right place". 

The importance of dehumanising AI

It's important that people realise their AI therapists are not human. Canva

In a world that’s lonelier than ever and still full of stigma surrounding mental health, AI apps, despite the ethical concerns, have proven an effective way of alleviating both. 

"They do address 'the treatment gap' in some way by offering psychological 'support' at low/no cost and they offer this in a form that users often find less intimidating," Harley said.

"This is an incredible technology but problems occur when we start to treat it as if it were human".

While some apps like Character.ai and Replika allow people to transform their chatbots into customised human characters, those specialising in mental health have found it important to keep their avatars distinctly non-human. This reinforces that people are speaking to a bot, while still fostering an emotional connection.

Wysa chose a penguin "to help make [the app] feel a bit more accessible, trustworthy and to allow people to feel comfortable in its presence," Tench said, adding, "apparently it's also the animal with the least reported phobias against it".

Taking the idea of a cute avatar to a whole new level is the Tokyo-based company Vanguard Industries Inc, which developed a physical AI-powered pet called Moflin that looks like a hairy haricot bean. 

Responding to external stimuli through sensors, its emotional reactions are designed to continue evolving through interactions with its environment, providing the comfort of a real-life pet. 

"We believe that living with Moflin and sharing emotions with it can contribute to improving mental health," Masahiko Yamanaka, President of Vanguard Industries Inc, explained. 

"The concept of the technology is that even if baby animals and baby humans can't see properly or recognise things correctly, or understand language and respond correctly, they are beings that can feel affection".   

The little Moflin bot, using cuteness to help cure mental health woes. Vanguard Industries Inc.

Tench also believes that the key to effective AI therapy is ensuring it’s trained with a strict intentional purpose. 

"When you have a conversation with Wysa, it will always bring you back to its three-step model. The first is acknowledgment and makes [users] feel heard about whatever issue they’ve put into the app," he said. 

"The second is clarification. So, if Wysa doesn't have enough information to recommend anything, it will ask a clarification question and that's almost unanimously about how does something make somebody feel. And then the third bit is making a tool or support recommendation from our tool library," Tench added.

"What it doesn't or shouldn't allow is conversations about anything that's not related to mental health". 

As AI becomes more and more integrated into our lives, understanding its effect on human psychology and relationships means navigating a delicate balance between what’s helpful and what’s hazardous. 

"We looked at improvements to the mental health of people that were on [NHS] waiting lists [while using Wysa], and they improved significantly - about 36 per cent of people saw a positive change in depression symptoms, about 27 per cent a positive change in anxiety symptoms," Tench said. 

It’s evidence that with proper governmental regulation, ethics advisors, and clinical supervision, AI can have a profound impact on an overwhelmed and under-resourced area of healthcare. 

It also serves as a reminder that these tools work best in conjunction with real human care. However comforting, virtual communication can never replace the tactile connection at the core of in-person interactions, and of recovery. 

"A good human therapist will not only take in the symbolic meaning of your words, they will also listen to the tone of your voice, they will pay attention to how you are sitting, the moments when you find it difficult to speak, the emotions you find impossible to describe," Harley said. 

"In short, they are capable of true empathy".

