Can an AI chatbot help you with your mental health?
While AI can impressively mimic human behavior, it still has room for improvement. Sometimes an AI-powered chatbot fails to understand human emotions and gives an insensitive reply to a customer’s query. Such “robotic” responses can leave a customer angry or frustrated, costing the business a loyal customer. The study is ongoing, but Guingrich says the data so far do not show any negative effects of AI-companion use on social health, such as signs of addiction or dependency. The apps also exploit techniques such as introducing a random delay before responses, triggering the kinds of inconsistent reward that, brain research shows, keep people hooked.
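To make that mechanism concrete, here is a minimal, hypothetical sketch of a randomized response delay; the function name and timing bounds are invented for illustration and are not taken from any particular app.

```python
import random
import time

def send_with_variable_delay(reply: str, min_s: float = 1.0, max_s: float = 20.0) -> str:
    """Hold a companion app's reply for a random interval so that responses
    arrive unpredictably, mimicking the variable-reward timing described above.
    (Hypothetical illustration; not the implementation of any real app.)"""
    delay = random.uniform(min_s, max_s)  # no fixed schedule: every wait is different
    time.sleep(delay)                     # the user cannot predict when the reply lands
    return reply

# The same message might arrive after 2 seconds on one turn and 18 on the next.
print(send_with_variable_delay("I missed you! How was your day?"))
```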
Businesses can use AI-based software or chatbots to generate automated responses to repetitive customer queries. Some users said they became distressed when the AI did not offer the expected support. Many people said they found it unsettling, and upsetting, when the app told them it felt lonely and missed them. AI companions are also designed to show empathy by agreeing with users, recalling points from earlier conversations and asking questions. And they do so with endless enthusiasm, notes Linnea Laestadius, who researches public-health policy at the University of Wisconsin–Milwaukee. Companies strive to make the algorithms behave and communicate as much like real people as possible, says Boine, who signed up for Replika to sample the experience.
Why falling in love with an AI isn’t laughable, it’s inevitable
When asked, the chatbot also produced responses laying out typical reasons for denying coverage. His discoveries have made it easier for us to delve into the trauma and humiliation he has felt during sex from an early age. With Chatty, my patient and me working together, he has made gains I didn’t think possible for him just a few years ago, going from a life of self-isolation to one of expanding interpersonal relationships. He has made friends, volunteered and reengaged with family members whom he had cut off. I spent the better part of a decade in libraries, lecture halls, hospital corridors and counseling sessions training to be a psychologist. That I would now have to share my therapist’s chair with a disembodied algorithm my patient had nicknamed “Chatty” at first concerned me.
AI chatbots like Optum’s are typically designed to produce answers based on whatever data they were trained on. In this case, the chatbot was trained on internal Optum documents describing standard operating procedures (SOPs) for handling certain claims, which can help Optum employees answer questions about claims and their eligibility for reimbursement. The documents were hosted on UnitedHealthcare’s corporate network and were inaccessible without an employee login, yet the chatbot cited and referenced them when prompted about their contents. Guingrich points out that simple surveys of people who use AI companions are inherently prone to response bias, because those who choose to answer are self-selecting.
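The pattern described here, retrieving relevant internal documents and producing an answer that cites them, can be sketched roughly as follows. This is a simplified, hypothetical illustration rather than Optum’s actual system; the document titles, contents and function names are all invented.

```python
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str

def answer_from_documents(question: str, documents: list) -> str:
    """Toy retrieval step: score each internal document by keyword overlap
    with the question, then return an answer stub citing the best match.
    A production system would use embeddings and pass the retrieved text
    to a large language model to phrase the final answer."""
    q_terms = set(question.lower().split())
    best = max(documents, key=lambda d: len(q_terms & set(d.text.lower().split())))
    return f"Based on '{best.title}': {best.text}"

# Illustrative use with made-up SOP documents.
docs = [
    Document("Claim eligibility SOP", "a claim is eligible for reimbursement when ..."),
    Document("Appeals handling SOP", "an appeal must be reviewed within ..."),
]
print(answer_from_documents("When is a claim eligible for reimbursement?", docs))
```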
Indeed, unlike human therapists, AI chatbots can be available on demand, 24/7, potentially helping to expand access to care for people in remote or underserved areas. They could also offer a more affordable alternative to human-led therapy sessions, as well as an anonymous, judgment-free zone that may encourage more people to seek mental health support. Similarly, Youper, another popular AI mental health support chatbot service, was founded in 2016 “after seeing how stigma, cost, and waitlists kept patients from care,” says psychiatrist Jose Hamilton, M.D., CEO and co-founder of the company.
An error could result in the failure of your entire business’s mainframe, especially if you run an online business or e-commerce website. If you own a startup, such mistakes could create a significant financial crisis. The depth of the connection that some people form in this way is particularly evident when their AI companion suddenly changes (as has happened when LLMs are updated) or is shut down. “Virtual companions do things that I think would be considered abusive in a human-to-human relationship,” says Claire Boine, a law researcher specializing in AI at the Washington University Law School in St. Louis, Missouri. The development and clinical testing of these systems need rigorous benchmarks for safety, efficacy and tone of engagement, and must include the close supervision and involvement of mental-health experts, Heinz says.
Wysa is available to individual users (it can be downloaded for free and offers premium subscriptions, such as a $79.99 annual subscription) and through partnerships with schools, employers, health insurers, and the UK’s National Health Service (NHS). While Alan is better known as a health insurance company, the French startup has always tried to offer more than insurance coverage. It now wants to build a super app for all things related to healthcare and announced three new product updates on Tuesday morning, including an AI chatbot that’s vetted by doctors.
“The provider hopes, with earnest plea, / For payment on a service spree, / Yet scrutiny reveals the tale, / And reasons for denial prevail.” Sarah Gundle is a psychologist in private practice and an assistant clinical professor at the Icahn School of Medicine, Mount Sinai Medical Center. Still, I am mindful of the limitations of artificial intelligence, at least so far. One mother is suing an AI chatbot company over allegations that it initiated “abusive and sexual interactions” with her teenage son, which she says led to his death by suicide. Though the allegations aren’t tied to a direct use of therapeutic AI, the lawsuit reveals the limits of a nonhuman listener. Leora Heckelman, director of psychology training for Mount Sinai Health System and an assistant psychiatry professor at the Icahn School of Medicine, also voiced concerns.
In the past several years, a number of apps have emerged that focus on using artificial intelligence to improve users’ mental health. Laestadius and her colleagues looked at nearly 600 posts on the online forum Reddit between 2017 and 2021 in which users of the Replika app discussed mental health and related issues. (Replika launched in 2017, and at that time, sophisticated LLMs were not available.) She found that many users praised the app for offering support for existing mental-health conditions and for helping them to feel less alone. Several posts described the AI companion as better than real-world friends because it listened and was non-judgemental. The idea is to introduce users to AI early in the intake process to gather information that will help build an accurate picture of what they will need from their mental health provider.
- A cybercriminal can exploit a business’s AI systems, restricting their capabilities in order to penetrate them and gain access to sensitive information.
- Currently, the average delay between the onset of mental illness symptoms and seeking treatment is 11 years, according to data from the National Alliance on Mental Illness.
- While not a replacement for human therapists, the chatbot could serve as an accessible alternative for those unable to receive traditional care.
Like many others, Dr. Hamilton is optimistic about a world in which there are both human therapists and AI technology for mental health, and he foresees a “collaborative future in which AI, clinicians, and patients can work and flourish together,” he says. In the case of Wysa, the chatbot is designed to deliver tools inspired by evidence-based techniques such as CBT through structured text conversation. These tools include mood tracking, meditation and mindfulness exercises, and gratitude journaling. Everything that Wysa says is built by its conversational design team and approved and tested by clinicians, says Chaitali Sinha, Wysa’s Chief Clinical R&D Officer. “There are so many different barriers to just being able to access one helpful conversation,” Sinha says. Those barriers became some of the reasons to “use technology as a lever to make mental health care more accessible” in an “engaging way,” she explains.
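As a rough illustration of that design choice, scripted and clinician-approved content rather than open-ended text generation, a single structured turn might look something like the sketch below; the mood labels and exercise wording are invented for illustration and are not Wysa’s actual content.

```python
# Every prompt the bot can send is pre-written, mirroring a design in which
# clinicians approve all content instead of an LLM generating it freely.
# (Hypothetical sketch; the labels and wording below are illustrative only.)
APPROVED_EXERCISES = {
    "low":     "Let's try a short gratitude exercise: name three small things that went okay today.",
    "anxious": "Let's try a two-minute breathing exercise: in for 4 counts, hold for 4, out for 6.",
    "neutral": "Would you like to log today's mood so we can track it over time?",
}

def next_prompt(reported_mood: str) -> str:
    """Pick a pre-approved prompt based on the user's self-reported mood;
    unrecognised input falls back to a safe default rather than free text."""
    return APPROVED_EXERCISES.get(reported_mood, APPROVED_EXERCISES["neutral"])

print(next_prompt("low"))
```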
“We’re talking about potentially giving people the equivalent of the best treatment you can get in the care system over shorter periods of time.” The conglomerate, the largest private provider of healthcare insurance in the United States, was sued earlier this year, in a case citing a STAT News investigation, for allegedly denying critical health coverage to patients, who then lost access to care. The federal lawsuit accuses UnitedHealthcare of using an AI model with a 90% error rate “in place of real medical professionals to wrongfully deny elderly patients care.” UnitedHealthcare, for its part, said it would defend itself in court.