14 Apr 2025

Digital Tools for Mental Health: Friend or Foe?

Author:

Aoife Darling, Content Manager, HLTH Community

The World Health Organization estimates that one in four people will face a mental health issue at some point in their lives, with around one in eight—nearly 970 million people—living with a mental disorder in 2019 alone. These numbers illustrate the vast scale of the mental health crisis and emphasize the urgent need for innovative solutions. 


The Boom and the Benefits


It’s no surprise that the digital mental health app market is booming, with projections valuing it at $17.5 billion (£13.8 billion) by 2030. These tools include virtual therapy services, mood trackers, mental fitness coaches, digitised forms of cognitive behavioural therapy, and chatbots. They provide timely, on-demand support, helping users overcome barriers such as long waitlists, limited clinic hours, and poor access in remote areas, making them especially useful when immediate relief is needed.


Mental health apps also offer a more affordable alternative to traditional therapy. An analysis of user reviews from 106 mental health apps found that the average cost of the 11 paid apps was just $5.26, roughly 3% to 5% of the typical $100–$200 fee for a single psychotherapy session in the U.S., putting support within reach of users with limited financial resources.


Importantly, digital mental health tools offer a level of anonymity that can help individuals who feel uncomfortable with in-person therapy avoid the stigma or fear of being judged. For many, mental health apps can act as a first step in the support journey, easing the transition to face-to-face care if needed.


Incomplete Evidence in Mental Health Tech


Digital mental health tools can enhance therapeutic outcomes, particularly when used alongside traditional face-to-face therapy rather than as a standalone treatment. Yet, while the market boasts an estimated 10,000 to 20,000 mental health apps available globally, fewer than 5% have been studied for effectiveness. Most have not undergone rigorous randomized clinical trials, and only a small number are supported by peer-reviewed research validating their content.


In fact, the quality of existing studies is often questionable. Many so-called “pilot trials” set such a low bar for efficacy that their results offer little meaningful insight. One 2022 trial, for example, tested a cognitive behavioural therapy (CBT) app for individuals with schizophrenia during an acute psychotic episode against a sham app featuring only a stopwatch. Comparing a proposed intervention to what is essentially a behavioural placebo makes it easy for the app to appear effective, even if its true impact is minimal.


The most concerning question is whether some mental health apps might actually cause harm, worsening the symptoms they aim to treat. In a large study, 19,000 patients with frequent suicidal thoughts were divided into three groups over 12 months. One received standard care, another received additional outreach, and the third was given a digital tool based on dialectical behavior therapy (DBT), which includes mindfulness and breathing exercises. Surprisingly, those using the app had a higher risk of self-harm than those receiving standard care alone—raising serious concerns about the potential risks of poorly evaluated digital interventions.


Are We Safe or Surveilled?


Digital mental health apps generate vast amounts of sensitive data, raising serious concerns about user privacy. For example, in March 2023, the Federal Trade Commission (FTC) filed a complaint against BetterHelp for sharing users' emails, IP addresses, and responses to health questionnaires with companies like Meta, Snapchat, Criteo, and Pinterest, despite assurances that this information would remain confidential and be used only to support their services.


Beyond advertising, data from these apps is often mined to improve app functionality, which puts it in more hands and further increases exposure risks. Mental health data is especially sensitive, as it frequently includes deeply personal details gathered through intake forms. If shared or mishandled, this information can fuel targeted ads that reveal intimate aspects of identity, such as gender or sexuality, with consequences for privacy, employment, and overall well-being. Even the simple act of using a mental health app can allow data brokers to infer a user’s mental state. Nor is de-identified data always safe: it can often be re-identified when cross-referenced with other datasets, underscoring the urgent need for robust federal privacy protections.
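How does re-identification actually work? The usual mechanism is a linkage attack: an adversary joins a "de-identified" dataset with a public one, such as a voter roll, on quasi-identifiers like ZIP code, birth date, and gender. Latanya Sweeney famously estimated that those three fields alone uniquely identify roughly 87% of the US population. The minimal sketch below, using entirely hypothetical records and field names, illustrates the idea.

```python
# Sketch of a "linkage attack": re-identifying a de-identified
# mental health record by cross-referencing quasi-identifiers
# (ZIP code, birth date, gender) with a public dataset.
# All records and field names below are hypothetical.

# "De-identified" app export: names removed, quasi-identifiers kept.
app_records = [
    {"zip": "02138", "birth_date": "1990-07-31", "gender": "F",
     "screening_result": "moderate depression"},
    {"zip": "60614", "birth_date": "1985-01-12", "gender": "M",
     "screening_result": "generalized anxiety"},
]

# Public dataset (e.g. a voter roll) with the same fields plus names.
public_records = [
    {"name": "Jane Doe", "zip": "02138", "birth_date": "1990-07-31",
     "gender": "F"},
    {"name": "John Roe", "zip": "60622", "birth_date": "1985-01-12",
     "gender": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "gender")

def link(record, public):
    """Return public records whose quasi-identifiers match `record` exactly."""
    key = tuple(record[f] for f in QUASI_IDENTIFIERS)
    return [p for p in public
            if tuple(p[f] for f in QUASI_IDENTIFIERS) == key]

for record in app_records:
    for match in link(record, public_records):
        # A unique match re-identifies the "anonymous" health record.
        print(f"{match['name']} -> {record['screening_result']}")
```

Real attacks use larger auxiliary datasets and fuzzier matching, but the mechanism is the same: any combination of fields unique to one person acts as a fingerprint, no matter how carefully names were stripped.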


Closing Thoughts


With thousands of mental health apps available and minimal regulation, it’s often difficult for users to know which ones are trustworthy and effective. As of November 2024, only six apps (Rejoyn, NightWare, EndeavorRx, ReSET, ReSET-O, and Somryst) had received FDA clearance, underscoring the urgent need for stricter oversight of tools targeting vulnerable populations.


AI chatbots, in particular, raise red flags. Though often promoted as supportive spaces, unregulated chatbots risk offering misleading or even harmful guidance to users in distress.


Privacy is another critical issue. Many apps collect deeply personal information, yet their policies are often hidden, overly complex, or hard to access. Some require a college-level education to understand. Stronger protections are essential to ensure users clearly understand how their data is used and can give informed consent—just as they would with traditional healthcare providers.


Still, when properly regulated, mental health apps hold immense promise. They can expand access to care, support early diagnosis, and help fill the growing gaps in mental health services. The challenge now is to ensure that innovation doesn’t outpace accountability.


Interested in learning more? Check out our Community for Mental & Behavioural Health!

