
A woman sits down with her therapist and pulls out a transcript from her AI chatbot. The bot told her she was “probably fine.” But she isn’t. And she’s not alone.
AI is showing up in therapy in real time. For some, these tools can reinforce a therapist’s guidance or make a concept click. But for many women, the experience is far less positive. Research shows that only 37% of women use generative AI tools, compared to 50% of men. And when women do try AI for mental health, they’re significantly less likely than men to find it beneficial. Instead of opening doors, AI often overlooks or mislabels their needs, with serious consequences.
Multiple studies reveal the same troubling pattern: Because AI is often trained on male-dominated data, it’s more likely to misread women’s symptoms. And this gap doesn’t appear out of nowhere—women already face these same challenges with traditional medical providers. When AI is trained on systems that contain those inequities, it risks replicating the very biases that are already baked into health care.
A 2025 study found that certain large language models (LLMs) used in long-term health care settings consistently downplayed women’s physical and mental health needs. Identical cases were labeled “complex” for men but described as “mild” or dismissed for women, increasing the risk of reduced care.
University of Colorado Boulder researchers found that common AI tools regularly underdiagnosed women at risk for depression because they missed subtle differences in how women often express emotional distress.
As a clinician, I find this deeply concerning. We know that women frequently experience depression, ADHD, and anxiety differently than men. If AI models aren't trained to recognize those patterns, they can miss diagnoses or delay critical support. And the gap grows wider when you factor in life stages like pregnancy, perimenopause, and menopause, periods when women's mental health needs often intensify but are rarely accounted for in AI tools.
If AI is going to expand access to care, it has to work for everyone. That means building differently:
1. Train on inclusive data. AI is only as good as its inputs. Today’s models often draw on data that underrepresent women’s experiences, leading to blind spots in care.
At Lyra, we train our models on diverse, clinically curated information. Culturally responsive care is a core philosophy in all of our products because we know how important it is to deliver equitable support across populations and meet people where they are.
2. Include women in design. Diverse datasets are essential, but they’re not enough on their own. Women need to be part of the process, as programmers, designers, clinicians, and test users, so tools reflect lived experiences, not assumptions.
3. Raise the clinical bar. A tool is only as good as its outcomes. At Lyra, we believe AI must be developed hand in hand with clinicians. To deliver real outcomes, it needs to be designed, tested, and built alongside experts in both technology and mental health.
4. Keep humans at the center. AI can ease the administrative load on providers, reinforce therapeutic concepts, and help match people with the right clinician. But it can’t replace the empathy, accountability, and wisdom of a licensed provider.
AI holds undeniable promise for expanding access to mental health support. But if these tools ignore or misdiagnose women’s needs, they risk widening the very gaps they were designed to close.
The path forward is clear: Center equity, inclusivity, and clinical rigor in every stage of design. Women deserve AI that sees them—and when we build for women, we build better for everyone.