25 Feb 2026

Canary Speech, JubileeTV Partner to Embed AI Voice Biomarkers in At-Home Care

Canary Speech has entered the consumer health market through a partnership with JubileeTV, embedding its AI-based vocal biomarker technology into video calls between older adults and their families. The deployment marks the first time Canary’s platform has been used outside clinical and research settings, extending passive cognitive and emotional health monitoring into the home.

The system analyzes acoustic and linguistic features from short segments of natural conversation. Approximately 40 seconds of speech are converted into high-density, machine-readable acoustic data, generating nondiagnostic indicators related to cognitive function, mood, stress, energy, and overall wellness. Insights are produced passively during JubileeTV calls, allowing families to track changes over time without additional devices or structured testing.

"Rather than analyzing what someone says, we analyze how they say it, capturing thousands of acoustic features every few milliseconds, including pitch dynamics, timing, prosody, jitter, shimmer, pauses, and vocal energy patterns," Henry O’Connell, CEO of Canary Speech, told MobiHealthNews.
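The frame-level analysis O'Connell describes can be sketched in a few dozen lines. The snippet below is an illustrative toy, not Canary's actual pipeline: it slices a signal into short overlapping frames, estimates pitch (F0) per frame via autocorrelation, and computes relative jitter (cycle-to-cycle pitch-period variation), one of the features named in the quote. The frame sizes, pitch range, and synthetic test signal are all assumptions for demonstration.

```python
import numpy as np

def frame_features(signal, sr, frame_ms=25, hop_ms=10):
    """Extract per-frame features: F0 via autocorrelation, RMS energy."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    feats = []
    for start in range(0, len(signal) - frame, hop):
        x = signal[start:start + frame]
        x = x - x.mean()
        energy = float(np.sqrt(np.mean(x ** 2)))  # RMS energy of the frame
        # F0: strongest autocorrelation lag in a plausible voice range (75-400 Hz)
        ac = np.correlate(x, x, mode="full")[frame - 1:]
        lo, hi = int(sr / 400), int(sr / 75)
        f0 = sr / (lo + int(np.argmax(ac[lo:hi]))) if hi < frame else 0.0
        feats.append({"f0": f0, "energy": energy})
    return feats

def jitter(f0_track):
    """Relative jitter: mean cycle-to-cycle period variation over mean period."""
    periods = 1.0 / np.asarray([f for f in f0_track if f > 0])
    return float(np.mean(np.abs(np.diff(periods))) / np.mean(periods))

# Synthetic 1-second "voiced" signal at 16 kHz: ~120 Hz with mild pitch wobble
sr = 16000
t = np.arange(sr) / sr
phase = 2 * np.pi * 120 * t + 0.3 * np.sin(2 * np.pi * 3 * t)
sig = np.sin(phase)

frames = frame_features(sig, sr)
f0s = [f["f0"] for f in frames]
```

With 25 ms frames and a 10 ms hop, one second of audio yields roughly a hundred feature vectors, which is the "every few milliseconds" cadence the quote refers to; a production system would extract far more features per frame (prosody, shimmer, spectral measures) than this sketch does.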

These features are processed using machine learning models trained on clinically labeled datasets associated with Mild Cognitive Impairment, Alzheimer's disease, anxiety, depression, and stress-related states. Within seconds, the platform generates normalized decision-support scores across domains such as cognition, mood, and fatigue.
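To make the "normalized decision-support scores" concrete, here is a minimal sketch of the scoring step. Every number in it is hypothetical: the feature names, reference statistics, and domain weights are placeholders standing in for what Canary's clinically trained models would learn. The idea shown is only the normalization pattern itself, z-scoring raw features against a population reference and squashing the weighted result into a bounded 0-100 score, where 50 represents the reference norm.

```python
import numpy as np

# Hypothetical population reference (mean, std) per feature -- illustrative only.
REFERENCE = {
    "speech_rate":    (4.5, 0.8),    # syllables per second
    "pause_ratio":    (0.20, 0.06),  # fraction of time spent silent
    "f0_variability": (25.0, 8.0),   # std of pitch in Hz
}

# Hypothetical weights linking features to wellness domains (sign = direction).
DOMAIN_WEIGHTS = {
    "cognition": {"speech_rate": -0.6, "pause_ratio": 0.4},
    "mood":      {"f0_variability": -0.7, "speech_rate": -0.3},
}

def normalized_scores(features):
    """Map raw features to 0-100 decision-support scores per domain.
    A score of 50 means the speaker sits at the population reference."""
    scores = {}
    for domain, weights in DOMAIN_WEIGHTS.items():
        risk = 0.0
        for name, w in weights.items():
            mean, std = REFERENCE[name]
            z = (features[name] - mean) / std  # deviation from the norm
            risk += w * z                      # signed contribution to risk
        scores[domain] = round(100 / (1 + np.exp(risk)), 1)  # squash to 0-100
    return scores

# Example: slower speech, longer pauses, flatter pitch than the reference
sample = {"speech_rate": 3.2, "pause_ratio": 0.35, "f0_variability": 12.0}
scores = normalized_scores(sample)
```

A speaker matching the reference means exactly would score 50.0 in every domain; the flagged sample above scores well below that, which is the kind of normalized signal a clinician or family dashboard could track over time.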

O'Connell noted that the technology operates on standard digital audio across smartphones, tablets, desktops, call center systems, and ambient documentation tools, without requiring scripted tasks or specialized hardware.

"The technology integrates directly into existing workflows, enabling passive insight without additional burden on clinicians or patients," he said. "Because voice is natural and noninvasive, monitoring feels like a conversation, not testing."

"For health systems and payers, it enables low-cost risk stratification across aging populations and earlier intervention, reducing the downstream costs of unmanaged dementia, depression and frailty," O'Connell added.

The company received a patent for its neural network-powered speech analysis in 2024, raised $13 million, and previously partnered with Microsoft to expand its AI-driven speech models.
