16 Dec 2025

Redefining Healthcare’s Next Leap: What Agentic AI Can Actually Do Today

Author:

Agentic AI is entering its “let’s be serious now” phase in healthcare. After a year of pilots, prototypes, and plenty of hallway debates, leaders have shifted from asking what the technology is to asking where it genuinely works. In a recent HLTH webinar featuring Tim Lawless, Senior Vice President and Global Health Lead at Publicis Sapient, and Aashima Gupta, Global Director of Healthcare Strategy and Solutions at Google Cloud, the conversation moved past hype cycles and into the operational reality of deploying agentic AI at scale.

The consensus was clear: we are transitioning from AI as an informational tool to AI as an action layer—a system capable of executing multi-step tasks, coordinating teams, supporting clinical decisions, and removing obstacles that get in the way of patient care. And as Aashima put it, “healthcare will move at the speed of trust” — a reminder that real adoption depends not on novelty, but reliability.

At HLTH, we break down the insights reshaping the possibilities for healthcare systems today.


Where Agentic AI Is Already Delivering: Real Outcomes, Not Vaporware

One of the most striking themes from the discussion was how far agentic AI has already come in addressing healthcare’s most documented pain points—workforce shortages, administrative burden, and fragmented patient experience.

Healthcare faces a projected 10 million healthcare worker shortfall by 2030, and clinicians are already spending 28 hours a week on administrative work. Those pressures aren’t theoretical; they’re existential. Agentic AI is proving its value precisely in these friction-heavy areas.

Take HCA Healthcare, which deployed an AI-powered nurse handoff agent capable of summarizing patient context across nursing shifts. With roughly 60,000 handoffs per day, even a conservative five-minute improvement translates into 300,000 reclaimed nurse minutes daily—time redirected from documentation to direct care. Aashima emphasized the importance of these capacity gains, noting that “documentation burnout” often stands between clinicians and meaningful preventative conversations with patients.

Similarly, Meditech is using AI-assisted search and summarization to support clinicians navigating multi-page charts during short appointment windows. Their physicians are saving 7.5 minutes per appointment, a meaningful shift in primary care’s time-constrained reality.

Other innovations are emerging beyond the EHR. Tim pointed to Diligent Robotics, whose autonomous mobile robots help nurses retrieve supplies and equipment—relieving the cognitive and physical burden of constant task-switching. And in partnership with Color, Google Cloud is powering a screening assistant that helps women navigate mammography logistics—from understanding guidelines to actually booking their appointment—removing the administrative barriers that too often delay lifesaving screening.

These are not theoretical experiments. They’re live, measurable deployments showing that AI doesn’t have to diagnose cancer to improve outcomes; sometimes it simply needs to save clinicians time, reduce patient friction, or streamline repetitive workflows.


Choosing the Right Starting Point: Avoid the ‘Cool but Wrong’ Use Case

Both speakers argued that one of the biggest mistakes organizations make is starting with the highest-stakes, most glamorous use cases. In Aashima’s words, “Agentic AI is not ready for diagnostics… Start with low-impact, high-frequency use cases.”

These high-frequency workflows—prior authorizations, eligibility checks, nurse handoffs, referral routing—have several advantages:

  • They’re repetitive and ripe for automation.

  • They carry lower clinical risk.

  • They generate fast, quantifiable efficiency gains.

  • They let organizations build the muscle needed for more complex deployments later.

Tim added another pitfall: too many organizations get stuck in an endless cycle of proof-of-concepts. “Stop doing POCs—at least the kind that never go anywhere,” he said, underscoring the need for early commitment to operationalization rather than endless piloting.

To scale responsibly, organizations must also build foundational readiness:

  • Clean, permissioned data

  • Clear workflow maps—a step skipped more often than admitted

  • Governance and bias mitigation policies

  • Cross-functional collaboration—technology and business co-owning the problem

  • Change management and staff education

Gupta emphasized that human-in-the-loop (HITL) models shouldn’t be treated as a fallback for catching errors, but as an intentional design choice that ensures humans step in only at the highest-impact moments. This reframes HITL not as an “AI babysitting” model, but as a targeted safety mechanism.

As organisations begin to adopt these patterns, many are now looking for partners who can help translate early experiments into repeatable, scalable tools. Both Publicis Sapient and Google Cloud are beginning to package elements of this work into more productised capabilities—whether through reusable agent patterns, orchestration frameworks, or deployment accelerators tailored for healthcare.


Building for Trust: Safety, Reliability, Guardrails—and Getting It Right Over Getting It Fast

The panel stressed that although AI models improve every month, healthcare can only move as fast as its trust infrastructure. And trust is built through verification, not optimism.

Aashima outlined several engineering practices that should become standard in healthcare AI development:

Red teaming & adversarial testing

Teams must intentionally feed agents ambiguous, misleading, or adversarial inputs to probe failure modes. This reveals where guardrails need strengthening long before patients or clinicians encounter problems.

“Trust comes with verification,” Aashima noted. “It’s a new level of engineering.”

Permission-aware orchestration layers

As agents transition from passive tools to active executors, ensuring they have only the access they’re allowed becomes paramount. Zero-trust design—where systems verify every interaction—helps prevent accidental or malicious data exposure.

Hallucination mitigation through system design

Tim cited research showing that decomposing tasks into subtasks and using multiple agents to “vote” on next actions dramatically reduces errors. One study showed that with the right architecture, agents could execute over a million reasoning steps with zero hallucinations.

To make these systems dependable, teams increasingly test new agentic workflows against existing ones, validating behaviour before anything touches a real clinical process. Tim commented “I’ve seen places build an AI that mimics a current system so they can compare and build trust.”

Grounding models in real clinical context

With large token windows (e.g., Google Gemini’s 1M token context), guidelines, full charts, and policy documents can now be included directly in the prompt, reducing the risk of model improvisation.

In Aashima Gupta’s words:

“It’s better to get it right than get it fast.”


4. Breaking the Data Silos: No AI Without API

Nearly every agentic AI use case breaks down if data remains locked in siloed systems. Gupta cited a striking statistic: less than 3% of healthcare data is effectively used today—not because it lacks value, but because it’s inaccessible. This harkens back to the familiar refrain that health data is “the new gold”...bring your pickaxes, folks. 

For agentic AI to act on behalf of clinicians and patients, data must flow. That requires:

  • Interoperable standards like FHIR

  • Consumer-mediated data exchange

  • Clear consent frameworks

  • Modern APIs across payers, providers, labs, and life sciences

Aashima summarized the challenge succinctly:

“No AI without API.”

Tim added that data-sharing doesn’t always require exposing raw data; it can mean sharing relationships, patterns, or conclusions that enable better cross-continuum care. But without movement toward interoperability pledges and shared infrastructures, agentic AI will remain constrained to narrow, fragmented use cases rather than supporting holistic patient journeys.


The Future of Agentic AI Depends on What We Choose to Build—and How We Build It

Healthcare is approaching a genuine inflection point. Agentic AI isn’t a story about replacing clinicians; it’s about giving them back the time and clarity they’ve steadily lost. Used well, it can strip away the friction that slows down screenings, complicates navigation, and buries teams in administrative tasks that add little value.

Lawless captured the sentiment neatly: the opportunity is practical, but also personal. Better access, smoother coordination, and a patient experience that finally feels connected rather than fragmented.

But the next phase of AI in healthcare won’t hinge on model performance alone. It will turn on the trust structures around it—governance, shared standards, and partnerships that make the technology dependable in real clinical settings.

The tools are here. What matters now is the discipline and care with which we put them to work.