
How do you take an organization from AI-wary to AI-curious to AI-in-the-everyday? Over the past year, I’ve heard versions of this question from leaders across the healthcare spectrum—from practicing physicians and health system executives to leaders in life sciences and public health.
Healthcare leaders see AI as transformational—a recent JAMA paper concluded it “will disrupt every part of health and health care delivery.” The investment is certainly there: global enterprise spending on AI is projected to exceed $450 billion in 2025. Yet, successful change management remains elusive. This can lead to a striking paradox, noted by the Society for Human Resource Management, where individual employees see productivity gains from AI, but the organization as a whole does not.
I lead the Health team at Google. Like the healthcare leaders I speak with globally, we saw nascent enthusiasm coupled with many questions about generative AI (genAI). As a team committed to advancing cutting-edge AI, with a focus on healthcare quality and safety, we felt an intense responsibility to navigate this transition effectively.
Here, I want to share our early lessons in a systematic approach to change management in the AI era. The journey is just beginning, but the signs are positive. Today, our voluntary genAI program community encompasses ~98% of the Health team at Google, our dedicated chat space is buzzing with all things AI, and people's work feels different. I'm hopeful that some of our lessons will be useful to you and your organizations.
In an organization of experts, change moves horizontally. The validation of a respected peer is infinitely more powerful than any directive from a manager. We identified a small group of motivated, forward-thinking champions to help spark the initial energy. With their crucial support, we built our program around a central, voluntary community, with a dedicated chat space that functioned as the team’s "social square" for all things AI. This created a flywheel of social proof that organically drew in even the most skeptical members.
Healthcare is a culture of experts, and top-down mandates in an expert culture can trigger immediate resistance. Our approach was the opposite: leadership modeled permission to experiment, and we formalized that permission not just in words but by carving out dedicated time for hands-on workshopping. When leaders prioritize even a small amount of space (for example, a protected afternoon or a session at a team forum), it sends a clear signal: AI is a strategic priority.
This prioritization, coupled with openly discussing the technology's complexities and risks, lowers the barrier for seasoned professionals to experiment and adopt new ways of working.
Grassroots energy is powerful, but structural support is durable. We tracked initial success metrics to justify dedicating formal resources to our program. From there, we embedded AI fluency directly into our organizational objectives and key results (OKRs). This shifted the program from a nice-to-have experiment to a core competency the organization was invested in. It also made individual expectations clear and linked them to organizational priorities.
Experts can be competitive, and competition can help with change. We gamified learning through an "AI Olympics," where teammates didn't just learn about AI — they competed by applying it to solve their own biggest work headaches. This included tasks like automating detailed project timelines from meeting notes or prototyping new workflows in hours instead of weeks. This "learn through play" approach reframed AI into an immediate solution for their most annoying tasks.
The true return on an AI program like ours comes from the strategic maturation of its use. We track our team's evolution from simple to complex AI tasks. For example, we defined three levels of maturation:

Level 1 (Simple): Summarize recent health regulatory news.

Level 2 (Content Creation): Generate a polished first draft of a regulatory newsletter.

Level 3 (Workflow Automation): Build a reusable agent that scans news daily, identifies priorities, and summarizes next steps, transforming a manual, multi-hour process into an automated one.
Expecting every employee to become an expert AI developer is unreasonable. Instead, we built a "curated solutions repository": a shared platform where the best tools and prototypes are documented, vetted, and accessible. Under the "built once, used by many" principle, team members contribute agents relevant to their workflows. This allows colleagues to discover and adapt existing tools, transforming isolated experiments into a collective, durable asset for the whole team.