There is no denying that societal excitement over AI, and over the potential for AI-driven tools to transform industries and personal life, is growing. But according to McKinsey, many healthcare leaders are proceeding cautiously with deploying AI-enhanced tools in care settings because of concerns about risk.
“While [generative AI] and machine
learning are revolutionizing healthcare, it’s important to remember that no
innovation will happen without a human’s involvement in clinical
decision-making,” says Greg Samios, president and CEO of the Clinical Effectiveness business at
Wolters Kluwer Health. “The trust and expertise of healthcare professionals
are not just invaluable and irreplaceable, they are the very foundation on
which GenAI integration in healthcare stands. Their involvement builds patient
trust.”
In a Wolters Kluwer research report, 91% of U.S. physicians said they would need to know that doctors and medical experts had created the materials used by a GenAI tool to make a recommendation before they would feel comfortable using the tool in their clinical decision-making. “Physicians are open to using generative AI in a clinical setting, provided that applications are useful and trustworthy,” explains Peter Bonis, MD, Chief Medical Officer at Wolters Kluwer Health. “The source of content and transparency are key considerations.”
Even with an AI-enhanced solution built on high-quality, responsibly developed expert content, experts agree that the healthcare industry needs to carefully consider its implementation strategy before moving forward with widespread use of clinical generative AI. Key considerations include:
· Understanding how AI-driven solutions align with patient care goals.
· Governing technology deployment responsibly.
· Keeping the clinician at the center of all decision-making.
Many organizations are addressing AI concerns by using a strategic framework for incorporating AI-driven clinical tools in a way that aligns with healthcare system goals, first proving the technology’s value through low-risk administrative and data-driven tasks that are unlikely to affect patient safety.
Once the low-risk value of AI is demonstrated, some healthcare
organizations have proceeded with thoughtful, measured steps to introduce AI as
a “thought partner” to explore ideas and test hypotheses.
To further these efforts, the Coalition for Health AI (CHAI) recently developed a framework for responsible AI standards in healthcare, intended to help ensure the quality and trustworthiness of AI in healthcare and to promote ethical applications of the technology through engagement with a broad representation of stakeholders. The CHAI framework includes:
· Certification of quality assurance labs to validate AI models.
· Continuous evaluation of tools in use.
· Best practice guidelines for deploying health AI technologies.
· Creation of a publicly available national registry of participating projects.
Rapid deployment of AI solutions, particularly for clinical situations that may impact safety, is too big a leap for most healthcare organizations.
AI-driven clinical decision support tools can serve as
reference points for rare conditions, drug interactions or emerging treatment
protocols.
To be trustworthy, these AI systems must be responsibly developed, deployed under stringent ethical guidelines, and regularly updated. That includes careful validation by medical experts before release.
“I think there’s a lot of potential for AI to simplify the work for medical professionals and to unlock new value,” says Manish Vazirani, Vice President of CE Health Product Software Engineering for Wolters Kluwer Health. Safeguarding that value, he says, will depend on developers adopting continual internal checks to verify that published content hasn’t been altered by AI, and on combining machine learning and search to provide a superior user experience.
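As one illustration of what such an internal check might look like, the sketch below fingerprints expert-authored passages at publication time and later verifies that any passage an AI feature surfaces still matches its published fingerprint. This is a minimal, hypothetical Python example, not a description of Wolters Kluwer’s actual pipeline; the content store and function names are assumptions for illustration only.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Return a stable SHA-256 fingerprint of a published passage."""
    # Normalize whitespace so cosmetic rendering differences don't trip the check.
    normalized = " ".join(text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hypothetical store of fingerprints recorded when experts publish content.
published_fingerprints = {
    "topic-123": fingerprint("Expert-reviewed guidance text as published."),
}

def verify_unaltered(topic_id: str, passage_surfaced_by_ai: str) -> bool:
    """Check that a passage an AI feature cites still matches what experts published."""
    expected = published_fingerprints.get(topic_id)
    return expected is not None and fingerprint(passage_surfaced_by_ai) == expected

# A verbatim quote of the published passage passes; a model's paraphrase does not.
assert verify_unaltered("topic-123", "Expert-reviewed guidance text as published.")
assert not verify_unaltered("topic-123", "Guidance text reworded by a model.")
```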
Ensuring that AI-driven decision support is based on trusted content and evidence-based sources requires human accountability.
Sheila Bond, MD,
a physician at Mass General Brigham who also serves as the Director of Clinical
Content Strategy for UpToDate® at Wolters
Kluwer Health, notes that physicians are trained to evaluate the quality of
sources used in clinical decision-making. Therefore, it’s crucial for
AI-enhanced solutions to be transparent about their models’ performance,
limitations and intended use, as well as the sources of the content provided.
It's equally important, Bond says, that providers understand
AI-generated responses are meant to assist — not replace — their own clinical
judgment. “As a clinician, you should always think critically about
AI-generated content and ultimately use it to make you think more, not less, deeply.”
Understanding that the future of AI in healthcare is founded on trusted, expert content, the teams at Wolters Kluwer and UpToDate committed themselves to responsible innovation. The vision was to create a new technology environment in which UpToDate could serve as a reliable partner, working with healthcare organizations to responsibly deliver recommendations that operate in harmony with customer values and workflows. “Patient safety and care quality are paramount, and introducing additional risk with AI is unacceptable,” explains Yaw Fellin, Vice President, Product and Solutions, Clinical Effectiveness for Wolters Kluwer Health.
The result is AI Labs, which allows for experimentation and testing of capabilities within UpToDate clinical decision support. AI-powered improvements will be layered into the trusted content of UpToDate over time, but only once rigorously tested with customers and partners to help ensure patient safety.
UpToDate AI-powered capabilities have been developed to directly address users’ expressed challenges and concerns, including:
· Simplifying access to the most relevant information to reduce information overload.
· Easing AI complications and bridging communication gaps.
· Supporting alignment across care teams and enhancing outcomes with workflow-native, personalized clinical answers.
· Streamlining clinical guidance to help support administrative activities and alleviate burdens.
"We are excited to partner with UpToDate to accelerate the responsible use of GenAI in clinical decision support,” says the CMIO of a national health system. “We share the vision that GenAI must be developed safely, transparently, and on a foundation of trustworthy healthcare content, such as the content provided by UpToDate. Together, we will be testing and validating that GenAI combined with UpToDate offers the ability to get fast, succinct answers in support of delivering better care.”
Expert content is the foundation of care—responsible, AI-powered technology is the future. Learn more at https://www.wolterskluwer.com/en/solutions/uptodate/ai-clinical-decision-support