14 Feb 2025

Legislation Proposes AI as Qualified Drug Prescriber in U.S.

A groundbreaking bill introduced in the U.S. House of Representatives seeks to establish artificial intelligence as a legal prescriber of FDA-approved medications. The legislation, H.R. 238, known as the "Healthy Technology Act of 2025," would amend the Federal Food, Drug, and Cosmetic Act so that AI and machine learning technologies could qualify as "practitioners licensed by law" to prescribe drugs when properly authorized.

The bill, sponsored by Rep. David Schweikert (R-AZ) on January 7, specifies that qualifying AI systems must be authorized by state statute and receive appropriate FDA clearance or approval under sections 510(k), 513, 515, or 564. Currently under review by the Committee on Energy and Commerce, the legislation represents a significant step toward integrating autonomous AI decision-making into clinical practice.

This move comes as AI continues to demonstrate promising capabilities in certain healthcare applications. Studies indexed in the NIH National Library of Medicine have shown that AI can perform specific tasks with high accuracy, such as detecting lung nodules in radiological imaging. The technology is already being deployed for ambient documentation, drug discovery acceleration, and reduction of administrative workload across the healthcare sector.

However, the proposal faces significant scrutiny from researchers and clinicians concerned about AI's readiness for such critical responsibilities. Recent NIH research revealed that while advanced AI models like GPT-4V can outperform physicians on multiple-choice medical challenges, they often provide flawed rationales for their correct answers, particularly when interpreting images.

"Our findings emphasize the necessity for further in-depth evaluations of its rationales before integrating such multimodal AI models into clinical workflows," the NIH researchers cautioned in their assessment.

The potential risks of AI hallucinations or information omissions in clinical settings have also raised alarms. Harjinder Sandhu, CTO of health platforms and solutions at Microsoft, highlighted this concern in a HIMSS TV interview: "If the AI system hallucinates information, it makes up information about that patient or omits important information, that can lead to catastrophic consequences for that patient."

As the bill moves through the legislative process, it is likely to spark intense debate about the appropriate boundaries for AI autonomy in healthcare, balancing the potential for improved efficiency against the paramount concern for patient safety. The outcome could significantly reshape the regulatory landscape for AI applications in medicine and set precedents for future technology integration in clinical practice.