The Food and Drug Administration (FDA) has released its perspective on regulating AI in healthcare and biomedicine, stating that oversight needs to be coordinated across all regulated industries, international organizations and the U.S. government.
The Agency says it regulates industries that distribute their products to the global market, and therefore U.S. regulatory standards must be compatible with international standards.
It highlighted two efforts toward this end: co-leading an AI working group of the International Medical Device Regulators Forum that promotes AI best practices globally, and leading a working group within the International Council for Harmonisation that works to accommodate AI in clinical trials.
The Agency says regulators also “need to advance flexible mechanisms to keep up with the pace of change in AI across biomedicine and healthcare,” and that sponsors must be transparent about, and regulators proficient in, the use of AI in premarket development.
“The FDA has shown openness to innovative programs for emerging technologies, such as the Software Precertification Pilot Program. However, as that program demonstrated, successfully developing and implementing such pathways may require the FDA to be granted new statutory authorities. The sheer volume of these changes and their impact also suggests the need for industry and other external stakeholders to ramp up assessment and quality management of AI across the larger ecosystem beyond the remit of the FDA,” the Agency wrote.
According to the FDA, life cycle management that includes post-market performance monitoring is also necessary, and novel mechanisms are needed to examine large language models (LLMs) and their uses.
“Applications of generative AI, such as large language models (LLMs), present a unique challenge because of the potential for unforeseen, emergent consequences; the FDA is yet to authorize an LLM. However, many proposed applications in health care will require FDA oversight given their intended use for diagnosis, treatment, or prevention of diseases or conditions,” the FDA wrote.
“Even ‘AI scribes’ meant to summarize medical notes can hallucinate or include diagnoses not discussed in the visit. The complexity of LLMs and the permutations of outputs necessitate oversight from individuals and institutions in addition to regulatory authorities. Because we cannot unduly burden individual clinicians with such oversight, there is a need for specialized tools that enable better assessment of LLMs in the contexts and settings in which they will be used.”
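To make the idea of context-specific LLM assessment concrete, here is a minimal, hypothetical sketch in Python of one kind of automated check such tooling might perform: verifying that every diagnosis asserted in an AI-generated visit summary is actually grounded in the visit transcript. The function names, the tiny diagnosis lexicon and the plain substring matching are all illustrative assumptions, not any FDA-endorsed or production method.

```python
# Hypothetical sketch: flag diagnoses in an AI-generated visit summary
# that never appear in the source transcript (possible hallucinations).
# The small lexicon and simple substring matching are illustrative only;
# real tooling would use clinical NLP and coded terminologies (e.g., SNOMED CT).

DIAGNOSIS_LEXICON = {"hypertension", "type 2 diabetes", "atrial fibrillation", "asthma"}

def find_mentions(text: str) -> set[str]:
    """Return lexicon terms mentioned in the text (case-insensitive)."""
    lowered = text.lower()
    return {term for term in DIAGNOSIS_LEXICON if term in lowered}

def ungrounded_diagnoses(summary: str, transcript: str) -> set[str]:
    """Diagnoses asserted in the summary but absent from the transcript."""
    return find_mentions(summary) - find_mentions(transcript)

if __name__ == "__main__":
    transcript = "Patient reports well-controlled hypertension; no other complaints."
    summary = "Assessment: hypertension, stable. Also noted: atrial fibrillation."
    flagged = ungrounded_diagnoses(summary, transcript)
    if flagged:
        # In practice this would route the note back for clinician review
        # rather than print to the console.
        print("Review needed; ungrounded diagnoses:", sorted(flagged))
```

A check like this would surface the hallucinated "atrial fibrillation" in the example above without asking a clinician to re-read every note, which is the kind of burden-shifting assessment tooling the FDA describes.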
Regulatory approaches must also be developed that balance the needs of the entire healthcare ecosystem, from large companies to startups, and regulators must concentrate on patient health outcomes while weighing the use of AI for financial optimization by health systems, developers and payers.
The Agency wrote that it has long been preparing for the incorporation of AI into healthcare and biomedical product development, while acknowledging that AI presents unique challenges and opportunities.
“The evolution of AI illustrates a major quality and regulatory dilemma. Since the safety and effectiveness of many AI models depends on recurrent evaluation of their operating characteristics, the scale of effort needed could be beyond any current regulatory scheme,” the FDA wrote.
“It is in the interest of the biomedical, digital, and healthcare industries to identify and deal with irresponsible actors and to avoid misleading hyperbole. Regulated industries, academia, and the FDA will need to develop and optimize the tools needed to assess the ongoing safety and effectiveness of AI in healthcare and biomedicine. The FDA will continue to play a central role with a focus on health outcomes, but all involved sectors will need to attend to AI with the care and rigor this potentially transformative technology merits.”
THE LARGER TREND
On Aug. 1, 2024, the EU AI Act entered into force, outlining regulations for the development, market placement, implementation and use of artificial intelligence in the European Union.
At the time, the Council of the EU pointed out that the act intends to “promote the uptake of human-centric and trustworthy artificial intelligence while ensuring a high level of protection of health, safety, [and] fundamental rights … including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation.”
The act reflects the Council’s intention to protect EU citizens from the potential risks of AI while making clear that it is not meant to stifle innovation.
“This Regulation should support innovation, should respect freedom of science, and should not undermine research and development activity. It is therefore necessary to exclude from its scope AI systems and models specifically developed and put into service for the sole purpose of scientific research and development,” EU regulators wrote.