The agency will focus on collaborating to protect public health; advancing regulatory approaches; developing standards, guidelines, and best practices; and supporting research that evaluates and monitors AI performance.
The FDA's Center for Biologics Evaluation and Research, Center for Drug Evaluation and Research, Center for Devices and Radiological Health, and Office of Combination Products have issued a joint paper detailing how the agency's centers will collaborate to protect public health and foster responsible and ethical innovation in artificial intelligence (AI). The paper outlines 4 focus areas that address the use of AI across the medical product life cycle: fostering collaboration to safeguard public health; advancing the development of regulatory approaches that support innovation; promoting the development of standards, guidelines, best practices, and tools for the medical product life cycle; and supporting research related to the evaluation and monitoring of AI performance.
In the first focus area, safeguarding public health, the Centers are collaborating with developers, patient groups, academia, global regulators, and others to establish patient-centered regulatory approaches that advance health equity. As part of this focus, the agency will gather input on transparency, explainability, governance, bias, cybersecurity, and quality assurance in AI-enabled medical products, and will develop educational initiatives that promote the safe and responsible use of AI in these products. The Centers will also collaborate globally to create standards, guidelines, and best practices for the consistent use and evaluation of AI tools in the medical landscape.
In the second focus area, the Centers will develop policies that provide regulatory predictability and clarity for the use of AI, including monitoring trends and issues to detect knowledge gaps and opportunities across the AI product life cycle; developing methodologies to evaluate algorithms, identify and mitigate bias, and ensure that AI remains robust as clinical inputs and conditions change; and building on existing initiatives for the evaluation and regulation of AI, according to the paper.
Additionally, the Centers will issue guidance on the use of AI products, including but not limited to final guidance on marketing submission recommendations for predetermined change control plans for AI-enabled device software functions; draft guidance on life cycle management considerations and premarket submission recommendations for AI-enabled device software functions; and draft guidance on considerations for the use of AI to support regulatory decision-making for drugs and biological products, the paper said.
In the third focus area, the Centers will refine and develop considerations for the safe and ethical use of AI, including promoting transparency, addressing safety and cybersecurity concerns, and identifying best practices for long-term and real-world monitoring. These best practices will include documenting and ensuring that the data used to train models is tailored to the intended patient population. The Centers will also monitor AI tools on an ongoing basis to ensure continued adherence to standards and sustained performance and reliability.
In the final focus area, the Centers aim to support projects that identify where bias can be introduced in the AI development life cycle and how best to address it, according to the paper. They will also support projects that account for health inequities associated with AI, in order to promote equity and ensure that data are representative. Finally, the Centers will support the ongoing monitoring of AI tools, the paper said.
The paper concludes that the FDA will tailor its regulatory approaches to the use of AI in medical products to ensure the safety of patients and health care professionals.