The U.S. Food and Drug Administration (FDA) has published an action plan for medical artificial intelligence (AI) algorithms that includes a draft guidance, efforts to harmonize good machine-learning practices, and development of methodologies for evaluating algorithms and monitoring their real-world performance.
Developed in response to feedback received on the FDA's proposed regulatory framework for AI/ML-based software as a medical device, published in April 2019, and at its public workshop on the evolving role of AI in radiological imaging in February 2020, the agency's new AI/machine learning (AI/ML) action plan details five specific actions and goals:
- Develop an update to the proposed regulatory framework presented in the AI/ML-based [software-as-a-medical device] discussion paper, including through the issuance of a draft guidance on the predetermined change control plan.
In response to suggestions on its proposed regulatory framework, and in particular on the principle of a predetermined change control plan, the FDA said it plans to publish draft guidance in 2021. The guidance will include a proposal for details on the plan's "pre-specifications," the aspects the manufacturer expects to change through learning, and the "algorithm change protocol," which describes how the algorithm will learn and change while remaining safe and effective.
"Other areas of development will include refinement of the identification of types of modifications appropriate under the framework, and specifics on the focused review, including the process for submission/review and the content of a submission," the FDA wrote. "Continued community input will be essential for the development of these updates."
- Strengthen the FDA's encouragement of the harmonized development of good machine-learning practices (GMLP) through additional FDA participation in collaborative communities and consensus standards development efforts.
To encourage consensus outcomes that will be most useful for the development and oversight of AI/ML-based technologies, the FDA said it's committed to deepening its engagement in a number of efforts related to GMLP -- a set of best practices similar to good software engineering practices or quality system practices. These efforts will also be pursued in close collaboration with the FDA's Medical Device Cybersecurity Program, according to the agency.
- Support a patient-centered approach by continuing to host discussions on the role of transparency to users of AI/ML-based devices. Building upon the October 2020 Patient Engagement Advisory Committee (PEAC) meeting focused on patient trust in AI/ML technologies, hold a public workshop on medical device labeling to support transparency to users of AI/ML-based devices.
The FDA said it intends to consider input from this planned workshop in identifying the types of information manufacturers should include in the labeling of AI/ML-based medical devices.
"These activities to support the transparency of and trust in AI/ML-based technologies will be informed by FDA's participation in community efforts, referenced above, such as standards development and patient-focused programs," the authors wrote. "They will be part of a broader effort to promote a patient-centered approach to AI/ML-based technologies based on transparency to users."
- Support regulatory science efforts on the development of methodology for the evaluation and improvement of machine-learning algorithms, including for the identification and elimination of bias, and on the robustness and resilience of these algorithms to withstand changing clinical inputs and conditions.
The agency is currently supporting regulatory science research initiatives for developing AI/ML-based software evaluation methods, including at its Centers for Excellence in Regulatory Science and Innovation at the University of California, San Francisco; Stanford University; and Johns Hopkins University.
"We will continue to develop and expand these regulatory science efforts and share our learnings as we continue to collaborate on efforts to improve the evaluation and development of these novel products," the FDA wrote.
- Advance real-world performance pilots in coordination with stakeholders and other FDA programs, to provide additional clarity on what a real-world evidence generation program could look like for AI/ML-based [software-as-a-medical device].
In coordination with other ongoing FDA programs focused on the use of real-world data, the FDA said it will support voluntary piloting of real-world performance monitoring. The goal is to help the FDA develop a framework for seamless gathering and validation of relevant real-world performance parameters and metrics for these types of software.
"Additionally, evaluations performed as part of these efforts could be used to determine thresholds and performance evaluations for the metrics most critical to the [real-world performance] of AI/ML-based [software-as-a-medical device], including those that could be used to proactively respond to safety and/or usability concerns, and for eliciting feedback from end users," the FDA wrote. "These efforts will include engagement with the public."
The full action plan can be found on the FDA's website.