The U.S. Food and Drug Administration (FDA) has released updated guidance that could pave the way for wider use of large language models (LLMs) in radiology.
In the January 6 document, the FDA laid out its position on regulating clinical decision-support (CDS) software as a medical device. The guidance describes the FDA’s intended oversight and clarifies use cases where CDS software functions are excluded from the definition of a device.
Bill Lotter, PhD, of the Dana-Farber Cancer Institute and Harvard Medical School in Boston, told AuntMinnie about two use cases of interest that he said represent a reversal of the FDA's earlier guidance from 2022.
"For traditional radiology AI use cases, I don't see the updated guidance having much effect," Lotter said in an email. "Importantly, any device that analyzes images directly (CADt, CADe/x, image denoising, etc.) is still considered a medical device that would be regulated by the FDA. There are some secondary use cases that could be affected though."
Lotter highlighted the following use cases, which are clarified in the updated, nonbinding guidance:
- Use case 1: A software function that analyzes a radiologist’s clinical findings of an image to generate a proposed summary of the clinical findings for a patient’s radiology or pathology report, including a specific diagnostic recommendation based on clinical guidelines that should be reviewed, revised, and finalized by a health care professional (HCP).
"In the prior version, this would have been regulated because it outputs a specific diagnostic recommendation (even though it’s based on text rather than the image pixels)," Lotter said. "In other words, this opens the door for many LLM use cases."
- Use case 2: A software function that predicts risk of future cardiovascular events for an HCP to consider based on a patient’s weight, current and historical smoking status, blood pressure, and brain natriuretic peptide (BNP) in vitro diagnostic (IVD) test results.
"Previously, the FDA would consider such risk predictions as being regulated," Lotter noted. "Like everything, there's a risk/benefit trade-off. LLM-based use cases like the one described certainly have risks, including generating the wrong recommendation that is not caught by the HCP. Similarly, a risk calculator could have a software bug or insufficient performance, which may adversely affect patient management."
Key to the discussion is the FDA's interpretation of four criteria identified in the 21st Century Cures Act (Cures Act) and in the 2026 guidance that describe "non-device" CDS software functions:
- Criterion 1: "Not intended to acquire, process, or analyze a medical image or a signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system."
- Criterion 2: "Intended for the purpose of displaying, analyzing, or printing medical information about a patient or other medical information."
- Criterion 3: "Intended for the purpose of supporting or providing recommendations to an HCP about prevention, diagnosis, or treatment of a disease or condition."
- Criterion 4: "Intended for the purpose of enabling an HCP to independently review the basis for the recommendations that such software presents so that it is not the intent that the HCP rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient."
For a software function to be excluded from the medical device definition, it must meet all four criteria, the FDA noted, adding context for 32 examples of device software functions.
The full document, titled "Clinical Decision Support Software," is available from the FDA.