When it comes to addressing health equity concerns in AI, there’s more to the issue than just a lack of representative training data, according to a keynote talk June 27 at the annual Society for Imaging Informatics in Medicine (SIIM) meeting in National Harbor, MD.
In her presentation, Kadija Ferryman, PhD, an anthropologist and an assistant professor at Johns Hopkins Bloomberg School of Public Health in Baltimore, described how racial biases in information technologies -- including AI-based health IT -- can also be thought of as artifacts of the past that reveal valuable information.
“Information technologies are active participants in shaping social worlds,” she said. “These technologies can foreclose some possibilities, but with reflection and intention, they can be an infrastructure that opens new pathways and new destinations.”
Health informatics can be thought of as part of the social infrastructure, enabling recognition of how social values have been embedded in it and how those values might have limited social possibilities or even caused harm, she said.
“However, embedding values in informatics infrastructures can be intentional and even proactive and beneficial,” she said.
As an example, the FAIR (Findable, Accessible, Interoperable, and Reusable) principles facilitate the equitable sharing of data and information technologies.
“Data should not only be accessible to a privileged few,” Ferryman said. “There should be efforts made in the community to make data more accessible to more researchers.”
Meanwhile, the CARE (Collective benefit, Authority to control, Responsibility, and Ethics) Principles for Indigenous Data Governance have been developed by Indigenous communities to act, in some cases, as a complement to the FAIR principles. However, sometimes these two sets of principles can be in conflict, she noted.
By design
In April, the Office of the National Coordinator for Health Information Technology (ONC) issued a call to action to include health equity by design in health IT.
“Health equity by design, not as an afterthought when IT technology has already been developed, but really upstream as part of the design,” Ferryman said. “It’s important to have downstream auditing tools, but this is really a call to say, ‘let’s not just rely on the downstream auditing of tools for bias.’ ”
In terms of AI, research has shown that the path forward is one in which AI, informatics, and radiologists co-shape each other, rather than one in which AI replaces humans, according to Ferryman.
“There’s also growing evidence that both humans and AI work together at some tasks and that we can consider how this relationship is changing, what it means to be a radiologist, prompting fruitful reflections on what radiology practice includes today and what it can look like in the future,” she said.
Informative artifacts
In 2023, Ferryman and colleagues Maxine Mackintosh, PhD, of Genomics England and the Alan Turing Institute in London, and Marzyeh Ghassemi, PhD, of the Massachusetts Institute of Technology in Cambridge, MA, published an article in the New England Journal of Medicine (NEJM) that made the case for biased data to be viewed as informative artifacts in AI-assisted healthcare.
They essentially turned the adage of “garbage in, garbage out” on its head, Ferryman said.
“We argue that instead of thinking of data that we might use for AI technologies as biased, missing, or otherwise lacking ... [we consider it instead] as representing and reflecting important human practices and social conditions,” she said. “So we can apply this data more broadly when we’re thinking about data that’s used for AI tools, that they are artifacts that reflect society and social experiences.”
The lack of representative data in training AI algorithms has rightfully been identified as a problem. But instead of viewing the data as biased or garbage, it’s valuable to consider what the shortcomings of this data suggest about clinical and social practices, such as a lack of uniformity in terminology, according to Ferryman.
“If we approach these data as artifacts, we move away from the predominant framing of bias in AI as an issue that can be solved through technical means, such as by imputing missing data or by creating new data sets,” she said. “We don’t say that we shouldn’t try to impute data, or we shouldn’t try to create better datasets, but we shouldn’t throw out the data that we have as garbage, because it can tell us really important things.”
Complementary approaches
In the NEJM article, the authors describe the problems that can exist in data used to train AI algorithms, what a technical-only approach to solving this challenge involves, and what a complementary or alternative “artifact” approach might look like.
For example, a technical approach to tackling data issues could include attempting to correct model performance to account for differences observed between groups, collecting additional data on underrepresented groups, and imputing missing samples, as well as removing populations that are likely to have missing data from the datasets, according to Ferryman et al. Alternative data could also be obtained from diverse sources.
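As a rough illustration of what two of those technical fixes can look like in code, the sketch below imputes missing values and then audits a model’s performance separately for each demographic group. Everything here -- the synthetic data, column names, and group labels -- is a hypothetical assumption for demonstration, not drawn from the NEJM article or from any specific bias-mitigation tool.

```python
# Hypothetical sketch: simple imputation plus a per-group performance audit.
# All data, column names, and group labels are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic dataset with a demographic attribute and a lab measurement.
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n, p=[0.8, 0.2]),
    "lab_value": rng.normal(1.0, 0.3, size=n),
    "age": rng.integers(20, 90, size=n),
})
# Simulate more missingness in the underrepresented group.
missing = (df["group"] == "B") & (rng.random(n) < 0.4)
df.loc[missing, "lab_value"] = np.nan
df["outcome"] = (df["lab_value"].fillna(1.0) + 0.01 * df["age"]
                 + rng.normal(0, 0.5, n) > 2.0).astype(int)

# Technical fix 1: impute missing samples rather than dropping them.
X = df[["lab_value", "age"]]
X_imputed = SimpleImputer(strategy="median").fit_transform(X)

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X_imputed, df["outcome"], df["group"], test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

# Technical fix 2: audit performance per group instead of in aggregate.
for group in ["A", "B"]:
    mask = (g_test == group).to_numpy()
    auc = roc_auc_score(y_test[mask], scores[mask])
    print(f"group {group}: AUC = {auc:.3f} (n = {mask.sum()})")
```

The artifact approach Ferryman describes would then ask why one group’s lab values are missing more often in the first place -- a question about clinical and social practice that per-group metrics can surface but not answer.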
“[Instead of or in addition to a purely technical solution], an artifact approach would be convening an interdisciplinary group to examine the history of the data,” she said. “Why was it racially corrected? Have there been any changes to racial corrections and how they are used? How are those racial corrections used clinically? And then adjust the problem formulation or the model assumptions based on this information.”
Furthermore, the interdisciplinary group could examine reasons why data are missing and increase education on structural barriers to medical care, as well as examine population-level differences in undertreatment and exclusion. New AI tools could then be created, as necessary, according to the authors.
Role for imaging informaticists
Hundreds of image-based AI software devices have been cleared by the U.S. Food and Drug Administration (FDA) -- more than any other type of AI software. As a result, imaging informaticists are important stakeholders in federal AI policy, according to Ferryman.
In 2021, the FDA released its action plan for regulating AI- and machine learning-based software as a medical device. The agency acknowledged in the plan that it had heard from stakeholders about the need for improved methods for evaluating algorithmic bias, and it pledged to do its part, Ferryman noted.
“But there’s also an opportunity for the imaging informatics community to contribute,” she said.
For example, the Medical Imaging and Data Resource Center (MIDRC) has developed a tool for identifying and mitigating AI bias in medical image analysis. Imaging informaticists can also draw on guidelines -- such as the recently published recommendations for the responsible use and communication of race and ethnicity in neuroimaging research, according to Ferryman. What’s more, they can join the Radiology Health Equity Coalition.
“This can also contribute learnings to this regulatory space, potentially embedding values like health equity, not only in informatics but into the governance of AI-based imaging informatics technologies,” she said. “This is crucial for expanding regulatory science in this area.”