Alexandra Hodder, BKin, MS4, from Memorial University of Newfoundland, contributed to this piece.
While AI’s successes receive much emphasis, its imperfections often prompt concern, with many worried about missed findings. But what if these errors could be reframed as valuable learning opportunities? Most conversations around AI focus on two extremes: its breakthroughs and its limitations. Yet, between these lies a promising educational horizon where AI’s errors can become lessons for the humans working alongside it, and we would be doing ourselves a disservice not to explore this.
Current radiological AI models are trained to recognize patterns and continuously refine their ability to do so. A model might flag an area of pulmonary scarring as pneumonia or miss subtle abnormalities that a radiologist would identify using clinical experience and context. For example, imagine an AI system highlighting a dense area on a chest X-ray as consolidation concerning for infection. A radiologist may correctly recognize this as post-surgical fibrosis based on the clinical history and prior imaging studies. This discrepancy can prompt a more intricate train of thought: Why did the AI flag this? What contextual cues is it missing? What features led me to a different conclusion? These missteps offer teaching moments that reinforce the importance of the human elements of image interpretation, particularly the integration of history and reader experience.
For trainees especially, it can be difficult early on to conceptualize subtle distinctions in interpretation. AI miscalls may help accelerate this learning. Imagine a teaching rounds session on subdural hematomas, where a set of cases flagged by an institution’s AI system for hemorrhage is reviewed by residents who are asked to evaluate whether they agree — and, more importantly, to explain why. Such exercises sharpen pattern recognition and promote critical thinking. Compared to traditional one-on-one reads, this kind of AI-enhanced review may allow for faster exposure to a broader range of cases, particularly when curated by the nature of the error. It’s not a replacement for human mentorship, but it’s a valuable supplement.
This teaching potential expands even further with the advent of explainable AI (XAI), which emphasizes the use of models that can clearly outline the algorithmic reasoning behind their outputs.1,2 This would allow radiologists to both understand and critique how an algorithm formulated its interpretation. By comparing one’s own analysis with the AI’s decision pathway, radiologists and trainees can gain a deeper understanding not just of imaging, but of how AI models process that imaging — and why they sometimes fall short. Institutions might curate libraries of AI misinterpretations to augment conventional resident teaching files. These insights can strengthen learners’ abilities to critically appraise AI and promote thoughtful clinical reasoning, both conscious and subconscious.2 They can also prepare us to work more confidently and harmoniously alongside these AI systems.
Perhaps most importantly, these exercises cultivate diagnostic humility and resilience. Learning to scrutinize AI “thought” processes builds a healthy skepticism and reinforces the fact that intelligent tools still require human oversight.2 In this way, we can shift our mindset from viewing AI errors as liabilities to recognizing them as educational assets. Each error becomes a case study; each false positive, a question; each omission, a conversation.
Future radiologists won’t just be asked to use AI — they will be expected to understand and challenge it. Like humans, AI will have flaws. Optimizing its potential requires learning from those flaws — and becoming better clinicians because of them.
For more learning on how to get the most out of AI’s integration into your workflow, consider exploring the various AI-based resources offered by the Radiological Society of North America (RSNA).