Roham Hadidchi, BS, an MS1 at Nova Southeastern University College of Osteopathic Medicine, contributed this piece.
Like many others, I’ve found myself using tools like ChatGPT more and more over the past couple of years. Only recently has it become normal to ask a chatbot nearly anything and receive a coherent answer within seconds, something that would’ve seemed impossible not long ago. These advances are remarkable. But the more I use ChatGPT, the more I’ve begun to recognize the long-term risks of relying on it too heavily.
When social media surged in the early 2010s, most people felt genuine optimism. The internet promised to connect us, democratize ideas and expand access to information. And while much of that upside did materialize, we underestimated the downside, especially how personalized algorithms could manipulate attention and encourage compulsive use. The lesson is familiar: enormous potential comes with enormous risk. Today’s excitement about large language models is well-deserved, but we shouldn’t ignore the possibility of similar unintended consequences.
The most concerning risk, in my view, is over-reliance. A useful analogy comes from the debate over whether calculators undermine learning in math. One Reddit user described the issue well (paraphrased):
“When I write code to model systems, I make every calculation simple enough to do by hand. It gives me a feel for the numbers and helps me spot errors. It also deepens my intuitive understanding and helps me find more creative solutions.”
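To make that habit concrete, here is a minimal sketch in Python. The example and its numbers are my own illustration, not the commenter's: pick inputs simple enough that you can verify the output in your head before trusting the model.

```python
# Sanity-checking a model with hand-computable numbers (illustrative sketch).

def drug_remaining(dose_mg: float, half_lives: int) -> float:
    """Amount of drug left after a whole number of half-lives."""
    return dose_mg * 0.5 ** half_lives

# Choose inputs you can work out mentally: 100 mg after 2 half-lives
# should go 100 -> 50 -> 25 mg. If the function disagrees, either the
# model or your understanding of it is wrong.
assert drug_remaining(100, 2) == 25

print(drug_remaining(100, 2))  # 25.0
```

The point isn't the check itself; it's that running hand-sized cases keeps your intuition for the numbers alive instead of outsourcing it.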
There’s something irreplaceable about struggling through a problem yourself. Debugging your own code, working through physics mistakes, reviewing imaging studies carefully, or rewriting a messy paragraph all build skills that shortcuts simply can’t provide. That slow, sometimes frustrating process trains your instincts. You don’t get that intuition if you skip straight to the AI-generated solution.
That’s where the danger lies. When it becomes too easy to paste code into ChatGPT instead of debugging it, or hand over poorly written sentences instead of refining them, we risk trading long-term skill development for short-term convenience. Sometimes that tradeoff is fine. But our brains naturally favor ease and underestimate what we lose by outsourcing too much.
This is not an easy issue to fix. Many of us, me included, already rely on AI more than we'd like to admit. One helpful approach is to treat AI as a feedback tool rather than a shortcut. For example, I try to spend at least five minutes attempting a task before asking the AI for help. The definition of “task” is flexible: a paragraph of writing, a block of code, a practice question. The goal isn't to avoid AI entirely but to resist turning it into the first resort.
AI can still be valuable for small sub-problems, but we should make a point to understand the solution it gives us rather than copy it blindly. Otherwise, we’ll need to ask it again every time the same problem reappears.
Institutions also need to think critically about how AI tools are introduced. At the Michigan Radiological Society Meeting, I recently heard Dr. Francis Deng note that some radiology departments now use AI to detect abnormalities in clinical practice. This has major implications for resident education. Without safeguards, residents could fall into the habit of letting AI read complex cases before submitting their own interpretations.
Some programs have implemented a thoughtful solution: residents aren’t shown the AI’s findings until after they’ve written and submitted their report. This preserves the trial-and-error learning process while still using AI as a feedback mechanism rather than a replacement.
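As a toy sketch of that reveal-after-submit rule, here is what the gating logic might look like in Python. Every name here is hypothetical, invented for illustration, and not any vendor's or program's actual system:

```python
# Toy sketch of a "reveal after submit" workflow: the AI findings exist
# from the start, but are only exposed once the resident has committed
# their own read. All class and method names are hypothetical.

class GatedCase:
    def __init__(self, ai_findings: str):
        self._ai_findings = ai_findings
        self.resident_report: str | None = None

    def submit_report(self, report: str) -> None:
        self.resident_report = report

    def ai_feedback(self) -> str:
        # Feedback, not a shortcut: refuse to reveal the AI read until
        # the resident's own interpretation is on record.
        if self.resident_report is None:
            raise PermissionError("Submit your own read first.")
        return self._ai_findings

case = GatedCase(ai_findings="Possible right lower lobe nodule.")
case.submit_report("No acute findings; stable chest radiograph.")
print(case.ai_feedback())  # now available for comparison
```

The design choice is the ordering, not the technology: the AI output becomes a grading key rather than a crutch.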
In some ways, I’m grateful that ChatGPT didn’t exist while I was growing up. Knowing myself, I might have been much quicker to avoid discomfort and lean on shortcuts instead of developing the habits that come from real practice. Not every problem has a perfect solution — but recognizing the tradeoffs is an important first step.