ACR Bulletin

Who's Responsible (and Liable) When AI Is Used in Healthcare?

With AI-based devices, physicians can increase diagnostic accuracy and efficiency, as well as improve treatment regimens. But as the technology continues to mature, so does the risk landscape.

January 01, 2023

As the world grows fonder of self-driving cars, manufacturing robots, smart assistants, social media monitoring software and many other AI-enabled products and services, it’s not surprising that AI-based devices are swiftly making their way into the healthcare industry. Several hundred AI/ML-enabled medical devices have received regulatory approval since 1997 via 510(k) clearance, a granted De Novo request or Premarket Approval.1,2,3 More than 70% of these are in the field of radiology. Detailed information about FDA-cleared AI medical products is now available from several resources, including the ACR Data Science Institute’s AI Central.4

Integrating AI-based devices into medical practice has the potential to improve diagnostic accuracy and increase efficiency in diagnosing and treating patients by allowing physicians to focus on the diagnoses and procedures that require greater skill and judgment. It also has the potential to improve treatment regimens. However, as the number of these devices and applications grows, so does the number of questions and concerns pertaining to misdiagnosis, privacy breaches, bias, cost and reimbursement.

Potential for Patient Harm

Although fully autonomous AI diagnostic software is already a reality, such as the IDx-DR software for the diagnosis of diabetic retinopathy, at present all AI-based medical devices and software for diagnostic radiology are used as screening or confirmatory tools rather than as replacements for a trained healthcare provider. As such, it is not surprising that, according to a recent study by Khullar et al., both the general public and the majority of physicians still believe the physician should be held responsible when an error occurs (66.0% vs. 57.3%; P = .020).5 Physicians are also more likely than the public to believe that vendors (43.8% vs. 32.9%; P = .004) and healthcare organizations (29.2% vs. 22.6%; P = .05) should also be liable.

Someday, the AI solutions we use will be able to integrate more data at faster speeds than a human and provide even more sophisticated decision support to us, the expert physicians. That prospect raises unanswered questions about what happens when the human expert disagrees with the automaton on a finding such as the presence or absence of intracranial hemorrhage, and how those disagreements are perceived or adjudicated. Will physicians be liable for disagreeing with or disregarding the output of a medical AI? Alternatively, if the AI is used for independent decision-making at any step in the care pathway and produces an output that harms a patient, will responsibility shift in any material way from the supervising physician to the AI developers or the medical device company?

For now, since no diagnostic radiology models are cleared for autonomous use in the U.S., the responsibility remains with the radiologist. However, if autonomously functioning AI solutions are developed and cleared for clinical use, AI vendors and developers will have to shoulder more risk when the model fails to detect significant disease or initiates unnecessary treatment.

Protected Health Information

In order to train and test AI-based devices, developers require access to large amounts of patient data. Data de-identification, the process of removing all information that could reasonably be used to identify a patient, is the basis for sharing that data while preserving privacy.
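To make the process concrete, the minimal sketch below blanks a handful of direct identifiers in a DICOM header using the open-source pydicom library. The tag list is illustrative only; a real pipeline must also address dates, free-text fields and identifiers burned into the pixel data.

```python
import pydicom

# Direct identifiers to blank; an illustrative subset, not the full
# HIPAA Safe Harbor list of 18 identifier categories.
IDENTIFIER_TAGS = (
    "PatientName",
    "PatientID",
    "PatientBirthDate",
    "PatientAddress",
    "ReferringPhysicianName",
)

def deidentify(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    for tag in IDENTIFIER_TAGS:
        if tag in ds:
            setattr(ds, tag, "")   # blank the element but keep the tag
    ds.remove_private_tags()       # vendor tags may also carry identifiers
    ds.save_as(out_path)
```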

In the U.S., the Privacy Rule of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) governs de-identification of patient data.6 However, recent work has shown that elements of a patient’s identity, such as race, can be predicted from de-identified data.7,8 Furthermore, models capable of re-identifying individuals have raised concern and underscored the need for legal and regulatory safeguards that go beyond the “release-and-forget” model of de-identification.9 It is important to bear in mind that the rules pertaining to data sharing and privacy are complex, and HIPAA violations can result in significant financial penalties, criminal sanctions and civil litigation.

Culture of Transparency

As we navigate the uncharted territory of AI creation and implementation in the healthcare industry, it is imperative to adopt a culture of transparency. From an end-user perspective, transparency includes both explainability — so radiologists can understand how a model reached its conclusion — and details of how models were trained and validated, including the number of contributing institutions, the scanner types and the patient demographics represented.
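One widely used family of explainability techniques produces saliency maps, which highlight the image regions most influential to a model’s prediction. The sketch below shows a minimal gradient-based version in PyTorch, assuming a trained classifier (model) and a preprocessed image tensor (image), both hypothetical here; production tools typically use more sophisticated methods.

```python
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor,
                 target_class: int) -> torch.Tensor:
    """Per-pixel importance via the gradient of the class score."""
    model.eval()
    image = image.clone().requires_grad_(True)   # shape (C, H, W)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()                             # d(score) / d(pixel)
    return image.grad.abs().max(dim=0).values    # collapse channels -> (H, W)
```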

The ACR Data Science Institute® (DSI) has advocated to the FDA for increased transparency in AI and participated in the FDA’s Virtual Public Workshop on AI Transparency in October 2021. The FDA’s Digital Health Center of Excellence is part of the planned evolution of the Digital Health Program in the Center for Devices and Radiological Health.10 Its main goal is to empower stakeholders to advance healthcare by fostering responsible and high-quality digital health innovation.

To ensure AI tools can be efficiently implemented into daily workflow and can improve the quality and efficiency of patient care, the ACR DSI has assembled subspecialty panels to review and publish structured use cases. These use cases, published freely with common data elements, empower AI developers to produce models that are clinically relevant, ethical and effective, and they provide pathways for workflow integration.
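As a hypothetical illustration of the idea, an AI result expressed in common data elements might resemble the sketch below. The field names and values are invented, not the ACR DSI’s actual schema; the point is that a shared vocabulary lets reporting and workflow systems consume AI outputs uniformly.

```python
# Hypothetical AI output expressed as common data elements (CDEs).
finding = {
    "use_case": "Intracranial Hemorrhage Detection",
    "cde": {
        "hemorrhage_present": True,
        "hemorrhage_type": "subdural",
        "confidence": 0.92,
    },
    "model": {"name": "example-ich-model", "version": "1.2.0"},
}
```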

It’s crucial that developers, physicians and professional organizations work together to safely integrate these AI-based devices into the clinical workflow. Where relevant, patients should be counseled on the risks and benefits pertaining to the use of AI-based devices so they can make informed decisions. Liability for the use of AI will likely shift over time as the models grow more sophisticated. As radiologists, we will undoubtedly find ourselves at the forefront of AI’s penetration into medicine, and although this will bring challenges and uncertainties, it will also present us with the opportunity to shape this new and exciting reality.

ENDNOTES

1. U.S. Food & Drug Administration, 510(k) Clearances, bit.ly/FDA-501k-Clearances. Accessed 11-11-2022.
2. U.S. Food & Drug Administration, De Novo Classification Request, bit.ly/FDA-De-Novo. Accessed 11-11-2022.
3. U.S. Food & Drug Administration, Premarket Approval, bit.ly/FDA-premarket-approval. Accessed 11-11-2022.
4. ACR Data Science Institute® AI Central, aicentral.acrdsi.org.
5. Khullar D, Casalino LP, Qian Y, Lu Y, Chang E, Aneja S. Public vs. physician views of liability for artificial intelligence in health care. J Am Med Inform Assoc. 2021;28(7):1574–1577.
6. Centers for Disease Control and Prevention, Health Insurance Portability and Accountability Act of 1996 (HIPAA), bit.ly/HIPAA-Act. Accessed 11-11-2022.
7. Gichoya JW, Banerjee I, Bhimireddy AR, Burns JL, Celi LA, Chen LC, et al. AI recognition of patient race in medical imaging: a modelling study. Lancet Digit Health. 2022;4(6):e406–e414.
8. Adleberg J, Wardeh A, Doo FX, Marinelli B, Cook TS, Mendelson DS, Kagen A. Predicting Patient Demographics from Chest Radiographs With Deep Learning. J Am Coll Radiol. 2022;19(10):1151–1161.
9. Rocher L, Hendrickx JM, de Montjoye YA. Estimating the success of re-identifications in incomplete datasets using generative models. Nat Commun. 2019;10:3069.
10. U.S. Food & Drug Administration, Digital Health Center of Excellence, bit.ly/FDA-dhce. Accessed 11-11-2022.

Authors Irene Dixe de Oliveira Santo, MD, integrated interventional and diagnostic radiology resident, Yale School of Medicine, and Tessa Sundaram Cook, MD, PhD, associate professor in the department of radiology, Perelman School of Medicine, University of Pennsylvania