Jennifer Broder, MD, Vice Chair of Quality and Safety, Lahey Hospital and Medical Center Department of Radiology, and Vice Chair of the American College of Radiology® (ACR®) Commission on Quality and Safety, contributed this post.

Continuous improvement is the dedication to the proposition that we can always become better. One of the necessary features of continuous improvement in our clinical work is ensuring that we have a system in place to identify our clinical errors, consider why they happened, and learn from them. For many years in radiology, a formal, documented, random, score-based peer review model has been used as one of the tools to accomplish this.

What we all know, though, is that while this score-based peer review model was running in the foreground, in the background of our daily work we have been tapping each other on the shoulder, leaving notes in each other's mailboxes or sending emails saying, "Hey, take a look at this. Thought you would want to know." More often than not, that communication concerned cases that had presented challenges, potential misses, things people felt we would struggle to discuss as "teachable moments." We would privately look, wince, learn and gather the courage to move on.

Over the past few years, a group of radiologists has been asking: What if we could reduce the shame associated with identifying those errors? What if we could bring all that learning out into the open, so that not just one radiologist learns from a mistake, but we all learn together? Would that help us collectively improve our performance? If so, could we do without the scoring aspect altogether? The model of peer review that has resulted from those conversations has been named "peer learning" and is described in the seminal 2016 article Peer Feedback, Learning and Improvement: Answering the Call of the Institute of Medicine Report on Diagnostic Error, published in Radiology. In a peer learning model, cases with learning opportunities, whether discrepancies or great calls, are identified during the course of the regular workday and submitted to a central coordinator with a description, but without a score or other expression of judgment. That feedback is then shared with the interpreting radiologist, and the coordinator chooses the highest-yield cases to share with the rest of the community, most often anonymously during departmental case conferences. Finally, the learning from these conferences is channeled into generating systems improvements.

Across the country, we're seeing increasing awareness and adoption of peer learning across diverse practices. Whether a practice is small or large, academic or private, diagnostic or interventional, peer learning can be implemented anywhere, and enthusiasm for the model is growing rapidly. Almost 370 people signed up for the recent ACR Implementing Peer Learning webinar, during which nine panelists from various practice settings across the country joined me to help answer questions about how to gain support for the transition and manage the practical details of program implementation.

Transitioning to peer learning takes some work, but it is well worth the effort, and there are many resources to help you. For instance, the 2017 article Practical Suggestions on How to Move From Peer Review to Peer Learning helps guide implementation, and video recordings from the ACR-sponsored National Peer Learning Summit are also available.

  • Does your facility practice peer review or peer learning? How do you think a shift toward peer learning will impact the culture at your institution? Please share your thoughts in the comments section below, and join the discussion on Engage (login required).
