Breast cancer is the leading cause of premature death in U.S. women. Mammography screening has been proven effective in reducing breast cancer deaths in women ages 40 years and older, with a mortality reduction of 40% possible with regular screening.1 However, research has shown wide variation in radiologists’ performance interpreting these examinations.2,3 Despite this documented variation, the factors that drive radiologists’ performance in screening mammography remain poorly understood, a topic explored in a new study by the Harvey L. Neiman Health Policy Institute® (NHPI) and the National Mammography Database (NMD) Committee.
In essence, study authors wanted to know: What do the highest-performing breast imagers have in common? The retrospective study, published in the journal Radiology, sought to identify radiologist characteristics that affect screening mammography interpretation performance through analysis of 11 years of screening mammography performance data from the NMD. Study authors hope the results expand the knowledge base surrounding breast imaging, demonstrate the need for more research, and emphasize the importance of assessing performance across measures holistically — because, ultimately, the better radiologists understand opportunities to improve breast cancer screening, the more patients’ lives they can save.
A Fortuitous Partnership
Cindy S. Lee, MD, first study author and associate professor of radiology at NYU Langone Medical Center, remembers the night she bumped into colleague Andrew B. Rosenkrantz, MD, study co-author and professor at NYU Grossman School of Medicine, at RSNA’s annual meeting. “I remember exactly where it happened,” Lee says. “I have this mental image of the south entrance at RSNA, where the buses drop you off — we stood at the staircase next to the water fountain and chatted for about 15 minutes.” Lee, former NMD research subcommittee chair, recalls brainstorming with Rosenkrantz, NHPI affiliate senior research fellow, over ways in which they might collaborate to glean information from their combined data.
“This is a very hot topic,” Lee says. “Everyone wants to know: What can we do to make breast cancer screening better?”
“The NMD provides performance outcomes for radiologists nationally who read screening mammography,” says Rosenkrantz. “Medicare databases provide physician practice characteristics.” By aggregating the data from the two datasets, they hoped to gain insight into the characteristics that affect screening mammography interpretation performance.
According to the study, the radiologist characteristics that most influenced mammography interpretive performance nationally were geography, breast subspecialization, performance of diagnostic mammography, and performance of breast US. Practicing in the West or Midwest, breast subspecialization, and performance of diagnostic mammography were associated with better screening mammography performance in the NMD, while performance of breast US was associated with lower performance.
“With this study, which was blinded and aggregated, we were able to link over 1,000 radiologists nationally. Between the two databases, we were able to see how practice characteristics are associated with radiologists’ performance nationally in a way that, to our knowledge, has not been done previously,” says Rosenkrantz. “I call it a marriage of two national databases,” adds Lee.
Some of these results were perhaps unsurprising, according to Rosenkrantz and Lee. “One of the primary findings — that dedicated breast imagers had better performance than general radiologists who may also do screening mammography — may not be surprising,” says Rosenkrantz, “but I don’t know if previously there was actual objective data supporting that.”
Other results were less straightforward. “What we found was that there are a lot of factors that affect how well a screening mammogram is read by a radiologist,” says Lee. “It was interesting that in many cases, certain characteristics predicted higher performance on some measures and, at the same time, lower performance on others.” For example, she says, some breast imagers are willing to accept a slightly higher recall rate so that they can find more cancers — because that is the goal of breast cancer screening, after all. “The goal is to find more breast cancers at an earlier stage. If you’re recalling 11% or a little higher than the national average, but you’re finding the extra, super subtle cancers in women, and that helps them stay alive, then that may be worth it — so there’s a constant balance of risk and benefit,” says Lee.
Study authors assert that the study’s conclusion is more a beginning, a call for further research on screening mammography interpretation performance, than a definitive answer to a question. There is little nationwide, validated data available on screening mammography interpretation performance, Lee says, and they hope to show the potential that further research holds.
Rosenkrantz cautions against drawing conclusions about the study results in the absence of further research. “For example, radiologists in certain parts of the country did better as a whole than others, and we don’t know the reason,” Rosenkrantz says. “We’ve received many questions as to how we might explain the observation, and we don’t know the answer at this point. But as this is new information, unique from what has been described previously in this space, I think it’s important for us to share the findings.”
Lee agrees. She also encourages radiologists to take individual performance scores with a grain of salt. “The recall rate example highlights the importance of assessing performance across measures holistically versus individual metrics in isolation, supporting guidance in the ACR BI-RADS® atlas,” she says. “The more data we have, the more complete the picture becomes — and, ultimately, the more patients’ lives we can save.”