March 21, 2018 -- A new open-source algorithm developed at the U.S. National Institutes of Health (NIH) can identify negative and uncertain findings in radiology reports -- and could eventually help guide radiologists as they interpret images, according to a March 15 presentation at the American Medical Informatics Association (AMIA) meeting in San Francisco.
The tool, called NegBio, could help radiologists make better use of prior images -- making them more efficient, presenter Yifan Peng, PhD, told session attendees.
"Eventually, the algorithm could be integrated into a department's reporting system and make suggestions to radiologists as they review prior images," he said.
Peng is part of a research group led by Zhiyong Lu, PhD, of the National Center for Biotechnology Information (NCBI), a division of the National Library of Medicine (NLM) at the NIH. Other authors include Xiaosong Wang, PhD; Le Lu, PhD; Dr. Mohammadhadi Bagheri; and Dr. Ronald Summers.
Lu and colleagues hope that NegBio's open-source release will spark further research. In clinical practice, NegBio could help downstream applications, such as reporting or image-diagnosis systems, return fewer irrelevant (i.e., negative) results.
"We believe NegBio can contribute to the research and development in the healthcare informatics community for real-world applications," Peng said.
Negative and uncertain findings are common in radiology reports, and identifying them is as important as identifying findings that are positive for pathology, Peng said. And although many natural language processing applications have been developed in recent years that can extract findings from medical reports, distinguishing between positive, negative, and uncertain findings remains challenging.
Applications for extracting findings tend to take one of two approaches: rule-based or machine learning. Rule-based algorithms rely on negation keywords and rules; a common example is NegEx. But this type of application relies on surface text and often can't capture more complex sentence constructions. Machine learning, meanwhile, requires manually annotated training data, which are often unavailable, and the resulting models can be difficult to generalize, according to Peng.
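The rule-based approach described above can be sketched in a few lines. This is a deliberately simplified illustration in the spirit of keyword-based tools like NegEx, not NegEx's actual trigger list or algorithm; the trigger phrases and the finding-lookup logic here are assumptions for demonstration only.

```python
# Toy keyword-based negation check, illustrating the rule-based approach.
# The trigger list below is illustrative, not NegEx's actual rules.
NEG_TRIGGERS = ["no evidence of", "no ", "without", "ruled out", "negative for"]

def is_negated(sentence: str, finding: str) -> bool:
    """Return True if a negation trigger appears before the finding term."""
    s = sentence.lower()
    idx = s.find(finding.lower())
    if idx == -1:
        return False  # finding not mentioned at all
    prefix = s[:idx]  # surface text preceding the finding
    return any(trigger in prefix for trigger in NEG_TRIGGERS)

print(is_negated("No evidence of pneumothorax.", "pneumothorax"))  # True
print(is_negated("Small right pleural effusion.", "effusion"))     # False
```

Because such rules look only at surface text to the left of the finding, they break down on more complex constructions, which is the limitation Peng described.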
To address these extraction applications' limitations, the researchers developed NegBio, an open-source, rule-based tool for finding negative and uncertain results in radiology reports. The algorithm uses what the team calls a "universal dependency graph" to define patterns with "a simple description of the grammatical relationships in a sentence that can be easily understood by nonlinguists," Peng said.
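The idea of matching patterns over a dependency graph rather than surface text can be sketched as follows. The toy parse and the matching rule here are illustrative assumptions; NegBio itself operates on real universal dependency parses with far richer patterns.

```python
# Toy dependency-graph negation check. Each edge is a triple of
# (head word, grammatical relation, dependent word), loosely modeling
# a universal dependency parse of "No evidence of pneumothorax."
toy_parse = [
    ("evidence", "neg", "no"),             # "no" negates "evidence"
    ("evidence", "nmod", "pneumothorax"),  # "evidence of pneumothorax"
]

def finding_is_negated(parse, finding):
    """A finding counts as negated if it attaches to a head carrying a 'neg' edge."""
    negated_heads = {head for head, rel, dep in parse if rel == "neg"}
    return any(head in negated_heads and dep == finding
               for head, rel, dep in parse)

print(finding_is_negated(toy_parse, "pneumothorax"))  # True
print(finding_is_negated(toy_parse, "effusion"))      # False
```

Because the rule is stated over grammatical relations rather than word order, the same pattern applies regardless of how the sentence is phrased on the surface, which is the readability-for-nonlinguists point Peng made.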
The group evaluated NegBio using two datasets annotated for positive findings: a new radiology set Peng and colleagues created for the study, called ChestX-ray, which included 900 reports from a larger radiology dataset, and a public dataset called OpenI, which included 3,851 radiology reports. The team also included two public benchmarking report datasets annotated for negative findings: BioScope, which included 977 reports, and PK, which included 116 reports. The researchers then compared NegBio with NegEx, measuring performance by precision, recall, and F1-score, a measure of a test's accuracy that considers both precision and recall.
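The F1-score mentioned above is the harmonic mean of precision and recall, computed from true-positive, false-positive, and false-negative counts. The counts in the example below are made up for illustration, not figures from the study.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)  # share of flagged findings that are correct
    recall = tp / (tp + fn)     # share of true findings that were flagged
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only: 90 correct detections, 10 false alarms, 20 misses.
print(round(f1_score(tp=90, fp=10, fn=20), 3))  # 0.857
```

Because the harmonic mean punishes imbalance, a system cannot score well on F1 by maximizing recall at the expense of precision or vice versa.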
Peng and colleagues called a finding negative if it was ruled out by the radiologists, and uncertain if it was equivocal (i.e., "no evidence of pneumothorax" versus "suspicious pneumothorax"). The researchers identified all findings and their corresponding Unified Medical Language System (UMLS) concepts using a program called MetaMap, which maps biomedical text to UMLS concepts. With ChestX-ray and OpenI, they focused on 14 common disease finding types identified by radiologists.
NegBio achieved significant improvement on all datasets compared with NegEx, the researchers found.
Based on their results, "the use of negation and uncertainty detection on the syntactic level successfully removes false-positive cases of 'positive' findings," according to Peng and colleagues.
The algorithm did miss double-negation terms -- for example, "Findings cannot exclude increasing pleural effusions."
"Our method currently lacks rules to recognize double negatives and, thus, generates more false negatives," Peng said. "We see double negation as an open challenge that merits further investigation."
Going forward, the researchers plan to explore NegBio's performance with clinical texts other than radiology, he said.