8 Myths About AI In Radiology -- And Why They're Wrong
September 10, 2018 -- SAN FRANCISCO - Although radiology has finally warmed to the idea of artificial intelligence (AI), there is still skepticism about its potential. At this week's Conference on Machine Intelligence in Medical Imaging (C-MIMI), AI entrepreneur Jeremy Howard reviewed eight common myths about the use of AI in radiology -- and why they're wrong.
A data scientist by training, Howard was one of the first entrepreneurs to pursue the idea that AI -- specifically neural networks -- could be harnessed to improve healthcare around the world. He founded Enlitic, one of the first companies to develop deep-learning technology for medical imaging, and went on to form Fast.ai, a nonprofit venture designed to make it easier for professionals from multiple disciplines, including medicine, to learn to harness the power of neural nets.
But Howard's path in AI hasn't been easy. Early critics accused him and other AI developers of being more interested in raising venture capital and replacing radiologists than in making a meaningful contribution to healthcare.
In his keynote talk at C-MIMI 2018, Howard revealed that he is motivated by a simple calculation: Other than in a few advanced countries, there is a massive shortage of physicians worldwide, particularly radiologists. In Africa, for example, the shortage is so acute that it would take 300 years to train the number of doctors needed on the continent.
"The shortage of doctors is killing people," Howard told C-MIMI attendees. "People are living their lives sick. There isn't a solution to this unless you can wait 300 years."
Can Computers Help?
The imbalance led Howard to ask: What if computers could help societies get by with fewer doctors? He began investigating the question, but as he delved deeper, he saw radiology's response go from indifference to outright hostility as radiologists became afraid that they would be replaced by computer algorithms.
Fortunately, that antipathy has given way to the realization that radiologists augmented by computers are more powerful than radiologists working alone. It's a cycle similar to ones experienced by other industries faced with automation, such as the airline industry, which initially resisted autopilot and electronic flight controls (commonly called fly-by-wire), Howard said.
But even though radiology has become more accepting of AI, Howard believes there are a number of myths holding some radiology professionals back from embracing deep learning more wholeheartedly:
"Computers aren't better than radiologists." This may or may not be true, but it's missing the point. If a technology is alerting radiologists to pathology, it's not replacing them, Howard believes. He gave the example of a portable ultrasound scanner in a remote area of China: the images it acquires could be screened by a convolutional neural network for signs of pathology, and the flagged images could then be sent on for interpretation by a radiologist, either in China or abroad via teleradiology.
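The triage workflow Howard describes can be sketched in a few lines. Everything here is illustrative: `pathology_score` is a hypothetical stub standing in for a trained convolutional network, and the 0.5 threshold is an arbitrary choice.

```python
def pathology_score(image):
    """Placeholder for CNN inference; returns a suspicion score in [0, 1].

    A real system would run the image through a trained network;
    this stub just averages pixel values for demonstration.
    """
    return sum(image) / len(image)

def triage(studies, threshold=0.5):
    """Split studies into those flagged for radiologist review and the rest."""
    flagged, routine = [], []
    for study_id, image in studies:
        if pathology_score(image) >= threshold:
            flagged.append(study_id)
        else:
            routine.append(study_id)
    return flagged, routine

# Two toy "images": one with high average intensity, one low.
studies = [("case-1", [0.9, 0.8, 0.7]), ("case-2", [0.1, 0.2, 0.1])]
flagged, routine = triage(studies)
```

The point of the sketch is the routing, not the model: the network narrows the worklist, and every flagged study still reaches a radiologist.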
"Automation will just make radiologists sloppy." This same argument was used against autopilot in airplanes. Look how that turned out, Howard said, noting that industries typically spend a long time figuring out how to maximize automation, but they eventually do figure it out.
"Will deep learning become the next fad?" If it's a fad, it's been around a long time, Howard said. Neural networks are fundamentally different from previous technologies, and the computer industry is just beginning to get them right, he believes.
"Deep learning is just another tool." Some deep-learning skeptics have argued that neural networks are no different from other computer technologies that haven't lived up to the hype, such as support vector machines (SVMs) and random forest classifiers. The difference is that a neural network can approximate any function, Howard said; the challenge has always been the amount of time it takes for a neural network to perform the task. The arrival of graphics processing units (GPUs) has finally given computers the horsepower to accomplish these tasks with enough speed.
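The universal-approximation claim can be illustrated with a toy network: a single hidden layer of tanh units fit to f(x) = x² by plain gradient descent. The network size, learning rate, and target function below are illustrative choices, not anything from Howard's talk.

```python
import math
import random

random.seed(0)
H = 8                                       # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

# Training data: f(x) = x^2 sampled on [0, 1].
xs = [i / 20 for i in range(21)]
ys = [x * x for x in xs]

def forward(x):
    """One hidden tanh layer followed by a linear output."""
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

lr = 0.1
for _ in range(2000):
    for x, y in zip(xs, ys):
        h, pred = forward(x)
        err = pred - y
        for j in range(H):
            gh = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * gh * x
            b1[j] -= lr * gh
        b2 -= lr * err
```

Even this handful of units fits the curve closely; the same architecture scaled up, and run on GPUs for speed, is what makes the approach practical for images.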
"We can't hire the deep-learning experts we need." Howard acknowledged that this is a problem: it's very difficult for a deep-learning expert with no medical background to learn medicine. But this is exactly the problem that his Fast.ai group is trying to address, by teaching medical professionals to use deep learning themselves.
"Deep learning is just for images." Patently untrue, Howard said. Radiologists have many duties other than just reading images. In fact, Howard believes that some of the most promising applications for machine learning are in language processing, particularly of text in radiology reports.
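One language-processing use along the lines Howard mentions is scanning report text for findings that warrant follow-up. A deployed system would use a trained language model; this keyword matcher, with a made-up term list, is only a stand-in to show the shape of the task.

```python
# Hypothetical terms that might trigger a follow-up recommendation.
FOLLOW_UP_TERMS = {"nodule", "mass", "opacity", "lesion"}

def needs_follow_up(report: str) -> bool:
    """Return True if the report text mentions any follow-up term."""
    words = {w.strip(".,;:").lower() for w in report.split()}
    return bool(words & FOLLOW_UP_TERMS)
```

A learned model would catch negations and paraphrases that simple matching misses ("no nodule identified"), which is exactly where neural language processing earns its keep.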
"Deep learning requires Google-level amounts of computer power." A single GPU can be purchased for $600, Howard pointed out. And if that's too much of a financial commitment, GPUs can be rented for 45¢ an hour.
"We still don't have enough data -- everyone knows you need millions of images to do anything in deep learning." Totally wrong, Howard believes. Neural networks almost never need to be trained from scratch -- you start with a model someone else has trained and adjust it for the task you want, a process known as transfer learning. You can start with an algorithm designed for classifying chest nodules and build on it to develop one that targets prostate cancer.
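Transfer learning as Howard describes it can be sketched as: keep the feature-extracting layers of an existing model frozen, and train only a small new output head on the target task. Everything below is illustrative -- the "pretrained" extractor is a stand-in, and the labeled data are synthetic.

```python
def pretrained_features(x):
    """Stand-in for the frozen layers of a previously trained network."""
    return [x, x * x]          # fixed, reused features

# Small labeled set for the new task: y = 3*x^2 - x on [0, 1].
# Far fewer examples than training a full model from scratch would need.
data = [(x / 10, 3 * (x / 10) ** 2 - x / 10) for x in range(11)]

head = [0.0, 0.0]              # new task-specific weights, trained from scratch
bias = 0.0
lr = 0.5
for _ in range(3000):
    for x, y in data:
        f = pretrained_features(x)
        pred = head[0] * f[0] + head[1] * f[1] + bias
        err = pred - y
        head[0] -= lr * err * f[0]
        head[1] -= lr * err * f[1]
        bias -= lr * err
```

Because the hard representational work is inherited from the frozen layers, only a handful of head parameters need fitting -- which is why a modest labeled dataset can suffice.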
What's the future of deep learning in radiology? Howard admitted that he didn't know. But he believes the opportunities are limitless, and that realizing them depends largely on the work of radiology professionals -- such as those in the C-MIMI audience.
"The people who are going to be the heroes of this revolution are going to be the doctors, the vendors, the people who are day to day actually working on the problem of helping patients," Howard concluded. "You are in a much better situation than any data scientist start-up. That's why I think the future of deep learning in medicine is entirely in your hands."