Recently, a team of researchers from Imperial College London and Google Health created a computer vision model intended to diagnose breast cancer from mammograms. As CNN reports, the model was trained on mammograms from over 29,000 women, and when pitted against six radiologists it outperformed the doctors' assessments.
Currently, the NHS uses the combined judgment of two doctors to diagnose breast cancer from mammograms; if the two disagree, a third is brought in to consult on the images. While the doctors had access to the patients' medical records, the AI system had only the mammograms to base its decisions on. Despite this limitation, the model proved at least as good at diagnosing breast cancer as two doctors working together, and better than a single doctor. According to the research report, it also reduced the false-positive rate (where a healthy patient is incorrectly flagged as having cancer) by about 1.2%, and the false-negative rate (where a genuine case of cancer is missed) by about 2.7%.
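To make the two error rates concrete, here is a minimal sketch of how they are computed from screening outcomes. The counts below are invented for illustration and do not come from the study or its dataset.

```python
# Illustrative only: computing false-positive and false-negative rates
# from screening outcomes. All counts are hypothetical, not study data.

def false_positive_rate(fp: int, tn: int) -> float:
    """Fraction of cancer-free patients incorrectly flagged as positive."""
    return fp / (fp + tn)

def false_negative_rate(fn: int, tp: int) -> float:
    """Fraction of genuine cancer cases the screening misses."""
    return fn / (fn + tp)

# Hypothetical results for 1,000 screened patients:
# 60 true cancer cases (45 caught, 15 missed), 940 healthy (60 flagged).
tp, fp, tn, fn = 45, 60, 880, 15

print(f"False-positive rate: {false_positive_rate(fp, tn):.1%}")  # 6.4%
print(f"False-negative rate: {false_negative_rate(fn, tp):.1%}")  # 25.0%
```

The study's reported improvements are absolute reductions in these two rates relative to the human readers, which is why even percentage-point changes of 1.2% and 2.7% matter at population scale.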
One of the paper's authors, Ara Darzi, Director of the Cancer Research UK Imperial Center, explained that the research team hadn't expected the system to deliver such high-quality results. Still, Darzi is excited by the possibility of improving both the productivity and the accuracy of cancer screening.
Breast cancer is the second leading cause of cancer death in women, but outcomes improve dramatically when the disease is diagnosed early. The problem, as the American Cancer Society notes, is that even large-scale screening programs currently miss about one in five cases.
For this reason, the research team is hopeful that their system can be improved upon and go on to outperform even the best clinicians. They also noted that the algorithm could help address a shortage of radiologists: a report by the Royal College of Radiologists found that the UK will face a shortfall of nearly 2,000 radiologists by 2023 unless something is done to remedy the situation.
However, Darzi admits that at this stage the system isn't ready to replace human radiologists, though it could be used as a second reader. AI tools in healthcare often fail to deliver on their initial promise, owing to complex real-world factors that can't be adequately simulated in training. One major limitation of the study is that the images all came from a single mammography system and lacked diversity. According to QZ, the research team didn't have access to demographic details for the dataset, so it isn't possible to know whether the system remains as accurate when examining mammograms from minority patients. There are racial disparities in breast cancer diagnosis in both the UK and the US; in the UK, for example, black women are less likely to receive cancer screenings.
Google plans to remedy the disparities in the model's training data before making it available to healthcare partners, building a larger, more inclusive dataset. The system will also have to be validated in clinical trials before it can be used in clinical settings. The algorithms developed by the research team have the power to genuinely improve healthcare outcomes and save lives, but only if they are carefully and rigorously tested.