Researchers have developed a deep learning system that can create artificial magnetic resonance imaging (MRI) scans to use in the training of AI diagnostic systems for brain tumours and Alzheimer’s disease.
The team from Nvidia, the Mayo Clinic, and the MGH & BWH Center for Clinical Data Science developed the model, which reliably outputs synthetic images.
AI systems have already shown their worth when it comes to interpreting medical images, such as X-rays and MRI and CT scans. Yet they require many thousands of training images to ensure their accuracy. Manually labelling MRI images is a time-consuming process and often leads to unbalanced data sets.
By successfully creating a deep learning model that can sidestep the need for data from real patients, the AI training process can be streamlined and the use of sensitive patient data avoided. The absence of privacy concerns also means that data can more easily be shared between medical institutions.
The new method also ensures a well-rounded data set, improving diagnostic accuracy across the board.
“Data diversity is critical to success when training deep learning models,” wrote the researchers in their paper. “Medical imaging data sets are often imbalanced as pathologic findings are generally rare, which introduces significant challenges when training deep learning models. We propose a method to generate synthetic abnormal MRI images with brain tumours by training a generative adversarial network (GAN).”
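The imbalance the researchers describe — pathological findings being far rarer than healthy scans — is often countered by resampling the rare class before training. The snippet below is an illustrative sketch of that general technique, not code from the paper; the labels and function name are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labels: pathological findings (label 1) are rare relative to healthy scans (label 0).
labels = np.array([0] * 95 + [1] * 5)

def oversample_minority(labels, rng):
    """Return indices that rebalance the data set by resampling the rare class."""
    minority = np.flatnonzero(labels == 1)
    majority = np.flatnonzero(labels == 0)
    # Draw minority indices with replacement until both classes are equal in size.
    resampled = rng.choice(minority, size=majority.size, replace=True)
    return np.concatenate([majority, resampled])

idx = oversample_minority(labels, rng)
balanced = labels[idx]
print(balanced.mean())  # 0.5: classes are now balanced
```

Oversampling duplicates the same few pathological images, which is exactly the limitation synthetic GAN-generated scans aim to remove: they add genuinely new abnormal examples instead of copies.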
Virtual brain, real-world gain
The GAN itself was trained on two publicly available datasets of brain MRI scans. One contains thousands of 3D, T1-weighted MRI images of brains showing damage caused by Alzheimer’s disease; the other contains about 200 4D MRI scans of brains with tumours.
The image-to-image translation method used is based on pix2pix, a model originally developed by researchers at UC Berkeley.
The generative model was demonstrated to produce synthetic images that could train diagnostic neural networks, with results comparable to using real subject data. The researchers were able to alter the size and position of tumours and place them into otherwise healthy brains, providing an automatable, low-cost source of diverse data.
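Altering tumour size and position works at the level of segmentation label maps: the tumour mask is moved or rescaled, and the GAN then renders a matching MRI. The mask-editing step alone can be sketched with NumPy; the function name, class labels, and 2D simplification here are illustrative assumptions, not details from the paper:

```python
import numpy as np

def place_tumour(label_map, tumour_mask, top_left, scale=1):
    """Insert a (possibly rescaled) binary tumour mask into a brain label map.

    label_map   -- 2D integer segmentation slice (0 = healthy tissue)
    tumour_mask -- 2D binary mask of the tumour shape
    top_left    -- (row, col) position for the mask's top-left corner
    scale       -- integer factor; nearest-neighbour upscaling of the mask
    """
    out = label_map.copy()
    # Nearest-neighbour rescale by repeating rows and columns.
    mask = np.repeat(np.repeat(tumour_mask, scale, axis=0), scale, axis=1)
    r, c = top_left
    h, w = mask.shape
    region = out[r:r + h, c:c + w]
    # Mark tumour voxels with class 2, leaving the surrounding tissue intact.
    region[mask.astype(bool)] = 2
    return out

healthy = np.zeros((8, 8), dtype=int)          # stand-in for a healthy brain slice
tumour = np.array([[1, 1], [1, 0]])            # small synthetic tumour shape
edited = place_tumour(healthy, tumour, top_left=(2, 3), scale=2)
print((edited == 2).sum())  # 12 tumour pixels after 2x scaling
```

In the full pipeline the edited label map would be fed to the trained generator, which synthesises an MRI consistent with the new tumour placement.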
Together, these results offer a potential solution to two of the biggest challenges facing machine learning in medical imaging: the low incidence of pathological findings, and the restrictions around sharing patient data.
Internet of Business says
Patient data privacy and training data set size, reliability, and appropriateness are some of the biggest obstacles to the training of AI diagnostic systems.
This research overcomes these hurdles and makes both the method and the resulting data open to others. By combining the shared synthetic data with their own institution-specific datasets, other organisations can now train their own deep learning models.
Some studies suggest that AI-powered diagnostics could save the NHS millions in costs. Meanwhile, medical professionals worldwide are starting to use deep learning models to help diagnose stomach cancer, eye disease, lung and breast cancer, and other illnesses.
Such applications are set to grow. With this in mind, the UK government recently released an NHS supplier code of conduct to ensure high standards of patient care and privacy.
Despite the technology’s promise, however, a recent survey of US healthcare decision-makers showed that trust issues remain when it comes to working with AI. While the majority of doctors and clinicians expect AI to be prevalent in the industry within the next five years, the same proportion believe the technology will be responsible for fatal errors.
Ultimately, though, greater implementation of AI stands to not only make processes more efficient and cost-effective but, vitally, also improve the quality of care that health services provide – as explained in our recent deep-dive into AI in the NHS.
Our Internet of Health event takes place on 25-26 September 2018 in Amsterdam, Netherlands.