Applying Deep Learning to Ultrasound – Is the Technology Ready to be Embedded?

Written by Simon Harris

Much like at last year’s RSNA conference, deep learning was one of the key themes at ECR 2017. Several speakers at the scientific sessions presented promising research results for the application of deep learning in specific use cases. In one of the professional challenges sessions, Dr. Angel Alberich-Bayarri from QUIBIM suggested that convolutional neural networks (CNNs) may already be old news, with generative adversarial nets (GANs), a new architecture for unsupervised neural networks, showing promise for medical imaging applications. GANs may be a solution to one of the major challenges in developing deep learning algorithms – the need for large training data sets.

On the exhibition floor, there were fewer companies showing machine learning solutions than at RSNA (at least 20 at RSNA versus fewer than 10 at ECR), and in our conversations with vendors it was evident that expectations were more measured, with less marketing hype. Several of the better-known deep learning start-ups were notable by their absence, including Enlitic and Zebra Medical, as was IBM Watson Health.

Samsung chose ECR to make a big push for its S-Detect™ deep learning feature, which is currently available as an option on its RS80A premium ultrasound system. S-Detect™ for Breast makes recommendations about whether a breast abnormality is benign or cancerous. It is commercially available in parts of Europe, the Middle East and Korea and is pending FDA approval in the US. S-Detect™ for Thyroid uses deep learning algorithms to detect and classify suspicious thyroid lesions semi-automatically based on Thyroid Image Reporting and Data System (TI-RADS) scores. With both applications, S-Detect™ produces a report showing the characteristics of the lesion, including composition, echogenicity, orientation, shape, etc., along with the risk of malignancy, e.g. “high suspicion”.

ContextVision, the leading independent vendor of ultrasound image enhancement software, showcased its latest research in artificial intelligence at ECR. Its prototype VEPiO (Virtual Expert Personal Image Optimizer), which is built on the company’s Virtual Expert artificial intelligence platform, can automatically optimize ultrasound images for individual patients. VEPiO aims to improve diagnostic accuracy and reduce scan times, particularly for more challenging patients, by making automated setting adjustments to obtain the optimal image quality. The company is also exploring the use of deep learning to optimize image quality, for organ-specific segmentation and for decision-support functionalities.

Ultrasound OEMs must decide whether deep learning technology is ready to be embedded into their systems or whether to take a “wait and see” approach. Although many research papers have found that deep learning can produce good results in specific medical imaging applications, often at or near the performance of experienced radiologists, these findings are usually based on relatively small datasets and/or small reader studies. It remains to be seen whether deep learning will perform as expected in routine clinical use. Although Samsung has taken an early lead and is the first of the major ultrasound vendors to embed deep learning, it carefully positions S-Detect™ for Breast as a decision support tool for “the beginner or non-breast radiologist”.

OEMs must also decide whether to establish an in-house deep learning capability or to partner with a specialist. Deep learning engineers are a scarce and expensive resource, and most mid-tier ultrasound OEMs will struggle to attract and retain talent. Instead, we expect they will partner with independent software vendors, such as ContextVision. For the major OEMs, we expect to see a combination of build, buy and partner strategies. Most of the major modality OEMs have, to varying extents, established in-house R&D efforts for machine learning, and with over 50 start-ups developing artificial intelligence solutions for medical imaging, there is certainly no shortage of options for acquisitions and partnerships.

Another limiting factor is the additional processing power, typically GPUs, required for embedded deep learning algorithms. Ultrasound is a fiercely contested and price-sensitive market, and OEMs will be reluctant to add hardware cost. Initially, we expect deep learning to be an optional feature on premium systems only, as with the Samsung example, but as is often the case in ultrasound, features that start out on premium systems typically cascade to less expensive high-end and mid-range systems over time.

With deep learning technology progressing at a rapid pace, and ultrasound OEMs constantly on the look-out for the next “big thing” to differentiate their products, it seems inevitable that deep learning will increasingly be embedded in ultrasound systems, both as workflow tools to help with productivity and as decision support tools to improve clinical outcomes. It is no longer a question of whether it will happen, but when, and the OEMs that wait too long will get left behind in the AI race.


Related Reports

“Machine Learning in Medical Imaging – 2017 Edition” provides a data-centric and global outlook on the current and projected uptake of machine learning in medical imaging. The report blends primary data collected from in-depth interviews with healthcare professionals and technology vendors to provide a balanced and objective view of the market. If you would like further information, please contact