Generative AI Vendor Opportunities and Strategies in EMR & Digital Health

Publication Date: 05/07/2024

Cranfield, UK, 5th July 2024, Written by Vlad Kozynchenko –

Generative AI will have a profound effect on healthcare, but which use cases are real and which ones are not, remains a question asked by healthcare technology vendors. This article will explore the opportunities and challenges of generative AI in EMR and Digital Health.

Signify Research recently published its take on vendors’ opportunities and strategies in generative AI in EMR and Digital Health. Within the report, we have divided applications of gen AI into 40 high-level, overarching use cases and set out our expectations of how each will evolve. To learn more about the report, you can find the brochure here. Alternatively, feel free to reach out to me. I would be happy to walk you through the findings.

Recap

To those who may have been living under a rock for the past year and a half (lucky you, it’s been wild!), let’s start with a quick recap: generative AI is a form of Artificial Intelligence that can create (or generate, hence its name) new content based on vast amounts of data it has learned from in the past.

One key difference between traditional AI and generative AI is the learning method used to train the model. Generative AI models analyse patterns and arrangements in large, unlabelled data sets and then use this information to create new, convincing outputs. Traditional AI models, on the other hand, require less data but need it to be labelled. This distinction is crucial in healthcare, where vast amounts of unstructured data exist.

The graphic below depicts four broad ways of enhancing the performance of generative AI models. One making waves in the industry is Retrieval-Augmented Generation (RAG), for reasons we will discuss later.

However, we will not discuss the intricacies of model development, as if we did, this post would be ten times as long. Instead, we will look at the key factors digital health vendors need to keep in mind when developing tools that leverage generative AI.

Data Sources

Signify Research looks at the digital health ecosystem through a lens consisting of five broad, interconnected application areas depicted in the graphic below:

– Data sources (structured and unstructured)

– Data aggregator platforms (e.g., Health Information Exchange vendors)

– Longitudinal Health Records

– Health Insights / Care Management Tools

– Revenue cycle management Tools

Generative AI has the potential to impact each of these areas, from improving data integration to enhancing predictive analytics in care management. This Insight will focus on the first application area: data sources. In healthcare IT, we often divide data into structured and unstructured formats. Generative AI has the potential to revolutionise how we handle both types. However, given the prominence of unstructured healthcare data, which has historically been difficult to leverage, generative AI can breathe fresh life into it.

For instance, it could automatically summarise unstructured clinical notes into structured data points, making information more accessible and analysable. It could also facilitate better search functionality, moving beyond Google’s semantic search to provide more nuanced and context-aware results, similar to what Perplexity is doing. While this offers exciting possibilities for healthcare information retrieval, it also presents challenges due to the probabilistic nature of AI-generated responses.

Large Language Models (LLMs), the underlying technology behind generative AI, work as giant probability machines, assigning likelihood to the next word based on context and positioning. This probabilistic nature leads to stochasticity, where the same query can produce different outputs each time. Now, in healthcare, this variability can be a major issue. Imagine two doctors asking broadly the same question and receiving two completely different answers. Not ideal, right? As we stand at this technological crossroads, we must ask ourselves: How do we balance the incredible potential of generative AI with the need for unwavering accuracy in healthcare? Can we create systems that are both innovative and reliably consistent? In short, the answer is no (unless we develop a new architecture that doesn’t face the same pitfalls as LLMs); however, if you would like to learn about different approaches that are being employed to address these challenges, I encourage you to read on…
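To make the stochasticity point concrete, here is a minimal, self-contained sketch of temperature-based next-token sampling. The token scores and the medical prompt are invented for illustration; real LLMs operate over vocabularies of tens of thousands of tokens, but the mechanism is the same: greedy decoding is deterministic, while sampling at a non-zero temperature can return different answers to identical queries.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample a token from a softmax over scores; higher temperature
    flattens the distribution and increases output variability."""
    scaled = [score / temperature for score in logits.values()]
    max_l = max(scaled)
    exps = [math.exp(s - max_l) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(list(logits.keys()), weights=probs, k=1)[0]

# Toy next-token scores after a prompt like "The patient should take ..."
logits = {"aspirin": 2.1, "ibuprofen": 1.9, "rest": 0.5}

# Greedy decoding (argmax) is deterministic; sampling at temperature 1.0
# means two identical queries can diverge.
greedy = max(logits, key=logits.get)
samples = {sample_next_token(logits, temperature=1.0) for _ in range(50)}
print(greedy)  # always "aspirin"
print(samples)  # typically more than one distinct token
```

In practice, vendors reduce (but cannot fully eliminate) this variability by lowering the temperature or pinning other decoding parameters.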

Grounding AI in truth

What do I mean by that? For a task like summarising patient records, you can ground the output using the RAG approach mentioned earlier, providing a reference for where the information was taken from; the workflow is depicted in the graphic below. However, certain outputs, like predicting the next best action for a patient, may rest on a probabilistic measure, and these outputs would not be interpretable due to the black-box nature of gen AI. One way to give users more confidence in such outputs is to attach credibility scores, effectively noting how likely an output is to be true. While still a new addition, we see more stakeholders giving this thought, especially for outputs that are not grounded.
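One simple way to derive a credibility score (a sketch, not a description of any vendor's method) is to turn the model's own per-token log-probabilities into a geometric-mean probability for the whole answer. The log-probability values below are hypothetical; real systems would typically combine such a proxy with calibration against labelled outcomes.

```python
import math

def credibility_score(token_logprobs):
    """Geometric-mean probability of the generated tokens.
    Low scores flag outputs to treat with caution."""
    if not token_logprobs:
        return 0.0
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# Hypothetical per-token log-probabilities from two model answers
confident_answer = [-0.05, -0.10, -0.02, -0.08]
uncertain_answer = [-1.20, -2.30, -0.90, -1.70]

print(round(credibility_score(confident_answer), 2))  # 0.94
print(round(credibility_score(uncertain_answer), 2))  # 0.22
```

A score like this could be surfaced next to an ungrounded suggestion so a clinician knows how much weight to give it.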

RAG grounds AI outputs in verifiable data. In healthcare, this could mean linking AI-generated summaries of patient records directly to the source information. For instance, when summarising a patient’s medical history, a RAG-enabled system could provide clickable references to specific lab results, imaging reports, or clinical notes, ensuring transparency and accuracy. This feature is key in low-risk tasks, such as clinical documentation. Ambient listening solutions also benefit from this type of workflow, as you can trace parts of the summarised conversation to the exact point in the audio recording. All leading AI scribing vendors are doing that already, so it is no longer a competitive advantage; rather, it is something everyone else ought to match. In addition, one interesting proposition to reduce variation in outputs would be to limit practitioners to specific pre-determined prompts that can be used to query the system.
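The retrieve-then-cite pattern described above can be sketched in a few lines. This is a deliberately naive illustration: the keyword-overlap retriever stands in for the vector-database lookup a production RAG pipeline would use, and the record snippets and source IDs are invented.

```python
def retrieve(query, documents, k=1):
    """Rank documents by naive keyword overlap with the query and return
    the top-k (a toy stand-in for embedding-based retrieval)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_answer(query, documents):
    """Attach a citation to the retrieved passage so the clinician
    can click through to the source record."""
    top = retrieve(query, documents, k=1)[0]
    return f'{top["text"]} [source: {top["source"]}]'

# Hypothetical patient-record snippets with source identifiers
records = [
    {"source": "lab_result_2024_03_12", "text": "HbA1c 7.2 percent on 12 March"},
    {"source": "clinical_note_2024_02_01", "text": "Patient reports improved sleep"},
]

print(grounded_answer("latest HbA1c lab result", records))
# HbA1c 7.2 percent on 12 March [source: lab_result_2024_03_12]
```

The key design point is that the citation travels with the generated text, so every claim in the summary can be traced back to a specific record.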

InterSystems had its Global Summit in June, and it showcased a few generative AI demos, all grounded in truth through RAG or other methods that can reference their output, as well as through pre-determined prompts (screenshot above). For instance, once you upload the audio transcript of a conversation, a prompt is already pre-populated by the software.

Yet, you may notice a disclaimer at the bottom of the first image that says the AI Assistant may provide inaccurate information. This goes back to my point about stochasticity; asking the same question doesn’t guarantee the same answer, and this is the biggest pitfall of generative AI in healthcare, which needs to be much more rigorous than in other industries.

Risk-Based Classification

Hence, the second thought is to classify applications based on risk, following the EU risk-based approach. Low-risk applications like summarising patient records could be deployed more easily, while high-risk applications such as treatment planning require approval from authorities.

When using generative AI for high-risk applications like creating personalised treatment plans, we enter the realm of medical devices. The International Medical Device Regulators Forum (IMDRF) defines a Medical Device as: “Any (…) software, material or other similar or related article, intended by the manufacturer to be used, alone or in combination, for human beings, for one or more of the specific medical purpose(s) of:

– diagnosis, prevention, monitoring, treatment or alleviation of disease,

– diagnosis, monitoring, treatment, alleviation of or compensation for an injury …”

This means digital health vendors aiming to use generative AI for high-risk applications will likely need FDA approval or equivalent. This process requires significant time, human, and financial resources, which may deter some vendors from pursuing this path.

Synapse Medical in France is one example of a vendor active in this area. Its Software as a Medical Device (SaMD) technology offers medication management based on clinical guidelines (grounding answers in truth), and it recently added generative AI features to its already regulated SaMD solution, suggesting that gen AI as a feature of a SaMD solution is easier to certify than a standalone gen AI SaMD application.

Future Prospects

Critics argue that the unpredictable nature of generative AI makes it too risky for healthcare applications. While these concerns are valid, integrating generative AI into healthcare IT is not just an opportunity—it’s a certainty that will reshape the industry. It is essential to implement safeguards, such as a human-in-the-loop approach, to mitigate risks while still harnessing the technology’s benefits.

Whether you’re a healthcare provider, a tech innovator, or a curious observer, understanding these developments is crucial. Dive deeper into this exciting field by exploring our comprehensive market intelligence service, trial our Premium Insights trend analysis service, or reach out for a personalised overview of how generative AI could transform your corner of the healthcare world.

Related Research

Generative AI Market Intelligence Service

This Market Intelligence Service delivers data, insights, and thorough analysis of the worldwide market potential for vendors leveraging Generative AI in healthtech. The Service encompasses Medical/Clinical IT, EMR & Digital Health, Pharma & Life Sciences, and Big Tech vendors, exploring their opportunities and strategies in the realm of generative AI.

About The Author

Vlad joined Signify Research in 2023 as a Senior Market Analyst in the Digital Health team. He brings several years of experience in the consulting industry, having undertaken strategy, planning, and due diligence assignments for governments, operators, and service providers. Vlad holds an MSc degree with distinction in Business with Consulting from the University of Warwick.

About the AI in Healthcare Team

Signify Research’s AI in Healthcare team delivers in-depth market intelligence and insights across a breadth of healthcare technology sectors. Our areas of coverage include medical imaging analysis, clinical IT systems, pharmaceutical and life sciences applications, as well as electronic medical records and broader digital health solutions. Our reports provide a data-centric and global outlook of each market with granular country-level insights. Our research process blends primary data collected from in-depth interviews with healthcare professionals and technology vendors, to provide a balanced and objective view of the market.

About Signify Research

Signify Research provides healthtech market intelligence powered by data that you can trust. We blend insights collected from in-depth interviews with technology vendors and healthcare professionals with sales data reported to us by leading vendors to provide a complete and balanced view of market trends. Our coverage areas are Medical Imaging, Clinical Care, Digital Health, Diagnostics and Life Sciences, and Healthcare IT.

Clients worldwide rely on direct access to our expert Analysts for their opinions on the latest market trends and developments. Our market analysis reports and subscriptions provide data-driven insights which business leaders use to guide strategic decisions. We also offer custom research services for clients who need information that can’t be obtained from our off-the-shelf research products or who require market intelligence tailored to their specific needs.

More Information

To find out more:
E: enquiries@signifyresearch.net
T: +44 (0) 1234 986111
www.signifyresearch.net