
TDWI Upside - Where Data Means Business

Q&A: Healthcare, BI, and AI

A new survey from John Snow Labs sheds light on how healthcare providers are facing BI and AI issues. David Talby, John Snow Labs’ CTO, breaks down the results.

Results from John Snow Labs’ second annual "AI in Healthcare Survey" reveal the trends, challenges, use cases, and technologies driving AI adoption in healthcare and life sciences. The survey, from the healthcare AI and NLP provider and developer of the Spark NLP library, highlights a few interesting trends, including rising interest in open-source software and increasing investments in technologies such as data annotation.

For Further Reading:

What Healthcare IT Leaders Need to Know about Digital Transformation

Executive Q&A: Transforming Data Management at a Healthcare Organization

Text-Based AI in Healthcare: The Challenges and Possibilities 

We asked David Talby, CTO at John Snow Labs, about the survey results and where he sees AI headed.

Upside: Your survey found that most respondents rely on their own data to validate AI models rather than on third-party or software-vendor metrics. What’s driving that move? Is there simply a lack of third-party sources?

David Talby: We believe that people are doing the responsible thing by using their own data to validate models. There is a lot of diversity in healthcare. Each hospital practices medicine differently and sees different patients based on location and specialization. As a result, the fact that a model was pre-trained (for example) on ten years of data from the Mayo Clinic doesn’t mean that it will be usable if you’re running a children’s hospital in Boston, a veterans’ hospital in Miami, or a rural hospital in Idaho. Even if medical data were freely shared, using your own data to validate models -- and to tune them to your specific population -- is the right thing to do.

Another benefit of this approach is privacy. Healthcare is highly regulated and rightfully puts more emphasis on privacy than other industries. Many early use cases for confidential computing technologies, which focus on securing data in use, can be found in healthcare. It’s another reason why a majority of respondents choose to rely on their own data to validate models. That said, some healthcare-specific AI solutions (such as those offered by John Snow Labs) don’t require any data sharing at all. This is the way it should be, and it opens more opportunities for companies that want to work with AI vendors while knowing their data remains safe and private.

You also found that the most mature organizations rely even more heavily on evaluating and tuning models in-house. Doesn’t that require expertise that most enterprises don’t have?

It does require a level of AI expertise, and there is still a very real talent shortage. However, as users shift from data scientists to domain experts and low- and no-code solutions become more widely available, more people will be able to carry out these functions in-house. This will be a huge step in further democratizing AI in healthcare and beyond.

Take building a website, for example. What once was a major software engineering effort is mostly a graphic design project today. This is how no-code AI trickles down to users without a data science or programming title, and ultimately how the technology gets refined for specific business use cases. The shift will be gradual, but we’ll see a lot more of this over the next few years. 

You found that just over a third of participants aren’t even considering AI as a business solution. Why is that?


Applying machine learning, data science, and other AI-related techniques in healthcare is a new concept -- less than a decade old. The fact that two-thirds of practitioners have made AI a priority within a few short years -- in an industry dealing with COVID-19, an opioid epidemic, a mental health crisis, a physician and nursing shortage, an affordability crisis, EHR deployment, precision medicine, cybersecurity attacks, and ongoing regulatory and policy changes -- shows, more than anything, how significant this industry considers the potential of AI.

Your survey found that healthcare and life sciences technical leaders say the top technologies they plan to have in place by year's end include data integration (46 percent). What are the biggest challenges when integrating healthcare data?

Bringing together structured and unstructured data is one of the biggest challenges when integrating healthcare data. In addition to data integration, the survey also found natural language processing (NLP) is one of the foundational AI technologies respondents planned to have in place by the end of 2022. NLP enables users to bridge the gap between structured data (claims, electronic medical records) and unstructured data (including clinical notes, pathology and radiology reports, lab results, research papers, clinical trial documents, and social media posts). Serving as the connective tissue, NLP can accurately understand information in unstructured formats and systems to create a clearer, more accurate picture. This enables data scientists or domain experts (in this case clinicians) to make better decisions.
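As a toy illustration of the bridging role Talby describes -- not John Snow Labs' actual pipeline, which uses trained clinical NLP models -- the sketch below pulls a few structured fields out of a free-text clinical note with simple pattern matching. All field names and patterns here are invented for illustration:

```python
import re

def extract_structured_fields(note: str) -> dict:
    """Toy extraction of structured fields from an unstructured
    clinical note using regular expressions. Real clinical NLP
    (e.g., trained named-entity recognition models) handles far
    more linguistic variation than fixed patterns can."""
    fields = {}

    # Blood pressure written as "BP: 142/88" or "BP 142/88"
    bp = re.search(r"\bBP[:\s]+(\d{2,3})/(\d{2,3})\b", note)
    if bp:
        fields["systolic_bp"] = int(bp.group(1))
        fields["diastolic_bp"] = int(bp.group(2))

    # A medication mention of the form "<drug> <number> mg"
    dose = re.search(r"\b([A-Za-z]+)\s+(\d+)\s*mg\b", note)
    if dose:
        fields["medication"] = dose.group(1).lower()
        fields["dose_mg"] = int(dose.group(2))

    return fields

note = "Patient seen today. BP: 142/88. Continue lisinopril 10 mg daily."
fields = extract_structured_fields(note)
print(fields)
```

The structured record this produces could then be joined with claims or EMR data -- the "connective tissue" role described above, performed here in the crudest possible way.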

I also thought it interesting that 44 percent of respondents plan to have BI in place by the end of the year. I would have thought BI was already in common use, but the survey indicates otherwise. Would you care to comment?

That is a surprising finding. One possible explanation is that half of AI practitioners have not yet reached the mature stage of integrating the models and systems they’ve built into the clinical and operational workflows where BI systems currently operate.

What survey results were as you expected?

Many results were expected and haven’t changed much from last year, but growing areas such as data annotation (especially the increase in use by domain experts) and the need for healthcare-specific models point to more sophisticated uses of healthcare AI, which is encouraging.

As part of the survey, we also identified over 40 startups that have raised significant funding specifically to build AI solutions to improve healthcare outcomes and reduce overall costs. Between aging populations and skyrocketing healthcare costs, it's good -- but not at all surprising -- that venture capital dollars are going to AI applications for the healthcare, pharmaceutical, and medical sectors.

We mentioned that BI is in use by under half of respondents. What other survey results surprised you?

The increased importance of data annotation tools, as well as the in-house data validation and model tuning that these tools enable, is a welcome indicator of higher sophistication and maturity by users and seems to be happening faster than we expected. The reduced popularity of cloud services, given their known issues with data privacy and the ability to tune models, is another indicator of industry maturity, and also somewhat surprising given the marketing and sales investments by cloud providers.

Where do you see AI headed for healthcare and life sciences professionals in the next year or two?

As the survey indicates, there will be more individuals outside of the traditional role of data scientist operating AI as part of their daily responsibilities. This will contribute to wider adoption, more sophisticated solutions, and more use cases.

With greater adoption and usage comes a darker side, though. Healthcare organizations have increasingly become the target of cyberattacks, and with AI proliferation and a growing user base, there are more entry points for bad actors. In fact, a recent study found that over half of internet-connected devices used in hospitals have a vulnerability that could put patient safety, confidential data, or the usability of a device at risk. Greater AI proficiency in healthcare is a good thing, but we need to address the real risks and ensure that systems consistently operate in a safe and ethical manner.

[Editor’s note: David Talby, Ph.D., MBA, is CTO at John Snow Labs, helping fast-growing companies apply AI, big data, and data science techniques to solve real-world problems in healthcare, life science, and related fields. You can reach him via email, Twitter, or LinkedIn.]
