An Intel survey has found that a majority of US healthcare decision-makers expect artificial intelligence (AI) to be prevalent in the industry within the next five years.
However, the same proportion believe the technology will be responsible for fatal errors. Mistrust of AI systems and fear of mistakes are tempering clinicians’ optimism about the technology’s likely impact, the research suggests.
Intel surveyed 200 US healthcare leaders in April to determine their attitudes towards AI, and any barriers to its adoption in the sector. The research was carried out in conjunction with Convergys Analytics.
Fifty-four percent of respondents expect the use of AI in healthcare to become widespread over the next five years, with a further 32 percent predicting a five-to-ten-year timeframe. In total, that’s 86 percent of decision-makers predicting its mass adoption within a decade.
Thirty-seven percent of participants already use AI, though mainly to a limited extent. The study found that, among those professionals, 77 percent use the technology in clinical applications, 41 percent in operational use cases, and 26 percent in financial processes.
Participants had few doubts about the potential benefits of AI in healthcare, said Intel. For example, 91 percent believe it will provide predictive analytics for early interventions, 88 percent that the technology will raise the quality of care, and 83 percent that it will improve the accuracy of medical diagnoses.
Trust issues abound
However, a worrying finding in the survey was that many healthcare professionals believe that fatal mistakes will be made by AI systems.
The study found that 54 percent agree with the statement “AI will be responsible for a fatal error”, 53 percent with the statement “AI will be poorly implemented or won’t work properly”, and just under half (49 percent) believe that it will fail to meet user expectations.
Thirty-six percent of participants see patients’ lack of trust in AI as a barrier to increased adoption, while 30 percent believe that similar mistrust among clinicians is a significant hurdle.
So what can be done to build greater trust in the technology (something the UK government has also identified as a challenge in its AI policy review)? The Intel research identifies four keys to overcoming users’ fear and scepticism:
- Addressing the ‘black box’ perception of AI
- Leaning into areas where clinicians are ready for change
- Highlighting the benefits for all involved
- Providing input into the regulatory process
Jennifer Esposito, worldwide general manager of Health and Life Sciences at Intel, said:
“At the end of the day, we are all consumers of healthcare, and we should feel confident that advances in technology can ensure we receive high-quality, affordable care. Together, we can ensure patients and providers realise the benefits of AI in healthcare today, building trust and understanding that will help us unlock incredible advances in the future.”
Internet of Business says
While a majority of senior decision-makers in US healthcare believe that AI will become widespread in the next few years, the trust issue is clearly a major barrier to overcome.
With their scientific backgrounds, clinicians may be easier to convince in the medium term than a public that is regularly exposed to hysterical media coverage about AI and robotics – something else the UK government references in its recent policy statement.
That said, real concerns about patient data privacy and monopolisation abound, and doctors are keen to ensure that AI and robotics aren’t introduced merely for percentage gains in efficiency, but only where they advance patient care.
Meanwhile, AI’s potential to provide earlier and more accurate diagnoses, reduce recovery times, and free up busy healthcare professionals offers major benefits.
In short, AI in the healthcare industry should always be about the patient’s benefit.