2024 is expected to be a huge year for AI in healthcare

It feels like you can’t go more than five minutes these days without someone mentioning AI and its potential applications, benefits and risks in virtually every area of life. OpenAI’s GPT-5 is expected to launch at some point in 2024, and rumoured features include improved reliability, personalisation and communication, along with the ability to switch instantly between AI tools based on user requests.

Healthcare, both in the UK and globally, is no exception. However, as we’ve seen repeatedly over the last couple of decades, technology moves far more quickly than legislation and regulation do, which is a real cause for concern for many.

A British Standards Institution (BSI) survey in late 2023 found that more than half of the UK public are excited about the possibilities of AI in healthcare and support AI tools being used in their medical treatment, but 61% said they wanted firm guidelines in place to regulate the use of AI in this setting.

In this article, I look at the current state of AI in healthcare, the regulation currently in place and how I believe AI fits into a new future of healthcare.

What is AI in healthcare?

AI is a term often used loosely, and sometimes incorrectly, to cover a broad range of different technologies. It’s important to separate what I would call ‘true AI’ from technology that uses algorithms or automation but isn’t actually ‘intelligent’; the short sketch after the definitions below illustrates the difference.

Automation – performs repetitive tasks based on a set of defined commands and rules, and cannot deviate from them.

Algorithm – defines the process through which a decision is made.

AI – learns from the data it is presented with and can make its own decisions and predictions based on this. 

Large Language Models (LLMs) – a type of AI algorithm that uses large data sets and machine learning to understand and summarise information, with the ability to generate and even predict new content.
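
To make this distinction concrete, here is a minimal, purely illustrative sketch in Python contrasting a fixed automated rule with a rule derived from data. The triage scenario, the temperature threshold and the readings are all hypothetical, invented for illustration only; real clinical AI is vastly more sophisticated.

    # A toy illustration of the distinction above. The triage scenario,
    # thresholds and readings are hypothetical.

    def automated_triage(temperature_c: float) -> str:
        """Automation: a fixed rule that can never deviate."""
        return "flag for review" if temperature_c >= 38.0 else "no action"

    def learned_threshold(temps: list[float], flagged: list[bool]) -> float:
        """'Learning': derive the rule from data instead of hard-coding it.
        Here we take the midpoint between the average flagged and average
        unflagged temperatures; a toy stand-in for model training."""
        flagged_avg = sum(t for t, f in zip(temps, flagged) if f) / sum(flagged)
        normal_avg = (sum(t for t, f in zip(temps, flagged) if not f)
                      / (len(flagged) - sum(flagged)))
        return (flagged_avg + normal_avg) / 2

    # Invented historical readings and clinician decisions:
    temps = [36.5, 37.0, 38.2, 39.1, 36.8, 38.6]
    flagged = [False, False, True, True, False, True]

    threshold = learned_threshold(temps, flagged)
    print(automated_triage(37.2))                        # fixed rule
    print("flag" if 37.9 >= threshold else "no action")  # data-derived rule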

In terms of healthcare, the NHS Transformation Directorate describes it as:

“Artificial Intelligence (AI) is the use of digital technology to create systems capable of performing tasks commonly thought to require human intelligence.”

The opportunities for using AI to assist with and speed up diagnostic processes and access to treatment are huge. In mid-2023, the UK government announced ringfenced funding to support the rollout of AI in the NHS.

AI is already in use and having a positive effect across healthcare, both in the UK and globally. For example, an AI-enabled smart stethoscope is being used to help diagnose heart conditions in GP practices in Wales and London, and similar devices are available in the US that parents can use at home to spot early signs of respiratory issues in children. It doesn’t end there. Tools like Navina, in the US, claim to handle many of the data analysis tasks for primary care clinicians, assisting with diagnosis, identifying risks and simplifying processes, freeing up healthcare professionals to deliver care. These are just a few examples of the type of intervention that may be possible.

The risks of AI in healthcare

The use of AI in healthcare does raise important questions about risk and what safety mechanisms are in place to protect people. This brings us to AI Standard BS30440. 

BS30440:2023, Validation framework for the use of artificial intelligence (AI) within healthcare, is a standard that aims to provide assurance of the safety, quality and performance of healthcare AI products. Healthcare organisations can adopt this British standard as a requirement for their suppliers, ensuring that any AI tools and products they use have been developed, tested and validated according to the guidelines it sets out.

In brief, BS30440 aims to ensure that all AI products used in healthcare:

  • Have demonstrable clinical benefits
  • Perform at a sufficient level
  • Can be used safely and successfully in health and care environments
  • Deliver inclusive outcomes for all patients, service users and practitioners involved.

The standard is structured around a product life cycle in five phases: inception, development, validation, deployment and, finally, monitoring. Across all phases there are 18 assessment criteria, covering areas such as stakeholder involvement, training data, clinical effectiveness, equity and bias, patient safety and routine monitoring.

The standard will be a useful tool for developers, enabling them to self-assess against the criteria throughout the development process; a sketch of what such a self-assessment might look like follows below. At some point, health trusts may require the standard to be in place for all new AI-associated applications and products.
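
To illustrate, here is a hypothetical sketch of how a developer might track self-assessment against a lifecycle-based standard like BS30440. The five phase names and the example criteria are taken from the description above; the data structure, fields and statuses are illustrative assumptions, not the standard’s actual schema.

    # A hypothetical self-assessment tracker for a lifecycle-based standard
    # such as BS30440. Phase names and example criteria come from the text
    # above; everything else is an illustrative assumption.

    from dataclasses import dataclass, field

    PHASES = ["inception", "development", "validation", "deployment", "monitoring"]

    @dataclass
    class Criterion:
        name: str
        phase: str
        evidence: list[str] = field(default_factory=list)  # e.g. links to test reports
        met: bool = False

    checklist = [
        Criterion("stakeholder involvement", "inception"),
        Criterion("training data", "development"),
        Criterion("clinical effectiveness", "validation"),
        Criterion("equity and bias", "validation"),
        Criterion("patient safety", "deployment"),
        Criterion("routine monitoring", "monitoring"),
    ]

    def outstanding(items: list[Criterion]) -> list[str]:
        """Criteria not yet evidenced, listed in life-cycle phase order."""
        return [f"{c.phase}: {c.name}"
                for p in PHASES for c in items
                if c.phase == p and not c.met]

    print("\n".join(outstanding(checklist)))

A simple structure like this would let a development team see at a glance which criteria still need evidence as the product moves through each phase.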

For healthcare organisations, while the potential benefits and opportunities that AI brings to the sector are significant, navigating the implementation of AI in various settings can seem a daunting prospect. 

As an experienced medical devices development professional, I take a keen interest in the benefits that AI can bring. Get in touch to find out more.

