Artificial Intelligence and Its Role in Medicine and Healthcare
What is Artificial Intelligence and Machine Learning?
In 1941, at a secret code-breaking facility in the English countryside, mathematicians and code breakers solved a problem that would go on to significantly affect the outcome of World War II. This problem, of course, was the Enigma cipher device, used by German military forces to encrypt messages about top-secret operations during the war. Among the mathematicians and code breakers working at Bletchley Park was Alan Turing. On July 9, 1941, the Enigma code was broken, allowing Allied forces to intercept messages about upcoming attacks.
Whilst the wartime work of Turing and his fellow codebreakers remained classified until the mid-1970s, Turing continued his research and went on to pose a question built on his work at Bletchley Park: "Can machines think?" This question formed the basis of his 1950 paper "Computing Machinery and Intelligence", in which he established the fundamental goal and vision of artificial intelligence. So, what exactly is artificial intelligence?
Artificial intelligence (AI) is a branch of computer science concerned with teaching machines to perform tasks that typically require human intelligence. Essentially, scientists are attempting to teach machines how to learn, so that they can find patterns in data that may not be apparent to the human eye. The machine makes decisions in real time when handed problems, helping people to deal with issues effectively as they arise. Machine learning is one way of doing this: computers learn over time either by interacting with their environment or by learning from watching a process.
Source: Learning to Walk Through Reinforcement Learning
To make this concrete, we can describe machine learning in terms of a game. Let's say our game is to teach a robot to walk, and the goal is to do so without it falling over. There are two ways we can teach our robot to walk. The first is for the robot to interact with its environment, or, as we may also call it, trial and error. We will instruct the robot to make random movements and, depending on how far our robot gets without falling over, it will receive feedback on whether to make more or fewer of these types of moves. If the robot falls over straight away, it will receive negative feedback and make fewer of these movements, perhaps trying something else. If the robot manages to walk successfully for a few seconds, it will receive positive feedback and make more of these types of movements, in an attempt to prolong the time it can walk without falling over.
Over time our robot will try a vast number of different movements and optimise its behaviour based on the positive feedback it received. This is a useful method of learning; however, it can take a lot of time to perfect.
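The trial-and-error loop described above can be sketched in a few lines of code. The sketch below is a deliberately simplified illustration (a so-called multi-armed bandit), not a real walking controller: the movement names and their hidden success rates are invented for the example.

```python
import random

# Hypothetical movements and their hidden chances of keeping the robot upright.
TRUE_SUCCESS_RATE = {"shuffle": 0.2, "stride": 0.7, "hop": 0.4}

def learn_to_walk(episodes=5000, epsilon=0.1, seed=0):
    """Estimate each movement's value by trial and error (a simple bandit)."""
    rng = random.Random(seed)
    value = {move: 0.0 for move in TRUE_SUCCESS_RATE}  # running value estimates
    count = {move: 0 for move in TRUE_SUCCESS_RATE}
    for _ in range(episodes):
        # Mostly repeat the best-looking movement, occasionally explore.
        if rng.random() < epsilon:
            move = rng.choice(list(TRUE_SUCCESS_RATE))
        else:
            move = max(value, key=value.get)
        # Feedback: +1 if the robot stays upright, -1 if it falls over.
        reward = 1.0 if rng.random() < TRUE_SUCCESS_RATE[move] else -1.0
        count[move] += 1
        value[move] += (reward - value[move]) / count[move]  # update the average
    return value

values = learn_to_walk()
best_move = max(values, key=values.get)
```

Positive feedback pushes a movement's estimated value up, so the robot repeats it more often; negative feedback pushes it down, so the robot tries something else.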
The second way our robot could learn to walk is through learning by watching a process. In this scenario, instead of making random movements and receiving positive or negative feedback, the robot will be given a data set on how robots have learned to walk in the past and analyse this data to find the best movements to make before emulating this behaviour. This way the robot doesn’t have to spend time trying random movements that may or may not work but can pick out movements that have been successful in the past. This method may cut down the amount of time the robot spends trying out different movements, but it requires a large amount of previous data in order to accurately find these patterns in movements that have been successful.
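Learning from past data can be sketched just as simply: given a log of earlier robots' attempts, the learner picks the movement with the best historical success rate. The data below is invented purely for illustration.

```python
from collections import defaultdict

# Hypothetical log of earlier robots' attempts: (movement, walked_successfully).
past_attempts = [
    ("shuffle", False), ("shuffle", True), ("shuffle", False),
    ("stride", True), ("stride", True), ("stride", False), ("stride", True),
    ("hop", True), ("hop", False),
]

def best_movement(history):
    """Return the movement with the highest success rate in the recorded data."""
    successes = defaultdict(int)
    totals = defaultdict(int)
    for move, succeeded in history:
        totals[move] += 1
        successes[move] += succeeded  # True counts as 1, False as 0
    return max(totals, key=lambda move: successes[move] / totals[move])
```

Here `best_movement(past_attempts)` picks "stride" (3 successes out of 4 attempts) without any trial and error, but the answer is only as good as the data: with too few recorded attempts, the estimated success rates would be unreliable.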
Now that we have an idea of how machine learning works, we can discuss where it can be applied in real world situations. One swiftly growing application of machine learning is in the field of healthcare.
The Rise of AI in Healthcare
Every day across the world millions of pieces of information are created within the field of healthcare. From diagnostic data such as blood tests, to results of clinical trials and data from images including X-rays, MRI and CT scans, there is a wealth of information being gathered which can be used to better understand diseases and to develop more effective treatments for patients.
Artificial intelligence has been present in the field of healthcare from as early as the 1970s. During this time a system called MYCIN was developed to identify bacteria that cause severe infections, such as bacteraemia and meningitis, and to recommend antibiotics, with the dosage adjusted for the patient's body weight. Programmed with a base of knowledge about types of bacterial infections and their symptoms, MYCIN worked by asking the physician a long series of simple yes/no or textual questions. At the end of the process, it provided a list of possible culprit bacteria ranked from high to low by probability, its confidence in each diagnosis, and a recommended course of antibiotic treatment. Unfortunately, due to the state of technology in hospitals at the time, MYCIN was never used in clinical practice. It did, however, demonstrate the ability of artificial intelligence and technology to aid in the diagnosis and treatment of patients.
Over the next two decades interest and funding in artificial intelligence and machine learning was significantly reduced, largely due to limitations and the expense of the technology at the time. Interest was renewed in the late 1990s however, specifically in medicine and healthcare, which set the stage for a new era of artificial intelligence research and development.
Source: What’s The Next Frontier For Healthcare?
With this new momentum, alongside improved computer hardware and software, digitised medicine became more readily available, and the applications of artificial intelligence in medicine and healthcare started to grow rapidly. In 2007, IBM created an open-domain question-answering system named Watson, which competed against human participants and won first place on the US-based game show Jeopardy! in 2011. In contrast to traditional systems that used forward reasoning (following rules from data to conclusions), backward reasoning (following rules from conclusions to data), or hand-crafted if-then rules, this technology, called DeepQA, used natural language processing, another subset of artificial intelligence, together with various search techniques to analyse unstructured content and generate probable answers. In 2017, IBM Watson was used to successfully identify new RNA-binding proteins that were altered in a condition called amyotrophic lateral sclerosis.
Technologies similar to those behind Apple's virtual assistant, Siri, and Amazon's virtual assistant, Alexa, were also used in 2015 to develop a chatbot called Pharmabot, which assisted in medication education for paediatric patients and their parents. Development has accelerated further during the coronavirus pandemic: the US alone is expected to invest more than $2 billion in AI-related healthcare research over the next 5 years, more than 4 times the amount spent in 2019.
What Can AI Detect/Treat?
In the last section we saw some of the areas where AI can be used in medicine and healthcare, but just how many areas of medicine and healthcare can be improved with the use of AI and machine learning?
To date AI technologies are used in some form in many areas of medicine and healthcare including dermatology, radiology, screening, psychiatry, primary care, disease diagnosis, and electronic health records.
Some examples of the applications of AI and machine learning in these areas are:
AI-assisted robotic surgery
The use of robots in surgery has had numerous positive outcomes, as recent studies show. Robot-assisted surgery is considered minimally invasive, meaning that patients' incisions are smaller and healing time after surgery is reduced compared with conventional surgery. Using artificial intelligence, surgical robots can draw on data from past operations to inform new surgical techniques. They can also analyse data from pre-op medical records to guide a surgeon's instrument during surgery, which can lead to a 21% reduction in a patient's hospital stay. One study of 379 orthopaedic patients found that AI-assisted robotic procedures resulted in five times fewer complications compared with surgeons operating alone.
Image analysis
Currently, image analysis is very time-consuming for human providers, but an MIT-led research team developed a machine-learning algorithm that can analyse 3D scans up to 1,000 times faster than what is possible today. This near real-time assessment can provide critical input for surgeons who are operating. AI image analysis could also support remote areas that don't have easy access to healthcare providers, and even make telemedicine more effective, as patients can use their camera phones to send in pictures of rashes, cuts or bruises to determine what care is necessary.
Virtual nursing assistants
Most applications of virtual nursing assistants today allow for more regular communication between patients and care providers between office visits, helping to prevent hospital readmissions or unnecessary hospital visits. From interacting with patients to directing them to the most effective care setting, virtual nursing assistants could save the healthcare industry £20 billion annually. And because virtual nurses are available 24/7, they can answer questions and monitor patients around the clock.
Clinical judgement and diagnosis
AI and machine learning have been used alongside the opinions of medical professionals to diagnose many conditions and diseases. A Stanford University study tested an AI algorithm for detecting skin cancers against dermatologists, and it performed at the same level as the humans. A Danish AI software company tested its deep-learning program while human dispatchers took emergency calls: the algorithm analysed what a caller said, their tone of voice and background noise, and detected cardiac arrests with a 93% success rate, compared with 73% for humans. TREWS (targeted real-time early warning system) is another AI system, one that uses digitised health records to analyse subtle symptoms in patients with sepsis compared to those without. Let's take an in-depth look at TREWS to see how AI and machine learning are being used to save lives.
What is Sepsis?
Sepsis is a life-threatening emergency that occurs when the body has an extreme reaction to a bacterial or viral infection and is responsible for more deaths per year globally than bowel, breast and pancreatic cancer combined. The infection that causes sepsis can start anywhere in the body, but instead of fighting the infection, the body’s immune system starts to attack itself. This happens when the body releases immune chemicals into the blood to combat the infection. Those chemicals trigger widespread inflammation, which leads to blood clots and leaky blood vessels. As a result, blood flow is impaired, and that deprives organs of nutrients and oxygen, leading to organ damage. This is characterised by symptoms such as difficulty breathing, indicating a problem with the lungs, low or no urine output, indicating a problem with the kidneys, abnormal liver tests, and changes in mental status. Almost all patients with severe sepsis require treatment in an intensive care unit (ICU).
Septic shock is the most severe form of sepsis and is diagnosed when blood pressure levels drop to dangerously low levels. Septic shock has as much as a 50% mortality rate, and for every hour treatment of sepsis is delayed, the mortality rate increases by 7-8%. Detecting and diagnosing sepsis early is therefore instrumental in treating patients successfully.
Patients who are diagnosed with sepsis receive the correct treatments, but often receive them too late. This is because sepsis is difficult to detect: its symptoms are shared with many other conditions. With the vast amounts of patient data available on sepsis, this type of problem is a prime candidate for AI and machine learning solutions.
The Role of AI in Detecting Sepsis
The goal of using AI and machine learning to detect sepsis is to catch it earlier than healthcare professionals are currently able to. Since machine learning uses large, messy data sets to find patterns and enable intelligent decision-making, it is possible that it can find patterns in patient data that aren't immediately obvious to healthcare professionals.
The problem with this type of approach is that detecting symptoms of sepsis is not enough. If the system were to alert a member of staff each time any symptom of sepsis was present, it would generate many false alarms and undermine its credibility. The system therefore needs to evaluate every signal in the context of every other signal. For example, creatinine is a waste product filtered out of the blood by the kidneys and excreted in urine. Sepsis reduces the kidneys' ability to filter blood, so higher levels of creatinine are an indicator of sepsis. However, high creatinine levels are also a symptom of chronic kidney disease and diabetes. The system must therefore decide whether creatinine levels are elevated due to sepsis, or whether they may be elevated due to chronic kidney disease or diabetes.
Researchers at the Johns Hopkins Malone Center for Engineering in Healthcare, based at Johns Hopkins University in Baltimore, USA, have developed a system that does just this, called TREWS (targeted real-time early warning system). TREWS also analyses patient data to identify patients who are more at risk of developing sepsis from an infection.
The uncertainty introduced by symptoms that represent multiple conditions can be handled effectively through Bayesian techniques, a branch of machine learning. Bayesian machine learning allows researchers to encode in models their prior beliefs about what those models should look like and how they should behave. Then, as additional information comes in, they can update those beliefs. Bayesian algorithms can adjust to new information, but at the same time are able to operate in domains where data is patchy and sometimes inaccurate.
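As a rough illustration of the Bayesian updating described above, the sketch below applies Bayes' rule to the earlier creatinine example. The prior and likelihood numbers here are invented for the example and are in no way TREWS's actual parameters.

```python
def posterior(prior, likelihood):
    """Apply Bayes' rule to one observation over competing explanations.

    prior:      P(condition) for each candidate condition
    likelihood: P(elevated creatinine | condition) for each condition
    Returns     P(condition | elevated creatinine) for each condition.
    """
    joint = {c: prior[c] * likelihood[c] for c in prior}
    evidence = sum(joint.values())  # overall P(elevated creatinine)
    return {c: joint[c] / evidence for c in joint}

# Invented numbers: how likely each condition is before seeing the blood test,
# and how often each condition produces elevated creatinine.
prior = {"sepsis": 0.05, "kidney disease": 0.10, "other": 0.85}
likelihood = {"sepsis": 0.80, "kidney disease": 0.90, "other": 0.05}

post = posterior(prior, likelihood)
```

Seeing elevated creatinine raises the probability of sepsis well above its prior, but the model also keeps chronic kidney disease in play as a competing explanation; each further signal (fever, blood pressure, and so on) would update the beliefs again in the same way.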
TREWS handles this information by assigning a certain uncertainty or error bar to each data point that serves as a gauge of the signal’s trustworthiness. That uncertainty can be incorporated into the future modelling analyses, ensuring that it’s not lost along the way. TREWS uses Bayesian inference methods to propagate the uncertainties of each medical data point throughout the analysis. It automatically weights more accurate or trustworthy variables more heavily in the output, and it reduces the weight given to less reliable measures, just as our robot did when learning to walk!
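One standard way to weight trustworthy signals more heavily is inverse-variance (precision) weighting, sketched below. This is only an illustration of the idea, not TREWS's actual implementation, and the readings and error bars are hypothetical.

```python
def combine(readings):
    """Combine (value, error_bar) readings into one precision-weighted estimate.

    Each reading carries an error bar (a standard deviation); noisier readings
    get proportionally smaller weights (weight = 1 / variance).
    """
    weights = [1.0 / sigma**2 for _, sigma in readings]
    total = sum(weights)
    estimate = sum(w * value for w, (value, _) in zip(weights, readings)) / total
    error_bar = (1.0 / total) ** 0.5  # combined estimate is tighter than either input
    return estimate, error_bar

# Two hypothetical readings of the same quantity: a precise lab value and a
# noisier bedside reading. The combined estimate sits close to the lab value.
estimate, error_bar = combine([(100.0, 2.0), (110.0, 10.0)])
```

The noisy bedside reading nudges the estimate only slightly away from the precise lab value, and the combined error bar is smaller than either input's, which is exactly the behaviour of down-weighting less reliable measures.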
The question you most likely want to ask at this point is: does it work? Well, in 2015, the team behind TREWS first showed that a computer algorithm they developed could sift through patients' records and predict septic shock in 85% of cases, usually more than 24 hours before onset. Two-thirds of the time, the system predicted sepsis before it inflicted any damage. With the added benefit of being able to run 24/7, TREWS has also been shown to detect sepsis on average 12 hours earlier than healthcare professionals. At present TREWS is being tested in a clinical setting at Johns Hopkins Hospital and will hopefully be deployed to further hospitals as a tool for the early diagnosis of sepsis.
It is clear that AI and machine learning can provide healthcare professionals and patients with a way of detecting and diagnosing life-threatening conditions early on, and can even be the difference between life and death for some patients. Armed with this information we can start to ask: What is the future of AI in healthcare? Where else could machine learning be applied to offer the same benefits to patient health that TREWS has?
The Future of AI in Healthcare
In 2018, UK Prime Minister Theresa May announced that an AI revolution would help the NHS identify people in the early stages of cancer, with the aim of preventing thousands of cancer-related deaths by 2033. The algorithms will examine medical records, habits and genetic information pooled from health charities and the NHS. The NHS Long Term Plan, published in 2019, sets out the intention to digitise health records. This wealth of data could prove extremely beneficial to systems designed to detect patterns, just as TREWS has done, and could lead to the early diagnosis of thousands of conditions.
In 2019 NHSX, a joint unit of NHS England and the Department of Health and Social Care, was founded with the aim of digitising services, connecting health and social care systems through technology and transforming the way patients’ care is delivered at home, in the community and in hospital.
The UK government has already committed to a £250 million investment in AI technology, which includes setting up a National Artificial Intelligence Laboratory to help develop new solutions for the NHS. Alongside this, there are now more healthcare-related AI start-ups in the UK than ever before, all offering promising technologies that will ultimately provide greater care to patients.
Source: https://www.nhsx.nhs.uk/
Artificial intelligence and machine learning have a limitless number of applications in medicine and healthcare and have opened a door to a new way of approaching patient care. With so many applications to choose from, the question we now ask is not “Will AI be applied in healthcare?” but “Where will AI be applied in healthcare next?”
References
Anyoha, R. (2017, August 28). The History of Artificial Intelligence. Harvard University. https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
Brown, S. (2021, April 21). Machine Learning, Explained. MIT Management Sloan School. https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained
The Royal Society. (2018, June 1). What is Machine Learning? https://royalsociety.org/topics-policy/projects/machine-learning/videos-and-background-information/
Ashley, S. (2017, October 11). Using Artificial Intelligence to Spot Hospitals’ Silent Killer. PBS. https://www.pbs.org/wgbh/nova/article/ai-sepsis-detection/
Davenport, T. and Kalakota, R. (2019, June 6). The Potential for Artificial Intelligence in Healthcare. Future Healthcare Journal. 6(2): 94-98. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6616181/
Quest, D., Upjohn, D., Pool, E., Menaker, R., Hernandez, S. J. and Poole Jr., K. (2020, November 9). Demystifying AI in Healthcare: Historical Perspectives and Current Considerations. American Association for Physician Leadership. https://www.physicianleaders.org/news/demystifying-ai-in-healthcare-historical-perspectives-and-current-considerations
Van Melle, W. (1978, May 1). MYCIN: A Knowledge-Based Consultation Program for Infectious Disease Diagnosis. International Journal of Man-Machine Studies. 10(3): 313-322. https://www.sciencedirect.com/science/article/abs/pii/S0020737378800492
Wilson, T. (2016, August 5). No Longer Science Fiction, AI and Robotics are Transforming Healthcare. PWC. https://www.pwc.com/gx/en/industries/healthcare/publications/ai-robotics-new-health/transforming-healthcare.html
Centers for Disease Control and Prevention. (2021, January 27). What is Sepsis? https://www.cdc.gov/sepsis/what-is-sepsis.html
The UK Sepsis Trust. (2021, January 19). About Sepsis. https://sepsistrust.org/about/about-sepsis/
Artemia. Artificial Intelligence Trends in Modern Healthcare. Retrieved May 2021 from https://artemia.com/blog_post/ai-trends-in-modern-healthcare/
GOV.UK. (2021, March 26). Future of AI in Health and Social Care. https://www.digitalmarketplace.service.gov.uk/digital-outcomes-and-specialists/opportunities/14459