Artificial Intelligence in medicine: applications, implications, and limitations


Medical practices of the future may soon be upon us: AI has developed and advanced rapidly in recent years, and misdiagnosis and the treatment of disease symptoms rather than their cause may soon be things of the past. Consider for a moment the storage you would need to free up to fit a full 3D image of an organ on your computer, or the years of blood pressure measurements you have kept. The masses of data generated and stored in digital medical records, from medical images to routine tests, allow AI to develop more applications and to power a high-performing, data-driven medical era. These AI applications are already shaping the way researchers and doctors approach medical problem-solving.

Some algorithms can already compete with clinicians at many tasks, sometimes even outperforming them. So why has AI yet to be fully established in daily medical practice? Because a variety of regulatory challenges need to be addressed first, even though these algorithms can impact medicine in meaningful ways.

What makes an algorithm intelligent?

AI algorithms learn how to do their jobs over time, much as a doctor does. Just as a doctor studies for years at medical school, taking practice exams and assignments, receiving grades, and learning from mistakes, AI has to do the same. The tasks AI typically performs are those that would otherwise require human intelligence, such as speech and pattern recognition, decision-making, and image analysis. A human is still needed to tell the system exactly what it should be looking for in the images the algorithm sees. In brief, AI algorithms are trained to automate basic tasks, and they can often perform them far better than any human.

Developing an effective AI algorithm begins with the data the computer system is fed. The data needs to be well structured, meaning that each data point carries an annotation or label that the algorithm can recognize. Once the algorithm has seen enough data sets along with their annotations, its performance is assessed to check its accuracy, just like exams for students. These “exams” usually involve feeding in test data to which the user already knows the answer, allowing the user to check the algorithm’s ability to work out the right answer. Depending on the test results, the algorithm can be amended, fed more data, or rolled out.
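As a purely illustrative sketch (not the workflow of any specific medical system), the example below shows this train-and-test cycle for a simple supervised classifier. The data set, feature values, and labels are synthetic stand-ins for the kind of clinician-annotated records a real project would use.

```python
# Illustrative sketch of the train-and-test cycle described above.
# The data here is synthetic; a real medical data set would contain
# expert-labelled records (e.g. images or measurements annotated by clinicians).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical labelled data: two measurements per patient and a 0/1 label.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hold out part of the data as the "exam" the algorithm has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression()
model.fit(X_train, y_train)          # the algorithm "studies" the labelled examples

predictions = model.predict(X_test)  # the "exam": the right answers are already known
print("Test accuracy:", accuracy_score(y_test, predictions))
```

If the accuracy on this held-out data is too low, the algorithm is amended or fed more data before it is rolled out, exactly as described above.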

Many kinds of algorithms can learn from data points. Most AI applications in healthcare read some kind of data, whether image-based (like MRI scans or images of tissue samples) or numerical (like blood pressure or heart rate). The algorithms read and learn from the data, then output a classification or a probability. For example, a result might label a tissue sample as cancerous or non-cancerous, or give the probability that a patient has an arterial clot based on blood pressure and heart rate data. In healthcare applications, the algorithm’s performance is compared with a physician’s to determine whether its diagnoses match and whether its value and accuracy are acceptable for the clinic.
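To make the distinction between a classification and a probability concrete, the hypothetical sketch below trains a small classifier on synthetic data and returns both a hard label and a probability for a new case. The feature values (standing in for scaled blood pressure and heart rate) and the labels are assumptions made up for illustration and carry no clinical meaning.

```python
# Standalone illustration: a classifier that returns both a label and a probability.
# Features and labels are synthetic; a real system would be trained on
# clinician-annotated patient records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))                         # hypothetical [blood pressure, heart rate] features
y = (0.8 * X[:, 0] + 0.6 * X[:, 1] > 0).astype(int)   # hypothetical "clot" / "no clot" labels

model = LogisticRegression().fit(X, y)

new_patient = [[0.4, -0.1]]                            # made-up measurements for one new case
label = model.predict(new_patient)[0]                  # hard classification: 0 or 1
probability = model.predict_proba(new_patient)[0, 1]   # probability of the positive class
print(f"Predicted class: {label}, estimated probability: {probability:.2f}")
```

In practice it is this kind of output, a label or a probability, that would be compared against a physician’s judgement to assess the algorithm’s clinical value.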

Recent applications of AI in medicine

Recent advances in pairing the huge volumes of data generated in medical systems with modern computing power are helping to make healthcare ready for AI applications. Two recent applications are described below. Both are accurate, clinically relevant algorithms that can benefit doctors and patients by making diagnosis simpler.

The first is an example of an algorithm that exceeds doctors’ ability at image classification tasks. In the fall of 2018, researchers at Seoul National University Hospital and College of Medicine created an AI algorithm named DLAD (Deep Learning based Automatic Detection). The algorithm analyzes chest radiographs and helps to detect abnormal cell growth, such as potential cancers. Its performance was compared with the detection abilities of multiple physicians on the same images, and it outperformed 17 of the 18 doctors.

The second algorithm was also created in the fall of 2018, by researchers at Google AI Healthcare. They developed a learning algorithm called LYNA (Lymph Node Assistant), which identifies metastatic breast cancer tumors in lymph node biopsies. It isn’t the first algorithm to attempt this type of analysis, but, interestingly, it can do something the human eye cannot: pinpoint suspicious regions in the biopsy samples it is given. LYNA was tested on two data sets, and in both it correctly identified samples as cancerous or noncancerous 99% of the time. Moreover, when doctors used LYNA alongside their normal analysis of tissue samples, the average slide review time was halved.

Other image-based algorithms have recently shown similar abilities to increase a physician’s accuracy. In the short term, physicians can use algorithms to double-check their diagnoses and examine patients faster without sacrificing accuracy. In the long term, government-approved algorithms may be able to function independently, freeing doctors to focus on cases that computers cannot solve. DLAD and LYNA are prime examples of algorithms that can aid physicians’ classification of diseased and healthy samples by highlighting the image features that need to be studied more closely. These examples illustrate the prospective strengths of algorithms in healthcare, so why are they being held back from clinical use?

Regulatory implications and algorithm limitations going forward

So far, healthcare algorithms have shown excellent potential benefits to both patients and doctors, but regulating them is a challenging task. The U.S. Food and Drug Administration (FDA) has approved some individual algorithms, but there are not yet any universal approval guidelines. Adding to this, the people creating the algorithms aren’t always the doctors who treat patients: computationalists may need to learn more about healthcare, while clinicians need to learn what tasks the algorithms can and cannot do. AI can take on basic clinical tasks, but it’s difficult to see how something like brain surgery could be automated, where a doctor may have to change his or her approach the moment they look inside the patient. In many ways, the possibilities of AI in healthcare currently outweigh its actual role in patient care. Clearer guidelines from the FDA that specify the requirements an algorithm must meet could lead to more algorithms being deployed in clinical settings.

Moreover, the FDA has strict acceptance criteria for clinical trials, requiring great transparency about the scientific methods used. Many algorithms rely on intricate mathematics, and explaining how the input data leads to the final result can be like unpacking a black box. Clarifying the inner workings of an algorithm to the FDA would be challenging, and, understandably, researchers and companies may not be willing to share their methods openly with the public: the risk of others taking their ideas, strengthening their own products, and costing them money is too high. Patent law may change from its current state, in which an algorithm is considered part of a physical machine. In the short term, however, increased transparency may be necessary so that patient data is not improperly classified or mishandled; it would also make it easier to determine whether an algorithm can be used accurately in a clinical setting.

Beyond the obstacles to FDA approval, AI algorithms could also run into challenges in gaining the approval and trust of patients. If even those approving the algorithms for clinical use don’t clearly understand how they work, patients may not allow them to help with their healthcare needs. If patients were forced to choose whether they would rather be misdiagnosed by a human or by an algorithm, which would they pick when an algorithm outperforms a physician?

It all comes down to having confidence in the algorithm’s decision-making. The decision an algorithm reaches is a function of the data it is given, so if misleading data is fed in, the algorithm may produce a misleading result. The people developing the algorithm may not even realize that the information they are entering is misleading until it is too late, and the resulting errors could amount to medical malpractice. This kind of mistake can be avoided when programmers and clinicians are properly informed about the algorithm’s methods and data. By building relationships between the computationalists creating the algorithms and the clinicians who know the specifics of the data, malpractice caused by an algorithm becomes far less likely.

Clinicians need to be properly aware of algorithms’ limitations and of the clinical data that programmers are using to build them. Companies may be required to give up some of the secrecy around an algorithm’s functionality so that a wider audience can assess its methods and highlight any sources of error or omission that could negatively affect patient care. There is still a long way to go before algorithms are approved for independent operation in clinics. However, by defining what an algorithm must demonstrate to be considered accurate for clinical use, and by addressing the challenges around decision-making errors, these algorithms could overcome the hurdles they face and eventually improve the efficiency and accuracy of clinical practice across a wide range of tasks.
