A guide to Artificial Intelligence for physicians
Artificial Intelligence (AI) is set to automate administrative tasks and support decision-making for physicians, freeing up more of their time for the human interaction that is so vital in medicine. The algorithms behind AI systems will ultimately support, rather than replace, the physician's role.
The global market value of AI was estimated at $3.9 billion in 2019 and is expected to reach $28 billion by 2025, attracting the interest of many financial investors keen to support that growth. But there is so much hype around AI that ambiguous information is being bandied around: businesses with only a passing awareness of AI can mislabel their solutions and advertise them misleadingly to investors. "Artificial intelligence" is used to attract audiences and overstate a company's capabilities, making it harder for physicians to distinguish fact from sensationalism.
There are already guides for physicians, but many either treat the subject superficially or, at the other extreme, are far too in-depth, and some cover only a single area of study. There is an opportunity for a guide that is clear, concise and relevant; a supporting guide that improves physicians' knowledge of AI is important if the field is to be used well. This gap was addressed in a recent paper: npj Digital Medicine published "A Short Guide for Medical Professionals in the Era of Artificial Intelligence", led by Bertalan Meskó, M.D.
Simplifying the meaning of AI
In its simplest form, AI is intelligence displayed by machines: machines that imitate human intellectual capabilities by applying algorithms and sets of rules in order to solve problems and learn.
Nick Bostrom, a philosopher at the University of Oxford, distinguishes three levels of AI, expanding our understanding of it:
- Artificial Narrow Intelligence (ANI)
Given a large amount of data, ANI can establish patterns: it can understand voice or text, solve clustering challenges and sort images into categories. Its IQ is zero, but it can run algorithms at very high speed and carry out clearly defined tasks.
- Artificial General Intelligence (AGI)
This is a more human-like step that Artificial Intelligence hasn't quite reached yet: the ability to remember, to reason, to solve problems and to act rationally.
- Artificial SuperIntelligence (ASI)
Still very much at a theoretical stage, this level would exceed the intellectual capacity of all of humanity, which many see as a risk. Many businesses have hesitations about it because humans would not be able to fully understand the technology or its reasoning. These are the extremes we see in films.
Scientists and physicians are debating what the ideal level of AI would be. Based on these levels, the prediction falls somewhere between ANI and AGI, without going to the extreme of full AGI.
There are many levels within AI still to be investigated: a new line of study in which the limitations of ANI are being identified alongside ever more widespread uses. Below we discuss the technology behind AI and how it can be put into practice.
AI as the student, humans as the trainers: the techniques within AI
The comparison of AI to a child is often made, even though the astounding skill it appears to offer is constantly developing. Children learn from adults even when not directly guided; when they are being taught, they act as students, taking in the information given.
Developers write machine learning (ML) algorithms, which means the system does not need to be explicitly programmed for any particular task. As long as there is plenty of data, the machine can establish relationships and patterns at very high speed. Machine learning can be divided into more categories, but three main areas are outlined below, along with deep learning (DL), which is especially relevant for physicians.
1. Supervised learning
This works exactly as a child learns in a classroom: the data is labelled clearly enough that the machine knows exactly what is required. In a medical setting, consider two groups of patients with differing sets of medical history. Group A has family histories, medical records and confirmed diagnoses. Group B has family histories and medical records, but no diagnosis yet. Supervised learning reviews and analyses the labelled data from Group A in order to suggest diagnoses for Group B. This is currently the most widely used form of machine learning.
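The Group A / Group B idea can be sketched in a few lines. This is a minimal, hypothetical illustration (the feature names and values are invented, and a simple nearest-neighbour rule stands in for a real supervised model):

```python
# Supervised-learning sketch with hypothetical patient data.
# Group A: labelled records, (features -> known diagnosis).
# Features are illustrative: (family_history_score, biomarker_level).
group_a = [
    ((1.0, 8.2), "diabetic"),
    ((0.9, 7.8), "diabetic"),
    ((0.1, 4.1), "healthy"),
    ((0.2, 4.5), "healthy"),
]

# Group B: the same kind of features, but no diagnosis yet.
group_b = [(0.95, 8.0), (0.15, 4.3)]

def predict(record, labelled):
    """1-nearest-neighbour: label a new record with the diagnosis
    of the most similar labelled record."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(labelled, key=lambda item: dist(item[0], record))
    return label

for record in group_b:
    print(record, "->", predict(record, group_a))
```

The key point is that the algorithm only ever sees what Group A's labels tell it; the quality of those labels limits what it can learn.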
2. Unsupervised learning
This way is less led and more learned individually, without the classroom format; ultimately the "child" chooses the outcome. Large amounts of data are provided, but the machine forms the patterns itself, without support, and may find relationships humans would never have thought of. Because its findings are not shaped by human assumptions, they can be taken into consideration for patient stratification or drug trialling in the future.
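A tiny clustering sketch shows what "no labels" means in practice. The lab values below are invented, and a bare-bones one-dimensional k-means stands in for a real unsupervised method:

```python
# Unsupervised-learning sketch: k-means clustering on hypothetical
# lab values. No labels are given; the algorithm discovers the
# grouping itself.
values = [4.1, 4.4, 4.2, 8.0, 8.3, 7.9]  # e.g. an invented blood marker

def kmeans_1d(data, k=2, steps=10):
    centres = data[:k]  # naive initialisation: first k points
    clusters = []
    for _ in range(steps):
        # assign each value to its nearest centre
        clusters = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda i: abs(x - centres[i]))
            clusters[nearest].append(x)
        # move each centre to the mean of its cluster
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres, clusters

centres, clusters = kmeans_1d(values)
print(centres)  # two cluster centres emerge from unlabelled data
```

Nobody told the algorithm there were "low" and "high" groups; it found that structure on its own, which is exactly the appeal described above.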
3. Reinforcement learning
This is the next step after unsupervised learning, allowing the "child" to make its own choices about how to achieve a given outcome. Unlike unsupervised learning, however, it again involves input from a teacher. The machine tries a large number of actions and observes the results, while AI scientists nudge it towards the best course of action they can identify. In healthcare this is not yet widely used, as its huge numbers of trial actions cannot be tested on patients.
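The trial-and-feedback loop can be sketched with a simple bandit-style example. Everything here is illustrative: the actions and reward values are invented, and the "teacher" is reduced to a noisy numeric reward signal:

```python
import random

# Reinforcement-learning sketch: an agent tries actions, receives
# a reward (the "teacher's" feedback), and learns which action is
# best. Actions and rewards are purely illustrative.
random.seed(0)
true_rewards = {"option_a": 0.2, "option_b": 0.8, "option_c": 0.4}
estimates = {a: 0.0 for a in true_rewards}
counts = {a: 0 for a in true_rewards}

for step in range(500):
    # epsilon-greedy: mostly exploit the best-known action,
    # sometimes explore a random one
    if random.random() < 0.1:
        action = random.choice(list(true_rewards))
    else:
        action = max(estimates, key=estimates.get)
    reward = true_rewards[action] + random.gauss(0, 0.05)  # noisy feedback
    counts[action] += 1
    # running average of observed rewards for this action
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the action the agent settles on
```

The 500 "trial actions" in the loop are exactly why this approach is hard to use directly on patients: the learning comes from repeated, sometimes poor, choices.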
4. Deep learning
This area of learning is different and far more advanced than the others discussed: it is based on artificial neural networks (ANNs), loosely modelled on the human brain. An ANN can have many layers, and the more layers it has, the more detailed and complex the tasks it can carry out. When grouping records by diagnosis, a DL algorithm can place "Type 1 Diabetes" and the abbreviation "T1D" in the same group, because it identifies the similarity from the data provided, with no human intervention required. The other ML approaches may need human input to establish the relationship between "Type 1 Diabetes" and "T1D".
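The layered structure of an ANN can be shown with a minimal forward pass. The weights below are fixed and invented purely for illustration; a real network learns its weights from data, and real networks have many more layers and neurons:

```python
import math

# Sketch of a feed-forward artificial neural network (ANN): each
# layer applies weights to its inputs, then a non-linearity, and
# stacking layers lets the network model more complex relationships.
def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: a weighted sum of inputs per neuron, then sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A 2-input -> 3-neuron hidden layer -> 1-output network.
hidden_w = [[0.5, -0.2], [0.1, 0.9], [-0.7, 0.4]]
hidden_b = [0.0, 0.1, -0.1]
out_w = [[1.2, -0.8, 0.5]]
out_b = [0.2]

x = [0.6, 0.3]                    # hypothetical input features
h = layer(x, hidden_w, hidden_b)  # hidden layer output
y = layer(h, out_w, out_b)        # final output, between 0 and 1
print(y[0])
```

Each extra layer re-combines the previous layer's outputs, which is what lets deep networks discover relationships like "Type 1 Diabetes" ≈ "T1D" without being told.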
Hysteria vs. fact
The above gives a broad, brief outline of AI, so that we can better understand what we read about it. More and more articles discussing the technology and its capabilities are becoming available.
Publicists will, of course, talk about AI in various ways, overstating its capabilities and muddying the water around it. Keep the following points in mind when learning more about the technology surrounding AI.
1. Data, data, data
Make sure to review the "Methods" section of any study you read: it shows what kind of data was used to train the algorithm. A large amount of data is required to train an algorithm well, and partnerships between healthcare establishments and clinicians make huge datasets available. However, tricks such as rotating images may be used to inflate the size of a dataset, and these of course do not improve the quality of the data.
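The rotation trick mentioned above is easy to see concretely. Here a 3×3 grid of numbers stands in for a medical image:

```python
# Sketch of the "rotation" augmentation trick: a rotated copy
# enlarges the training set, but carries no new clinical
# information. A 3x3 grid stands in for an image.
image = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]

def rotate_90(img):
    """Rotate an image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

augmented = [image, rotate_90(image)]  # dataset doubled, quality unchanged
```

This is why a paper reporting "10,000 training images" built from 2,500 originals plus rotations is weaker evidence than it sounds.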
2. Clinical problems to be resolved with clinical solutions
The insights identified from the data need to be implemented carefully and be easy to use. AI can review data very quickly, but how its outputs are used and adopted within the industry varies. Both the implementation and the results need to be clear, and they must adhere to clinical protocols; if not, the AI may operate dysfunctionally alongside professionals in a human environment. AI requires a broad view of real-world data, not narrowly predefined data, if it is to be used successfully in real life.
3. Understand your AI
It is important to understand which ML or DL method is used in the AI you are reviewing; the method should be stated clearly and described in detail. There are pretenders out there merely mimicking AI, so be skeptical when working with AI developers.
As previously mentioned, when reading about "artificial intelligence", check whether the method is actually discussed; this will help you judge whether the article's claims are genuine.
4. Keep up-to-date
There are many articles and guides you can read on the subject of AI. A recent study by Dr. Bertalan Meskó and Marton Gorog may be an encouraging place to start. Keep up to date with the latest AI news, from managing the Covid-19 pandemic to advances in prosthetics and unusual developments in medicine.