A patient speaks with a doctor in an examination room. In front of the doctor is a computer, and as she asks the patient questions she clicks boxes on the screen. The software not only has access to the patient’s test results and medical data, but is connected to a huge database with information about millions of others. When the questionnaire is complete, the program analyzes all of this information to provide a drug prescription, which the doctor recommends to the patient.
This is a simplified version of a twenty-first century medical examination, but it contains elements of healthcare’s future in the age of artificial intelligence. AI applications already exist or are in development for radiology, cardiology, surgery, and other medical fields, and AI is also used in biomedical research for drug discovery and personalized medicine. As its capabilities improve, AI is bound to become increasingly common in the clinic, even changing the traditional relationship between doctor and patient.
As a research assistant in the Digital Healthcare Ethics Laboratory at the Catholic University of Croatia in Zagreb, Luka Poslon has been investigating challenges related to the digitalization of healthcare. From April to June 2024, he is also a visiting researcher at the High-Performance Computing Center Stuttgart (HLRS) in its Department of Philosophy of Computational Sciences. Working alongside HLRS investigators, he is exploring two key questions: In what healthcare situations is it appropriate to use AI? And how can the explainability of AI algorithms be improved so that physicians and patients can understand, evaluate, and use their outputs in an ethical way?
In certain situations, AI has already demonstrated that it can make healthcare recommendations that are more precise than those of conventional methods. Medical professionals also anticipate that AI could help with time-consuming administrative tasks, giving them more flexibility to focus on patients.
At the same time, AI must still earn the trust of doctors and patients. This is because even for the programmers who write AI algorithms, their “black box” nature typically makes it impossible to know exactly how they arrive at an output. Although often accurate, AI tools are not perfect, and false positives or false negatives, for example, can make their users less inclined to believe the information they provide. Ensuring that doctors can trust the accuracy of AI tools in medicine is particularly crucial when questions of patient quality of life or even survival are at stake.
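For readers who want a concrete sense of what “not perfect” means here, the following minimal sketch (not taken from the research described in this article, and using entirely synthetic labels and predictions) shows the kind of error analysis that underpins trust in a diagnostic model: a tool can look accurate overall while still missing patients in ways that matter clinically.

```python
# Illustrative sketch with synthetic data (assumption, not from the article):
# how false negative and false positive rates are separated out from overall accuracy.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0])  # 1 = disease present
y_pred = np.array([1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0])  # hypothetical model output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / len(y_true)
false_negative_rate = fn / (fn + tp)   # sick patients the model misses
false_positive_rate = fp / (fp + tn)   # healthy patients flagged as sick

print(f"Accuracy:            {accuracy:.2f}")
print(f"False negative rate: {false_negative_rate:.2f}")
print(f"False positive rate: {false_positive_rate:.2f}")
```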
Because of such concerns, investigators in the HLRS Philosophy Department have been conducting research to develop a rigorous understanding of what is needed to ensure that machine learning algorithms are trustworthy and reliable. One key factor in medicine — as well as in many other fields — is the need to improve algorithms’ explainability. In this case, explainable AI means that an algorithm should not only provide a diagnosis or prescription for care, but also relevant information about how it arrived at its result. This would give physicians a more complete basis for making recommendations to patients. Building explainability into AI algorithms is still a developing approach, however, and not something that is currently done in a consistent and transparent manner.
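As a rough illustration of what such explainable output could look like in practice, the sketch below pairs a prediction with the per-feature contributions that pushed it toward or away from a positive result. The model, feature names, and data are illustrative assumptions, not the methods studied at HLRS; a simple linear model is used here only because its contributions are easy to read off.

```python
# Minimal sketch (assumed example, not the article's method): a prediction
# reported together with which inputs drove it, rather than the score alone.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

feature_names = ["age", "systolic_bp", "cholesterol", "glucose"]  # hypothetical inputs
X = rng.normal(size=(500, 4))                                     # synthetic, standardized data
y = (X @ np.array([0.8, 1.2, 0.5, 1.5]) + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

patient = rng.normal(size=(1, 4))              # one new (synthetic) patient
prob = model.predict_proba(patient)[0, 1]

# Per-feature contribution to the decision score for this patient:
# coefficient * feature value (valid for a linear model on standardized inputs).
contributions = model.coef_[0] * patient[0]

print(f"Predicted risk: {prob:.2f}")
for name, contrib in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:12s} {contrib:+.2f}")
```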
Making AI explainable will not only be of interest to doctors and patients, but also to the developers of AI tools. As technologies in any field improve, they become more reliable, and over time the need for trust diminishes because users simply assume that they work. At the start, however, whether a tool is adopted or not depends on its trustworthiness. Considering the complexity and diversity of human physiology, “it is important that tests, sensors, and algorithms consistently make the right prediction at the right time for the right patient,” Poslon explains.
As AI tools become available, physicians and other clinical staff will need to understand their limits, as well as how to integrate them most effectively into a medical practice. Poslon points out that in addition to the opacity of many AI algorithms, there is also a risk that physicians could become overly reliant on them. “It’s like parking your car nowadays, where it’s easy to drive it based only on sensors,” he explains. “Sometimes sensors fail to detect an obstacle, though, and if you are not paying attention you could hit it.”
Such an analogy emphasizes the fact that doctors’ ability to make judgments based on their knowledge and experience will always be essential. “Physicians should be using AI as a tool rather than as a primary mechanism for decision making,” Poslon argues. “It should help them to be better in their performance, but ultimately doctors should still be making determinations about individual patients.”
The European Union’s recently passed AI Act, the first comprehensive legal framework for AI worldwide, takes a risk-based approach in which greater regulation is required for applications of AI that have the potential to cause harm, such as in medicine. As the field develops, the work of researchers like Poslon and his colleagues in the HLRS Department of Philosophy will be necessary to provide a clearer understanding of the risks and opportunities involved. A better grasp of the relevant ethical issues will inform the development of policies regulating how AI is used in healthcare and how data is handled, and will help ensure that the use of AI respects patient privacy.
Looking at the big picture, Poslon believes that the arrival of AI marks a unique event in the history of civilization. “With AI there are no borders,” he says. “For the first time people are facing a global technology that creates social and ethical challenges. This means that it needs to be addressed on a planetary scale. Right now, we do not comprehend the limits of the technology, which means that it is important to understand the ethics.”
Although still early in his career, Poslon feels that his visit to HLRS is helping him to move his thinking on such issues forward. “Some research institutes stress the philosophical or ethical aspects of artificial intelligence, while others are strong only on the technical side,” he says. “HLRS includes both sides. It is important to have a responsible combination of both.”
— Christopher Williams