A phronetic approach to healthcare

As AI systems become increasingly sophisticated and take over many tasks from humans, the idea is surfacing that AI might make doctors obsolete. However, when we consider the wisdom required in healthcare and the socio-cultural role of the doctor, we could also conclude that AI will make the doctor more important than ever. Moreover, by reformulating the role of the doctor, a whole new normative approach to health and care can be articulated.


  • Recent research showed that an AI system was better than doctors at detecting tiny lung cancers on CT scans. Tested against 6,716 cases with known diagnoses, the system was 94% accurate, beating a group of six radiologists.
  • Last year, Google introduced Duplex, a virtual assistant that can carry out verbal and administrative tasks, such as making a restaurant reservation. It worked so well that many people didn’t know they were talking to a virtual assistant (Duplex passed the Turing Test), which later provoked a backlash. Because AI systems can process vast amounts of data and detect correlations and patterns that humans cannot (e.g. how my lunch choices correlate with my music taste, and vice versa), AI systems will surpass (some) human cognitive abilities. In their book The Second Machine Age, Brynjolfsson and McAfee argue that software-driven technology will increasingly take over many of the cognitive tasks that we perform, such as administrative work, law and investing (e.g. algorithmic trading). As such, it is radically different from previous technological innovations, which primarily automated physical power.
  • In the series Star Trek, a general-purpose device called the “tricorder” was used for scanning environments, and analyzing and recording that data. Google now plans to create a real-life version of a medical tricorder, to virtually collect information about people’s bodies and diagnose diseases.
  • The Quantified Self movement ties in with the idea of the tricorder, as its members want to quantify every possible aspect of their lives and bodies in order to improve their human abilities, including their health. We have written before about how scalable (medical) IoT devices and wearables (e.g. smartwatches, earbuds) help to quantify and objectify the human body for various purposes, giving us a magical feeling of mastery over our own lives and bodies.
  • In his book Homo Deus, historian Yuval Harari writes that mankind has long been plagued by three main evils: poverty, famine and sickness. By understanding social structures as data processing systems, and given the rapid advances of AI systems and the abundance of digital data, Harari expects that autonomous smart systems could help mankind to get rid of these three evils. The belief in this “digital salvation” is what we have called “technological divination”.


AI systems are increasingly being tested in clinical contexts, and some of their results in medical diagnostics are impressive. In a broader sense, deep learning could increasingly guide us in our behavior and make our decisions in the future, something we’ve described as “technological decisionism” before. Think of Netflix or Spotify automatically playing a new episode or song, but also of call-center work with virtual assistants answering our questions, autonomous vehicles that do the steering for us, or algorithms making investment decisions. The fear exists that when AI systems are implemented in healthcare, they will render doctors obsolete and will fully automate the medical decision-making process. But before jumping to this conclusion, it is worthwhile to consider the type of rationality involved in healthcare.


In the medical context, a decision consists of three components: the subject making the decision (i.e. the doctor in consultation with the patient), the object that is acted upon (i.e. the patient that is treated), and the deliberate process of decision-making: (i) diagnosis of the symptoms, (ii) analysis of the patient’s current state and conditions, (iii) identification of possible treatments, and (iv) choice of an actual treatment. AI systems (currently) focus on one part of this decision-making process: diagnosis. However, it is also becoming clear that other intellectual moments are required in the medical decision-making process.
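The division of labour described above can be made concrete in a short sketch. This is purely illustrative (all names and functions are hypothetical): an AI system covers only step (i), while steps (ii)–(iv) remain with the doctor.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Decision:
    diagnosis: str       # (i) diagnosis of the symptoms
    patient_state: str   # (ii) the patient's current state and conditions
    options: List[str]   # (iii) possible treatments
    treatment: str       # (iv) the chosen treatment

def decide(ai_diagnose: Callable[[str], str],
           doctor_deliberate: Callable[[str], Decision],
           symptoms: str) -> Decision:
    # The AI system covers only step (i); the deliberation in
    # steps (ii)-(iv) stays with the doctor.
    diagnosis = ai_diagnose(symptoms)
    return doctor_deliberate(diagnosis)

# Toy usage with stand-in functions for the AI and the doctor:
decision = decide(
    lambda s: "suspected pneumonia",
    lambda d: Decision(d, "stable", ["antibiotics", "observation"], "antibiotics"),
    "cough and fever",
)
print(decision.treatment)
```

The point of the structure is that even a perfect diagnostic component fills in only one field of the decision; the other three still require human deliberation.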

In his Ethica Nicomachea (Book VI), Aristotle defined five intellectual virtues that constitute “perfected intelligence”: the characteristics that make a decision morally virtuous. The first is techne: productive knowledge, related to craftsmanship. The second is phronesis: practical wisdom, knowing how to live a virtuous life guided by reason (i.e. the general idea of the Good Life). The other three are theoretical: nous (intuitive insight into first principles and self-evident truths) and episteme (scientific knowledge), which are combined in sophia (theoretical wisdom into the nature and purpose of reality). For Aristotle, phronesis – practical wisdom – has the form of a practical syllogism: applying the general laws of an abstract idea of the Good Life (major premise) to concrete situations (minor premise), which yields the maxims for acting well, or the wisdom to do the right thing in various situations.

The role of AI in healthcare resembles the ideal of episteme: detecting fundamental patterns and correlations in reality. However, phronesis is also very important to doctors, for a few reasons. First, as AI systems always carry a certain bias (e.g. because they were trained on a specific dataset, or because of the general ambiguity of the concepts used in decision-making), it is the doctor who should interpret the conclusions presented by the AI system. This “hermeneutic task” of the doctor is also a necessary requirement for the patient to give informed consent. Second, it can be argued that medical decisions should explicitly comprise a moral dimension: medical treatments should be chosen in light of a general idea of the Good Life, as the consequences of these decisions reach far beyond the medical domain alone. As such, the doctor should be able to take the wishes and context of the patient into account (minor premise), which often involves moral and religious considerations, in order to provide the treatment (major premise) that suits the patient. This requires practical wisdom: reflection on the goals to be achieved in healthcare from the broader perspective of the Good Life, in order to make treatments morally acceptable. It is thus clear that medical intelligence is a multidimensional concept, involving much more than episteme: practical insight, emotional intelligence and moral sense.

This touches upon moral and even spiritual dimensions, which transcend medical practice. This is where we find the actual “caring” part of healthcare. Phenomenologically, sickness and disease are experienced as a “negative force” that happens to people, overtaking their bodily functions and taking away their freedom (e.g. a broken ankle that prevents me from playing football, an allergy that prohibits the consumption of certain products, up to the absolute negativity of death, which negates human life altogether). Doctors therefore have a “healing role” in the sense that they should help people reconcile with their disease, in order to help them live with their illness and regain autonomy and freedom in their own lives. Again, these tasks explicitly relate to ethical, spiritual and philosophical questions, bringing us to the domain of sophia: wisdom on the purpose, role and nature of human beings in this world. This is what makes medical decisions so radically different from many of the decisions we make, and thus from AI systems suggesting a new song or episode: ethical and spiritual dimensions are explicit additions to the epistemological act of decision-making.

These are tasks or moments of intelligence that AI systems cannot take over from real doctors. But what, then, might the future role of the doctor look like, given that AI systems will take over some of the cognitive (epistemic) functions of doctors? Inspiration can be drawn from chess. When Garry Kasparov, then the world’s best chess player, lost to IBM’s Deep Blue in 1997, he claimed that it was an unfair match because the AI system had access to a huge database of billions of chess moves. He suggested that human players should have access to similar AI, creating a new type of chess player: a “centaur”, a team of human plus AI. Since then, such centaurs have been the strongest chess players, beating even standalone AI chess systems. Similarly, we can imagine the rise of medical centaurs: human doctors supported by AI systems, with the final responsibility lying with the doctors, both epistemologically and ethically.
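The centaur arrangement, in which the AI proposes but the doctor retains final epistemic and ethical responsibility, can be sketched as a simple control pattern. This is a hypothetical illustration, not an actual system; all names are made up.

```python
from typing import Callable, Optional

def centaur_decide(ai_propose: Callable[[], str],
                   doctor_review: Callable[[str], Optional[str]]) -> str:
    """The AI proposes a treatment; the doctor may accept it (by returning
    None) or override it. The final decision always rests with the doctor."""
    proposal = ai_propose()
    override = doctor_review(proposal)
    return override if override is not None else proposal

# Toy usage: the doctor overrides the AI's proposal on moral grounds.
final = centaur_decide(lambda: "aggressive treatment",
                       lambda proposal: "palliative care")
print(final)
```

The design choice to make the doctor’s review mandatory, rather than an optional veto, mirrors the argument above: the human is not a fallback but the locus of responsibility.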


  • Historically, the role of the doctor has been closely associated with that of the priest and the philosopher. With no clear distinction between mind and body, or between magic and science, bodily diseases were often considered spiritual or religious affairs. As such, traditional medicine in many cultures had a holistic approach to healthcare, including praying, diets, rituals, and other habits to cure patients. When medical AI systems integrate many datasets, this holistic approach could return to healthcare. By scanning for subtle, hidden correlations (e.g. how my media consumption relates to my mental state, or how the frying of my morning egg relates to the quality of my bones), medical AI could reconnect all the domains of life that have become disaggregated in our modern lives and thinking.
  • AI systems could provide huge “leapfrog” potential for developing countries. This is especially important for countries that are ageing relatively fast while lacking a proper healthcare system, such as China or countries in Eastern Europe. Nonetheless, decision-makers should take the broader concept of intelligence and the ethico-religious aspects of healthcare into account, and be aware that medical AI systems need to be embedded in a proper institutional and human context in order to prevent reductionism in the medical decision-making process.