Reflecting on responsibility in AI with Daniel Dennett

One of the most influential contemporary philosophers is Daniel Dennett. Building on Darwin’s theory of evolution, he has developed a new notion of free will, responsibility and consciousness. These perspectives can help us navigate the debate on responsibility in AI, especially since some AI designs are inspired by the workings of the human brain. The questions and concerns we have regarding AI and responsibility show great similarities with those regarding humans, and Dennett offers us tools to think them through.


  • Responsibility is traditionally associated with free will, for it seems evident that only when someone is able to choose between A and B is it reasonable to hold him accountable for his choice. Free will furthermore implies that we are capable of making decisions that are not the necessary result of any prior cause (e.g. coercion or compulsion), for if that were the case, the decision would not be a matter of choice after all.
  • Causal determinism, the idea that every event is the necessary result of a concurrence of preceding events, gained credibility from scientific discoveries in, for example, biology (e.g. Darwin’s theory of evolution), physics (e.g. Newton’s laws of motion) and neuroscience (e.g. Eric Kandel’s work on the neurological workings of memory). Though first applied mainly to events in the outer world (e.g. the movements of the stars or the way sunlight causes plants to grow), the theory later gained relevance in the study of inner phenomena such as anger or the capacity for compassion. Our decisions are thus increasingly linked to mechanical, neurological processes in the brain, which seems incompatible with the traditional idea of free will and responsibility.
  • With the emergence of artificial intelligence, we have created the possibility of outsourcing many of our tasks and decisions. Some of these tasks carry considerable responsibility, such as those performed by self-driving cars or recruiting tools. The technology that underlies these applications (e.g. deep learning) often becomes a black box even for its makers and users, since its algorithmic paths are too complex to trace. This makes it hard to determine who is responsible for its actions. We commonly consider these technologies incapable of free will, since their actions are determined by the algorithms they follow. And although for now we still turn to their makers for accountability, this may not be feasible in the long term, since the makers often have neither control over nor insight into the way these algorithms evolve.
  • The difficulties that arise in the matter of responsibility in humans and in AI show great similarities. In both debates, (1) the idea that an agent can only be held accountable for actions that result from free will is problematic, since (2) the actions of both humans and AI seem to result from unconscious processes that obey the mechanical rules of cause and effect (if A, then B). Daniel Dennett has offered a model of responsibility that deals with these difficulties.


Daniel Dennett’s model of free will and responsibility is rooted in unconscious biological processes and physical laws governed by the mechanical rules of cause and effect. He claims that the system underlying everything that exists and happens combines countless causes with their necessary effects, which together cause numerous further necessary effects, and so on. This system is so extensive that, at best, we will only ever comprehend little bits and pieces of its ingenuity. We, and everything we do, are necessary effects of this system too. Free will in the traditional sense is therefore an illusion, since ultimately our actions are always the necessary effect of a combination of the countless events that happened before. Dennett compares this illusion with stage magic, which is in fact a combination of expertly performed tricks. In our case, he argues, it is our brain that tricks us into believing all sorts of things, among which the existence of free will.
For Dennett, this does not mean we cannot have responsibility. Although free will in the more traditional sense is indeed impossible, free will in the sense of superior evolutionary biology is possible. Over the course of billions of years, ever greater competences have evolved, and human cognitive competences exceed those of other lifeforms. This not only makes it possible to respond to situations in a superior manner, it also gives us the opportunity to experience a sense of reason or meaning behind our (re)actions. Whether these reasons are ultimately accurate is irrelevant here: their function is to guide us, not to give insight into the true workings of our brain. Our highly developed cognitive competences and sense of meaning have resulted in culture, science, art, and so on.

If we accept Dennett’s emphasis on the extensive and dazzling path of cause and effect that brought us to this point of sophistication, an agent can be responsible when two conditions are fulfilled: it has developed highly advanced cognitive abilities, and it holds a set of beliefs that generate a sense of meaning (e.g. reasons why we act and live). The belief in a higher power, for example, has served an important purpose in our evolution, according to Dennett. Fragile as we humans are, it is vital for us to work together. With the emergence of human intelligence, however, we became capable of “cheating” in the game of survival by lying to one another. To illustrate: I can say that I went out hunting without success, while in fact I have secretly been enjoying a lazy day. This could jeopardize the survival chances of my tribe. My belief in a higher power, however, makes me feel that my misbehavior will be noticed and punished one way or another, which steers me toward more responsible behavior.

How does this idea translate to AI and responsibility? Can AI fulfil the two conditions of responsibility according to Dennett’s philosophy? Deep learning has proven able to evolve impressive calculation abilities quickly, which could result in a set of highly developed “cognitive” competences. Impressive competences are already emerging: in many specialized domains, such systems outperform human experts (e.g. in detecting cancer). The second condition, implementing a set of beliefs that add a sense of meaning, also seems possible. This was already imagined in science fiction when AI was still in its infancy. In Steven Spielberg’s movie A.I., for example, a robot child is programmed to love the woman who turns on his switch of eternal mother-love, which determines all of his actions from then on. Experiments with implementing human-like convictions have already been conducted with, for example, self-driving cars, to make them better drivers.

This would mean that AI, at least in principle, is a good candidate for accountability from Dennett’s perspective. However, Dennett’s philosophy also implies that the way our responsibility came about is impossible to duplicate, because it originates in a unique process of cause and effect. Human responsibility emerged from billions of years of evolution and from animate bodies. The way our brains came about therefore differs profoundly from the way AI programs are made. Even if AI could exercise responsibility, its particular variant might never turn out as we want it to. Dennett argues that the biggest danger in AI is that we are so impressed with its calculating powers that we forget the workings of our own brain are infinitely more ingenious. So, although the workings of the brain and the workings of AI show great similarities, it is our extensive evolutionary history that sets us apart. Dennett concludes that in the foreseeable future AI may offer intelligent tools that can be highly optimized, but we shouldn’t hold our breath waiting for them to become our new colleagues.


  • If we were to agree on another definition of responsibility, as many do, the answer to the question whether AI can be responsible would turn out differently. However, whether or not we agree with philosophical models like Dennett’s, they and their rivals provide guidance in complex debates like these and help us calibrate our expectations and concerns. In this case, Dennett’s model shows us how a mechanical process of necessary cause and effect can still result in accountable agents, whether human or artificial, while preserving a certain distinction between humans and AI.
  • Famous inventor Ray Kurzweil has argued that, even though the capabilities of our brain are still superior to those of AI, this could change more rapidly than we might think. The exponential growth of complexity in AI, combined with our increasing insight into the workings of the brain, could lead to so-called ‘whole brain emulation’ or mind uploading. AI could then meet the two conditions of Dennett’s responsibility model, and Dennett’s reservations about AI’s capabilities would dissolve.
  • There are already hints that AI can develop the same mechanisms as humans to cope with certain situations. A DeepMind program of Google, for example, was tasked only with getting from A to B, without any further information or example of how this could be done. Controlling a human-like body, the program taught itself to walk, jump and keep its balance while being pushed. This not only implies that our intelligence and behavior can be duplicated, it also implies that they can originate in mechanical rules of cause and effect.