
The debate over digital freedom mostly focuses on privacy and our right to do or say things without them being recorded and analyzed. Somewhat overlooked are the ways in which digital systems can impose specific moral codes upon us. In other words, these systems not only record what we do; they increasingly guide our behavior and actually limit us in our actions.
Our observations
- Geo-fencing provides an example of the way in which digital systems can impose limitations on our behavior on the basis of GPS data (a minimal sketch of such a location check follows this list). A set of proposed (international) rules on the use of drones includes geo-fencing measures to prevent users from steering their drones into no-fly zones. Geo-fencing can likewise make vehicles switch engine settings (e.g. a hybrid switching to full-electric mode when entering a low-emission zone). In a similar vein, as early as 2004, a Dutch supermarket introduced shopping carts with automatic brakes that were activated as soon as shoppers crossed a predefined perimeter.
- Philosophers of technology (e.g. Peter Paul Verbeek and Steven Dorrestijn) have questioned whether technologies that impose morality (e.g. an alcohol ignition lock in cars or automated gates in public transport) make us better humans or whether they actually outsource morality and render us amoral. To them, the real question is how society deals with the ways in which technology guides our behavior and how technology is designed to do so.
- Smart contracts, agreements stored on distributed ledgers (i.e. blockchains), automatically execute the terms of a contract when all necessary conditions are met (e.g. a product is delivered). On the basis of this technology, money can be programmed so that it can only be used to buy (or exchange) certain goods or services; a sketch of such earmarked money also follows this list. Examples could include fuel taxes that can only be used to pay for road maintenance or child allowance that can only be spent on items and services for children.
- When technology cannot prevent people from behaving badly, it can still produce immediate and fully automated penalties. In the Netherlands, a pilot has been launched to fine owners of electric vehicles who keep occupying a charging point well after their battery is fully charged. In this pilot, the fine is imposed automatically on the basis of data from the charging point.
- Since 2003, Second Life has offered a virtual platform for people to “live” in VR. The immense freedom it gives its users has also led to a number of crimes, including sexual harassment. Future iterations of these kinds of platforms will probably be able to detect such misbehavior and intervene. This is already imagined in the book/film Ready Player One, in which specific VR spaces do not allow for any fighting. Such interference may not end with preventing clearly criminal behavior. In 2007, the French Front National set up a virtual HQ in Second Life and organized a mass demonstration; its opponents sought to disrupt the demonstration and attack the HQ. In the future, a platform like Second Life could (be forced to) intervene and limit users’ rights to engage in political activity.
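To make the geo-fencing observation above more concrete, the following is a minimal sketch of how a location check could refuse a command inside a restricted zone. It is purely illustrative: the coordinates, zone shape and function names are assumptions, not taken from any actual drone or vehicle system.

```python
# Illustrative geo-fence check: refuse movement into a restricted zone.
# Zones, coordinates and names are hypothetical.
from dataclasses import dataclass


@dataclass
class Zone:
    """A rectangular restricted zone defined by latitude/longitude bounds."""
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon)


# Hypothetical no-fly zone (e.g. around an airport).
NO_FLY_ZONES = [Zone(52.29, 52.33, 4.73, 4.81)]


def allow_movement(lat: float, lon: float) -> bool:
    """Return False (i.e. refuse the command) when the requested position lies in a restricted zone."""
    return not any(zone.contains(lat, lon) for zone in NO_FLY_ZONES)


if __name__ == "__main__":
    print(allow_movement(52.31, 4.76))  # False: inside the hypothetical no-fly zone
    print(allow_movement(52.40, 4.90))  # True: outside all restricted zones
```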
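Similarly, the idea of programmable money from the observation on smart contracts can be sketched as a simple conditional-spend rule. This, too, is an assumption about how such a rule might look; in practice it would be enforced by a smart contract on a distributed ledger rather than by application code.

```python
# Minimal sketch of "programmable money": a balance that can only be spent on
# pre-approved categories. Class and category names are hypothetical.
class EarmarkedAllowance:
    def __init__(self, balance: float, allowed_categories: set):
        self.balance = balance
        self.allowed_categories = allowed_categories

    def spend(self, amount: float, category: str) -> bool:
        """Execute the payment only if the category is pre-approved and funds suffice."""
        if category not in self.allowed_categories or amount > self.balance:
            return False  # the transaction is simply refused
        self.balance -= amount
        return True


# Hypothetical child allowance that can only go to children's items and services.
allowance = EarmarkedAllowance(250.0, {"clothing", "school supplies", "childcare"})
print(allowance.spend(40.0, "school supplies"))  # True: approved category
print(allowance.spend(40.0, "alcohol"))          # False: category not approved
```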
Connecting the dots
In the realm of games, we have grown used to a gaming experience that allows for ever greater freedom: from the confines of Pac-Man’s maze to the vast environment of GTA and a host of online multiplayer games that hardly follow a single narrative or script. Conversely, as digital technology pervades our everyday lives, we are increasingly confronted with rules that developers have coded into digital systems, either deliberately or “by accident”.
With digital technologies, the norms and values inscribed in them can be much more powerful, precise and personalized
In many cases, we simply have to follow the “script” of the technology to make it work for us. Voice-based assistants, for instance, require us to pronounce our queries carefully. This is not all that different from tools of the past, such as an axe that works best when handled with both hands. In other cases, a technological design reflects the values of its developers or of society as a whole. To illustrate, image-recognition software works best with pictures of white people, and feminists have more than once pointed out how technology is often designed (by men) specifically for either men or women (e.g. in the early days of motoring, electric vehicles were designed as “ladies’ cars”). However, these kinds of scripts may also carry an explicit moral charge and, more or less, enforce moral behavior, e.g. a warning sound when a driver has not fastened their seat belt.
With traditional technologies, the morality inscribed in them (on purpose or by accident) mostly provided nudges to change our behavior, and there was still room for deviation. With digital technologies, the norms and values inscribed in them can be much more powerful, precise and personalized. That is, whereas a conventional car could at most refuse to operate under certain conditions (e.g. when the driver is drunk), a self-driving car can prevent specific people from going to specific places at specific times.
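As a hypothetical illustration of how precise and personalized such digital restrictions could become, consider a rule that ties a specific person to a specific place and time window. All names and rules below are assumptions for the sake of the example.

```python
# Sketch of a personalized, time- and place-specific restriction.
# The rule structure is hypothetical; it only shows how much finer-grained a
# digital "script" can be than a blanket rule in a conventional technology.
from datetime import datetime

# Each rule: (user_id, forbidden_destination, start_hour, end_hour)
RESTRICTIONS = [
    ("user-42", "city_centre", 22, 6),  # this user may not be driven to the centre at night
]


def trip_allowed(user_id: str, destination: str, when: datetime) -> bool:
    for uid, dest, start, end in RESTRICTIONS:
        if uid == user_id and dest == destination:
            hour = when.hour
            # Handle time windows that wrap around midnight.
            in_window = (hour >= start or hour < end) if start > end else (start <= hour < end)
            if in_window:
                return False  # the vehicle refuses this specific trip for this specific person
    return True


print(trip_allowed("user-42", "city_centre", datetime(2019, 5, 1, 23, 0)))  # False: restricted
print(trip_allowed("user-42", "city_centre", datetime(2019, 5, 1, 14, 0)))  # True: outside window
```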
Most of all, as systems are integrated and the world around us gets smarter, there will be many more possibilities to nudge, guide or limit our actions. While these systems promise to reduce friction and offer seamless services, they also filter out options and reduce freedom of choice (e.g. dating systems pre-select candidates and newsfeeds facilitate filter bubbles). Moreover, these systems increasingly substitute for humans (which we typically appreciate as lowering friction) and hence leave less room for human leniency; while humans can make exceptions, computers don’t. This moral meddling culminates in systems that simply don’t allow us to engage in certain types of behavior: social media can (more or less) automatically filter out content that is widely considered inappropriate, disgusting or downright illegal, the aforementioned autonomous vehicles and geo-fenced drones will refuse to follow our orders, and programmable money can only be spent on pre-approved goods and services.
One question pertaining to this development is whether these technologies will help us become better humans. On the one hand, one may argue that they prevent us from behaving badly. On the other hand, when we outsource morality to the technologies that we use, there is no longer a need for us to think through the consequences of our actions and we will gradually lose our own sense of morality; anything goes as long as the computer allows it.
The fact that digital systems can be excellent tools to enforce moral behavior ties in with the ongoing moralization of (Western) societies. Moreover, as we have noted before, Western governments may find inspiration in the Chinese social credit system. However, the obvious risk is that governments (or businesses) will take this too far and impose limitations on deviant opinions or behaviors that are not necessarily criminal or otherwise a threat to society (cf. Tumblr’s ban on explicit imagery). Indeed, when philosopher Hans Achterhuis pleaded for the deliberate use of “moralizing” technologies, he was immediately accused of embracing a totalitarian ideology. While this could indeed happen in some societies, most democracies will probably be able to prevent such totalitarian technologies from taking shape. A greater “risk” likely stems from digital technologies that impose some kind of morality in a more implicit manner, simply because they reflect and reproduce dominant, but questionable, values pertaining to, for instance, race, gender or skill levels.
Implications
- Users have always shown the ability to “hack” the moral scripts in technology, for instance by disabling “inconvenient” safety features on power tools. As such scripts become part of software, more specialized skills will be needed to circumvent or disable them. Moreover, with developments such as blockchain and smart contracts (e.g. based on Ethereum), code becomes even more forceful (hence the saying “code is law”) and moral scripts will become virtually impossible to hack.
- In virtual environments, new codes of conduct need to be developed. Interestingly, these rules can be integrated into the software that constitutes these environments. And, where that is not possible or desirable, surveillance of these environments can be total, so that undesirable behavior can be penalized immediately.