While the world is still coming to terms with the existence of fake news, Aviv Ovadya, chief technologist at the Center for Social Media Responsibility, warns in an interview that this is just the prelude to an age of misinformation, which he has coined the ‘Infocalypse’. Here we further explore a world in which more extreme forms of information manipulation proliferate.
- With freely available software called FakeApp, faces can easily and realistically be swapped in prerecorded video footage. Consequently, the faces of porn actors and actresses have been replaced with celebrities’ faces. Similarly, researchers from the University of Washington developed a program that can translate audio into lip-synced video, and researchers from Stanford were able to manipulate the facial movements in prerecorded footage of diplomats in real time.
- In 2016, Adobe unveiled Project VoCo, which lets users manipulate recorded speech by removing words or adding words and phrases that the speaker never uttered.
- In all these instances, deep learning was used to learn an abstract representation of a person’s face or voice from publicly available data. Interestingly, the algorithms used in FakeApp were developed with TensorFlow, an open-source machine learning framework originally developed by the Google Brain team.
- Some researchers believe that countries with high internet penetration and a considerable history of corruption, such as Mexico and Indonesia, function as testbeds for new information manipulation techniques. During Mexico’s 2012 election, bots were allegedly already used to support now-president Enrique Peña Nieto, foreshadowing the manipulations that took place during the 2016 U.S. elections.
- Weeks before the Enigma token sale, Enigma’s website, mailing list and Slack account were compromised and misused to announce a fake token pre-sale, swindling almost $500,000 from unwitting investors.
- Facebook has been criticized as an important enabler of the distribution of fake news. Consequently, Facebook has admitted that it should be accountable and held to standards similar to those of media outlets, thereby moving away from positioning itself as a neutral tech platform.
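The shared-representation idea behind face-swapping tools like FakeApp can be sketched in a few lines: one encoder learns a compact representation of any face, and one decoder per identity reconstructs that person from it. The toy dimensions and plain linear layers below are illustrative stand-ins for the convolutional networks and training loop a real implementation would use.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(in_dim, out_dim):
    # Random weights stand in for trained parameters in this sketch.
    return rng.normal(scale=0.1, size=(in_dim, out_dim))

D, LATENT = 64, 8             # flattened image size and latent size (toy values)
encoder = layer(D, LATENT)    # shared across both identities
decoder_a = layer(LATENT, D)  # trained to reconstruct person A
decoder_b = layer(LATENT, D)  # trained to reconstruct person B

def encode(face):
    # Map a face into the shared latent space (expression, pose, lighting).
    return np.tanh(face @ encoder)

def swap_a_to_b(face_a):
    # Encode A's face, then decode with B's decoder:
    # B's identity rendered with A's expression and pose.
    return encode(face_a) @ decoder_b

face_a = rng.normal(size=D)
swapped = swap_a_to_b(face_a)
print(swapped.shape)  # (64,)
```

Because the encoder is shared while the decoders are identity-specific, the latent code carries over everything except identity, which is what makes the swap look coherent frame to frame.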
Information manipulation predates our digital era, as exemplified by the Jacobite rebellion of the mid-1700s, when rumors about the king’s health were circulated in an attempt to destabilize the establishment. However, it is only now that we are confronted with a form of information manipulation that is scalable, low-cost, multimedia and tailorable in real time. In BuzzFeed’s speculative future scenario of an infocalypse, Ovadya illustrates how artificial intelligence and publicly available data can be maliciously put to use to believably imitate information sources (‘automated laser phishing’), manipulate the diplomatic process by creating virtual doubles of diplomats (‘diplomacy manipulation’) and simulate grassroots movements (‘polity simulation’). In addition to manipulation through imitation, we can also imagine manipulation through the mass-scale compromise of information channels and sources, using smart malware powered by artificial intelligence. These channels and sources could then be misused, or pressured into publishing fake information.
Where tech companies are moving in the direction of becoming content producers, content producers are becoming more tech-oriented.
But what are the potential societal consequences of a world continuously plagued by misinformation? From a sociopolitical perspective, information manipulation gives an actor the ability to nudge the beliefs, attitudes and behavior of a society. In addition to the U.S. election, some analysts believe that Indonesia’s move to the right has been driven by fake-news social media campaigns by the Muslim Cyber Army. In a more extreme stage, and on a more psychological level, Ovadya mentions the problem of ‘reality apathy’, in which people simply no longer care about what is true, thereby eroding an important cornerstone of a democratic society. From an economic perspective, in a landscape where unmanipulated and validated information is scarce and fake news abundant, trusted sources with validation mechanisms in place could become more valuable.
When it comes to such validation mechanisms, the infocalypse confronts us with the deeper design flaws of our current information architecture, in which accountability and identifiability are not built in. Here we can expect that blockchains, reputation systems and digital signatures will help retrofit these functions onto a system that was initially built for quick and scalable information distribution. Furthermore, artificial intelligence should not only be perceived as the cause of mass misinformation; it could at the same time be part of the solution. For instance, the TensorFlow framework that has been used to create deepfakes has also been applied by SAP to build an application that helps detect fake news. In the end, it is important to remember that information manipulation remains a moving target, especially as we increasingly virtualize our lifeworld (i.e. the internet of things, augmented reality and virtual reality).
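How digital signatures could retrofit accountability onto content distribution can be illustrated with a toy sketch: a publisher signs each article, and readers verify that it is unmodified and comes from the claimed source. For simplicity this uses HMAC with a shared secret as a stand-in for a real asymmetric signature scheme (such as Ed25519), which would let anyone verify with only the publisher’s public key; the key and article fields are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical publisher signing key; a real system would use an
# asymmetric key pair so verification needs no shared secret.
SECRET = b"publisher-signing-key"

def sign(article: dict) -> str:
    # Canonicalize the article so the same content always signs identically.
    payload = json.dumps(article, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(article: dict, signature: str) -> bool:
    # Constant-time comparison guards against timing attacks.
    return hmac.compare_digest(sign(article), signature)

article = {"source": "Example News", "body": "Original text"}
sig = sign(article)
print(verify(article, sig))   # True: untouched article verifies

article["body"] = "Tampered text"
print(verify(article, sig))   # False: any edit invalidates the signature
```

The point is not the specific primitive but the property: once signatures travel with content, tampering or misattribution becomes detectable at the reader’s end rather than relying on trust in every intermediary.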
- Parts of the internet will move from an anonymous and unaccountable space to one in which actors become more identifiable and accountable. Furthermore, to prevent these future identity management and reputation systems from becoming new central points of failure and privacy risks, they will presumably be built on a highly decentralized and encrypted infrastructure.
- The use of AI could face government regulation, as these instances exemplify the enormous power these algorithms can yield. At the same time, AI will also be used to counter misinformation.
- The most sustainable forms of countering the infocalypse will have to rely on increasing citizens’ information literacy.