On November 30, Chinese Foreign Ministry spokesperson Zhao Lijian pinned a picture to his Twitter profile. In it, a soldier stands on an Australian flag and smiles maniacally as he holds a bloody knife to a boy’s throat. The boy, whose face is covered with a semi-transparent veil, carries a lamb. Along with the image, Zhao tweeted: “Shocked by murder of Afghan civilians & prisoners by Australian soldiers. We strongly condemn such acts, & call for holding them accountable.”
The tweet references a recent report by the Australian Defence Force, which found “credible information” that 25 Australian soldiers were involved in the murder of 39 Afghan civilians and prisoners between 2009 and 2013. The image purports to show an Australian soldier on the verge of slicing the throat of an innocent Afghan child. Explosive stuff.
Except that the image is fake. On closer inspection, it’s not even very convincing; it could have been assembled by a Photoshop novice. The image is a so-called cheapfake: media that has been crudely manipulated, edited, mislabeled, or stripped of context in order to spread disinformation.
The cheapfake is now at the heart of a major international incident. Australian Prime Minister Scott Morrison said China should be “utterly ashamed” and demanded an apology for the “repugnant” image. Beijing refused, instead accusing Australia of “barbarism” and of trying to “deflect public attention” from alleged war crimes committed by its armed forces in Afghanistan.
There are two important political lessons to be learned from this incident. The first is that Beijing has sanctioned the use of a cheapfake by one of its top diplomats to actively spread disinformation on Western online platforms. China has traditionally been cautious in this arena, aiming to portray itself as a benign and responsible superpower. This new approach marks a significant shift.
More broadly, however, this skirmish also shows the growing importance of visual disinformation as a political tool. Over the past decade, the proliferation of manipulated media has reshaped political realities. (Consider, for example, the cheapfakes that catalyzed genocide against Rohingya Muslims in Burma, or contributed to COVID-19 misinformation.) Now that global superpowers are openly sharing cheapfakes on social media, what is stopping them (or any other actor) from deploying more sophisticated visual disinformation as it emerges?
For years, journalists and technologists have warned of the dangers of “deepfakes.” Broadly defined, deepfakes are a type of “synthetic media” that has been manipulated or generated by artificial intelligence. They can be seen as the high-tech successor to cheapfakes.
Advances in technology are simultaneously improving the quality of visual disinformation and making it easier for anyone to generate. As it becomes possible to produce deepfakes through smartphone apps, almost anyone will be able to create sophisticated visual disinformation at virtually no cost.
Deepfake warnings came to a head ahead of this year’s US presidential election. For months, politicians, journalists, and academics debated how to counter the perceived threat. As the vote drew near, the state legislatures of Texas and California preemptively banned the use of deepfakes to influence elections.