In 2018, Sam Cole, a reporter at Motherboard, discovered a new and disturbing corner of the Internet. A Reddit user by the name of “deepfakes” was posting non-consensual fake porn videos, using an AI algorithm to swap celebrities’ faces into real pornography. Cole sounded the alarm on the phenomenon just as the technology was about to explode. A year later, deepfake porn had spread far beyond Reddit, with easily accessible apps that could “strip” the clothes off any photographed woman.
Since then, deepfakes have had a bad reputation, and rightly so: the vast majority of them are still used for fake pornography. One investigative journalist was severely harassed and temporarily silenced by such activity, and more recently, a poet and novelist was left frightened and ashamed. There is also the risk that political deepfakes will generate convincing fake news that could wreak havoc in unstable political environments.
But as media manipulation and synthesis algorithms have become more powerful, they have also given rise to some positive applications, as well as some that are humorous or mundane. Here’s a look at some of our favorites in rough chronological order, and why we think they’re a sign of what’s to come.
Protecting whistleblowers
In June, Welcome to Chechnya, an investigative film about the persecution of LGBTQ people in the Russian republic, became the first documentary to use deepfakes to protect the identity of its subjects. The activists fighting the persecution, who are the film’s main characters, live in hiding to avoid being tortured or killed. After exploring many methods of concealing their identities, the director, David France, decided to give them deepfake “covers.” He asked other LGBTQ activists around the world to lend their faces, which were then grafted onto the faces of the people in his film. The technique allowed France to preserve the integrity of his subjects’ facial expressions, and thus their pain, their fear, and their humanity. In total, the film protected 23 people, pioneering a new form of whistleblower protection.
In July, two MIT researchers, Francesca Panetta and Halsey Burgund, released a project that creates an alternative history of the 1969 Apollo moon landing. Called In Event of Moon Disaster, it uses the speech President Richard Nixon would have given had the momentous occasion not gone according to plan. The researchers partnered with two separate companies for the audio and video deepfakes and hired an actor to provide the “base” performance. They then ran his voice and face through the two types of software and stitched them together into a final Nixon deepfake.
While that project shows how deepfakes could create powerful alternative histories, another suggests how they could bring real history to life. In February, Time magazine re-created Martin Luther King Jr.’s March on Washington in virtual reality to immerse viewers in the scene. The project did not use deepfake technology, but the Chinese tech giant Tencent later cited it in a white paper on its AI plans, saying deepfakes could be used for similar purposes in the future.
At the end of the summer, the memersphere got its hands on some easy-to-make deepfakes and unleashed the results on the digital world. One viral meme in particular, called “Baka Mitai” (pictured above), quickly exploded as people learned to use the technology to create their own versions. The specific algorithm fueling the madness comes from a 2019 research paper; it allows a user to animate a photo of one person’s face with a video of someone else’s. The effect is by no means high quality, but it certainly produces high-quality fun. The phenomenon is not entirely surprising: play and parody have been a driving force in popularizing deepfakes and other media manipulation tools. That is why some experts emphasize the need for guardrails to keep satire from tipping into abuse.