In fact, there have already been several high-profile cases in which fake photos were used in damaging disinformation campaigns. In December 2019, Facebook identified and dismantled a network of more than 900 pages, groups, and accounts, including some with deepfake profile photos, associated with the far-right outlet The Epoch Times, known for its disinformation tactics. In October 2020, a fake “intelligence” document that circulated in President Trump’s circles, and that became the basis for many conspiracy theories surrounding Hunter Biden, was also written by a fake security analyst with a deepfake profile picture.
Toler says deepfake faces have become a recurring theme in his work as an open-source investigator of suspicious online activity, especially since the launch of ThisPersonDoesNotExist.com, a website that serves up a new AI-generated face every time it is refreshed. “There’s always a mental checklist that you go through whenever you find something,” he says. “The first question is, ‘Is this person real or not?’ That’s a question we wouldn’t really have asked five years ago.”
How big is the threat? For now, Toler says, the use of deepfake faces hasn’t had a big impact on his work. It is still relatively easy for him to identify when a profile image is a deepfake, just as it is when the photo is a stock image. The most difficult scenario is when an image of a real person is pulled from a private social media account that is not indexed by image search engines.
A growing awareness of the existence of deepfakes has also prompted people to scrutinize the media they see more carefully, Toler says, as evidenced by the speed with which people identified the fake Amazon accounts.
But Sam Gregory, program director at the human rights nonprofit Witness, says that shouldn’t lull us into a false sense of security. Deepfakes are “constantly improving,” he says. “I think people are a little too convinced that it will always be possible to detect them.”
Hyper-awareness of deepfakes could also lead people to stop believing real media, which could have equally disastrous consequences, for example by undermining the documentation of human rights violations.
What should we do? Gregory encourages social media users to look beyond the question of whether an image is a deepfake. Often, that is only a “tiny part of the puzzle,” he says. “The giveaway isn’t that you somehow question the image. It’s that you look at the account and it was created a week ago, or it’s someone claiming to be a reporter who has never written anything else you can find in a Google search.”
These investigative tactics are far more robust to advances in deepfake technology. The advice also held true in the Amazon case: it was by checking the accounts’ emails and tweet details, not by scanning their profile pictures, that Toler ultimately determined they were fake.