Last month, video went viral of Prince William yelling at a photographer who, he claimed, was stalking his children. The act seemed so uncharacteristic of the earnest royal that it led some observers to suggest the footage was, in fact, a deepfake: an AI-driven video simulation of a person that can be difficult to distinguish from the real thing.

The video turned out to be real, and the controversy quickly subsided. But more than a few cybersecurity firms believe the suspicion of deepfake technology was well founded. They caution that the risks of deepfakes are real, that attacks are growing in number, and that the security implications extend well beyond the world of celebrity into the jobs and even the personal lives of ordinary people.

Deepfake technology originated in academic institutions in the 1990s, without attracting much notice. One of its first applications was in film: curious researchers edited video footage of a person speaking and paired it with new audio so that the person appeared to have said something else.

But as interest in the technology grew, private institutions began their own experimentation with deepfakes, and soon it was being adopted by industry. With that came artificial intelligence.

The major difference between the benign deepfake technology of last-millennium computer labs and the malicious versions in existence today is the ability to learn through artificial intelligence. Humans no longer need to carefully match audio and video. Instead, given enough data (that is, enough minutes of audio and video of you speaking), an algorithm can teach itself to emulate you, and it gradually does so with higher and higher accuracy.
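
To make that "teach itself" idea concrete, here is a deliberately simplified sketch in Python (using the PyTorch library) of the kind of self-correcting loop behind face-swap systems: a small model repeatedly tries to reproduce its training frames and adjusts itself to shrink its error. Everything here is illustrative; the model sizes, names, and random stand-in data are assumptions, and real deepfake systems use far larger paired encoder-decoder networks or GANs.

```python
# A toy sketch of the core idea, not any production deepfake system:
# an autoencoder that learns to reconstruct face images of one person.
# All names, shapes, and data here are illustrative assumptions.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Compress a 64x64 grayscale face into a small code, then rebuild it.
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(256, 64 * 64), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code).view(-1, 1, 64, 64)

model = FaceAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for real training data: random tensors in place of face frames.
frames = torch.rand(32, 1, 64, 64)

for epoch in range(5):
    reconstruction = model(frames)
    loss = loss_fn(reconstruction, frames)  # how far off is the copy?
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                        # the model corrects itself
    print(f"epoch {epoch}: reconstruction error {loss.item():.4f}")
```

The more genuine footage a loop like this sees, the less error survives training, which is why the amount of available audio and video of a target matters so much.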

If you’ve never seen a deepfake video, you may find them so realistic as to be unsettling. While many people claim they can spot a fake, it is unclear how long that will remain true as the techniques and technology constantly improve.

The primary risk of deepfake technology lies in its potential use for impersonation. If malicious actors can create functioning replicas of a face or voice, they can use them to gain access to devices or environments that rely on those personal identifiers for recognition. Last month, the FBI warned that criminals were using this method to apply for remote, work-from-home jobs with the aim of gaining access to IT databases and financial data inside organizations.

If fake job applications seem harmless enough, consider the 2019 case of a UK-based energy firm, in which fraudsters used a form of deepfake technology to impersonate a CEO’s voice and order the transfer of more than $200,000 to a foreign bank account. The company’s insurance provider covered the loss, but the incident was more than enough to sound the alarm over the risk of similar attacks in the future.

Experts disagree about how imminent and how serious the threat posed by deepfake technology really is. Some suggest that while such attacks are still relatively rare, hacker discussion of the technique is on the rise, which should precede an increase in attacks. Others counter that similar predictions have been made since early 2021, and a measurable increase in deepfake attacks has yet to be seen.

While the threat posed by deepfake technology is real, efforts to counter it are already underway. Researchers have turned AI algorithms against themselves to develop deepfake detectors that are gradually improving. Others have proposed blockchain-based verification systems to give authentic video content greater credibility. Scientists also advocate sharing social media data with academic institutions to improve the study of misinformation and fake content.
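
The verification idea is less exotic than it may sound. At publication time, a creator computes a cryptographic fingerprint (a hash) of the video and records it somewhere tamper-evident, such as a public ledger; anyone can later re-hash the file and compare. Below is a minimal sketch of the fingerprinting step in Python; the ledger itself is left out, and the function names are my own.

```python
# A minimal sketch of content fingerprinting, the building block behind
# blockchain-based video verification. Recording and looking up hashes
# on a ledger is out of scope; this shows only the compute-and-compare step.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MB at a time
            digest.update(chunk)
    return digest.hexdigest()

def matches_published(path: str, published_hash: str) -> bool:
    """True if the local file still matches the fingerprint the
    publisher recorded (e.g., on a public ledger) at release time."""
    return fingerprint(path) == published_hash
```

Any edit to the video, deepfaked or otherwise, changes the hash, so a mismatch flags the copy as altered, while a match lends the footage the credibility of its original publisher.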

Most agree that automated detection alone won’t be enough to stop the threat. What is needed, they argue, is a multi-pronged approach that marshals resources from government and private industry to make the creation of deepfaked content more difficult and more costly.

How can you spot a deepfake yourself? Experts say it is much more difficult to deepfake a presenter wearing glasses, and that, for now, a deepfake video will usually feature a presenter who remains stationary. They also advise watching for irregular blinking patterns, though they caution that deepfakes have already improved to counter this form of detection.
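
Automated tools look for the same blinking cue. One common building block in blink-detection research is the eye aspect ratio: six landmark points around each eye yield a ratio that drops sharply when the eye closes, so counting those dips over time gives a blink rate. Here is a toy sketch, assuming the (x, y) landmark coordinates come from a separate face-landmark detector; the threshold value is illustrative.

```python
# A toy sketch of one blink-detection building block: the eye aspect
# ratio (EAR). Landmarks are assumed to come from a separate face
# landmark detector; here they are plain (x, y) coordinate pairs.
import math

def eye_aspect_ratio(eye: list[tuple[float, float]]) -> float:
    """EAR for six eye landmarks ordered around the eye: points 2-6
    and 3-5 are the vertical pairs, points 1-4 the horizontal pair.
    The ratio drops sharply when the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

# Counting frames where the EAR dips below a threshold yields a blink
# rate; an unusually low or irregular rate is one (imperfect) deepfake signal.
BLINK_THRESHOLD = 0.2  # illustrative value, tuned per detector
```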

According to Alex Engler of the Brookings Institution, fighting deepfake technology is “a cat and cat game.” For deepfakes, as for many digital technologies, time will tell whether the good cat or the bad cat reigns supreme.