Recently, Tom Cruise likely woke up to find trending videos of himself playing golf and performing magic tricks on TikTok that he has no recollection of ever recording. But, unlike a plot from a Hollywood blockbuster where the main character struggles to remember a murky past, Tom Cruise never actually recorded these videos.
2 years ago on stage I was asked “when will Deepfake video/audio impact trust & be believable in social engineering?” My response then was that we were 2 years away from undetectable Deepfakes. I wish my prediction then was wrong. We need synthetic media detection + labels ASAP. pic.twitter.com/yUUOTDepYY
— Rachel Tobac (@RachelTobac) February 26, 2021
Unfortunately for Tom Cruise, he's the latest viral "deepfake" and probably nowhere near the last. This rapidly advancing technology has drawn growing attention, speculation about future use cases (both beneficial and potentially harmful), and discussion about how to regulate it.
How Does Deepfake Technology Work?
At the heart of deepfake technology are machine learning and AI. For example, to create an uncanny replication of Tom Cruise, the videos' creator would have trained a neural network to learn what Tom Cruise looks like from every angle and in different lighting and environments. Once the AI can generate a "face," a user can superimpose the deepfake onto any media or another actor about as easily as someone working in Photoshop.
One reason celebrities and public figures like Cruise make easier targets is that countless photos and videos of them exist, covering every angle a machine learning model needs to piece together a recreation.
The rise of this technology is mainly due to these projects becoming cheaper to run and the AI behind them becoming faster and faster. That's not to say anyone can start creating their own personal Tom Cruise: the process still takes many hours of painstaking work to adjust every little detail that might undermine the end result's realism.
Another aspect to keep in mind is that the deep-learning technology and algorithms behind the video deepfakes of Tom Cruise differ from the fake avatars becoming more common across social media. The technology that The New York Times focused on is powered by generative adversarial networks (GANs). As the article mentions, there are still tiny flaws noticeable as new avatars are created, and GAN models specialize in creating still imagery rather than video.
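The core idea behind a GAN can be sketched in a few dozen lines: a generator produces fakes, a discriminator scores how "real" they look, and the two networks train against each other. The toy below is a minimal illustration of that adversarial loop, not a face generator; it uses a one-parameter-pair "generator" and "discriminator" on 1-D Gaussian data, with the gradients worked out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# "Real" data: scalar samples from a Gaussian the generator must imitate.
REAL_MEAN, REAL_STD = 4.0, 0.5

# Generator: x_fake = w_g * z + b_g, with noise z ~ N(0, 1).
w_g, b_g = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w_d * x + b_d), probability that x is real.
w_d, b_d = 0.1, 0.0

lr, batch = 0.02, 64
for step in range(5000):
    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    # Cross-entropy gradient w.r.t. the pre-activation is simply (D - label).
    grad_s = np.concatenate([d_real - 1.0, d_fake - 0.0])
    xs = np.concatenate([real, fake])
    w_d -= lr * np.mean(grad_s * xs)
    b_d -= lr * np.mean(grad_s)

    # --- Generator update: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g
    d_fake = sigmoid(w_d * fake + b_d)
    grad_x = (d_fake - 1.0) * w_d  # chain rule through D into the fake samples
    w_g -= lr * np.mean(grad_x * z)
    b_g -= lr * np.mean(grad_x)

samples = w_g * rng.normal(0.0, 1.0, 1000) + b_g
print(f"generated mean ~ {samples.mean():.2f} (target {REAL_MEAN})")
```

After training, the generator's output drifts toward the real distribution even though it never sees the real data directly; it only ever sees the discriminator's feedback. A deepfake GAN applies the same adversarial pressure, just with deep convolutional networks over images instead of two scalar parameters.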
Where is Deepfake Technology Headed?
Forensic experts and those who analyze digital images can still determine what is real and what is fake, but that line is becoming harder to draw. In the Tom Cruise example, Hany Farid, a professor at the University of California, Berkeley, says the videos are well done.
Farid, who specializes in analyzing digital images, said his graduate students could spot Cruise's eye color and shape changing for a brief moment at the end of the magic-trick video. So, for now, the AI technology isn't perfect.
Farid also suggests that, in the case of the Tom Cruise videos, part of the realism came from the actor's mouth, while the rest of the face was generated by deep learning: "This would make sense if the actual person in the video resembles Cruise, did some good work with makeup perhaps, and the swapping of the distinct eyes is enough to finalize a compelling likeness."
This is not Tom Cruise
This is a deep fake
This probably isn’t great news for Cameo pic.twitter.com/CqkzNOAFQp
— Damian Burns (@damianburns) February 26, 2021
For the time being, there's still a significant skill and cost barrier before anyone can create their own AI-powered deepfakes, but that gap is gradually shrinking. There's no real consensus on when this technology will become refined and easily accessible, but some experts suggest it could be two to ten years before it's as easy as opening an app on your smartphone.
Unfortunately, the shrinking timeframe also means less time for companies and even governments to put forward solutions and regulations that can help reduce some of the harm and distrust that AI-powered deepfake technology can create.
How Can Deepfake Technology be Regulated?
Large technology companies like Facebook, YouTube, and Twitter understand their platforms are central to the modern news cycle and have started working to reduce the threat that deepfake technology can pose.
Facebook recently recruited researchers from UC Berkeley and Oxford University to build a detector that can identify and remove harmful deepfake content. Twitter also plans to label any deepfake that isn't removed outright. And YouTube has prioritized removing deepfake videos that could have affected information surrounding the 2020 election in the United States.
There are also deepfake-analyzing programs such as Reality Defender that work much like a spam filter on an email platform, alerting users when a piece of media may be doctored. The company is collaborating with UC Berkeley, Twitter, Microsoft, and Google.
Lastly, another emerging technology and fellow buzzword of the past few years, blockchain, can potentially provide a solution to hinder unregulated deepfake content. Blockchain can now be used to create watermarks on pieces of digital content.
For example, an original video created by a celebrity or public figure can carry a unique generated signature proving that the content is original and undoctored. That watermark can serve as a way to quickly debunk deepfake content or misinformation.
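The mechanics of such a watermark can be sketched simply: hash the content, sign the hash, and publish the signature (for instance, on a blockchain). Any later edit to the file changes the hash, so the signature no longer verifies. The sketch below uses Python's standard-library `hashlib` and `hmac` with a hypothetical shared key as a stand-in; a real system would use an asymmetric signature so that anyone can verify with the creator's public key without being able to forge signatures.

```python
import hashlib
import hmac

# Hypothetical signing key held by the content creator. In a real deployment
# this would be an asymmetric private key, with the public half published.
CREATOR_KEY = b"hypothetical-creator-key"

def watermark(content: bytes) -> str:
    """Sign the SHA-256 digest of the content."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(CREATOR_KEY, digest, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(watermark(content), signature)

original = b"frames of the original video"
sig = watermark(original)

print(verify(original, sig))                 # True: content is untouched
print(verify(b"doctored " + original, sig))  # False: any edit breaks the seal
```

Because the hash covers every byte, even a single altered frame invalidates the watermark, which is what makes this approach useful for quickly flagging doctored media.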
What’s Next for Deepfake Technology?
The Tom Cruise deepfakes are ultimately accelerating a necessary discussion about a technology that is becoming more and more prominent. And for all the potential threats of deep-learning AI, the growing technology has its positives, too; image-based media isn't its only application. From innovating healthcare to art to education, deep learning shouldn't just ignite fear. It should also encourage digital literacy among the public, so that people can tell a deepfake from reality.