Facebook researchers working with Michigan State University say they can now reverse-engineer deepfakes, identifying manipulated content from a single still image taken from a video. They claim to be able to determine where deepfakes encountered in real-world settings may have originated and which software was used to produce them.
Deepfakes are digitally altered videos produced by an AI deep learning algorithm, which typically enables their creators to paste one person’s face onto another person’s body. They have been cited as a potential security threat because they can enable fraud and impersonation. Deepfakes have been used to mimic celebrities on Instagram and TikTok, and to create manipulated pornographic videos of popular actresses.
Facebook researchers claim that their AI software can be trained to determine whether a piece of media is a deepfake from just one frame of the video. Furthermore, the software can identify the specific generative model used to produce the deepfake.
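The attribution idea rests on the observation that generative models tend to leave subtle, model-specific high-frequency patterns ("fingerprints") in their output. The sketch below is a toy illustration of that general concept, not the researchers' actual method: it extracts a crude high-frequency residual with a simple box blur (a stand-in for a learned denoiser) and attributes an image to whichever known fingerprint its residual correlates with most strongly. All names and the filtering choice here are illustrative assumptions.

```python
import numpy as np

def residual_fingerprint(image: np.ndarray) -> np.ndarray:
    """Estimate a high-frequency residual by subtracting a local mean.

    A toy stand-in for the learned fingerprint estimators used in
    deepfake-attribution research: real systems train a network to
    extract the fingerprint rather than using a fixed blur.
    """
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    # 3x3 box blur built from the nine shifted windows of the padded image.
    blurred = sum(
        padded[i:i + h, j:j + w] for i in range(3) for j in range(3)
    ) / 9.0
    residual = image - blurred
    # Normalize so attribution scores are scale-invariant.
    norm = np.linalg.norm(residual)
    return residual / norm if norm > 0 else residual

def attribute(image: np.ndarray, known: dict) -> str:
    """Return the name of the known fingerprint that best matches the image."""
    fp = residual_fingerprint(image)
    return max(known, key=lambda name: float(np.sum(fp * known[name])))

# Hypothetical usage: two synthetic "model" fingerprints, one embedded in an image.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
fp_a = residual_fingerprint(rng.random((32, 32)))
fp_b = residual_fingerprint(rng.random((32, 32)))
fake = base + 2.0 * fp_a  # image carrying model A's fingerprint
print(attribute(fake, {"model_a": fp_a, "model_b": fp_b}))
```

In a real pipeline, the fingerprint extractor and the mapping from fingerprint to model hyperparameters would both be learned networks; the correlation step above only conveys the matching intuition.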
On Wednesday, the researchers presented a “research method of detecting and attributing deepfakes that relies on reverse engineering from a single AI-generated image to the generative model used to produce it.”