Highlights:

  • Ilke Demir, a senior staff research scientist at Intel Labs, collaborated with Umur Ciftci of the State University of New York at Binghamton to build FakeCatcher. The detector employs Intel hardware and software, operates on a server, and communicates via a web-based interface.
  • FakeCatcher collects PPG signals from 32 points on the face, and then creates PPG maps from the temporal and spectral components.

Intel has introduced FakeCatcher, which it claims is the first real-time detector of deepfakes: synthetic media in which a person’s likeness is swapped into an existing image or video.

Intel says the product has a 96% accuracy rate and returns results in milliseconds by analyzing the subtle “blood flow” visible in video pixels.

Ilke Demir, a senior staff research scientist at Intel Labs, collaborated with Umur Ciftci of the State University of New York at Binghamton to build FakeCatcher. The product employs Intel hardware and software, operates on a server, and communicates via a web-based interface.

Intel’s deepfake detector is based on PPG signals

Unlike most deep learning-based deepfake detectors, which hunt for signs of inauthenticity, FakeCatcher looks for authentic clues in real videos. PPG, or photoplethysmography, measures the amount of light absorbed or reflected by blood vessels in living tissue. When the heart pumps blood, it travels through the veins, which subtly change color as blood volume fluctuates.

Demir said, “You cannot see it with your eyes, but it is computationally visible. PPG signals have been known, but they have not been applied to the deepfake problem before.” She said that FakeCatcher collects PPG signals from 32 points on the face and then creates PPG maps from the temporal and spectral components.
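Intel has not published FakeCatcher’s extraction code, but the basic idea of pulling a raw PPG signal out of video can be sketched in a few lines. The minimal example below (the video path, ROI coordinates, and green-channel heuristic are illustrative assumptions, not the product’s method) averages the green channel over a fixed skin patch per frame:

```python
# A minimal, illustrative sketch of remote PPG extraction -- not Intel's
# implementation. It assumes one fixed face region of interest (ROI); a
# real system would track facial landmarks and extract a signal per region.
import cv2
import numpy as np

def extract_ppg_signal(video_path: str, roi: tuple[int, int, int, int]) -> np.ndarray:
    """Return the mean green-channel intensity of the ROI for each frame.

    Subtle blood-volume changes modulate how much light skin absorbs,
    which shows up as tiny fluctuations in this per-frame average.
    """
    x, y, w, h = roi
    cap = cv2.VideoCapture(video_path)
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        patch = frame[y:y + h, x:x + w]
        samples.append(patch[:, :, 1].mean())  # OpenCV frames are BGR; index 1 = green
    cap.release()
    signal = np.asarray(samples)
    return signal - signal.mean()  # remove the DC component, keep the pulse
```

Stacking such signals from many facial regions, 32 in FakeCatcher’s case, row by row would yield a temporal PPG map; applying an FFT (e.g., np.fft.rfft) to each row would give a spectral counterpart.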

Demir said, “We take those maps and train a convolutional neural network on top of the PPG maps to classify them as fake and real. Then, thanks to Intel technologies like [the] Deep Learning Boost framework for inference and Advanced Vector Extensions 512, we can run it in real-time and up to 72 concurrent detection streams.”
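Intel has not described the network beyond this. As a rough sketch of the idea, a small convolutional classifier over two-channel (temporal and spectral) PPG maps might look like the following, where the 32x128 map size and the architecture are assumptions for illustration:

```python
# A rough PyTorch sketch of a binary real/fake classifier over PPG maps.
# The two input channels (temporal and spectral) and the 32 x 128 map size
# are illustrative assumptions, not FakeCatcher's published architecture.
import torch
import torch.nn as nn

class PPGMapClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, 2)  # two logits: real, fake

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, 32, 128) -- stacked temporal and spectral PPG maps
        return self.classifier(self.features(x).flatten(1))

model = PPGMapClassifier()
logits = model(torch.randn(8, 2, 32, 128))  # batch of 8 dummy maps
```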

Detection is important in the age of growing threats

According to a recent research paper by Microsoft’s chief science officer, Eric Horvitz, deepfake detection has become increasingly crucial as deepfake dangers loom. These include interactive deepfakes, which give the illusion of conversing with a real person, and compositional deepfakes, in which bad actors construct a “synthetic history” by combining many individual deepfakes.

And in 2020, Forrester Research predicted that deepfake fraud costs would exceed $250 million.

On the other hand, deepfakes have several legitimate, sanctioned applications. Companies such as Hour One and Synthesia offer deepfakes for corporate settings such as employee training, education, and e-commerce. Deepfakes may also be created by their own subjects, such as celebrities and business leaders who want to “outsource” appearances to a virtual twin. In such cases, a method will hopefully emerge to guarantee full transparency and provenance of synthetic media.

Demir confirmed that Intel’s research in this area is still in its early stages. She said, “FakeCatcher is a part of a bigger research team at Intel called Trusted Media, which is working on manipulated content detection — deepfakes — responsible generation and media provenance. In the shorter term, detection is actually the solution to deepfakes — and we are developing many different detectors based on different authenticity clues, like gaze detection.”

The next step will be source detection, or identifying the GAN model behind each deepfake. She said, “The golden point of what we envision is having an ensemble of all of these AI models, so we can provide an algorithmic consensus about what is fake and what is real.”
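Demir did not specify how that consensus would be computed. One simple, hypothetical scheme is a weighted average of each detector’s fake probability, thresholded into a single verdict:

```python
# Hypothetical sketch of "algorithmic consensus" across several detectors.
# Each detector returns a probability that the clip is fake; a weighted
# average (detector names and weights are illustrative) yields one verdict.
def consensus(scores: dict[str, float], weights: dict[str, float],
              threshold: float = 0.5) -> bool:
    total = sum(weights[name] for name in scores)
    fused = sum(scores[name] * weights[name] for name in scores) / total
    return fused >= threshold  # True means "fake"

scores = {"ppg": 0.91, "gaze": 0.64, "gan_fingerprint": 0.55}
weights = {"ppg": 0.5, "gaze": 0.25, "gan_fingerprint": 0.25}
print(consensus(scores, weights))  # True: fused score is about 0.75
```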

The challenging history of deepfake detection

Detecting deepfakes has proven difficult on several fronts. A 2021 study from the University of Southern California found that some of the datasets used to train deepfake detection systems underrepresent people of certain genders or skin colors. According to the co-authors, this bias can be amplified in deepfake detectors, with some detectors showing up to a 10.7% difference in error rate depending on racial group.

And in 2020, researchers from Google and the University of California, Berkeley showed that even the best AI systems trained to distinguish between real and synthetic content were susceptible to adversarial attacks that led them to classify fake images as real.

Moreover, deepfake creators and detectors are locked in an ongoing game of cat and mouse. According to Demir, however, Intel’s FakeCatcher cannot be fooled this way.

Demir said, “Because the PPG extraction that we are using is not differentiable, you cannot just plug it into the loss function of an adversarial network because it doesn’t work, and you cannot backpropagate if it’s not differentiable. If you don’t want to learn the exact PPG extraction but want to approximate it, you need huge PPG datasets, which don’t exist right now — there are [datasets of] 30-40 people that are not generalizable to the whole.”
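Her argument is straightforward to demonstrate: backpropagation needs gradients, and a piecewise-constant, non-differentiable step in the pipeline starves an adversarial optimizer of any signal. In the sketch below, torch.round merely stands in for a non-differentiable extraction step; it is not FakeCatcher’s actual PPG extraction:

```python
# Why a non-differentiable step blocks adversarial optimization: the
# gradient of a piecewise-constant function (torch.round here, standing
# in for a non-differentiable PPG extraction) is zero almost everywhere,
# so backpropagation gives the attacker nothing to follow.
import torch

x = torch.randn(4, requires_grad=True)
loss = torch.round(x * 100).sum()  # round() has zero gradient almost everywhere
loss.backward()
print(x.grad)  # tensor([0., 0., 0., 0.]) -- no usable gradient signal
```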

Rowan Curran, an AI/ML analyst at Forrester Research, said, “While we’re still in the very early stages of this, Intel’s deepfake detector could be a significant step forward if it is as accurate as claimed, and specifically if that accuracy does not depend on the human in the video having any specific characteristics (e.g., skin tone, lighting conditions, amount of skin that can be seen in the video).”