What is deepfake technology?

The term deepfake combines deep learning, a branch of machine learning, with the word fake. Deepfake technology is an application of artificial intelligence, specifically a subset of AI known as deep learning, used to create synthetic media.

The deepfake process produces fabricated media content, typically video with or without audio, that has been tampered with or generated outright to make it appear that someone said or did something they never actually did.

How deepfake works

The core concept behind deepfakes is machine learning, which makes it possible to produce deepfake videos quickly and at low cost. The mechanism relies on two competing AI models: a generator and a discriminator. The generator creates the phony multimedia content, and the discriminator then judges whether that content is real or artificial.

Together, the generator and discriminator form a network called a Generative Adversarial Network (GAN). Every time the discriminator correctly identifies content as fabricated, that feedback tells the generator how to improve the next deepfake.
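To make the idea concrete, here is a minimal sketch of what such a generator/discriminator pair might look like in PyTorch. The layer sizes, image resolution, and class names are illustrative assumptions for a toy example, not details of any particular deepfake tool.

import torch.nn as nn

# Toy generator: maps a random noise vector to a flattened 64x64 RGB image.
class Generator(nn.Module):
    def __init__(self, noise_dim=100, img_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),  # pixel values scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# Toy discriminator: maps a flattened image to a single real-vs-fake probability.
class Discriminator(nn.Module):
    def __init__(self, img_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),  # probability that the input image is real
        )

    def forward(self, x):
        return self.net(x)

Real deepfake systems use far deeper convolutional architectures, but the division of labor is the same: one network fabricates, the other judges.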

To create a deepfake video, a creator first trains a neural network on a large amount of real footage of the target person, giving the network a realistic “understanding” of what that person looks like from different angles and under different lighting. The trained network is then combined with computer-graphics techniques to superimpose a copy of the person onto another actor.
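The superimposition step is essentially an image-compositing operation. Below is a deliberately simplified NumPy sketch of that idea; the function name and arguments are hypothetical, and real tools also handle face alignment, color matching, and temporal smoothing, which are omitted here.

import numpy as np

def composite_face(target_frame, generated_face, mask):
    """Paste a generated face region onto a target frame using a soft mask.

    target_frame, generated_face: HxWx3 arrays of the same shape.
    mask: HxW array with values in [0, 1], where 1 keeps the generated face.
    """
    mask = mask[..., np.newaxis]  # broadcast the mask over the RGB channels
    blended = mask * generated_face + (1.0 - mask) * target_frame
    return blended.astype(target_frame.dtype)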

The first stage in building a GAN is identifying the desired output and assembling a training dataset for the generator. Once the generator begins to produce output of acceptable quality, video clips can be fed to the discriminator.

Over time, the two models push each other forward: as the discriminator gets better at spotting fake video clips, the generator gets better at creating clips that evade detection.
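Continuing the toy PyTorch sketch above, one round of this back-and-forth could be written as the training step below. Here real_images stands in for a batch of genuine training frames, and the optimizers, batch handling, and loss choice are illustrative assumptions rather than a prescribed recipe.

import torch
import torch.nn as nn

def train_step(generator, discriminator, real_images, opt_g, opt_d, noise_dim=100):
    """One adversarial round: the discriminator learns to separate real frames
    from fakes, then the generator learns to fool the updated discriminator."""
    bce = nn.BCELoss()
    batch_size = real_images.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Update the discriminator on real frames and freshly generated fakes.
    noise = torch.randn(batch_size, noise_dim)
    fake_images = generator(noise).detach()  # do not backprop into the generator here
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Update the generator so its fakes are scored as "real".
    noise = torch.randn(batch_size, noise_dim)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    return d_loss.item(), g_loss.item()

Repeating this step over many batches is what drives the arms race described above: each side improves only because the other keeps improving.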

Ordinarily, altering video content in any substantial way is difficult. But because AI does the heavy lifting in creating deepfakes, the process requires far less skill than manipulating video by hand. This means almost anybody can create a deepfake to serve their own purpose.

Deepfakes are mostly aimed at spreading false information, which ordinary people can then pass along unwittingly. To counter this, Microsoft has designed AI-powered deepfake detection software that automatically analyzes photos and videos to determine whether the media has been manipulated. Another risk is that, once people realize how easily videos can be faked, they may lose trust in the validity of any video content.
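Conceptually, automated detection of this kind amounts to running a classifier over individual frames and aggregating its scores. The sketch below illustrates only that general idea; it is not based on Microsoft's actual tool, and the detector model, frame format, and threshold are assumed placeholders.

import torch

def score_video(frames, detector, threshold=0.5):
    """Hypothetical frame-level detection: run a binary classifier over each
    frame and flag the clip if the average 'fake' probability crosses a
    threshold. `detector` is assumed to return one probability per frame."""
    with torch.no_grad():
        scores = [detector(frame.unsqueeze(0)).item() for frame in frames]
    avg_score = sum(scores) / len(scores)
    return avg_score, avg_score >= threshold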

Emergence of deepfake

Deepfakes started gaining popularity when a Reddit user claimed to have developed a machine learning (ML) algorithm that could morph faces seamlessly. As the technology took off, many began using it to create fake videos, and the site's administrators soon shut the community down due to the increasing misuse of the technology.

The idea of manipulating video is not new: universities were already conducting significant computer-vision research in the 1990s. Much of the early AI and ML work of that era focused on modifying existing footage of a person speaking and merging it with different audio tracks; the Video Rewrite program demonstrated this kind of capability in 1997.

How deepfake impacts cybersecurity

Deepfakes are a new wrinkle on an old threat: pure media manipulation.

The technology has evolved from splicing audio and videotape to using Photoshop and other editing suites, and now GANs offer an entirely new way to play with media.

Deepfakes attract a lot of attention and circulate quickly, especially when they feature a public figure. Such content inevitably draws the attention of hackers, who can use it to lure people into clicking on something with a hidden malicious link or redirect them to a fake website while the content plays. Often, hackers do not need to dig very deep to achieve that effect.

Hackers rarely waste time and effort on complicated methods when simpler means achieve the same result. This suggests that deepfakes, at least for now, do not pose a severe cybercrime threat.

There have been a limited number of reported cases of voice fraud aimed at convincing company employees to send money to fraudulent accounts. These appear to be a new twist on fraudsters' business email compromise phishing tactics, and they show how attackers keep trying new ways of mounting attacks and bypassing security measures.

The greatest concern of all is deepfake content used in personal defamation attacks that attempt to smear the reputations of individuals, whether in the workplace or in their personal lives.

Another form of misconduct is the possibility of competitors using deepfakes to defame executives or businesses. The most harmful scenario, however, is information warfare during national emergencies and political elections, where deepfake content is widely expected to fuel disinformation campaigns.

Summing it up

Tools to create deepfakes are now easily accessible, and setting legal limits on their use has become a point of concern. But every coin has two sides: with every new technological innovation, there will always be some people who find ways to use it to the detriment of others.

Moreover, deepfake technology originates from the same advances as other machine learning tools that enhance our lives, including those used to detect malware.

Creating a fake video to spread misinformation is a fraudulent practice, and recognizing disinformation takes more than detection alone; it requires judging the context of what is true, reasonable, or probable. So the answer to ‘Is deepfake a matter of concern?’ is a big yes. As the technology grows, so do the problems it brings, and solving them will take experimentation and a willingness to work out what is right and what is wrong.

Download our latest whitepapers to learn more about artificial intelligence and technology.