
Well, that’s deeply fake [Synthetic media and misinformation]

Our data is sold and we are being manipulated on a daily basis. The more you express yourself on social media, the more it owns you. Don’t agree? Watch Zuckerberg confess as much:

Data is a powerful tool, and social media platforms are merely survey forms built to collect it. It has the power to change governments and affect politics. Even Trump tells us the same:

Astonished much?

Well, the two videos you have just viewed are FAKE, or more precisely, deep fakes. So what is a deep fake?

It is fake media (audio, video, or images) created using deep neural networks. Generative Adversarial Networks (GANs) are the core technology behind deep fakes. Two neural networks compete to produce and discern high-quality faked images: one is the “generator” (which creates images that look like the originals), and the other is the “discriminator” (which tries to figure out whether an image is real or simulated).
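To make the generator-versus-discriminator idea concrete, here is a deliberately tiny sketch of adversarial training, not an image GAN: the generator is just an affine map of noise and the discriminator a logistic unit, trained to imitate a 1-D Gaussian. All names and parameters here are illustrative assumptions; real deep fake systems use deep convolutional networks on images, but the two-player loop is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# "Real" data the generator must learn to imitate: a 1-D Gaussian.
REAL_MEAN, REAL_STD = 4.0, 1.25
def sample_real(n):
    return rng.normal(REAL_MEAN, REAL_STD, n)

# Generator: affine map of noise, G(z) = a*z + b (learnable a, b).
a, b = 1.0, 0.0
# Discriminator: logistic regression on a scalar, D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    x_real = sample_real(batch)
    z = rng.normal(0, 1, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of -log D(real) - log(1 - D(fake)), derived by hand.
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # --- Generator update (non-saturating loss): push D(fake) toward 1 ---
    z = rng.normal(0, 1, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    dx = -(1 - d_fake) * w  # d(-log D(fake)) / d x_fake
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

# After training, the generator's output distribution drifts toward the real one.
fake = a * rng.normal(0, 1, 1000) + b
print(f"generated mean={fake.mean():.2f}, real mean={REAL_MEAN}")
```

The key design point is that neither network ever sees the other's parameters; each only sees the other's outputs, and the competition alone pulls the generated distribution toward the real one.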

If you want to see what kind of images a GAN can produce, check out this website. All the images shown there are machine-generated, and none of them corresponds to a real human being:

The cost of producing these new forms of synthetic media has decreased significantly in the last few years, given increasing amounts of training data, computing power, and effective publicly shared approaches and code.

So what can be done with a deep fake?

SIMULATED VOICE: Tools can convincingly simulate an individual’s voice, and this capability has already been developed and is commercially available from several providers.

EDIT VIDEO ELEMENTS: There are tools to remove elements from, or add elements to, an image or video. For example, a person can be cleanly erased from an entire stretch of CCTV footage.

FACIAL RE-ENACTMENT: Want someone to say something or make a particular expression? You’ve got it. Some tools allow the transfer of the facial and upper-body movements of one person onto the realistic appearance of another real person’s face and upper body.


And there are many others. Want to see a combination of these features in action? Check out this fake video of Obama telling us about the dangers of this technology:

The use of this machine learning technique was mostly limited to the AI research community until late 2017, when a Reddit user who went by the moniker “Deepfakes” — a portmanteau of “deep learning” and “fake” — started posting digitally altered pornographic videos. He was building GANs using TensorFlow, Google’s free open source machine learning software, to superimpose celebrities’ faces on the bodies of women in pornographic movies. A number of media outlets reported on the porn videos, which became known as “deep fakes”. In response, Reddit banned them for violating the site’s content policy against involuntary pornography. By this stage, however, the creator of the videos had released FakeApp, an easy-to-use platform for making forged media. The free software effectively democratized the power of GANs. Suddenly, anyone with access to the internet and pictures of a person’s face could generate their own deep fake. (Source: Guardian article)

So, what are the dangers associated with this technology?

Apart from the creation of non-consensual sexual imagery, which violates human rights and privacy, there is more to it.

Editing software, and both manual and automatic synthesis, can increasingly create perceptually realistic images whose manipulation is invisible to the naked eye and to visual analysis. You read that right. Moreover, there is currently no effective technology that can reliably determine whether a piece of media is fake or artificially created. So if someone uploads a fake video of a political leader saying things that could wreak havoc, social media platforms have no way to identify the video as fake and stop it from being uploaded. Such gripping content can easily be used to manipulate people on an enormous scale.

Furthermore, the existence of this technology creates serious problems for journalists, who rely on video as proof that an event happened.

As per this WITNESS blog post:

This changing landscape allows for new challenges to human rights and reliable journalism that potentially includes categories of disruption including:

Reality edits removing or adding into photos and videos in a way that challenges our ability to document reality and preserve the evidentiary value of images, and enhances the ability of perpetrators to challenge the truth of rights abuses.

Credible doppelgangers of real people that enhance the ability to manipulate public or individuals to commit rights abuses or to incite violence or conflict.

News remixing that exploits peripheral cues of credibility and the rapid news cycle to disrupt and change public narratives.

Plausible deniability for perpetrators to reflexively claim “That’s a deepfake” around incriminating footage or taken further, to dismiss any contested information as another form of fake news.

Floods of falsehood created via computational propaganda and individualized microtargeting, contributing to disrupting the remaining public sphere and to overwhelming fact-finding and verification approaches.

As with all other issues of technology, the primary question rebounds back: who is responsible? Is it the responsibility of social media platforms, the tech giants, or the government to find ways to curb misinformation? Can it be done without compromising our freedom of expression? Can the production and use of such a technology be restricted to a limited set of people and organizations? Can we still rely on video as evidence that something happened?

Fake news is still an unresolved issue; under such circumstances, easy access to ever-improving deep fake technology only adds to the gravity of the problem.

Every technology has its pros and cons, but for this one the cons clearly weigh more, especially in the absence of laws to govern it.

As netizens, what are our options? Trust nothing, be vigilant, and cross-verify information. We need trusted sources of news more than ever, and we need people to be aware of these techniques more than ever.

