
What is deepfake technology and what are the rules regarding it in India? Know in detail

It is nothing new for fake pictures and videos to surface or go viral. For as long as photographs and films have existed, people have manipulated them for deception or entertainment. As more people have come online and Internet usage has grown, misleading photos, videos and the like have multiplied. But now, instead of merely altering an image with editing software like Photoshop or re-editing a video to give it a different look, fake photos, videos and audio clips are being produced with other techniques, which makes it difficult to tell whether they are real or fake. All of this is being done through a technology called ‘deepfake’. Let us examine the facts related to this technology…

A few days ago, a video of actress Rashmika Mandanna went viral on social media and drew many reactions from viewers. But the truth is that the video was not of Rashmika at all; it showed someone else and had been manipulated with deepfake technology to look absolutely real. After this, fake videos of several other celebrities, including Kajol and Katrina Kaif, surfaced. For some time now, there has been a trend of making fake videos of well-known personalities and circulating them widely. This misuse of technology is deeply worrying. Prime Minister Narendra Modi has also expressed concern about it and has asked the media to make people aware of the issue.

What is a deepfake?

‘Deepfake’ is a combination of ‘deep learning’ and ‘fake’. Using artificial intelligence (AI), a fake copy of a media file (image, audio or video) is created that looks exactly like the original and speaks or sounds just like the person concerned. In other words, deepfakes, in their most common form, are videos in which a person’s face has been replaced with a computer-generated face. These artificial videos are created using digital software, machine learning and face swapping.
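
To make the idea of face swapping concrete, here is a deliberately naive sketch: detect a face in one image and paste it over the face region of another. This is only an illustration of the basic concept, not a real deepfake pipeline, which relies on learned models and careful blending; the file names source.jpg and target.jpg are hypothetical, and the sketch assumes the OpenCV (cv2) library is installed.

```python
# Naive face-swap sketch: copy the first detected face from one image onto
# the face region of another. Real deepfakes use machine learning, not this
# simple copy-and-resize approach.
import cv2

# Haar-cascade face detector that ships with OpenCV
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def first_face(image):
    """Return (x, y, w, h) of the first face found in the image."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found")
    return faces[0]

source = cv2.imread("source.jpg")   # hypothetical image whose face is copied
target = cv2.imread("target.jpg")   # hypothetical image whose face is replaced

sx, sy, sw, sh = first_face(source)
tx, ty, tw, th = first_face(target)

# Resize the source face to the target face's size and paste it in place.
target[ty:ty + th, tx:tx + tw] = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))
cv2.imwrite("swapped.jpg", target)
```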

In these videos, images are combined to create new footage showing events, statements or actions that never actually happened. The result can be made more convincing by using a Generative Adversarial Network (GAN), in which a ‘generator’ network produces fakes while a ‘discriminator’ network tries to spot them, each improving through competition with the other. Deepfakes differ from other kinds of false or misleading information because it is so difficult to tell whether the video is real or fake.
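
The adversarial training loop behind a GAN can be sketched in a few lines of code. The following is a minimal illustration, assuming the PyTorch library is available; the toy-sized networks and the random stand-in for ‘real’ data are for demonstration only, whereas actual deepfake systems train large convolutional models on face images.

```python
# Minimal GAN training loop: a generator learns to produce fakes that a
# discriminator cannot distinguish from "real" samples.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # illustrative sizes only

# Generator: maps random noise to a fake sample
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: outputs a score for how "real" a sample looks
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(32, data_dim)       # stand-in for real face data
    fake = G(torch.randn(32, latent_dim))  # generator's attempt at a fake

    # 1) Train the discriminator to label real samples 1 and fakes 0.
    d_loss = (loss_fn(D(real), torch.ones(32, 1))
              + loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator so the discriminator labels its fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```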

How the term came into existence

University researchers and special-effects studios have been manipulating videos and images for a long time, but the term ‘deepfake’ was born in 2017, when members of an online community of the same name posted doctored pornographic clips on the popular website Reddit. In these videos, the faces of celebrities, including Gal Gadot, Taylor Swift and Scarlett Johansson, were superimposed onto the faces of adult-film actors. The technique began to be taken seriously in 2019, when hackers imitating the voice of the CEO of a UK-based company made a phone request that resulted in a fraudulent bank transfer of $243,000. Similarly, in 2021, criminals copied the voice of a Japanese company’s director and got the company’s branch manager to transfer $35 million to a fraudulent account. These examples highlight the emerging threat posed by deepfakes and underline the urgent need for advanced cybersecurity measures to prevent fraud in every sector, including finance.

What are the laws regarding this technology in India?

Currently, there is no law in India specific to deepfakes, but several other laws can be used to deal with this cybercrime. These include Section 66E and Section 66D of the Information Technology Act, 2000, as well as Section 51 of the Indian Copyright Act, 1957.

Section 66E: This section of the Information Technology Act, 2000 applies when a person’s image is captured, published or transmitted in mass media through a deepfake, violating that person’s privacy. The offense is punishable with imprisonment of up to three years or a fine of up to Rs 2 lakh.

Section 66D: This section says that anyone who uses a communication device or computer resource with the intention of committing fraud can be prosecuted. An offense under this provision carries imprisonment of up to three years and/or a fine of up to Rs 1 lakh.

Indian Copyright Act, 1957: Under Section 51 of this Act, any unauthorized use of a work over which another person holds exclusive rights is treated as infringement, allowing copyright owners to take legal action. Despite the absence of a law specific to deepfakes, the Ministry of Information and Broadcasting issued an advisory on January 9, 2023, urging media organizations to exercise caution and to label manipulated content.

What is the Bletchley Declaration?

Keeping in mind global concerns about manipulation through AI, 28 countries, including the United States, Britain, Australia, China and Brazil, along with the European Union, signed an agreement on November 1, 2023 at the AI Safety Summit held in Britain. The agreement is known as the Bletchley Declaration after the summit’s venue, Bletchley Park. Through this declaration, the signatories agreed to work together to deal with the challenges arising from AI. At this first event of its kind, the participating countries appeared to agree that AI could create a devastating crisis for humanity.

What do experts say?

According to American Internet pioneer Vint Cerf, as a society we are reaching a point where we have tools that can mislead in dangerous ways. To guard against this, he has urged everyone to learn critical thinking. He said we should ask ourselves: where did this material come from? What is its purpose? Am I being persuaded to agree to something I shouldn’t agree to? According to Cerf, we should think seriously about what we see. Critical thinking is the ability to effectively analyze information and make decisions, and it also includes being aware of our own biases.

The technology can be helpful in many tasks

It is fair to ask whether deepfakes are always malicious. The answer is absolutely not: many are entertaining, and some are helpful. Voice-cloning deepfakes can help restore the voices of people who have lost them to disease. Deepfake videos can also bring galleries and museums to life. For example, the Dalí Museum in Florida has a deepfake of the painter Salvador Dalí, who shows off his art and takes selfies with visitors.

Also used in the entertainment industry

In the entertainment industry, the technology can be used to improve the dubbing of foreign-language films and to bring deceased actors back to the screen. Deepfakes and similar techniques have been used in films for a long time. For example, after the death of Fast and Furious actor Paul Walker, his brother stood in for him, and Walker’s face and voice were digitally recreated so that he appeared and sounded exactly like himself. Recent Star Wars films have featured computer-generated likenesses of Carrie Fisher and Peter Cushing, shown just as they appeared in the original 1977 film. In many Marvel films, actors including Michael Douglas and Robert Downey Jr. have been made to look younger through de-aging techniques.

Misuse of technology can be disastrous

Misuse of any technology can prove disastrous, and deepfakes are a perfect example. They can be used to ruin a person’s political, social and economic life. A common concern is that deepfakes could be used to destabilize democracy or interfere in politics. They can also be used to defame someone.

Democracy can be harmed in this way

The use of deepfakes can cause immense damage to democracy. To defeat a candidate one dislikes, to help a favored candidate win, or to help another candidate win in exchange for money from a rival party, videos, audio clips and photographs of the targeted candidate can be released in which he or she appears to say things that harm the unity, integrity and social fabric of the country, or that make the candidate look corrupt. Such videos are enough to assassinate the character of the targeted candidate, and in this way the election process can easily be influenced. Likewise, spreading misleading information about a person associated with a constitutional institution can pose a big challenge to the democratic system.
