In the recent past, numerous deepfake videos featuring famous actors have been publicly shared on social media platforms. This has rightly raised concerns over the misuse of technology for creating fake and disturbing narratives. Many advertisements and sponsored posts on social media have also been found to contain deepfakes of famous personalities and celebrities. Recently, a deepfake clip of a female celebrity went viral on social media, and she had to publicly express her dismay and issue a clarification. Artificial intelligence (AI)-generated deepfakes found their way into political campaigns during the state elections of Delhi in 2020. Deepfakes are considered a dangerous and damaging form of misinformation. This has prompted scrutiny of the legal obligations of social media platforms and of the information technology (IT) rules pertaining to digital deception. India is drafting rules to detect and limit the spread of deepfake content and allied harmful AI media.
Deepfakes
Deepfakes are digital media, i.e., videos, audio and images, that are edited and manipulated using AI. They constitute fake content created using powerful AI tools. The term ‘deepfake’, a combination of the words ‘deep learning’ and ‘fake’, refers to fabricated videos made using existing face-swapping techniques and technology. Deep learning is a branch of machine learning (ML) that applies neural networks to massive data sets, enabling tasks such as image recognition, natural language processing, speech recognition and more. When such a system is fed enough data about a person, it can generate fakes that look and behave much like the real person, blurring the lines between fiction and reality. Creating fake content, however, is not an intended application of deep-learning technology.
There are many benefits of deepfakes when they are used in education, film production, criminal forensics and artistic expression. However, they could also be used to exploit people, sabotage elections and spread large-scale misinformation.
The origin of the word ‘deepfake’ can be traced back to 2017, when a Reddit user with the username ‘deepfakes’ posted explicit videos of celebrities. Though such videos may sometimes be entertaining, they carry serious ethical concerns regarding consent and potential misinformation, and can be used to damage reputations, fabricate evidence and undermine trust in democratic institutions.
At present, generating deepfakes is not that difficult. They can easily be created by semi-skilled and even unskilled individuals by morphing audio-visual clips and images. The tools required for the creation and circulation of such disinformation have become easier to use, quicker, cheaper and more accessible than before, making deepfakes and allied technologies even more difficult to detect. However, more resources are now available to equip individuals against their misuse. For instance, the Massachusetts Institute of Technology (MIT) created a Detect Fakes website to help people identify deepfakes by focusing on small, intricate details.
Studies have found that deepfake technology has brought a sharp increase in cases of gender-based violence, including the shaming of women and the creation of non-consensual pornographic videos.
Existing Laws in India to Prevent Deepfakes
India does not have any specific laws to address deepfakes and AI-related crimes. However, there are provisions under various legislations that could offer both civil and criminal relief. For instance, Section 66E of the Information Technology (IT) Act 2000 is applicable in cases of deepfake crimes involving the violation of an individual’s privacy, such as capturing, publishing or transmitting a person’s images in the mass media without their knowledge or consent. Such an act is an offence punishable with up to three years of imprisonment and a fine of rupees two lakh. Further, using communication devices or computer resources with malicious intent to impersonate or cheat is a punishable offence under Section 66D of the IT Act. If found guilty, a person could be fined rupees one lakh and/or imprisoned for up to three years.
Various sections of the IT Act, such as Sections 67, 67A and 67B, could be used to prosecute individuals for publishing or transmitting deepfakes that promote obscenity or contain sexually explicit acts. The IT Rules prohibit hosting ‘any content that impersonates another person’ and require social media platforms to take down ‘artificially morphed images’ of individuals expeditiously. If a platform fails to take down such content, it risks losing its ‘safe harbour’ protection.
Safe harbour is a provision that protects social media companies from regulatory liability for third-party content shared by users on their platforms.
In addition to these, cybercrimes associated with deepfakes could also be addressed through provisions of the Indian Penal Code (IPC) 1860, such as Sections 509 (words, gestures or acts intended to insult the modesty of a woman), 499 (criminal defamation), and 153A and 153B (spreading hate on communal lines), among others. In the recent incident of the celebrity deepfake, the Delhi Police Special Cell reportedly filed an FIR against unknown persons by invoking Sections 465 (forgery) and 469 (forgery to harm the reputation of a party).
If the deepfake content involves the use of any copyrighted image or video, the Copyright Act of 1957 could also be invoked. Section 51 of the Act prohibits the unauthorised use of any property over which another person has an exclusive right.
Inadequacy of Existing Laws
However, in the current scenario, these existing laws are proving inadequate to deal with the varied range of cybercrimes, as they were not designed keeping in mind new and emerging technologies. Hence, in addition to cyber laws, the country also needs a regulatory approach to emerging technologies like AI. The regulatory framework must be based on a market study that assesses the scope of the innumerable damages AI technology could perpetrate if left uncontrolled. The present laws are not AI-specific and have many loopholes, as they address only instances wherein the illegal content has already been uploaded and the harm to the victim has already occurred. Thus, new laws should be designed with a focus on preventive measures to minimise the scope of harm; for instance, users should be able to identify that images are morphed. Further, the current regulations focus only on online takedowns in the form of censorship or on criminal prosecution. They do not reflect a deeper understanding of how generative AI technology works and the amount of damage it could cause. Moreover, the current laws place the entire burden on the victim to file a complaint.
Proposed Law on Deepfakes
In November 2023, Ashwini Vaishnaw, the Union Minister of Electronics and Information Technology, formed a regulation board to control the spread of deepfakes on social media platforms. The minister also held a meeting with social media platforms, AI companies and industry bodies for drafting a clear actionable plan to tackle deepfakes and misinformation. According to the minister, the plan would have four key pillars: detection of deepfakes, their prevention by removing or reducing their virality, strengthening reporting mechanisms, and spreading awareness about the technology.
The new regulation could come in the form of a new law, new rules or amendments to existing rules. As of now, all the major social media companies have agreed to the labelling and watermarking of deepfakes. The Indian government is also planning to collaborate with other countries and regions to ensure a global effort in combating such content.
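The labelling and watermarking that platforms have agreed to can be done visibly (an on-screen label) or invisibly (a mark hidden in the pixel data). As a simplified, hypothetical illustration of the invisible variety, and not any platform's actual scheme, a short label can be embedded in the least significant bits of an image's pixel values:

```python
# Toy least-significant-bit (LSB) watermarking sketch. Real provenance
# schemes (e.g. signed metadata or robust watermarks) are far more
# sophisticated; this only illustrates the basic idea of an invisible mark.

def embed_watermark(pixels, label):
    """Hide each bit of `label` in the LSB of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in label.encode() for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for this label")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Read back `length` characters hidden by embed_watermark."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return "".join(
        chr(sum(bit << (7 - i) for i, bit in enumerate(bits[n : n + 8])))
        for n in range(0, len(bits), 8)
    )

# Example: label a small synthetic 8-bit grayscale image as AI-generated.
image = [120, 121, 119, 200, 201, 199, 50, 51] * 8  # 64 pixel values
marked = embed_watermark(image, "AI")
print(extract_watermark(marked, 2))  # -> AI
```

Because each used pixel value changes by at most 1, the mark is imperceptible to the eye, yet any tool that knows the scheme can read it back and flag the content as AI-generated.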
According to Rajeev Chandrasekhar, the Minister of State for Electronics and Information Technology, the existing laws are adequate to deal with deepfakes, but they have to be enforced strictly. A special officer (Rule 7 officer) would be appointed to closely monitor violations. Furthermore, an online platform could be established to assist affected victims and citizens in filing FIRs for deepfake crimes.
What the Judiciary Says
A public interest litigation (PIL) petition to block access to websites that generate deepfakes was filed in the Delhi High Court. The bench of Acting Chief Justice Manmohan and Justice Mini Pushkarna expressed reservations about passing any judgment, as doing so would mean curtailing the freedom of the Internet, and stated that the government is better suited to address the issue in a balanced manner.
The court has not yet reached a decision, as it has postponed the hearing.
MeitY’s Advisory to Social Media Platforms
The Ministry of Electronics and Information Technology (MeitY) has sent advisories to social media platforms, including Facebook, Instagram and YouTube, to remove misleading AI-generated deepfake content within 24 hours.
Furthermore, an advisory invoking Section 66D of the IT Act and Rule 3(1)(b) of the IT Rules has also been sent to various social media firms, stating that all social media platforms are required to remove any such content within the timeframes specified under the regulations.
After receiving reports in February 2023 of the potential use of AI-generated deepfakes and the damage doctored content was causing, the MeitY immediately issued advisories to the chief compliance officers of various social media platforms.
Moreover, in view of rising AI-led misinformation on WhatsApp, the central government is planning to put into effect a contentious provision that would require the platform to share details about the first originator of a message. With many deepfake videos of politicians circulating on WhatsApp, the government is in the process of sending an order to the company under the IT Rules, 2021, seeking the identity of the first person who shared such a video on the platform.
Prime Minister on Deepfakes
Prime Minister Narendra Modi has repeatedly highlighted the issue of the use and abuse of AI on social media platforms. During the virtual G20 Summit on November 22, 2023, he spoke about the emergence of deepfakes on social media, calling for ‘global regulations for AI’, and shared the concern of other world leaders about the negative effects of AI. Highlighting the dangers of deepfakes for society and individuals, he stated that efforts must be made to bring the innumerable benefits of AI to the common people, while at the same time regulating its use well so that it remains safe for society.
How to Identify Deepfakes
Deepfake videos often exhibit unnatural eye movements or gaze patterns. They may also show inconsistent colour tones and unnatural body shapes and movements, and may struggle to maintain a natural posture or physique.
To detect a deepfake, take a screenshot of the video and run a reverse image search on Google to check the source and find the original video. Once the screenshot is uploaded, the search results would show whether the visuals are taken from previous videos.
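The frame-matching idea behind reverse image search can be sketched programmatically. A simple stand-in for the robust features real search engines use is an average hash: fingerprint each frame by which pixels are brighter than the frame's mean, then compare fingerprints by Hamming distance. The frames below are tiny synthetic examples, not real footage:

```python
# Toy average-hash (aHash) comparison between two grayscale "frames",
# a simplified illustration of how a screenshot can be matched against
# known source footage. Real reverse-image-search systems are far more robust.

def average_hash(pixels):
    """Fingerprint: 1 bit per pixel, set if the pixel is above the mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same source."""
    return sum(a != b for a, b in zip(h1, h2))

# Synthetic 4x4 frames: the 'suspect' frame is the original plus a slight
# brightness shift, as a screenshot of the same footage might be.
original = [ 10,  20, 200, 210,
             15, 190,  30, 220,
            205,  25, 195,  35,
             40, 215,  45, 185]
suspect = [p + 3 for p in original]        # near-identical frame
unrelated = [255 - p for p in original]    # entirely different content

orig_hash = average_hash(original)
print(hamming_distance(orig_hash, average_hash(suspect)))    # -> 0
print(hamming_distance(orig_hash, average_hash(unrelated)))  # -> 16
```

A distance of zero (or near zero) indicates the suspect frame was lifted from the known footage, which is exactly the signal a reverse image search surfaces when a deepfake reuses visuals from an earlier video.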
How the World is Dealing with Deepfakes
US In October 2023, the President of the US, Joe Biden, signed an executive order on AI to manage its risks ranging from national security to privacy. The Department of Commerce has been asked to develop standards to label AI-generated content to enable easier detection of deepfakes through watermarking. California and Texas have passed laws to criminalise the making and spreading of deepfake videos that intend to influence the outcome of elections. Virginia has also imposed criminal penalties for the distribution of nonconsensual deepfake pornography.
Further, the Deep Fakes Accountability Bill 2023 has been introduced in the US Congress to protect national security against the threats posed by deepfake technology and to provide legal recourse to victims of harmful deepfakes.
China In January 2023, the Cyberspace Administration of China introduced new regulations to restrict the use of deep synthesis technology and curb disinformation. The new policy ensures that any doctored content is explicitly labelled and can be traced back to its source. Deep synthesis service providers are required to abide by local laws, respect ethics, and maintain the ‘correct political direction and correct public opinion orientation’.
European Union The EU, too, has tightened its Code of Practice on Disinformation to ensure that social media giants like Google, Meta and Twitter start flagging deepfake content, failing which they would face multi-million-dollar fines. Initially introduced in 2018 as a voluntary self-regulatory instrument, the Code now has the backing of the Digital Services Act and the Digital Markets Act, which aim to increase the monitoring of digital platforms and curtail various kinds of misuse. Additionally, under the proposed EU AI Act, deepfake providers would be subject to transparency and disclosure requirements.
Way forward In India, AI governance cannot be restricted to just a law. Multifaceted reforms have to be brought in to establish standards of safety, increase awareness among users and build institutions. As AI is a powerful and beneficial tool, it should be adopted in a way that improves human welfare while also mitigating the challenges it may pose.
© Spectrum Books Pvt. Ltd.