Gaatha Sarvaiya
Indian actress Alia Bhatt was featured in a deceptive deepfake video: the viral footage showed Bhatt’s face digitally placed onto the body of another woman, shown sitting on a bed (Ojha, 2024).
A 22-year-old man was arrested for using deepfake technology to create and share morphed images of a woman (The Hindu Bureau, 2023). An obscene deepfake video of a 14-year-old girl was posted on social media platforms by a group of young men (Press Trust of India, 2024).
These headlines are not isolated incidents; they are part of a growing AI-driven assault on women’s consent and autonomy. It is a threat to security: not just digital security, but also bodily, psychological, and societal security. Anyone can be targeted by a deepfake, but women are overwhelmingly the victims: a staggering 96–98% of deepfake videos are sexually explicit, and nearly all of the people depicted in them are women (Fraser, 2025). In India, deepfake pornography isn’t just hypothetical; it is being weaponized to humiliate public figures, erase women’s agency, and strip individuals of consent and autonomy. Deepfakes represent a terrifying evolution of online gender-based violence.
Alarmingly, India’s legal architecture offers no direct recourse. While India has
laws addressing cybercrime and defamation, there is no specific regulation focused on
deepfakes.
Deepfake crimes operate in a legal grey zone: no law directly recognizes synthetic media or AI-generated sexual content as a distinct form of harm. This is what I call the “deepfake deficit”: the serious gap in how India’s laws deal with deepfakes and other forms of online sexual abuse.
Right now, our legal system doesn’t know how to handle this kind of digital harm, especially when it targets women. By focusing on deepfakes, this piece tries to highlight how our laws and policies are falling behind in protecting consent, privacy, and dignity online. More importantly, it argues that we need a legal system that’s built on feminist values, where safety means more than punishment, and justice begins with truly listening to those most affected.
In some ways, deepfakes are to videos what photoshopping is to photos. Deepfake technology, a form of artificial intelligence, can produce convincingly fake images, videos, and audio recordings. The term “deepfake” blends “deep learning” with “fake”: machine learning algorithms stitch together images and sounds to create people and events that do not exist or never actually happened (Venema, 2023; Fortinet, n.d.). While deepfake technology can be used for entertainment or satire, its most disturbing use is in non-consensual sexual content, which almost always targets women. Deepfakes aren’t just about fake videos; they’re about erasing a person’s control over their own image, their identity, and their dignity. For women, especially in a society where reputation and shame are weaponized, this can mean real-world consequences: harassment, job loss, social exclusion, or even self-harm.
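To make the mechanism above concrete, the sketch below shows the classic face-swap design behind many deepfake tools: a single shared encoder learns a compact representation of any face, while two person-specific decoders learn to render that representation as a particular person. This is a minimal illustration in PyTorch, not any specific tool’s implementation; the layer sizes, the 64×64 input resolution, and all variable names are assumptions made for readability.

```python
# Conceptual sketch only: the shared-encoder / per-person-decoder design
# used by classic face-swap deepfakes. All layer sizes are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps any 64x64 face crop to a compact latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),  # latent code: pose, lighting, expression
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Renders a latent code as one specific person's face."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 16, 16)
        return self.net(x)

encoder = Encoder()    # ONE encoder, trained on photos of both people
decoder_a = Decoder()  # trained to reconstruct person A's faces
decoder_b = Decoder()  # trained to reconstruct person B's faces

# Training (not shown): each person's photos are reconstructed through the
# shared encoder and their own decoder, minimising a pixel-wise loss. Because
# the encoder serves both decoders, it learns identity-agnostic features.

# The "swap": encode A's face, then decode with B's decoder. The output
# keeps A's pose and expression but wears B's facial identity.
face_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))  # shape: (1, 3, 64, 64)
```

The point of the sketch is how little the technique demands: a folder of photographs of each person and commodity hardware are enough, which is part of why non-consensual abuse of the technology has scaled so quickly.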
Technology is not neutral; it reflects the societal biases, values, and power structures of its developers and users (Lee, n.d.). In the case of deepfakes, we are seeing how digital tools can objectify, harass, and silence women in terrifying new ways. Deepfakes may be new technology, but their impact, especially in non-consensual sexual content, is serious and lasting.
Yet, India has no specific law to address deepfakes or AI-generated content. Currently, victims and law enforcement often turn to existing provisions in the Information Technology Act, 2000, and the Indian Penal Code (IPC). While Section 66D of the IT Act (cheating by personation using a computer resource), Section 67 of the IT Act (publishing obscene electronic content), and Section 500 of the IPC (defamation) cover some aspects, they are insufficient to address the unique challenges posed by AI-driven deepfakes (Bhale, 2025). The law often sees harm only if something is “obscene,” instead of asking whether consent was violated. That is a major gap. As a result, many deepfake victims are left without justice. Police may not know what to file the case under, courts struggle to classify the crime, and platforms are not required to take content down quickly. Meanwhile, the videos keep spreading, and the harm keeps growing.
Indian law doesn’t focus on consent. The legal system is more concerned with whether something is “obscene” or “offensive.” But harm isn’t always physical; sometimes it’s about being silenced, shamed, or stripped of control. Deepfakes do exactly that. They take away a woman’s right to control her own image, body, and identity, and they do it in a way that makes it hard for her to fight back. If we want laws that truly protect people, they need to start from the idea that consent, dignity, and autonomy matter in digital spaces just as much as in physical ones.

Deepfake videos often spread on apps like Instagram, Telegram, and X.
Most women only find out after the video has already gone viral. The damage happens fast, but getting the content removed is slow and frustrating. Even though Indian intermediary rules require platforms to take harmful content down, the process is confusing and delayed, and often doesn’t work. Victims rarely get help, and takedown timelines are rarely enforced in practice.
To truly protect people from deepfake abuse, the law needs to change, and it needs to start by listening to those most harmed. India must:
- legally recognise deepfakes as a distinct form of digital violence;
- centre consent and dignity, not just whether something looks “obscene”;
- require platforms to take content down quickly and be transparent about how they handle complaints;
- set up easy, survivor-friendly ways to report abuse and get help; and
- include voices from marginalized communities while making these laws.
Deepfakes aren’t just silly fake videos. They cause real harm. Yet the law remains silent.
It is time to prioritise consent, dignity, and safety, and to build feminist laws that protect not only women but all people in the digital age. The law must evolve with consent at its core and feminism at its foundation.
References
- Fortinet. (n.d.). What is a deepfake? Fortinet. https://www.fortinet.com/resources/cyberglossary/deepfake
- Fraser, M. (2025). Deepfake statistical data (2023–2025). Views4You. https://views4you.com/deepfake-database/
- Lee, S. (n.d.). Feminist philosophy of technology. Number Analytics. https://www.numberanalytics.com/blog/feminist-philosophy-of-technology-ultimate-guide
- Ojha, S. (2024, March). 9 well-known personalities who were victims of deepfake videos. Livemint. https://www.livemint.com/news/india/from-ratan-tata-sachin-tendulkar-to-madusudan-kela-9-well-known-personalities-who-were-victims-of-deepfake-videos-11710307982420.html
- Press Trust of India. (2024, June). 14-year-old UP girl falls prey to obscene deepfake, cops file case. NDTV. https://www.ndtv.com
- The Hindu Bureau. (2023, November). Youth arrested for sharing woman’s deep fake images. The Hindu. https://www.thehindu.com/news/national/karnataka/youth-arrested-for-sharing-womans-deep-fake-images/article67541185.ece
- Venema, A. E. (2023). Deepfakes as a security issue: Why gender matters. Women in International Security. https://wiisglobal.org/deepfakes-as-a-security-issue-why-gender-matters/
Disclaimer: All views expressed in the article belong solely to the author and not necessarily to the organisation.
About the Author
Gaatha Sarvaiya is a psychology student at Amity University, Mumbai. She is also a fellow of the Law and Public Policy Youth Fellowship (LPPYF), Cohort 5.0.