AI’s Rise in India Is Exposing Women to New Digital Harms
With over 700 million weekly active users and fourfold year-over-year growth, artificial intelligence giant OpenAI presents extraordinary opportunities for economic growth and prosperity. The US-based research company, best known for creating ChatGPT, has openly recognized India as a key testing ground for many of its features and tools; consequently, the country has become OpenAI's second-largest and fastest-growing market. In August 2025, OpenAI launched its "Learning Accelerator," an "India-first" initiative designed to advance educational opportunities through AI research and development. In theory, the growing demand for and adoption of OpenAI's tools could accelerate economic growth in the world's most populous country, home to roughly 1.5 billion people. However, closer examination of this landscape reveals a pervasive protection gap: Indian women confront ongoing socio-economic challenges amid the widespread proliferation of AI deepfakes.
The rapid adoption of OpenAI in India has created powerful and dangerous new avenues for targeting women and gender minorities online. OpenAI claims that its goal is to develop AI in ways that benefit humanity; however, the rise of AI deepfakes associated with the platform's tools has rendered many Indian women vulnerable to privacy violations and harassment. According to a 2025 report published by the Rati Foundation, an India-based charity supporting victims of online abuse, the pace of AI adoption in India has enabled many users to create digitally manipulated videos or images of real people, otherwise known as deepfakes. Women account for 92 per cent of AI-generated deepfake victims, an overwhelming majority. Many deepfakes feature nude images of women, which are illegal to create or share in India without the woman's consent. Other categories of deepfake imagery are also highly problematic, yet often overlooked. For instance, 10 per cent of all deepfake cases reported to helplines involve images that might be deemed culturally acceptable in Western countries but carry heavy stigma in many Indian households, most notably public displays of affection. This figure likely understates the real number of cases in this category, as culturally influenced social stigma across the country discourages women from reporting deepfakes. While OpenAI presents opportunities for economic growth across India, its rapid adoption has left women unprotected amid the spread of deepfakes.
The disproportionate impact of AI deepfakes on Indian women is amplified by the underdeveloped legal landscape surrounding digital consent. India is one of the many countries where deepfakes operate in a "legal grey zone," as no law directly recognizes AI-generated sexual content as a form of harm. Currently, only a short list of Indian statutes may apply to certain cases involving AI deepfakes, including Sections 111, 319, 336, 353, and 356 of the 2023 Bharatiya Nyaya Sanhita (BNS). The BNS contains several provisions relevant to deepfakes, prohibiting "cybercrimes, personation, forgery, misinformation, [and] defamation" in instances of online misconduct. While guilty parties may face fines and up to five years' imprisonment, policy researchers and feminists have criticized the code for a limited scope that fails to encompass all instances of deepfakes. India's Impact and Policy Research Institute further highlighted the gap in current legislation, acknowledging that "while India has laws addressing cybercrime and defamation, there is no specific regulation focused on deepfakes." Moreover, Indian women most often cannot and do not consent when their images are appropriated. In fact, in the majority of cases involving AI-generated sexual content, the perpetrator and victim have no prior connection, meaning the victim is never even asked for consent before her images are used. The lack of legal defence mechanisms for Indian women whose images are stolen and manipulated without their knowledge or consent reflects the urgent need for increased protection for this vulnerable group.

The rise of AI deepfakes is a pressing crisis threatening global progress towards gender equality. The stolen identities of countless Indian women reflect serious privacy and security violations, yet remain overlooked by many legal bodies and government authorities. Worse, online abuse involving "nudifying" AI apps, which are separate from OpenAI and digitally strip women's clothing from their images, has normalized extreme forms of abuse. This amplifies the severity of the crisis: even if OpenAI were subjected to increased regulation, emerging applications could perform similarly damaging edits.
Already, OpenAI's tools and nudifying apps have driven many Indian women to withdraw from online spaces that could otherwise provide enormous economic opportunities, especially for those without any post-secondary education. In many ways, social media has fueled the perception of a borderless world, allowing women to advance gender equality by connecting, uniting, and empowering one another through daily interactions and social movements. However, the growing fear of abuse and harassment, combined with the lack of legal protections for Indian women online, compromises their ability to reap such benefits. AI deepfakes significantly amplify misogyny, posing severe safety and cultural ramifications globally.
Numerous existing solutions have the potential to remedy the crisis of AI deepfakes if effectively implemented. Social media platforms must assume a key role in combating the surge of AI deepfakes. According to the Rati Foundation, platforms such as YouTube, Meta, X, Instagram, and WhatsApp often respond poorly to nonconsensual online image sharing, including by failing to enforce clear protocols on image manipulation and involuntary sharing. Improved accountability and reporting mechanisms within each platform are crucial to halting the spread of AI deepfakes. Further, like many other countries, India requires new laws that specifically recognize AI-generated sexual content as a distinct form of harm. Acknowledging that legal progress may be slow, feminists and other activists must, in the meantime, work to uplift the non-governmental and civil society actors who actively provide resources to victims of online abuse.
Without urgent action, AI deepfakes will continue to spread rapidly and severely violate women's privacy and security. As the world's largest democracy, with over 850 million internet users, India faces acute risks from the rise of AI deepfakes, which have already exacerbated gender-related challenges, leaving many women vulnerable and driving them to withdraw from online spaces. With the international community's support, India must confront this crisis with decisive legal, technological, and social action to ensure that the country's digital future does not come at the expense of its women.
Edited by Marina Gallo.
Featured image: Photo by Zulfugar Karimov is licensed under the Unsplash License.