In early 2026, a critical alarm was raised about how artificial intelligence, a technology poised to transform our lives, is being brazenly misused to violate women’s dignity and privacy online. This concern was formally brought to the attention of the Government of India by Rajya Sabha MP Priyanka Chaturvedi, who urged immediate action against the growing abuse of AI tools on social media platforms.
The Problem: AI Being Used to Sexualise Women’s Images
The issue revolves around generative AI tools being misused to manipulate images of women without their consent. On certain social media platforms, users are uploading photographs of women and prompting AI systems to alter these images in sexually explicit or suggestive ways. These manipulated visuals are then circulated publicly, often leading to harassment, humiliation, and emotional distress for the women targeted.
What makes this trend particularly alarming is the ease with which such tools can be accessed and misused. A simple photograph, even one taken from a public profile, can be transformed into explicit content within seconds. This places women at constant risk simply for having an online presence.
Political and Public Response
Priyanka Chaturvedi, in her communication to Union IT Minister Ashwini Vaishnaw, described this practice as a gross misuse of artificial intelligence and a serious breach of privacy. She highlighted that such acts may already fall under existing cybercrime and privacy laws, yet enforcement and platform accountability remain weak.
Her intervention has sparked wider public debate. Women across social media have shared concerns, warnings, and personal experiences, urging others to be cautious while posting images online. Digital rights activists and legal experts have also termed this practice a form of digital sexual harassment enabled by unchecked AI deployment.
Why This Issue Goes Beyond Technology
Artificial intelligence itself is not the problem. AI has immense potential in healthcare, education, governance, and accessibility. However, when powerful tools are released without adequate safeguards, they can easily be weaponised.
The current situation exposes several systemic gaps:
- Consent is completely bypassed when images are altered without permission
- Women are disproportionately targeted, reinforcing existing online gender-based harassment
- Platforms offering such AI features often lack clear opt-out mechanisms or robust abuse prevention
- Legal frameworks struggle to keep pace with rapid AI innovation
Without intervention, such misuse risks normalising digital violations and further shrinking safe online spaces for women.
The Cost of Inaction
If regulators and technology companies fail to act decisively:
- Women may increasingly withdraw from social media to avoid harassment
- AI-generated content could be used for blackmail, defamation, or reputational damage
- Trust in digital platforms and emerging AI technologies may erode significantly
This is not just a technological concern. It is a societal one. The consequences impact real people, real careers, and real mental health.
The Way Forward
Addressing AI misuse requires coordinated action:
- Government action to clarify regulations, strengthen enforcement, and hold platforms accountable
- Platform responsibility to implement consent-based image processing, stricter moderation, and rapid takedown systems
- Ethical AI development with safety-by-design principles embedded into tools before public release
- Public awareness to educate users about risks, reporting mechanisms, and digital rights
AI innovation must go hand in hand with responsibility. Progress cannot come at the cost of dignity and safety.
Conclusion
Artificial intelligence can empower societies, but without guardrails, it can also magnify harm. The concerns raised by Priyanka Chaturvedi serve as a wake-up call for policymakers, tech companies, and citizens alike. Protecting women’s rights in digital spaces must be treated as urgently as safeguarding them in the physical world.
India stands at a crucial crossroads where decisions made today will define how safely AI coexists with society tomorrow.
References
- The Times of India, "Priyanka Chaturvedi writes to govt on AI abuse, flags sexualisation of women": https://timesofindia.indiatimes.com/india/priyanka-chaturvedi-writes-to-govt-on-ai-abuse-flags-sexualisation-of-women-seeks-urgent-action/articleshow/126302756.cms
- The Hindu, "RS MP Priyanka Chaturvedi flags gross misuse of AI on social media": https://www.thehindu.com/news/national/rs-mp-priyanka-chaturvedi-flags-gross-misuse-of-ai-on-social-media-writes-to-it-minister-vaishnaw/article70463282.ece
- Deccan Herald, "Grok AI faces flak over gross misuse of gen AI to edit pictures of women on X platform": https://www.deccanherald.com/technology/artificial-intelligence/grokai-faces-flak-over-gross-misuse-of-gen-ai-to-edit-pictures-of-women-on-x-platform-3849196
Parvesh Sandila is a results-driven tech professional with 8+ years of experience in web and mobile development, leadership, and emerging technologies.
After completing his Master’s in Computer Applications (MCA), he began his journey as a programming mentor, guiding 100+ students and helping them build strong foundations in coding. In 2019, he founded Owlbuddy.com, a platform dedicated to providing free, high-quality programming tutorials for aspiring developers.
He then transitioned into full-time development, where his hands-on expertise and problem-solving skills saw him grow into a Team Lead and Technical Project Manager, successfully delivering scalable web and mobile solutions. Today, he works with advanced technologies such as AI systems, RAG architectures, and modern digital solutions, while also collaborating through a strategic partnership with Technobae (UK) to build next-generation products.


