The Indian government has granted social media platform X, formerly known as Twitter, a short extension to respond to concerns surrounding its AI chatbot, Grok, after the tool was linked to the creation of objectionable content.
On January 6, 2026, the Ministry of Electronics and
Information Technology (MeitY) allowed X an additional 72 hours to submit its
formal response, pushing the final deadline to 5 PM on January 7, 2026. The
extension was approved after the company informed authorities that key members
of its global legal and policy teams were unavailable due to public holidays in
the United States.
What the Government Is Demanding
While the full compliance report is still awaited, officials
confirmed that X has already acted on immediate takedown directives, removing
the specific illegal and objectionable material identified by the government.
However, MeitY has gone beyond content removal and is now
seeking a deep-dive assessment of Grok’s internal safeguards. The ministry has
instructed X to conduct and submit a comprehensive review covering technical
controls, moderation processes, and governance mechanisms to ensure such misuse
cannot recur.
Why Grok Came Under Scrutiny
The notice was triggered after reports surfaced alleging
that Grok was being exploited to generate sexually explicit and digitally
altered (“morphed”) images, with women and children among the primary targets.
The issue drew sharp reactions from lawmakers and civil society groups,
prompting swift intervention from the government.
MeitY had initially issued a 72-hour ultimatum on January 2, directing the
platform to explain how such content had slipped through its safeguards and what
corrective steps were being taken.
What’s at Stake for X
If X fails to satisfy the government with its response, the
consequences could be significant. Authorities have indicated that the platform
risks losing its “safe harbour” protection under Section 79 of the Information
Technology Act. Such a move would strip X of legal immunity for third-party
content, potentially exposing it to direct liability for material generated or
shared on its platform.
The case is being closely watched as it could set a precedent for how AI-driven tools are regulated in India, particularly when they intersect with issues of child safety, gender-based abuse, and platform accountability.
By Advik Gupta
