AI Models in India to Require Govt Approval; What are the Implications?
India’s Ministry of Electronics and Information Technology (MeitY) recently issued an advisory to tech platforms and intermediaries operating in India to comply with regulations outlined under the IT Rules, 2021. The advisory asks companies like Google, OpenAI, and other technology firms to “undertake due diligence” and ensure compliance within the next 15 days.
In what’s new, the IT Ministry has asked tech companies to get explicit permission from the Government of India before deploying “untested” AI models (and software products built on such models) in India.
The advisory states, “The use of under-testing / unreliable Artificial Intelligence model(s) / LLM / Generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with explicit permission of the Government of India and be deployed only after appropriately labeling the possible and inherent fallibility or unreliability of the output generated. Further, ‘consent popup’ mechanism may be used to explicitly inform the users about the possible and inherent fallibility or unreliability of the output generated.”
Although the advisory is not legally binding on platforms and intermediaries, it has drawn criticism from tech firms across the world, who suggest it might stifle AI innovation in India. Aravind Srinivas, the CEO of Perplexity AI, called it a “bad move by India” in a post on X on March 3, 2024.
To clarify the advisory, Rajeev Chandrasekhar, the Union Minister of State for Electronics and Information Technology, took to X on March 4, 2024 to shed light on the key points. He said that seeking permission from the government is only applicable to significant, large platforms, which include giants like Google, OpenAI, and Microsoft, and that the advisory does not apply to startups. He also pointed out that the advisory is aimed at stopping “untested” AI platforms from deploying on the Indian internet.
It’s worth noting that India’s home-grown Ola recently released its Krutrim AI chatbot, marketing it as having “an innate sense of India[n] cultural sensibilities and relevance”. However, according to an Indian Express report, the Krutrim AI chatbot is highly prone to hallucinations.
Besides that, MeitY has asked AI companies to “not permit any bias or discrimination or threaten the integrity of the electoral process including via the use of Artificial Intelligence model(s)/ LLM/ Generative AI, software(s) or algorithm(s).”
The fresh advisory comes in the backdrop of Google Gemini’s recent misfire, where the AI model responded to a politically sensitive question, drawing ire from the establishment. Ashwini Vaishnaw, India’s IT Minister, warned Google that “racial and other biases will not be tolerated.”
Google quickly addressed the issue and said, “Gemini is built as a creativity and productivity tool and may not always be reliable, especially when it comes to responding to some prompts about current events, political topics, or evolving news. This is something that we’re constantly working on improving.”
In the US, Google recently faced criticism after Gemini’s image generation model failed to produce images of white people, and users accused Google of anti-white bias. Following the incident, Google disabled the generation of images of people in Gemini and is working to improve the model.
Apart from that, the advisory says that if platforms or their users don’t comply with these rules, it might result in “potential penal consequences.”
The advisory reads, “It is reiterated that non-compliance to the provisions of the IT Act and/or IT Rules would result in potential penal consequences to the intermediaries or platforms or its users when identified, including but not limited to prosecution under IT Act and several other statutes of the criminal code.”
What Could be the Implications?
While the advisory is not legally binding on tech companies, MeitY has asked intermediaries to submit an Action Taken-cum-Status report to the Ministry within 15 days. This can have wider ramifications not just for tech giants offering AI services in India; it may also stifle AI adoption and overall technological progress in the country in the long term.
“I was such a fool thinking I will work bringing GenAI to Indian Agriculture from SF. We were training multimodal low cost pest and disease model, and so excited about it. This is terrible and demotivating after working 4yrs full time bringing AI to this domain in India.” — Pratik Desai (@chheplo), March 3, 2024
Many are concerned that it may create more red tape, and that large companies may be hesitant to release powerful new AI models in India, fearing regulatory overreach. So far, tech firms have released their latest, most advanced AI models in India on par with Western countries. Meanwhile, even Western countries are being cautious about enacting AI regulations that may hinder progress.
Apart from that, experts say that the advisory is “vague” and does not define what counts as “untested.” Companies like Google and OpenAI do extensive testing before releasing a model. However, as is the case with AI models trained on a large corpus of data scraped from the web, they may still hallucinate and produce incorrect responses.
Nearly all AI chatbots disclose this information on their homepage. How is the government going to decide which models are untested, and under what frameworks?
“Thanks for clarifying Mr Chandrasekhar. Not clear: 1. How an advisory is legally binding. Not an ordinance. 2. How it is backed by IT Act + legally kosher. 3. How it excludes startups. 4. What does untested mean. I appreciate your focus on safety in AI. Pls do a public consultation.” — Nikhil Pahwa (@nixxin), March 4, 2024
Interestingly, the advisory asks tech firms to label or embed a “permanent unique metadata or identifier” in AI-generated data (text, audio, visual, or audio-visual) to identify the first originator, creator, user, or intermediary. This brings us to traceability in AI.
It is an evolving area of research in the AI field, and so far, we have not seen any credible way to detect AI-written text, let alone identify the originator through embedded metadata.
OpenAI shut down its AI Classifier tool last year, which was aimed at distinguishing human-written text from AI-written text, as it was producing false positives. To fight AI-generated misinformation, Adobe, Google, and OpenAI have recently adopted the C2PA (Coalition for Content Provenance and Authenticity) standard in their products, which adds a watermark and metadata to generated images. However, the metadata and watermark can be easily removed or edited using online tools and services.
Currently, there is no foolproof method to identify the originator or user through embedded metadata. So, MeitY’s request to embed a permanent identifier in synthetic data is untenable at this point.
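To see why embedded metadata is such a weak anchor for traceability, consider this minimal sketch. It is a simplified illustration, not C2PA itself (which uses cryptographically signed manifests rather than plain text chunks): a hypothetical provenance label is stored in a standard PNG `tEXt` chunk, and then stripped by simply filtering chunks while copying the file. The image survives intact; the label does not.

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble a PNG chunk: length, type, data, CRC-32 of type + data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png_with_metadata() -> bytes:
    """Build a valid 1x1 red PNG carrying a hypothetical provenance label."""
    sig = b"\x89PNG\r\n\x1a\n"
    # IHDR: 1x1 pixels, 8-bit depth, color type 2 (truecolor RGB)
    ihdr = png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0))
    # Hypothetical provenance note in a standard tEXt chunk (keyword + NUL + text)
    text = png_chunk(b"tEXt", b"Provenance\x00generated-by-hypothetical-model")
    # One scanline: filter byte 0, then one RGB pixel (red)
    idat = png_chunk(b"IDAT", zlib.compress(b"\x00\xff\x00\x00"))
    iend = png_chunk(b"IEND", b"")
    return sig + ihdr + text + idat + iend

def strip_text_chunks(png: bytes) -> bytes:
    """Re-emit the PNG, dropping all textual metadata chunks."""
    out, pos = [png[:8]], 8  # keep the 8-byte signature
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        chunk = png[pos:pos + 12 + length]  # 4 len + 4 type + data + 4 CRC
        if ctype not in (b"tEXt", b"iTXt", b"zTXt"):
            out.append(chunk)  # keep everything except text metadata
        pos += 12 + length
    return b"".join(out)

tagged = make_png_with_metadata()
clean = strip_text_chunks(tagged)
print(b"Provenance" in tagged, b"Provenance" in clean)  # True False
```

The stripped file is still a well-formed PNG that any viewer renders identically, which is precisely the problem: a “permanent” identifier that rides along as ancillary metadata is permanent only until the first re-save.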
So that is all about MeitY’s new advisory for tech companies offering AI models and services in India. What is your opinion on this subject? Let us know in the comments section below.
Arjun Sha
Passionate about Windows, ChromeOS, Android, security and privacy issues. Have a penchant to solve everyday computing problems.