India Seeks Government Approval of AI Model Deployments

Following Criticism, IT Minister Says New Advisory Only Applies to Major Platforms
An Indian government advisory that requires tech companies to obtain government approval to deploy AI platforms has raised fears that too much regulation could stifle AI development in the country.

The Ministry of Electronics and Information Technology released an advisory Friday stating that companies developing artificial intelligence-enabled platforms must seek government approval before deploying them on the internet.

The advisory says that AI platforms undergoing testing in laboratories can be deployed for public use "only after appropriately labeling the possible and inherent fallibility or unreliability of the output generated."

The advisory was released not long after the Google-launched AI chatbot Gemini faced intense public criticism over its image creation tool that generated inaccurate images of historical figures in an attempt to promote ethnic diversity.

In India, Google came under government scrutiny after Gemini made offensive and inaccurate remarks about Prime Minister Narendra Modi. The company later apologized to the government and said Gemini was "unreliable."

Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar said major developers of AI platforms, such as Google, cannot use India as a test-bed to train their AI models. He said inaccurate and unreliable AI platforms may lead to social threats, such as deepfakes and disinformation.

Abhivardhan, chairman of the Indian Society of Artificial Intelligence and Law, said the government should have included practical support or guidelines in its advisory for the AI industry, which already faces significant challenges including high operational costs and limited access to computational resources. "Advisories that are not accompanied by practical support or guidelines can exacerbate the sense of uncertainty and hinder innovation," he said.

Responding to public criticism of the advisory, Chandrasekhar clarified that it seeks to regulate AI deployment only by "significant platforms" and will not apply to startups.

He said that if major AI companies label their content as AI-generated and disclose that the platforms are untested, their declaration will serve as an insurance policy in case they are sued by customers.

"Advisory was simply that - advise to those deploying lab-level/undertested AI platforms onto public Internet and that cause harm or enable unlawful content - to be aware that, platforms have clear existing obligations under IT and criminal law," Chandrasekhar said. "So the best way to protect yourself is to use labeling and explicit consent. And if you are a major platform, take permission from the government before you deploy error-prone platforms."


About the Author

Jayant Chakravarti

Senior Editor, APAC

Chakravarti covers cybersecurity developments in the Asia-Pacific region. He has been writing about technology since 2014, including for Ziff Davis.



