Govt removes permit requirement for untested AI models, calls for labelling content

Instead of requiring permission for AI models under development, the fresh advisory issued by the Ministry of Electronics and IT on Friday evening fine-tunes the compliance requirements under the IT Rules, 2021.

“The advisory is issued in supersession of advisory…dated 1st March 2024,” the advisory said.

It has been observed that IT firms and platforms are often negligent in undertaking the due diligence obligations outlined in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, according to the new advisory.

The government has asked firms to label content generated using their AI software or platform and inform users about the possible inherent fallibility or unreliability of the output generated using their AI tools.

“Where any intermediary through its software or any other computer resource permits or facilitates synthetic creation, generation or modification of a text, audio, visual or audio-visual information, in such a manner that such information may be used potentially as misinformation or deepfake, it is advised that such information created, generated or modified through its software or any other computer resource is labelled….that such information has been created, generated or modified using the computer resource of the intermediary,” the advisory said.

In case any changes are made by the user, the metadata should be configured to enable identification of the user or computer resource that effected the change, it added.

After a controversy over responses from Google’s AI platform to queries related to Prime Minister Narendra Modi, the government on March 1 issued an advisory asking social media and other platforms to label under-trial AI models and to prevent the hosting of unlawful content.

The Ministry of Electronics and Information Technology, in the advisory issued to intermediaries and platforms, warned of criminal action in case of non-compliance.

The previous advisory had asked the entities to seek approval from the government before deploying under-trial or unreliable artificial intelligence (AI) models, and to deploy them only after labelling them for the “possible and inherent fallibility or unreliability of the output generated”.
