Besides that, MeitY has asked AI companies to “not permit any bias or discrimination or threaten the integrity of the electoral process including via the use of Artificial Intelligence model(s)/ LLM/ Generative AI, software(s) or algorithm(s).”
Google quickly addressed the issue and said, “Gemini is built as a creativity and productivity tool and may not always be reliable, especially when it comes to responding to some prompts about current events, political topics, or evolving news. This is something that we’re constantly working on improving.”
In the US, Google recently faced criticism after Gemini’s image generation model failed to produce images of white people, and users accused Google of anti-white bias. Following the incident, Google disabled image generation of people in Gemini and is working to improve the model.
Apart from that, the advisory says that if platforms or their users fail to comply with these rules, it may result in “potential penal consequences.”
The advisory reads, “It is reiterated that non-compliance to the provisions of the IT Act and/or IT Rules would result in potential penal consequences to the intermediaries or platforms or its users when identified, including but not limited to prosecution under IT Act and several other statutes of the criminal code.”
What Could Be the Implications?
Apart from that, experts say the advisory is “vague” and does not define what counts as “untested.” Companies like Google and OpenAI test their models extensively before release. However, because AI models are trained on large corpora of data scraped from the web, they can still hallucinate and produce incorrect responses.
Nearly all AI chatbots already disclose this limitation on their homepages. How is the government going to decide which models are untested, and under what framework?
Interestingly, the advisory asks tech firms to label or embed a “permanent unique metadata or identifier” in AI-generated data (text, audio, visual, or audio-visual) to identify the first originator, creator, user, or intermediary. This brings us to traceability in AI.
It is an evolving area of research in the AI field, and so far, we have not seen any credible way to detect AI-written text, let alone identify the originator through embedded metadata.
OpenAI shut down its AI Classifier tool last year, which was meant to distinguish human-written text from AI-written text, because it was producing false positives. To fight AI-generated misinformation, Adobe, Google, and OpenAI have recently adopted the C2PA (Coalition for Content Provenance and Authenticity) standard in their products, which adds a watermark and provenance metadata to generated images. However, the metadata and watermark can be easily removed or edited using online tools and services.
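To illustrate how fragile embedded provenance is, here is a minimal sketch, assuming a JPEG whose provenance travels in embedded metadata (EXIF/XMP-style fields or a C2PA-style manifest) rather than in the pixels themselves. The filenames are hypothetical, and the snippet uses the widely available Pillow library; it is not a reference to any specific stripping tool.

```python
from PIL import Image

# Open an AI-generated image whose provenance is stored as embedded
# metadata (filename is hypothetical, for illustration only).
original = Image.open("generated_image.jpg")

# Copy only the pixel data into a brand-new image object.
# None of the original file's metadata blocks are carried over.
stripped = Image.new(original.mode, original.size)
stripped.putdata(list(original.getdata()))

# Saving the fresh image writes a file with identical pixels
# but no embedded provenance information.
stripped.save("generated_image_no_metadata.jpg")
```

Running something like this produces a visually identical copy with the provenance record gone, which is why metadata-only identifiers cannot reliably tie synthetic content back to its originator. Pixel-level watermarks are harder to strip, but they are a separate mechanism from embedded metadata.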
Currently, there is no foolproof method to identify the originator or user through embedded metadata. So, MeitY’s request to embed a permanent identifier in synthetic data is untenable at this point.