Government to serve notice to Google for 'problematic, illegal' AI responses on PM Modi

Minister of State for Electronics and IT, Rajeev Chandrasekhar, condemned the biased responses generated by Google's AI platform and emphasized the need for accountability.

By Prateek Gautam

The Indian government's IT Ministry is gearing up to issue a notice to Google over what it deems "problematic and illegal" responses generated by its AI platform Gemini concerning Prime Minister Narendra Modi. The move follows recent controversial outputs from Google's generative AI technology, which have raised concerns about biased and objectionable content.

Troubling Responses Prompt Action

A senior government official revealed that Google's AI platform, previously known as Bard and now rebranded as Gemini, had earlier produced objectionable responses to user queries, including one seeking a summary of an article from a conservative outlet. However, it is the recent responses related to Prime Minister Modi that have prompted the authorities to act.

Symbolic of Larger Debate

This development highlights the ongoing debate between lawmakers and technology companies over the regulation of generative AI platforms like Gemini. Google's recent apology for inaccurate depictions produced by its historical image generation feature further underscores the complexities surrounding the use of AI and its potential impact on societal norms and perceptions.

Violation of Regulations

The responses generated by Gemini have been deemed direct violations of Rule 3(1)(b) of the Intermediary Rules under the IT Act, as well as several provisions of the criminal code. These rules mandate basic due diligence by intermediaries like Google to ensure the accuracy and legality of content generated on their platforms.

Government's Response

Rajeev Chandrasekhar, Minister of State for Electronics and IT, condemned the biased responses generated by Google's AI platform and stressed the need for accountability. The IT Ministry is preparing to issue a show cause notice to Google, seeking clarification on the problematic views presented by Gemini. Failure to provide satisfactory answers may result in legal action against the tech giant.

Past Controversies

This is not the first time Google's AI system has come under scrutiny for biased responses. Last year, concerns were raised when Gemini, then known as Bard, reportedly refused to summarize articles from conservative outlets, citing the spread of false information. Google's response at the time emphasized that Bard was an experimental tool trained on publicly available data and did not reflect Google's own perspective.

As the government takes steps to address the issue of biased AI responses, the case serves as a reminder of the challenges associated with regulating AI technologies and ensuring accountability in the digital age. It also underscores the importance of transparency and ethical considerations in the development and deployment of AI systems.