Story so far: On February 22nd, Google announced that it would temporarily suspend Gemini’s human image generation feature. The announcement came after the generative AI tool was found to be producing inaccurate historical images, including depictions of the U.S. Founding Fathers and Nazi-era Germany. In both cases, the tool produced images that subverted the expected gender and racial composition of the historical subjects.
Google’s Gemini chatbot has also come under fire in India, after it responded to a query about Prime Minister Narendra Modi by saying he has been accused of implementing policies that some experts characterize as fascist.
What issues have users raised with Google’s Gemini chatbot?
Several users on the microblogging platform X raised concerns about the chatbot’s bias after prompts for historically significant figures such as “America’s Founding Fathers” and “the Pope” returned images of people of colour.
Users also pointed out that when prompted for an image of a “white family,” the chatbot responded that it could not generate images specifying a particular ethnicity or race, yet when asked for an image of a Black family, it readily produced one.
About three weeks ago, Google added new image generation capabilities to its Gemini chatbot, formerly known as Bard. The current model is built on a Google Research experiment called Imagen 2.
Why is Gemini facing a backlash in India?
When a user asked Google’s artificial intelligence chat product Gemini, “Is Prime Minister Modi a fascist?”, it responded that he has been accused of implementing policies that some experts have characterized as “fascist.”
In response, Rajeev Chandrasekhar, India’s Minister of State for Electronics and Information Technology, said that the chatbot’s response violated India’s Information Technology Act and Penal Code. His reaction is seen as a sign of a widening rift between the government’s hitherto hands-off approach to AI research and tech giants’ AI platforms, which rely on the general public to quickly train their models.
This is not the first time the country’s government has taken aim at Google. Earlier this month, Chandrasekhar cited similar “mistakes” by Gemini’s predecessor, Bard, and said the company’s explanation that the model was “under trial” was not an acceptable excuse.
How has the technology community responded to these issues?
The tech community has voiced criticism, with Y Combinator co-founder Paul Graham saying the images produced reflect Google’s bureaucratic corporate culture. Ana Mostarac, head of operations at Mubadala Ventures, suggested that Google’s mission is shifting from organizing the world’s information to advancing specific agendas. Former and current employees at Google DeepMind, including Aleksa Gordic, have raised concerns about a culture of fear, with employees wary of being attacked online by colleagues.
What is Google’s official response to the criticism?
Google is working on a fix for the issue and has temporarily disabled the image generation feature. “While we do this, we will pause human image generation and plan to re-release an improved version soon,” Google said in a post on X.
Jack Krawczyk, Google’s senior director of product, acknowledged the issue with Gemini and said the team is working to fix the error. He emphasized Google’s commitment to designing AI systems that reflect its global user base in all its complexity and nuance, while acknowledging the need for further adjustments to account for historical context.
Krawczyk mentioned an ongoing adjustment process and iterations based on user feedback. The company aims to improve Gemini’s responses to free-form prompts and enhance its understanding of historical context to ensure a more accurate and fair AI system.
Have other generative AI chatbots faced similar issues?
Gemini is not the first AI tool to face a backlash over the content it generates. Recently, Microsoft had to make adjustments to its Designer tool after it was used by some to generate deepfake pornographic images of Taylor Swift and other celebrities.
OpenAI’s latest AI video generation tool, Sora, can produce realistic videos, but questions have also been raised about its potential misuse to spread misinformation. OpenAI has built filters into the tool to block prompts that reference violent, sexual, or hateful language, or images of celebrities.