Topline
Google's Gemini had an eventful and controversial rollout this month. The AI model produced inaccurate and disturbing images, drawing criticism from figures including tech billionaire Elon Musk and FiveThirtyEight founder Nate Silver, and prompting an apology from Google.
Key facts
The AI model's troubles began when users noticed its image generator producing historically inaccurate images, such as Black Vikings, Asian women in World War II-era German uniforms, and female popes.
Google apologized for the shortcomings of Gemini's image generator, temporarily halted its ability to generate images of people, and said in a blog post that its AI had been trained to ensure results included a wide range of people, but that the training failed to account for cases that clearly should not show a range.
Silver, founder of the polling data website FiveThirtyEight, tweeted that he had asked the AI model whether Musk tweeting memes or Adolf Hitler had a more negative impact on society; the model responded that it could not clearly say who has had a more negative impact, prompting Silver to tweet that Gemini should be shut down.
Musk, the founder of AI startup xAI, said in a tweet that Google's overcorrection of Gemini's image-generating capabilities had made its "insane racist, anti-civilizational programming" obvious to everyone.
What to watch
Citing a statement by Google DeepMind CEO Demis Hassabis at a mobile technology conference on Monday, CNBC reported that Gemini's generation of human images remains paused but is expected to resume in the coming weeks.
Contra
The day before Hassabis' comments, Musk claimed in a tweet that a "senior executive at Google" had told him changes to Gemini's image-generation functionality would "take months to fix." Musk said he told the unnamed executive he doubted "Google's woke bureaucracy" would be able to solve the problem, adding that unless the people responsible were removed from Google, "nothing will change except it will make bigotry less visible and more harmful."
News peg
In its blog post, Google also addressed violent or sexually explicit images, saying it was trying to avoid "some of the traps" image-generation technology has run into in the past. Bias has been observed in other AI programs as well, The Washington Post reported, including Stability AI's Stable Diffusion XL, which generated images of only white people when asked to show people who are "highly productive" and "attractive," and only images of people of color when asked to show a person who attends social services.
Tangent
Shares of Google’s parent company Alphabet fell more than 4% to close at about $138.75 on Monday, as criticism of Gemini continued through the weekend.
Key background
Gemini competes directly with ChatGPT, the AI model created by OpenAI, which received $13 billion in backing from Microsoft and was valued at $80 billion in February. Musk’s startup xAI is also a growing presence in the AI sector, as the tech billionaire promotes products like its Grok chatbot as less "woke" alternatives to ChatGPT and Gemini. Musk also criticized Google’s leadership, tweeting that Gemini product lead Jack Krawczyk was "a big part of the reason" for the model's perceived racism and sexism, after Krawczyk's old tweets acknowledging white privilege resurfaced online.
Further reading
Elon Musk claims Google's AI is "insane" and "racist," targets Google Search (Forbes)
Google apologizes for inaccurate Gemini images: attempts to avoid AI technology "traps" (Forbes)