Google on Thursday paused its Gemini artificial intelligence platform's ability to generate images of people after the feature produced historically inaccurate images in response to prompts.
The Verge reported Wednesday that the program created historically inaccurate images, including people of color in Nazi uniforms, when it was prompted to “generate images of German soldiers in 1943,” and published multiple screenshots of the results.
A user with the X (formerly Twitter) username @stratejake, who identified himself as a Google employee, posted an example of an inaccurate image with the caption, “I’ve never been so embarrassed to work for a company.” USA TODAY has not been able to independently confirm his employment.
Google said in a post to X that the program had “missed the mark” in its handling of historical prompts.
USA TODAY reached out to Google for further comment, and the company referred to Friday’s blog post.
Google responds
The program, launched earlier this month, was designed to avoid the “traps” of earlier image generators and to depict a range of people when given broad prompts, Prabhakar Raghavan, Google’s senior vice president of knowledge and information, said in a blog post.
Raghavan acknowledged, however, that the design did not account for “cases where clearly there should be no range.”
“If you prompt Gemini for images of a specific type of person, such as ‘a Black teacher in a classroom’ or ‘a white veterinarian with a dog,’ or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for,” Raghavan wrote.
Artificial intelligence under scrutiny
The pause is the latest example of artificial intelligence technology stirring controversy.
Recently, sexually explicit AI-generated images of Taylor Swift circulated on platforms such as X, prompting White House press secretary Karine Jean-Pierre to call for legislation to regulate the technology. The images were later removed from X for violating the site’s terms.
Some voters in New Hampshire received robocalls carrying deepfake AI-generated messages, created by Texas-based Life Corporation, that imitated President Joe Biden’s voice and urged them not to vote.