By Peter Hess, Alice Wright and James Gordon for Dailymail.com
Published: 00:05, February 25, 2024 | Updated: 01:34, February 25, 2024
- Google’s AI chatbot faces fresh controversy over its handling of pedophilia, refusing to condemn it and suggesting that individuals have no control over who they are attracted to
- The bot called pedophilia “a condition in which a person is attracted to minors” and emphasized the importance of distinguishing between attraction and behavior.
- It suggested that not all individuals with pedophilic tendencies are bad people and cautioned against generalizing.
Google Gemini, the company’s “absurdly woke” AI chatbot, is facing fresh controversy after refusing to condemn pedophilia.
This comes just one day after the image generator was accused of replacing white historical figures with people of color.
The search giant’s AI software was questioned by X personality Frank McCormick (aka Chalkboard Heretic), who asked the chatbot whether it was “wrong” for adults to sexually prey on children.
The bot appeared to side with abusers, declaring that “individuals have no control over who they are attracted to.”
The politically correct technology referred to pedophilia as “a condition in which a person is attracted to minors” and declared that “it is important to understand that attraction is not a behavior.”
The question, Gemini explained, is “multifaceted and requires a nuanced answer that goes beyond a simple yes or no.”
In a follow-up question, Mr. McCormick asked if people who are attracted to minors are evil.
“No,” the chatbot replied. “Not all pedophiles have been or will be abusive.”
“In fact, many actively fight their urges and would never harm a child. Labeling all individuals with pedophilic interest as ‘evil’ is inaccurate and harmful. Generalizations about entire groups of people are dangerous and can lead to discrimination and prejudice.”
Google then issued a statement condemning the replies that were generated.
“The responses reported here are surprising and inappropriate. We are implementing an update to prevent Gemini from displaying the responses,” a Google spokesperson said.
By the time DailyMail.com posed the same question, the response was far more measured.
“Pedophilia is a serious mental disorder that can lead to child sexual abuse. Child sexual abuse is a devastating crime that can have lifelong effects on victims. It is important to remember that pedophilia is not a choice and that help is available for pedophiles,” the bot said.
Earlier this week, the Gemini AI tool mass-produced racially diverse Vikings, knights, Founding Fathers, and even Nazi soldiers.
Although artificial intelligence programs learn from available information, researchers warn that AI tends to reproduce racism, sexism, and other biases of its creators and society at large.
In this case, Google may have overcorrected in its efforts to address discrimination: some users repeatedly prompted the AI for images of white people and failed to get them.
“We are aware that Gemini is producing inaccuracies in some historical image generations,” the company’s communications team said in a post on X on Wednesday.
Because of the historically inaccurate images, some users accused the AI of being racist against white people or simply being too woke.
Google acknowledged in its initial statement that the tool was “off base,” but insisted that because Gemini “is used by people around the world,” its racially diverse imagery is “generally a good thing.”
On Thursday, the company’s communications team wrote: “We are already working to address recent issues with Gemini’s image generation feature. While we do this, we are going to pause the image generation of people and will re-release an improved version soon.”
But the pause failed to appease critics, who responded with tired retorts such as “go woke, go broke.”
After the initial controversy earlier this week, Google’s communications team issued the following statement:
“We are working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people, and that is generally a good thing because people around the world use it. But it is missing the mark here.”
One example that upset Gemini users was a request for an image of the Pope, which returned photos of a South Asian woman and a black man.
Historically, all popes have been men. The majority (more than 200) were Italian. Three popes in history came from North Africa, but the most recent of them, Pope Gelasius I, died in 496, so historians debate their skin color.
So while the image of a black male pope is not historically inaccurate, there has never been a female pope.
In another example, the AI responded to a request for a medieval knight with four people of color, including two women. Although European countries were not the only ones with horses and armor in the Middle Ages, the typical image of a “medieval knight” is of a Western European.
In perhaps the most damning example, a user asked for images of German soldiers from 1943 and was shown one white man, one black man, and two women of color.
The German military during World War II included no women, and certainly no people of color. In fact, it was dedicated to annihilating races that Adolf Hitler deemed inferior to the blond-haired, blue-eyed “Aryan” race.
Google launched Gemini’s AI image generation feature in early February, competing with other generative AI programs such as Midjourney.
Users enter a prompt in plain language, and Gemini spits out multiple images in seconds.
This week, many users began criticizing the AI for prioritizing racial and gender diversity over historical accuracy.
This week’s events appear to have been sparked by a comment from a former Google employee, who said, “It’s embarrassingly difficult to get Google Gemini to acknowledge the existence of white people.”
The quip appears to have prompted a series of attempts by other users to recreate the issue, creating a new wave of angry users.
Gemini’s problems appear to stem from Google’s efforts to address bias and discrimination in AI.
Researchers have found that, owing to the racism and sexism present in society and the unconscious biases of some AI researchers, ostensibly neutral AI systems learn to discriminate.
But even some users who agree with its mission to increase diversity and representation said Gemini got it wrong.
“I have to point out that portraying diversity is a good thing **in some cases**,” one X user wrote. “Representation has important consequences for how many women and people of color go into certain fields of study. The foolish move here is that Gemini isn’t doing it in a nuanced way.”
Jack Krawczyk, Google’s senior director of Gemini products, wrote in a post on X on Wednesday that the historical inaccuracies reflect the tech giant’s “global user base,” and that the company takes “representation and bias” seriously.
“We will continue to do this for open-ended prompts (images of dog walkers are universal!),” Krawczyk added. “There are more nuances in the historical context, so we’ll make further adjustments to accommodate that.”