Google is consolidating various teams working on generative AI under the DeepMind team to accelerate the development of more capable systems.
The decision, announced by CEO Sundar Pichai on Thursday, builds on the Chocolate Factory's previous consolidation efforts around AI development. Last year, the search giant merged its DeepMind team with its Brain division to form a group confusingly called Google DeepMind. That team was ultimately responsible for training the model behind Google’s Gemini chatbot.
Pichai now plans to move all the teams building AI models under one roof.
“All of this work will reside in Google DeepMind, expanding our ability to deliver powerful AI to our users, partners, and customers,” he said, adding that the consolidation will allow the company to focus more of its resources on research.
Google’s so-called Responsible AI team will also join the DeepMind team and is expected to play a larger role in the early development of new models. Google said it has already moved other responsibility groups under a central Trust and Safety team and plans to build broader AI testing and evaluation protocols.
The move was likely made to avoid the vexing problems that plagued many of Google's early generative AI products, such as its Bard and later Gemini chatbots.
In February, Google was forced to suspend Gemini's text-to-image generation feature after the model failed to accurately depict Europeans and Americans in certain historical contexts. Asked to generate images of German soldiers in 1943, for example, it would consistently produce images of non-white people.
To avoid a repeat, Google is “standardizing launch requirements for AI-powered features and increasing investment in extensive ‘red team’ testing for vulnerabilities to help ensure responses are accurate and responsive to users,” Pichai said.
In other words, now that Google has established itself as a generative AI powerhouse alongside the likes of Microsoft and OpenAI, it can afford to spend a little more time figuring out how to make its models misbehave before users do.

Circumventing AI guardrails to produce content or suggestions that are abusive, toxic, or dangerous, such as instructions for self-harm, has become a major problem for model builders.
The consolidation effort also extends to AI hardware development. The reorganization combines the Platforms and Ecosystems team with the Devices and Services team to form a new group called Platforms and Devices. As part of the move, employees working on computational photography and on-device intelligence will also join the combined hardware division.
Pichai concluded his memo by warning employees to leave their personal vendettas at home.

“This is a business, and not a place to act in a way that disrupts coworkers or makes them feel unsafe, to attempt to use the company as a personal platform, or to fight over disruptive issues or debate politics,” he wrote.
Pichai’s comments appear to refer to the 28 employees fired this week for protesting Google’s deal with the Israeli government, but they no doubt also apply to the ethical minefield into which the AI field is rapidly expanding. ®