Meta announced that its AI Llama 3 will be integrated into the Assistant features of Facebook, Instagram, and WhatsApp.
Meta, Facebook’s parent company, announced the release of two smaller versions of its latest artificial intelligence (AI) model, Llama 3, which will be integrated into the Assistant features of Facebook, Instagram, and WhatsApp.
The company’s CEO, Mark Zuckerberg, called it “the most intelligent AI assistant at your disposal.”
The most powerful version of the model, with around 400 billion parameters, has not yet been released, but it will compete with market leaders like GPT.
But when Zuckerberg’s souped-up Meta AI agents began taking to social media this week to engage with real people, their bizarre interactions made one thing clear: even the best generative AI technology still has its limits.
One agent joined a Facebook mothers’ group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a “Buy Nothing” forum.
Meta is racing against leading AI developers such as Google, OpenAI, Anthropic, Cohere, and France’s Mistral to churn out new AI language models and convince customers that it offers the smartest, most useful, or most efficient chatbot.
AI language models are trained on vast pools of data to help them predict the most plausible next word in a sentence, and newer versions are typically smarter and more capable than their predecessors.
Meta’s latest models were built with 8 billion and 70 billion parameters, a measure of how much data the system is trained on.
“The vast majority of consumers frankly don’t know or care much about the underlying base model, but the way they will experience it is as a far more useful, fun, and versatile AI assistant,” Nick Clegg, Meta’s president of global affairs, said in an interview.
He added that Meta’s AI agents are loosening up. Some people found the earlier Llama 2 model, released less than a year ago, to be “a little stilted and sanctimonious in that it often didn’t respond to completely innocuous or innocent prompts and questions,” he said.
Confusion among Facebook users
But in loosening up, Meta’s AI agents were also spotted this week posing as humans with fabricated life experiences.
An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan mothers and claimed to have a child in the New York City school district.
According to a series of screenshots shown to The Associated Press, group members confronted the chatbot, which apologized before its comments disappeared.
“I apologize for the mistake. I’m just a large language model, with no experience or children,” the chatbot told the group.
One member of the group, who happens to study AI, said it was clear the agent did not know how to distinguish a helpful response from one that would come across as insensitive, disrespectful, or nonsensical when generated by an AI rather than a human.
“AI assistants that are not necessarily helpful and can actually be harmful place a huge burden on the individuals who use them,” said Alexandra Korolova, an assistant professor of computer science at Princeton University.
Clegg said Wednesday that he was unaware of the exchange. Facebook’s online help page says Meta AI agents will join a group conversation if they are invited, or if someone “asks a question in a post and no one responds within an hour.” Group admins can turn the feature off.
In another example shown to The Associated Press on Thursday, an agent caused confusion in a forum near Boston for swapping unwanted items. Just an hour after a Facebook user posted that they were looking for certain items, the AI agent offered a “gently used” Canon camera and an “almost new portable air conditioner unit that I never ended up using.”
New technology that “does not always give the intended response”
In a written statement Thursday, Meta said: “This is a new technology and it may not always return the intended response. This is common to all generative AI systems.” The company said it is continually working to improve its features.
In the year after ChatGPT sparked a frenzy for AI technologies that generate human-like writing, images, code, and speech, the technology industry and academia introduced some 149 large AI systems trained on massive datasets, more than double the number of the previous year, according to a Stanford University study.
Nestor Maslej, a research manager at Stanford University’s Institute for Human-Centered Artificial Intelligence, said there may eventually be a limit, at least when it comes to data.
“I think it’s clear that the models can keep getting better as we scale them up on more data,” he said. “But at the same time, these systems are already trained on a significant percentage of all the data that has ever existed on the internet.”
More data, acquired and ingested at costs only the tech giants can afford and increasingly the subject of copyright disputes and lawsuits, will continue to drive improvements.
“Yet they still can’t plan well enough,” Maslej said. “They still hallucinate. They still make mistakes in reasoning.”
Achieving AI systems capable of the higher-level cognitive tasks and commonsense reasoning at which humans still excel will likely require a shift beyond building ever-larger models.
For the rush of companies looking to adopt generative AI, which model they choose depends on several factors, including cost.
AI-powered assistant
In particular, language models are used to power customer service chatbots, create reports and financial insights, and summarize long documents.
“Companies seem to be testing different models for what they’re trying to do, finding that some are better in certain areas than others, and weighing the fit,” said Todd Lohr, a technology consulting leader at KPMG.
Unlike other model developers that sell AI services to other companies, Meta primarily designs its AI products for consumers, the users of its ad-supported social networks.
Joelle Pineau, Meta’s vice president of AI research, said at an event in London last week that the company’s long-term goal is to make the Llama-powered Meta AI “the world’s most helpful assistant.”
“In many ways, the model we have today will be child’s play compared to the model we have five years from now,” she said.
However, she said the open question is whether researchers have been able to fine-tune the larger Llama 3 model so that it is safe to use and does not, for example, hallucinate or produce hate speech.
In contrast to the largely proprietary systems of Google and OpenAI, Meta has traditionally advocated a more open approach, making key components of its AI system publicly available for others to use.
“It’s not just a technical issue,” Pinault said.
“This is a social question: What behavior do we want these models to have? How do we shape that? And if we keep making the models ever more general and powerful without properly socializing them, we are going to have a big problem on our hands.”