The company also introduced Llama 3, the latest version of its large language model. The move puts Meta’s AI tools in direct competition with leading AI chatbots such as OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot, and Anthropic’s Claude. Zuckerberg touted the revamped Meta AI product as the “most intelligent AI assistant” available for free.
But experts have warned that widespread use of AI chatbots could amplify problems that have long plagued Meta’s social networks, including harmful misinformation, hate speech and extremist content. The company’s image generator could also spark debate over how it depicts race and gender when creating fictional scenarios.
“There was a general fear about how LLMs would interact with society and how they would exacerbate misinformation, hate speech, etc.,” said Annika Collier-Navaroli, a senior fellow at Columbia University’s Tow Center for Digital Journalism and a former Twitter policy executive. “And I feel like they continue to make it easier for those bad predictions to come true.”
Meta spokesperson Kevin McAllister said in a statement that this is “a new technology and may not always return the intended response, which is common to all generative AI systems.”
“Since its launch, we have continuously released updates and improvements to the model and continue to work to make it better,” he added.
Meta AI is now available on a new standalone website as well as in the search boxes of WhatsApp, Instagram, Facebook, and Messenger. Meta has also experimented with adding the AI assistant to Facebook groups, where it will automatically chime in and answer a question if no one else responds within an hour.
Meta has long faced intense scrutiny from activists and regulators for how it handles dangerous content about politics, social issues and current events. AI-powered chatbots are known to “hallucinate,” giving responses that are false or nonsensical, which could add to those debates.
Chatbots are being set up to weigh in on “all the areas that AI technology developers need to approach with care, from education to health to housing to local politics,” said Miranda Bogen, head of the AI Governance Lab at the think tank Center for Democracy and Technology and a former AI policy manager at Meta. “If developers fail to fully consider the context in which their AI tools are deployed, these tools are not only unsuitable for their intended tasks, but also risk causing confusion, disruption, and harm,” she said.
On Wednesday, Alexandra Korolova, a professor of computer science and public affairs at Princeton University, posted a screenshot on X showing Meta AI chiming in on a Facebook group for thousands of New York City parents. Meta AI answered a question about gifted and talented programs, claimed to be a parent with experience in the city’s school system, and even recommended specific schools.
McAllister said the product is evolving and that some people may see some of Meta AI’s responses replaced with a new response that says, “This answer was not useful and has been removed. We will continue to improve Meta AI.”
Meta AI claims to have children attending New York City public schools and shares their experiences with teachers. The answer came in response to a question asking for personal feedback in a private Facebook group for parents. Meta’s algorithm also ranked it as the top comment. @AIatMeta pic.twitter.com/wdwqFObWxt
— Alexandra Korolova (@corolova) April 17, 2024
This week, an entrepreneur experimenting with Meta AI on WhatsApp discovered that it had fabricated blog posts accusing him of plagiarism, and had even provided citations to posts that did not exist.
Meta’s image generator has also had issues of its own. Earlier this month, a Verge reporter had trouble getting Meta AI to generate images of couples or friends pairing Asian and white people, despite repeatedly giving the service specific prompts. In February, Google paused its AI tool Gemini’s ability to generate images of people after some users accused it of anti-white bias.
Navaroli said she is now concerned that biases built into AI tools could “feed back into society’s timeline” and reinforce biases in a “feedback loop to hell.”
Korolova, the Princeton professor, said Meta AI’s potentially false claims in Facebook groups are likely “just the tip of the iceberg of damage that Meta was not expecting.”
“Should we accept a lower standard for potential harm just because the technology is new?” Korolova asked. “This also sounds like ‘move fast and break things.'”