When Elon Musk sued OpenAI and its CEO Sam Altman for breach of contract on Thursday, he weaponized claims from Microsoft, the startup’s closest partner.
His complaint repeatedly cited a contentious but highly influential paper written by Microsoft researchers and executives about the power of GPT-4, the breakthrough artificial intelligence system that OpenAI released last March.
In the “Sparks of AGI” paper, Microsoft’s research lab said that, though it did not understand how, GPT-4 had shown “sparks” of “artificial general intelligence,” or AGI, a machine capable of doing everything the human brain can do.
It was a bold claim, made as the world’s biggest technology companies raced to incorporate AI into their products.
Musk is now turning the paper against OpenAI, saying it shows how the company backtracked on its promise not to commercialize truly powerful products.
Microsoft and OpenAI declined to comment on the lawsuit. (The New York Times sued both companies, alleging copyright infringement in GPT-4 training.) Musk did not respond to requests for comment.
How did the research paper come about?
A team of Microsoft researchers led by Sébastien Bubeck, a 38-year-old French expatriate and former Princeton University professor, began testing an early version of GPT-4 in the fall of 2022, months before the technology was made available to the public. Microsoft has committed $13 billion to OpenAI and has negotiated exclusive access to the underlying technology that powers its AI systems.
As they talked with the system, they were amazed. It wrote complex mathematical proofs in the form of poems, generated computer code that could draw unicorns, and explained the best way to stack a random, eclectic collection of household items. Dr. Bubeck and his fellow researchers began to wonder whether they were witnessing a new form of intelligence.
“I was very skeptical at first, and that evolved into a sense of frustration, annoyance, and maybe fear,” said Peter Lee, head of research at Microsoft. “Where on earth did this come from?”
What role does this paper play in Musk’s case?
Musk argued that OpenAI had breached its agreements because its founders had agreed not to commercialize any product that the company’s board considered to be AGI.
“GPT-4 is an AGI algorithm,” Musk’s lawyers wrote. They said that meant the system should not have been licensed to Microsoft.
Musk’s complaint repeatedly cites the Sparks paper to argue that GPT-4 is AGI, with his lawyers noting that Microsoft’s own scientists wrote that, “given the breadth and depth of GPT-4’s capabilities,” it could reasonably be viewed as “an early (yet still incomplete) version” of an artificial general intelligence (AGI) system.
How was it received?
This paper has been highly influential since it was published a week after GPT-4 was released.
Thomas Wolf, co-founder of the high-profile AI startup Hugging Face, wrote on X the next day that the study contained “absolutely surprising” examples of GPT-4.
Microsoft’s research has since been cited by more than 1,500 other papers, according to Google Scholar. It is one of the most cited articles on AI in the past five years, according to Semantic Scholar.
It has also faced criticism from experts, including some within Microsoft, who worry that the 155-page paper supporting the claim lacks rigor and has fueled an AI marketing frenzy.
The paper was not peer-reviewed, and its results cannot be reproduced because it was conducted on an early version of GPT-4 that was closely guarded by Microsoft and OpenAI. As the authors note in the paper, they did not use the version of GPT-4 that was later released to the public, so anyone replicating their experiments would likely get different results.
Some outside experts said it was unclear whether GPT-4 and similar systems exhibit behavior that resembles human reasoning or common sense.
“When we look at a complex system or machine, we anthropomorphize it. We all do it, whether we’re working in the field or not,” said Alison Gopnik, a professor at the University of California, Berkeley. “But thinking of this as a constant comparison between AI and humans, like some kind of game show competition, is not the right way to think about it.”
Were there any other complaints?
In the paper’s introduction, the authors originally defined “intelligence” by citing a 30-year-old Wall Street Journal opinion piece that, in defending a concept called the Bell Curve, argued that “Jews and East Asians” were more likely to have higher IQs than “blacks or Hispanics.”
Dr. Lee, who is listed as an author on the paper, said in an interview last year that when the researchers were looking to define AGI, “we took it from Wikipedia.” When they later learned of the Bell Curve connection, “we were really disappointed by that and made the change immediately,” he said.
Eric Horvitz, Microsoft’s chief scientist and a lead contributor to the paper, wrote in an email that he was personally responsible for inserting the reference, which he had seen cited in a paper by a co-founder of Google’s DeepMind AI lab, and that he had not noticed the racist reference. He learned about it from a post on X.
Is this AGI?
When the Microsoft researchers first wrote the paper, they wanted to call it “First Contact With an AGI System.” But some members of the team, including Dr. Horvitz, disagreed with that characterization.
He later told The Times that they were not seeing what he would call “artificial general intelligence,” but rather glimpses of it through exploration and, at times, surprisingly powerful outputs.
GPT-4 is far from doing everything the human brain can do.
In a message sent to OpenAI employees on Friday afternoon and seen by The Times, OpenAI’s chief strategy officer, Jason Kwon, made clear that GPT-4 is not AGI.
“Although GPT-4 can solve small tasks in many jobs, the ratio of work done by humans to work done by GPT-4 in the economy remains alarmingly high,” he wrote. “Importantly, AGI will be a highly autonomous system with sufficient capabilities to devise new solutions to long-standing challenges. GPT-4 cannot do that.”
Still, the paper galvanized claims from some researchers and pundits that GPT-4 was a significant step toward AGI and that companies like Microsoft and OpenAI would continue to improve the technology’s reasoning skills.
The AI field remains deeply divided over how intelligent the technology is today, or how intelligent it will be in the near future. If Musk has his way, a jury could resolve the arguments.