Former Google CEO Eric Schmidt supports Synth Labs, recognizing the need to align AI behavior with human intent. Lucas Schulz/Sportsfile for Collision via Getty Images
Artificial intelligence software doesn’t always do what the people building it want. It’s a potentially dangerous issue that affects some of the major companies working on this technology.
Large companies like OpenAI and Alphabet’s Google are increasingly directing their employees, money and computing power to the problem. And OpenAI competitor Anthropic has put the issue at the center of its development of Claude, a product it touts as a safer AI chatbot.
A new company called Synth Labs is also taking aim at the problem. Founded by some of the biggest names in the AI industry, it is coming out of stealth this week with seed funding from Microsoft’s venture capital fund M12 and Eric Schmidt’s First Spark Ventures. Synth Labs focuses on building software, some of it open source, that various companies can use to ensure their AI systems work as intended, and it positions itself as transparent and collaborative.
The problem, referred to as “alignment,” is a technical challenge for AI applications such as chatbots, which are built on large language models typically trained on vast amounts of internet data. The effort is complicated by the fact that people hold different ethics, values and ideas about what AI should and should not be allowed to do. Synth Labs’ products are designed to help users steer and customize large language models, especially those that are themselves open source.
The company started as a project within the nonprofit AI research lab EleutherAI, where two of its three founders, Louis Castricato and Nathan Lile, worked alongside Stella Biderman, EleutherAI’s executive director and now an advisor at Synth Labs. Francis D’Souza, former CEO of biotechnology company Illumina, is the third founder. Synth Labs declined to say how much funding it has raised so far.
Over the past few months, the startup has built tools that make it easy to evaluate large language models on complex topics, Castricato said. The goal, he said, is to democratize access to easy-to-use tools that can automatically evaluate and align AI models.
A recent research paper co-authored by Castricato, Lile, and Biderman provides some insight into the company’s approach. The authors created a dataset of responses to prompts generated by OpenAI’s GPT-4 and Stability AI’s Stable Beluga 2 model. That dataset was then used as part of an automated process that steers a chatbot away from one topic and toward another.
“Our idea in designing these initial tools is to give you the opportunity to decide what alignment means for your business and your personal preferences,” Lile said.