Graphics: StudioM1 / iStock
Six Princetonians named to Time magazine’s list of the 100 most influential people in AI
Ten years ago, Cynthia Rudin *04 worked with New York City power company Con Edison to predict events such as manhole fires and explosions based on more than a century of data collected by the company. Rudin used that historical dataset to train a machine learning model, but had trouble figuring out why the model made the predictions it did. So she switched to a simpler model and found that not only was it just as accurate as before, but she could also understand what it was predicting and why.
“At that point, I realized there was real value in interpretability,” Rudin, now a professor of computer science at Duke University, says of the characteristic of artificial intelligence (AI) systems that enables users to understand why a system makes the decisions it makes. “That’s why I started working on interpretable models. But when I started, they weren’t very popular.”
But as more and more policymakers focus on artificial intelligence and propose new laws and regulations governing when high-risk AI decision-making systems must be accountable and transparent, Rudin’s research has become highly relevant and in high demand.
Photo: Courtesy of Cynthia Rudin *04
“At that point, I realized there was real value in interpretability, and that’s why I started working on interpretable models, but when I started, they weren’t very popular.”
— Cynthia Rudin *04
Professor of computer science, Duke University
The release of ChatGPT in November 2022 spurred a wave of interest in generative AI, as many people who had never directly interacted with a natural language algorithm were impressed by how sophisticated ChatGPT’s text was. ChatGPT’s success has stimulated new policy debates about how best to govern these technologies, but it has also reignited fears that these tools will replace people’s jobs, make teaching and writing assignments obsolete, and make our most important decisions — from who should go to jail to who should receive a serious medical diagnosis — in ways that we could not even begin to understand or unravel.
Rudin is one of many Princetonians at the forefront of efforts to build AI systems and tools that are not only increasingly impressive and sophisticated but also understandable and ethical. Last year, Time magazine released its list of the 100 most influential people in AI, and the list included six Princetonians: Dario Amodei *11, Fei-Fei Li ’99, Eric Schmidt ’76, and Richard Socher *09, as well as Princeton professor Arvind Narayanan (see “On the Campus,” page 11) and graduate student Sayash Kapoor (see “Research” in the December issue).
For Schmidt, the former CEO and chairman of Google, AI has been central to his post-Google career. His current focus is primarily on philanthropy through his organization Schmidt Futures, as well as on initiatives he has chaired, including the Special Competitive Studies Project and the National Security Commission on Artificial Intelligence, which work with policymakers in Washington, D.C., and help educate them about both the potential and the dangers of AI.
Schmidt believes AI will change everything, mostly for the better. “Imagine if each of us were twice as productive. We would all be better teachers, doctors, philosophers, entertainers, inventors, and even CEOs,” he told PAW in an email. “The emergence of intelligence that can recognize patterns invisible to humans, analyze choices we can’t make in our lifetimes, and generate new content and systems is a major shift in human history. The ability to rapidly advance science, especially on climate change, will be a huge boon in the coming years.”
One Princetonian turning to AI for sustainability is Ha-Kyung Kwon ’13, a senior research scientist at Toyota Research Institute. Kwon is using AI to design new polymers to help make better batteries for powering green technology. “AI is particularly attractive in this field because there are so many things you can change when designing new polymers,” Kwon explains. “It’s a needle-in-a-haystack problem, and you’re usually trying things based on knowledge gained from what others have already tried. Sometimes that’s a good approach, but in many cases it doesn’t necessarily lead to breakthroughs, and breakthroughs are what we really need.”
Photo: Courtesy of Ha-Kyung Kwon ’13
“What makes AI particularly attractive in this field is the sheer number of things that can be changed when designing a new polymer.”
— Ha-Kyung Kwon ’13
Senior research scientist, Toyota Research Institute
Other alumni have chosen to apply AI to a wide range of problems, among them James Evans ’16, co-founder of CommandBar. Evans, along with co-founders Richard Freling ’16 and Vinay Ayyala ’16, raised $24 million in 2023 for CommandBar, a platform that uses AI to help people interact with apps and software more easily.
“Instead of forcing users to navigate a maze of menus, buttons, and toolbars when using new software, we wanted to let them verbally explain what they were trying to do,” Evans says. “We’re all good at using Google to find things. We thought, let’s create a tool that allows users to use the same words they’re familiar with when they first access a product. Like Clippy [Microsoft Word’s virtual assistant, an animated paperclip], but less annoying and more accurate.”
Some alumni are working directly on developing new AI systems. For example, Amodei, CEO and co-founder of Anthropic, is at the helm of one of the main competitors of OpenAI, where he previously worked. Anthropic seeks to differentiate itself within the AI ecosystem by touting a safer, more ethical, and more responsible approach to AI technology, including giving developers a way to specify values for their AI systems.
Socher founded and runs the AI company You.com, a chat-based search assistant that combines elements of a search engine with a personal assistant to help people find information and answer questions. Socher is committed to ensuring that You.com offers its users more privacy than other search engines, which is why the service does not display personalized ads. Instead of selling ads, Socher plans to eventually monetize the service through subscription fees. Like Amodei, his vision for AI goes beyond enabling cutting-edge technology and is tied to specific values such as privacy.
Li, a computer science professor at Stanford University, has taken a similar approach to AI research. In addition to her pioneering work in image recognition, she co-founded the nonprofit AI4ALL, which runs outreach programs for students interested in the field and strives to increase its diversity. And, like Schmidt, Li spends time in Washington, D.C., lobbying for policies that would provide more computing resources for AI research and increase support for work in areas such as AI safety.
In Schmidt’s estimation, the U.S. government has done “pretty well” so far in overseeing AI “without rushing to regulate this new and powerful technology prematurely.” U.S. policymakers are “working with the industry to understand the most important issues but trying not to slow things down,” Schmidt said. But he cautioned that “all of this is going to happen very quickly, relative to government, culture, and normal industry.”