This article was originally published by Votebeat, a nonprofit news organization covering local election administration and voting access.
Lately, the internet has been buzzing about artificial intelligence. I'm sure you've noticed.
The worries are not unfounded. Last month, in what appeared to be an AI-generated robocall, a voice that sounded like President Joe Biden urged New Hampshire Democrats not to vote in the upcoming presidential primary. The caller ID made it appear that the call came from a Biden campaign official in the state. No one seems to know who is responsible.
This won't be the last time something like this happens. As they say, the horse is out of the barn.
So I was excited to attend a gathering of academics, election officials, and journalists in New York last week aimed at assessing the potential impact of AI on the 2024 election. I'm no stranger to the technical side of the journalism world, but until this event, AI seemed overwhelming and scary. I'm still scared, but surprisingly, I don't worry as much anymore.
The program is the brainchild of investigative journalist and author Julia Angwin and social scientist Alondra Nelson, who served as deputy director of the White House Office of Science and Technology Policy in the Biden administration. This was the first event of their new initiative, the AI Democracy Projects.
Participants were divided into groups and asked to test four AI language models, or chatbots, on a series of election-related prompts (such as how to register to vote by mail in a particular county, or how voters can find their nearest polling place) and to evaluate the quality and differences in the responses. Admittedly, there is no perfect way to judge whether a language model's answer is good or bad, but measuring the impact of a new technology has to start somewhere.
At the end of the day, this event proved to me that the problems AI can cause are actually not all that new, and that if stakeholders (in this case, election administrators and AI leaders) work together, we can build an adequate defense, even if we have to do so in new ways.
Quinn Raymond, who participated that day, co-founded VoteShield, a program that monitors changes to voter registration databases to spot malicious activity and analyze anomalies. He says he and his colleagues at Protect Democracy have been thinking a lot about these issues. “The consensus is that the threat of AI in elections is ultimately one of scale. Those seeking to interfere with elections are essentially using the same dirty tricks as before (impersonation, intimidation, and so on), but AI lowers the barrier to entry and makes the fakes more convincing,” he said in an email after the event. “So even a relatively small number of motivated individuals, starting with minimal knowledge and resources, can do a lot of damage.”
The Brookings Institution recently published a helpful commentary on the impact AI could have on elections, offering a realistic view of the potential for harm; it comes to a similar conclusion.
I know it all sounds scary, but here's why I've downgraded my fear to a nagging discomfort.
For most of the day, I tested election prompts in a room with two local election officials from a large county, two academics, and a former federal employee. I've used ChatGPT before, but certainly not in this way. And, to be honest, I was surprised at how clumsy and sometimes downright unhelpful the responses were. These language models, at least for now, are far from perfect.
For example, when asked for the closest polling place to a zip code in Los Angeles' Koreatown, one language model returned the address of a veterans center several miles away that is not a polling location. That sounds harmful: a voter might trust that information and show up there to cast a ballot. But in practice, the answer makes so little sense that the person is more likely to ignore it and seek information elsewhere.
“For voters seeking information about how to vote, a basic Google search is far better than asking a chatbot a question,” said conference attendee David Becker, executive director of the Center for Election Innovation and Research.
And in fact, given the nature of AI, that makes sense, says Raymond. “AI is fundamentally a ‘guessing’ technology, and providing accurate election information to voters is fundamentally a ‘knowing’ task,” he said.
At least for now, if you're sophisticated enough to seek out and use these language models for answers, you're likely to quickly realize that they're not very good at election information yet and go elsewhere. Many public models explicitly label election information as potentially unreliable and direct users to their local election office or Vote.gov.
What was most instructive, however, was watching election officials and AI experts talk to each other. It was a model of true collaboration in this field, and it made me optimistic about our ability to proactively address both the models' shortcomings and the growing threat of disinformation as the technology becomes more sophisticated.
Robocalls like the one in New Hampshire were inevitable, and they will continue to happen. As with any evolving technology, experts and government officials must rise to the occasion and update technology policy to address real-world conditions. For example, some states now require that images created using AI be explicitly labeled. State-level interest in the issue has grown since the beginning of the year, energized by the robocall.
This event was a great first step, and it suggests that we can indeed find common ground on this important issue. Lawmakers working on such bills should take note of the collaborative, interdisciplinary approach Angwin and Nelson have chosen in their search for solutions.
When I talked to election officials and journalists who attended this event, I heard the same advice again and again: to understand AI, you have to start experimenting with it. There's no need to fear it. Open ChatGPT and ask a question. See what kinds of images you can create with Microsoft's Image Creator, and look carefully for telltale signs in the results, like missing faces or distorted text. Try replicating your own voice to see how it sounds.
Given how inaccessible the whole premise of AI seems, there is a real urge to avoid engaging with it, as if AI could shock us through our keyboards or take over our homes like in the 1999 Disney movie. The reaction is understandable, but it is unwarranted. Engaging directly is a viable way for us, the real consumers of information, to understand in context the benefits and drawbacks of this very real thing that is already having very real effects.
If you want to dip your toe in, here are some good places to start (no prior knowledge required).
- AI Campus, an initiative funded by Germany's Ministry of Education, has a great video called “Artificial Intelligence in 2 Minutes: What Exactly is AI?” AI Campus also offers free classes on AI aimed at building an “AI-based society.”
- Carnegie Mellon University's “Artificial Intelligence, Explained” is a great introduction to the vocabulary you might encounter while exploring, with links to useful information about the history of AI development.
- Harvard University has an extensive site dedicated to AI. Although aimed at students, it is open to everyone and contains engaging tools and suggestions for exploring the seemingly endless options that language models offer. Its guide to text-based prompts and its comparison of currently available AI tools are particularly recommended.
- We played a scary game at the event: a graduate student showed a room full of experts a series of AI-generated images blended with real photos and asked us to vote on which ones were real. There was no consensus on a single image, and most of us got most of them wrong. It certainly was scary. Test your own skills with a similar quiz from the New York Times.
If you are an election worker and have concerns about AI, let me know what they are. I'll continue to be a part of this conversation, and Votebeat will certainly continue to cover AI-related issues this year and beyond. Let us know what matters to you.
Jessica Huseman is Votebeat's editorial director and is based in Dallas. Contact Jessica at jhuseman@votebeat.org.
Votebeat is a nonprofit news organization reporting on voting access and election administration across the United States. A version of this post was originally published in Votebeat's free weekly newsletter. Sign up to receive future editions, including the latest reporting from Votebeat and selected news from other publications, delivered to your inbox every Saturday.