The recent deepfake robocall generated by artificial intelligence (AI) that impersonated President Biden and urged New Hampshire voters to abstain from the primary serves as a stark reminder that malicious actors increasingly view modern generative AI (GenAI) platforms as powerful weapons for targeting U.S. elections.
Platforms such as ChatGPT, Google's Gemini (formerly known as Bard), and any number of purpose-built dark web large language models (LLMs) could be pressed into service for mass influence campaigns, automated trolling, and the proliferation of deepfake content.
In fact, FBI Director Christopher Wray recently voiced concerns about an ongoing information war waged with deepfakes that could seed disinformation during the upcoming presidential campaign, as state-backed actors attempt to upset the geopolitical balance.
GenAI could also automate the rise of "coordinated inauthentic behavior" networks that seek to cultivate audiences for disinformation campaigns through fake news outlets, persuasive social media profiles, and other avenues, with the goal of sowing discord and undermining public trust in the electoral process.
Election Impact: Significant Risks and Nightmare Scenarios
From the perspective of Padraic O'Reilly, chief innovation officer at CyberSaint, the risks are "significant" because the technology is evolving so quickly.
"It's getting interesting, and maybe a bit alarming, as we see new variants of disinformation leveraging deepfake technology," he says.
Specifically, O'Reilly says, the "nightmare scenario" is that microtargeting with AI-generated content will proliferate across social media platforms. It's a familiar tactic from the Cambridge Analytica scandal, in which the company amassed psychological profile data on 230 million U.S. voters and served highly tailored messaging to individuals via Facebook in an attempt to influence their beliefs, and their votes. But GenAI could automate that process at scale, and create far more convincing content with few of the telltale "bot" characteristics that put people off.
"Stolen targeting data [personality snapshots of who a user is and their interests] combined with AI-generated content is a real risk," he explains. "The Russian disinformation campaigns of 2013 to 2017 are suggestive of what else could and will occur, and we know of deepfakes generated by U.S. citizens [like the one] featuring Biden and Elizabeth Warren."
He adds that the combination of social media and readily available deepfake technology could prove a doomsday weapon for polarizing an already deeply divided American public.
"Democracy presupposes certain shared traditions and information, but the danger here is growing balkanization of the population and what Stanford University researcher Renee DiResta calls 'bespoke realities,' which leads people to believe in 'alternative facts,'" O'Reilly says.
The platforms that threat actors use to sow division are unlikely to be of much help in policing the problem, either. He notes, for example, that the social media platform X, formerly known as Twitter, has gutted its content quality assurance (QA) operations.
"Other platforms have offered boilerplate assurances that they will combat disinformation, but free speech protections and a lack of regulation still leave the field wide open for bad actors," he cautions.
AI Amplifies Existing Phishing TTPs
GenAI is already being used to craft more believable, targeted phishing campaigns at scale, a phenomenon that is even more concerning from an election security perspective, says Scott Small, director of cyber threat intelligence at Tidal Cyber.
"Cyber adversaries are adopting generative AI to make phishing and social engineering attacks, which have been the leading forms of election-related attacks in consistent volume for years, more convincing, making it more likely that targets will interact with malicious content," he explains.
The introduction of AI also lowers the barrier to entry for launching such attacks, Small says, which is likely to increase the volume of operations this year that attempt to infiltrate campaigns or take over candidate accounts for impersonation, among other purposes.
"Criminal and nation-state adversaries regularly tailor phishing and social engineering lures to align with current events and popular themes, and these actors will almost certainly try to capitalize on the boom in election-related digital content being distributed this year to deliver malicious content to unsuspecting users," he says.
Defending Against AI Election Threats
To defend against these threats, election officials and campaigns must be aware of the risks GenAI poses and how to counter them.
"Election officials and candidates are constantly giving interviews and press conferences from which threat actors can pull soundbites for AI-based deepfakes," says James Turgal, vice president of cyber risk at Optiv. "Therefore, it is incumbent upon their organizations to make sure they have a person or team in place responsible for ensuring control over that content."
Organizations should also make sure volunteers and workers are trained on AI-powered threats such as enhanced social engineering, the threat actors behind them, and how to respond to suspicious activity.
To that end, staff should participate in social engineering and deepfake-video awareness training that covers all forms and attack vectors, including electronic (email, text, and social media platforms), in-person, and telephone-based attempts.
“This is very important, especially for volunteers, because not everyone has good cyber hygiene,” Turgal says.
In addition, campaign and election volunteers must be trained on how to safely share information online and with outside entities, including when posting on social media, and to exercise caution when doing so.
"Cyber threat actors can gather this information to tailor socially engineered lures to specific targets," he warns.
In the long term, O'Reilly says, regulation, including watermarking for audio and video deepfakes, will be instrumental, and he notes that the federal government is working with the owners of LLMs to put safeguards in place.
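The details of any mandated scheme remain open, but to illustrate the basic mechanics, below is a minimal sketch of one classic technique, spread-spectrum watermarking: a key-derived pseudorandom signature is mixed faintly into an audio signal at generation time, and anyone holding the key can later test for it by correlation. The function names, parameters, and thresholds here are illustrative assumptions, not part of any deployed standard.

```python
import numpy as np

# Illustrative parameter: watermark amplitude, kept small relative to the
# signal so the mark stays inaudible (assumed value, not from any standard).
STRENGTH = 0.005

def embed(audio: np.ndarray, key: int) -> np.ndarray:
    """Superimpose a key-derived pseudorandom +/-1 'chip' sequence on the signal."""
    chips = np.random.default_rng(key).choice([-1.0, 1.0], size=audio.shape)
    return audio + STRENGTH * chips

def detect(audio: np.ndarray, key: int) -> bool:
    """Correlate with the same chip sequence: marked audio scores near STRENGTH,
    unmarked audio scores near zero, so STRENGTH / 2 separates the two."""
    chips = np.random.default_rng(key).choice([-1.0, 1.0], size=audio.shape)
    score = float(np.dot(audio, chips)) / audio.size
    return score > STRENGTH / 2

# Demo on synthetic "speech" (random noise standing in for a real waveform).
if __name__ == "__main__":
    clean = np.random.default_rng(0).normal(0.0, 0.1, size=48_000)  # 1 s at 48 kHz
    marked = embed(clean, key=42)
    print(detect(marked, key=42))  # True:  watermark found with the right key
    print(detect(clean, key=42))   # False: clean audio shows no correlation
    print(detect(marked, key=7))   # False: wrong key yields the wrong chip sequence
```

Production watermarks, and provenance approaches such as C2PA content credentials, must also survive compression, re-recording, and deliberate removal attempts, which this toy example makes no effort to do.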
In fact, the Federal Communications Commission (FCC) just declared AI-generated voice calls to be "artificial" under the Telephone Consumer Protection Act (TCPA), making the use of voice-cloning technology in robocalls illegal and giving state attorneys general nationwide new tools to combat such fraudulent activities.
"AI is moving so quickly that there is an inherent risk that any proposed rules could become ineffective as the technology advances, missing the target," O'Reilly says. "In some ways, it's the Wild West, and AI is coming to market with very few safeguards."