Generative artificial intelligence, a technology capable of creating new content from patterns in existing data, poses a significant threat to the security of the United States’ electoral process. As the technology becomes more accessible and powerful, it opens the door for malicious actors, including states such as China, Iran, and Russia, to amplify their efforts to undermine American democracy. In particular, generative AI stands to intensify existing cybersecurity risks and enable the rapid dissemination of fake content, challenging every facet of the electoral process, from voter registration to the reporting of results.
The forthcoming 2024 election won’t introduce entirely new risks but will undoubtedly elevate existing ones. The responsibility for countering this threat largely falls on the shoulders of state and local election officials, who have historically safeguarded the electoral process against numerous challenges. In the wake of baseless allegations of voter fraud and increased pressure since the 2020 election, these officials require support from federal agencies, voting equipment manufacturers, generative AI companies, the media, and voters alike. It is imperative to provide them with the necessary resources, capabilities, information, and trust to fortify the security of election infrastructure. Generative AI companies, in particular, can contribute by developing tools to identify AI-generated content and ensuring their technologies prioritize security to prevent misuse.
FAKE IT TILL YOU BREAK IT
Generative AI software uses statistical models to generate original text, images, and other media based on patterns learned from existing data. The technology, exemplified by applications such as ChatGPT, can produce a wide range of content rapidly, from everyday emails to synthetic media such as deepfakes, and it is transforming the landscape of content creation. That same accessibility, however, raises concerns about the malicious use of generative AI in political contexts.
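To illustrate how accessible this kind of content creation has become, consider the following minimal sketch, written in Python against the OpenAI SDK (the model name is illustrative and the prompt is deliberately benign); the same few lines that draft a routine email could just as easily be pointed at a malicious prompt.

    # A minimal sketch of programmatic text generation with the OpenAI Python SDK.
    # The model name is illustrative; an API key must be available in OPENAI_API_KEY.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "user",
                "content": (
                    "Draft a short, friendly email reminding county residents "
                    "that the voter registration deadline is approaching."
                ),
            }
        ],
    )

    # A complete, fluent draft comes back in seconds.
    print(response.choices[0].message.content)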
Foreign adversaries have long attempted to undermine U.S. elections through cyberthreats and disinformation. The emergence of generative AI exacerbates these threats by making malicious activity cheaper and more effective. AI-enabled translation services, account creation tools, and data aggregation allow adversaries to automate their operations, targeting individuals and organizations with greater precision and at scale. As the 2024 U.S. presidential election approaches, the potential for generative AI to disrupt electoral processes is a growing concern, and the United Kingdom’s National Cyber Security Centre has acknowledged similar risks ahead of its own upcoming general election.
THREAT ASSESSMENT
With over two billion people expected to vote globally in 2024, concerns over the impact of generative AI on elections extend beyond the United States. Advances in AI, particularly large language models, enable the generation of fabricated content, hyper-realistic bots, and sophisticated deepfake campaigns. AI’s role in data aggregation empowers malicious actors to undertake tailored cyberattacks, such as spearphishing, targeting specific individuals or organizations. This, combined with high-quality AI-generated content, poses a significant risk to even vigilant internet users.
Generative AI could facilitate the creation of advanced malware, optimize the coordination of botnet attacks, and enhance distributed denial-of-service attacks. These attacks could disrupt election-related websites and communication channels, undermining voter confidence in the electoral process. Additionally, generative AI increases the risk of online harassment, exacerbating the unprecedented level of hostility faced by U.S. election officials.
HUMANS VS. MACHINES
Despite the escalating concerns, the United States possesses the capability to counter the malicious use of generative AI and safeguard its democracy. The resilience of the American electoral process is attributed to the dedication of state and local election officials who continually adapt to unforeseen challenges. The proactive measures taken over the past seven years, including the establishment of digital and physical controls on election systems, demonstrate the officials’ commitment to securing the electoral process.
Security best practices, such as multifactor authentication and endpoint detection and response software, are crucial in mitigating generative AI cyberthreats. Election officials must remain vigilant against phishing attempts, leveraging email authentication protocols and identity verification tools. Transparent and consistent communication with the public, coupled with partnerships with media, community leaders, and constituents, strengthens election officials’ role as authoritative voices and reinforces the democratic process.
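As a concrete, defensive illustration, the sketch below (a minimal Python example that assumes the third-party dnspython package and uses a hypothetical domain name) checks whether an election office's domain publishes the SPF and DMARC records on which email authentication depends.

    # A minimal sketch: verify that SPF and DMARC records are published for a domain.
    # Assumes the dnspython package; the domain name is hypothetical.
    import dns.resolver

    DOMAIN = "elections.example.gov"  # hypothetical election office domain

    def lookup_txt(name: str) -> list[str]:
        """Return the TXT records published at a DNS name, or an empty list."""
        try:
            answers = dns.resolver.resolve(name, "TXT")
            return [b"".join(rdata.strings).decode() for rdata in answers]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    # SPF lives in the domain's own TXT records; DMARC under the _dmarc subdomain.
    spf = [r for r in lookup_txt(DOMAIN) if r.startswith("v=spf1")]
    dmarc = [r for r in lookup_txt("_dmarc." + DOMAIN) if r.startswith("v=DMARC1")]

    print("SPF record found:" if spf else "No SPF record published.", *spf)
    print("DMARC record found:" if dmarc else "No DMARC record published.", *dmarc)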
OF THE PEOPLE, BY THE PEOPLE
The private sector, including internet service providers, cybersecurity firms, and generative AI companies, plays a vital role in enhancing election security. These companies should collaborate with state and local election offices to provide enhanced security measures and support services. Generative AI companies, in particular, should design their products securely and develop tools to identify AI-generated content, improving both quality and security over time.
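One way such a detection capability might be exposed to election offices and newsrooms is sketched below: a short Python example that scores a suspicious message with a machine-generated-text classifier via the Hugging Face transformers pipeline. The model identifier is a placeholder rather than a real product, and classifiers of this kind are known to be imperfect, so their output should inform, not replace, human judgment.

    # A minimal sketch of scoring text with a machine-generated-text classifier.
    # The model identifier is a placeholder; real detectors are imperfect.
    from transformers import pipeline

    detector = pipeline(
        "text-classification",
        model="example-org/ai-text-detector",  # hypothetical detector model
    )

    suspect_text = (
        "Polling locations in your county have changed. "
        "Do not go to your usual precinct on Election Day."
    )

    result = detector(suspect_text)[0]
    print(f"Label: {result['label']}  score: {result['score']:.2f}")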
Media outlets have a responsibility to understand the threat posed by generative AI and to relay information from trusted, official sources. Journalists should combat misinformation by disseminating accurate information and amplifying election officials as trusted voices. Voter participation is equally crucial: voters can serve as poll workers or election observers and can avoid amplifying nefarious actors seeking to undermine democracy.
The challenges posed by generative AI require a collective effort from government, private sector entities, media, and voters to fortify the democratic process against malicious use. By staying vigilant, adopting security best practices, and fostering collaboration, the United States can navigate the complex landscape of generative AI and preserve the integrity of its elections.