In the aftermath of former U.S. President Donald Trump’s third indictment, which includes accusations of spreading “pervasive and destabilizing lies about election fraud,” a surge of disinformation looms large. Trump has been fervently fanning the flames as the election season approaches. In May, he disseminated a fabricated video depicting CNN host Anderson Cooper castigating President Joe Biden for ceaselessly perpetuating untruths.
Yet Trump is not alone in this imaginative storytelling. Florida Governor Ron DeSantis, contending with Trump for the 2024 Republican nomination, has joined the ranks of creative spinners. DeSantis’ presidential campaign took to Twitter with a video advertisement showcasing AI-generated visuals of Trump engaging in affectionate gestures with Anthony Fauci, the former chief medical advisor and a polarizing figure on the far right. A separate counterfeit video, now viral, features former Secretary of State Hillary Clinton expressing admiration for DeSantis: “He’s just the kind of guy this country needs, and I really mean that.”
The rise of disinformation has acquired fresh impetus from artificial intelligence (AI). Novel generative AI tools like DALL-E, Reface, and FaceMagic have effectively democratized the creation of deceptive political content. This phenomenon stands to be amplified by Meta’s recent revelation of forthcoming generative AI technology for public use, potentially fueling an exponential surge in such “creative” disinformation.
The democratization of the disinformation process poses a profound menace to the already vulnerable U.S. democracy, a concern shared even by AI industry luminaries. Former Google CEO Eric Schmidt cautioned against placing trust in visual or auditory information during elections due to AI manipulation. Sam Altman, CEO of OpenAI, expressed his disquiet about AI’s potential impact on the trajectory of democracy.
Reacting to these concerns, legislators are taking decisive steps. Senate Majority Leader Chuck Schumer proposed an innovative framework for AI regulation aimed at averting a potential democratic erosion. Representative Yvette Clarke introduced legislation mandating politicians to disclose their use of AI in campaign ads, a proposal paralleled by similar bills under consideration in the Senate. Several states, including Michigan and Minnesota, are contemplating legislation that would criminalize the deliberate dissemination of false election-related information, and some lawmakers are even receptive to the notion of establishing an entirely new federal agency tasked with overseeing AI regulation.
However, a conundrum remains: regulating AI to safeguard U.S. democracy could inadvertently imperil democracies on a global scale. The paradox becomes conspicuous when considering the repercussions of regulatory efforts emanating from influential markets such as the United States and the European Union: the more stringent the regulations on disinformation in these regions, the higher the likelihood of unbridled dissemination elsewhere.
Multiple factors contribute to this paradox. The major social media platforms, the chief conduits of disinformation, have been progressively downsizing their disinformation detection teams. The limited resources that remain are allocated primarily to concerns in the U.S. and EU, leaving little capacity for monitoring content in other regions as the platforms attend to other priorities. This shortfall coincides with the tumultuous year of 2024, marked by a plethora of elections far beyond the confines of the United States.
Contemplating the electoral landscape of 2024 underscores its pivotal role in testing democratic systems worldwide. Nations across Asia, including India, Indonesia, and South Korea, grapple with their own disinformation-driven political campaigns. In Africa, over a dozen countries brace for elections, where disinformation frequently exerts significant influence. Similarly, Latin American nations like Mexico and Peru confront rampant disinformation challenges in the run-up to their forthcoming elections.
Against this backdrop, one might naturally expect social media platforms to establish dedicated election war rooms and robust disinformation identification mechanisms. However, the reality paints a different picture. Companies within the tech sector are grappling with pressing profitability concerns, prompting workforce reductions and streamlining of non-revenue-generating divisions. The focus inevitably shifts towards user attraction and enhancing engagement, relegating disinformation monitoring to a secondary concern.
The ascendancy of AI-propelled disinformation presents a multifaceted dilemma. While the urgency of regulating AI to safeguard domestic democracy is apparent, the unintended consequence of facilitating disinformation propagation elsewhere demands equal consideration. The delicate equilibrium between domestic security and global ramifications underscores the intricate challenges confronting lawmakers and regulators in addressing this pressing issue. As the world navigates the turbulent electoral landscape of 2024, achieving this balance becomes an imperative of unprecedented magnitude.