The US federal agency responsible for overseeing communications has banned robocalls that use AI-generated voices. The Federal Communications Commission (FCC) announced the ruling on Thursday, and the regulation took effect immediately. The FCC emphasized that the decision empowers state authorities to pursue legal action against individuals or entities involved in such calls.
The regulatory move was prompted by a proliferation of robocalls imitating the voices of well-known personalities and political figures. FCC Chairwoman Jessica Rosenworcel stated, “Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters.” She underscored the agency’s determination to combat fraudulent activity associated with these calls.
The action follows an incident last month in which voters in New Hampshire received robocalls impersonating US President Joe Biden ahead of the state’s presidential primary. The calls, estimated at between 5,000 and 25,000, urged voters to abstain from participating in the primary. New Hampshire’s attorney general said investigations are ongoing and have traced the calls to two companies based in Texas.
The FCC highlighted the potential of such calls to mislead consumers by spreading misinformation while impersonating public figures or even family members. State attorneys general already had the authority to prosecute those behind such calls for offenses like scams or fraud, but the new measure specifically outlaws the use of AI-generated voices in robocalls, broadening the legal tools available to hold perpetrators accountable.
The move was spurred by a joint effort from 26 state attorneys general, who urged the FCC to curb the use of AI in marketing phone calls. Pennsylvania Attorney General Michelle Henry, who led the initiative, emphasized the importance of ensuring that technological advances are not exploited to prey upon or deceive consumers. The request followed a Notice of Inquiry issued by the FCC in November 2023 soliciting nationwide input on the use of AI technology in consumer communications.
Deepfakes, which use AI to create manipulated video or audio impersonating real individuals, have raised significant concerns globally, especially in the context of major elections. Audio deepfakes targeting senior British politicians, along with incidents in countries such as Slovakia and Argentina, have underscored the threat AI-generated fakes pose to the integrity of electoral processes.
In the United Kingdom, the National Cyber Security Centre has warned of the risks AI-generated fakes pose to upcoming elections, emphasizing the need for vigilance and regulatory measures to safeguard the democratic process.