(Image: David Jiang, CC BY-SA 4.0, via Wikimedia Commons)
California governor signs new laws regulating AI ‘deepfakes’ related to elections

Gavin Newsom, governor of the US state of California, signed three bills into law on Tuesday that collectively aim to regulate election-related content generated by artificial intelligence (AI).

AB 2655 requires a “large online platform” to “block the posting of materially deceptive content related to elections in California.” These platforms must also create procedures through which California residents can notify the platform of any noncompliant content that slips through. If a platform does not comply with the law, state officials can seek injunctive relief.

These three laws build on AB 730 and other existing laws. First, AB 2355 requires a “qualified political advertisement” (ad) to carry a disclosure that it was generated or edited using AI. The bill details the formatting of the disclosure and defines a “qualified political advertisement” as an ad using AI that a “reasonable person” could mistake for authentic and that would give that person a “fundamentally different understanding” of the ad than the non-AI version would.

Lastly, AB 2839 prohibits anyone, as defined in the act, from “knowingly distributing” an ad “with malice” that contains “materially deceptive content” within 120 days before a California election and, in certain cases, 60 days after, subject to exceptions. Anyone who receives such content could then file a civil action seeking damages against whoever distributed the ad.

In 2019, Newsom signed a similar measure, AB 730, to regulate “deepfakes,” a blanket term used colloquially to describe content containing false, misleading or otherwise inaccurate portrayals. AB 730 prohibited anyone from distributing deepfakes of a candidate within 60 days of an election in which that candidate appears on the ballot. The bill also enabled candidates targeted by deepfakes to take legal action.

Newsom believes the significance of these laws lies in their ability to defend democracy and shield voters from disinformation:

Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation – especially in today’s fraught political climate. These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI.

The three Assemblymembers who introduced the bills applauded Newsom for taking strides in AI regulation by signing them into law. Assemblymember Marc Berman, who introduced AB 2655, explained that “AB 2655 is a first-in-the-nation solution” to the issues posed by AI in elections.

Governments around the world have increasingly been regulating AI, and deepfakes in particular, because of the harm they can cause to society and to those they victimize. In June 2024, Australia introduced a bill criminalizing deepfakes containing non-consensual sexually explicit content. In 2023, the government of India issued an advisory to social media platforms regarding deepfakes. In 2022, the Cyberspace Administration of China began drafting regulations on deepfakes.

These new California laws come at a critical time, ahead of the 2024 US presidential election in November.