Google requires AI-generated political ads to disclose their origin


Google Cracks Down on AI-Generated Political Ads

To address concerns about the potential harms of AI-generated content, Google has announced an update to its policy on political ads. The new policy requires advertisers to clearly disclose when their ads contain synthetic content created by AI, focusing on political ads that “inauthentically depict real or realistic-looking people or events.”

The policy, set to take effect in mid-November, requires the AI disclosure to be clear and conspicuous, placed where users are likely to notice it. It applies to political ads containing images, video, or audio that Google serves on its own platforms, such as YouTube, as well as on third-party websites within the Google Display Network.

The policy does not, however, apply to ads that have been edited in minor ways, such as being resized, cropped, or color-corrected, as long as those edits do not falsify realistic scenes involving real people or events.
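To make the rule concrete, the disclosure logic described above can be sketched as a simple decision function. This is a minimal illustration only: the PoliticalAd class, its field names, and requires_ai_disclosure are hypothetical and do not represent Google’s actual enforcement systems or any real API.

```python
from dataclasses import dataclass

# Hypothetical ad model for illustration; these fields are invented
# and do not reflect any real Google API or enforcement system.
@dataclass
class PoliticalAd:
    contains_synthetic_media: bool            # AI-generated image, video, or audio
    depicts_realistic_people_or_events: bool  # realistic depiction of people or events
    cosmetic_edits_only: bool                 # e.g., resizing, cropping, color correction

def requires_ai_disclosure(ad: PoliticalAd) -> bool:
    """Approximates the policy as described: synthetic content that
    inauthentically depicts real or realistic-looking people or events
    must carry a disclosure; purely cosmetic edits are exempt."""
    if not ad.contains_synthetic_media:
        return False
    if ad.cosmetic_edits_only:
        return False
    return ad.depicts_realistic_people_or_events

# A deepfaked speech clip would require disclosure; a merely
# color-corrected photo would not.
print(requires_ai_disclosure(PoliticalAd(
    contains_synthetic_media=True,
    depicts_realistic_people_or_events=True,
    cosmetic_edits_only=False)))  # True
print(requires_ai_disclosure(PoliticalAd(
    contains_synthetic_media=True,
    depicts_realistic_people_or_events=True,
    cosmetic_edits_only=True)))   # False
```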

Tackling AI-Generated Misinformation

Google provided examples of the types of AI-generated ads that would require disclosure. One is a political ad that uses AI-generated content to make it appear as though a person said or did something they never said or did. Another is an ad that manipulates footage of a real event, or that depicts realistic scenes of an event that never took place.

The problem of political ads bending or breaking the truth is not new, but advances in technology, particularly AI, have intensified it. With AI tools, political campaigns and other actors can easily create convincing yet fake images, videos, or audio clips that show a candidate saying or doing something they never actually said or did. Voters predisposed to believe such claims may accept these falsehoods without fact-checking them.

To address concerns surrounding political ads, Google has previously implemented measures to enhance transparency. In 2018, the company began requiring all political advertisers to verify their identity and to include in-ad disclosures revealing who paid for each ad. It also publishes transparency reports listing who purchased specific ads, how much was spent, and how many times each ad was viewed. In 2019, Google extended these requirements to ads for state-level candidates, political parties, and ballot initiatives.

Combating Phony Content and Deepfakes

Google’s policies not only target AI-generated content but also prohibit the use of deepfakes and other deceptive content related to politics and social issues. The company employs both automated systems and human reviewers to identify and remove ads that violate these policies. In 2022 alone, Google removed 5.2 billion ads that violated its guidelines and blocked 2.6 million election ads that failed to complete the verification process.

According to a Google spokesperson, the company has been committed to providing transparency for election ads, including disclosures about who paid for the ads and making the ads library publicly accessible. However, with the increasing prevalence of synthetic content creation tools, Google is expanding its policies to require advertisers to disclose when their election ads include digitally altered or generated material. This step is intended to promote responsible political advertising and enable voters to make informed decisions.

Google’s move to crack down on AI-generated political ads is a positive step toward addressing the challenges posed by synthetic content. By requiring clear disclosures, the company aims to mitigate the harm caused by misleading advertisements. As the technology continues to evolve, platforms and advertisers must stay vigilant and transparent to safeguard the integrity of political discourse and protect the interests of voters.