Google is set to make a significant change to its political advertising policy, aiming to combat the rise of synthetic content. Starting in November, political ads containing AI-generated content, including synthetic images, audio, and video, must carry a clear and conspicuous disclosure for viewers. The move comes as concerns grow over the potential misuse of AI tools during election campaigns.
The timing of this policy update is significant, with the 2024 US presidential election approaching and major elections scheduled in other countries. Rapid advances in AI technology have made it easy to create convincing synthetic text, audio, and video, raising concerns about a flood of election misinformation that social media platforms and regulators may not be prepared to handle.
Instances of AI-generated images have already appeared in political advertisements. A notable example was a video posted by Florida Governor Ron DeSantis’ presidential campaign, featuring AI-generated images showing former President Donald Trump hugging Dr. Anthony Fauci. These images were difficult to distinguish from real images, potentially misleading viewers.
To address such issues, Google’s updated policy will require clear disclosures when synthetic content is used in a way that could mislead users. For instance, if an ad makes it appear as though a person is saying or doing something they didn’t say or do, a label must be added to indicate the use of synthetic content.
Notably, the policy will not restrict inconsequential alterations like image resizing, color corrections, or background edits that don’t create realistic depictions of actual events. The goal is to ensure transparency and prevent the spread of misinformation while allowing for legitimate creative adjustments.
In July, top artificial intelligence companies, including Google, committed to voluntary measures proposed by the Biden administration to enhance safety around AI technologies. As part of this agreement, the companies pledged to develop technical mechanisms, such as watermarks, to indicate when content is AI-generated.
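For illustration, the sketch below shows one crude way a provenance label could be attached to an image: writing a disclosure tag into PNG metadata using Python and the Pillow library. This is a hypothetical example, not Google's actual watermarking scheme, and the tag names used here are invented for the sketch. Production watermarks typically embed imperceptible signals in the pixel data itself, which, unlike metadata, can survive cropping, screenshots, and re-encoding.

```python
from PIL import Image, PngImagePlugin

# Create a placeholder image standing in for AI-generated output.
img = Image.new("RGB", (640, 360), color=(32, 32, 32))

# Attach hypothetical provenance tags as PNG text chunks.
# The key names ("ai_generated", "generator") are illustrative, not a standard.
meta = PngImagePlugin.PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")
img.save("labeled.png", pnginfo=meta)

# A viewer or ad platform could read the tag back before serving the image.
with Image.open("labeled.png") as im:
    print(im.text.get("ai_generated"))  # prints: true
```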
The Federal Election Commission is also exploring ways to regulate the use of AI in political ads. This growing regulatory attention underscores the need to address the risks of AI misuse and to maintain the integrity of political campaigns.
Frequently Asked Questions (FAQ)
1. What is synthetic content?
Synthetic content refers to media, such as images, video, or audio, that is created using artificial intelligence technology rather than being captured directly from real-life events or individuals.
2. Why is synthetic content a concern in political ads?
Synthetic content can be manipulated to deceive viewers by making it appear as though someone is saying or doing something they didn’t. This raises concerns about the spread of misinformation during election campaigns and the potential impact on public opinion.
3. What types of synthetic content are subject to disclosure requirements?
Google’s policy requires clear and conspicuous disclosures when synthetic content is used that inauthentically represents real or realistic-looking people or events in political advertisements.
4. What alterations are exempt from disclosure requirements?
Changes to images, such as resizing, color corrections, or background edits that don’t create realistic depictions of actual events, are considered inconsequential and not subject to disclosure requirements.
5. How are AI companies addressing the issue of synthetic content?
AI companies, including Google, have committed to developing technical mechanisms, such as watermarks, to indicate when content is generated by AI. This voluntary measure aims to improve transparency and user awareness of AI-generated content.