YouTube Creator Studio – creators must flag content that uses AI

YouTube has rolled out a feature in Creator Studio that requires creators to disclose when realistic-looking content has been made or altered with artificial intelligence (AI).

In its latest blog post, YouTube announced the rollout of AI disclosure requirements for creators, a move it first signaled in a November blog post. The disclosures will appear as labels, either in the video’s expanded description or prominently on the video itself before it plays.

The platform clarified that disclosure is not required for content that is clearly fictional or animated, that uses special effects, or that relies on generative AI only for behind-the-scenes production assistance. The label is intended to build transparency and trust between creators and their audiences.

YouTube highlighted the scenarios that do require disclosure, such as using AI to mimic a real person’s likeness, alter footage of actual events or places, or generate realistic-looking scenes.

Conversely, disclosure is not mandatory for content that is obviously synthetic, or for edits such as basic color and lighting adjustments, special effects, beauty filters, or other visual enhancements.

Using AI for script generation, brainstorming, or automatic captions also falls outside the disclosure requirement.

Disclosure labels will mainly appear in the video’s expanded description. For content touching on sensitive topics such as health, news, elections, or finance, however, a more prominent label will be shown on the video itself.

YouTube said the labels will begin rolling out on its mobile apps in the coming weeks, before expanding to desktop and other platforms.

The move comes amid growing concern from experts and regulators that AI could fuel misinformation, particularly ahead of the US presidential election. In one notable incident, a fabricated robocall imitating President Joe Biden misled voters in New Hampshire, underscoring the risks of AI-enabled electoral disinformation.

In response, more than 20 technology firms, including Google, Meta Platforms, Microsoft, and OpenAI, have signed a new ‘tech accord’ pledging to curb the spread of deceptive AI-generated content during the 2024 global election cycle.
