
YouTube’s New Policy Requires Creators to Disclose AI-Generated Content


To combat the rising issue of AI-generated content masquerading as reality, YouTube on Monday announced a new requirement for creators.

The announcement follows through on a commitment YouTube made in November as part of a broader set of AI policies.

The company has now introduced a tool in Creator Studio that requires creators to disclose when content that could be mistaken for authentic is made with altered or synthetic media, including generative AI.

The move aims to prevent viewers from being misled by videos that appear genuine but were created through artificial means. With advances in generative AI blurring the line between real and fake, the new policy is seen as crucial, particularly in light of experts' concerns about the risks AI and deepfakes could pose during events like the U.S. presidential election.

According to YouTube, the policy does not apply to obviously unrealistic or animated content, such as scenes featuring mythical creatures. Nor does it extend to content where generative AI is used for non-deceptive purposes, like script generation or automatic captioning.

Instead, the focus is on content utilizing the likeness of real individuals. Creators will be required to disclose any digital alterations that replace faces or generate voices, as well as modifications to footage depicting real events or places.

This includes scenarios like simulating fires in real buildings or creating realistic depictions of fictional major events, such as tornadoes approaching actual towns.

For most videos, disclosure labels will appear in the expanded description. However, for sensitive topics like health or news, more prominent labels will be displayed directly on the video itself.

These disclosure labels will be gradually rolled out across all YouTube formats in the coming weeks, starting with the mobile app and expanding to desktop and TV platforms.

YouTube also plans to enforce the policy against creators who consistently fail to apply the required labels. Where creators do not add a label themselves, the company says it may add one on their behalf, especially for content with the potential to mislead viewers.
