As part of YouTube’s broader effort to promote transparency around potentially confusing or misleading content, creators who produce realistic-looking videos must now disclose their use of artificial intelligence, effective Monday.
Content Labeling Checklist
When uploading a video to the platform, creators will encounter a checklist asking whether their content does any of the following:
- Makes a real person say or do something they didn’t do
- Alters footage of a real place or event
- Depicts a realistic-looking scene that didn’t occur
Purpose of Disclosure
The aim of the disclosure is to help prevent users from being confused by synthetic content amidst the proliferation of new generative AI tools, which swiftly and easily create compelling text, images, videos, and audio that often closely resemble authentic material.
Concerns Raised by Experts
Experts in online safety have warned that the proliferation of AI-generated content poses a risk of confusion and deception for users across the internet, particularly in the lead-up to elections in the US and elsewhere in 2024.
Consequences for Non-Compliance
YouTube creators are required to indicate when their videos contain AI-generated or otherwise manipulated content that appears realistic. Creators who consistently fail to make that disclosure may face consequences.
Exceptions to Labeling
Creators will not need to disclose synthetic or AI-generated content that is unrealistic or “inconsequential,” such as AI-generated animations or adjustments to lighting or color. Nor will the platform mandate disclosure when generative AI is used for productivity purposes.