What is YouTube’s new Generative AI policy? Google-owned platform asks creators to use labels for AI-related content 

YouTube to ask creators to label their content (Image via Budrul Chukrut/SOPA Images/LightRocket via Getty Images)

YouTube is set to implement a new policy change that affects users producing content involving AI (Artificial Intelligence)-generated material. The Google-owned platform took to its social media accounts to share a comprehensive policy update, addressing the changes it is introducing regarding AI content.

As per the update, creators will now need to explicitly label their content when it includes realistic altered or synthetic material, including content made with AI tools. Here's what the company's blog post read:

"We’ll introduce updates that inform viewers when the content they’re seeing is synthetic. Specifically, we’ll require creators to disclose when they've created altered or synthetic content that is realistic, including using AI tools."
Users need to label their content (Image via YouTube)

"Potential to mislead viewers" - YouTube explains the reason behind the policy update

AI has rapidly emerged as a tool for numerous content creators to express their creativity to the fullest. However, a small faction has also utilized it to mislead audiences.

For instance, popular content creator Jimmy "MrBeast" Donaldson recently expressed concerns about AI-generated deepfakes circulating on the internet. In these videos, scammers typically depict MrBeast advertising something that is entirely fake. He said:

"Lots of people are getting this deepfake scam ad of me… are social media platforms ready to handle the rise of AI deepfakes? This is a serious problem."
MrBeast raises voice against misuse of AI (Image via X/@MrBeast)

YouTube also emphasized this issue, stating that while AI opens up new creative possibilities, it can also be used to mislead viewers. Here's what the platform wrote:

"AI’s powerful new forms of storytelling can also be used to generate content that has the potential to mislead viewers—particularly if they’re unaware that the video has been altered or is synthetically created."
Platform updates its policy requirements (Image via YouTube)

The platform also noted that a label alone may not always be enough, and that it will remove synthetic videos that violate its Community Guidelines:

"There are also some areas where a label alone may not be enough to mitigate the risk of harm, and some synthetic media, regardless of whether it’s labeled, will be removed from our platform if it violates our Community Guidelines."
The platform could also take further steps itself (Image via YouTube)

Another streamer and creator who has addressed this update is Jordi "Kwebbelkop" van den Bussche. He recently introduced an AI version of himself on his channel, which primarily provides commentary over content like videos and games. Here's what he wrote:

"Having to disclose on YouTube that you used ai to create content is great. It helps the space become more mature and in the future it will be a label of safety and quality."
Kwebbelkop gives his take on the new update (Image via X/@Kwebbelkop)

As of this writing, YouTube has not specified when these policies will take effect. That said, the changes are expected to roll out in the coming months.
