With the rise of artificial intelligence, verifying the content we access and consume has become increasingly important. As AI evolves, it is getting harder to distinguish original, reliable content from material generated by this technology. YouTube has had enough and will implement measures to prevent fake AI-generated videos from being uploaded.
A few months ago, a fake Bad Bunny song went viral on TikTok, with his voice generated by AI. The track genuinely sounded like the Puerto Rican artist, and the vocal tone was identical.
Since then, many users have seized the opportunity to generate videos with AI and upload them to YouTube. This is a serious problem, since the information in them can be false or misleading, and the content could even be used to scam viewers.
YouTube will take on AI-generated videos
The platform wants to preserve the quality of its content and put an end to these "fakes." To that end, it is already working on measures to help users identify AI-generated content.
According to the official YouTube blog, the company is working on technology to identify authentic songs. This new solution is part of Content ID, the system used to identify copyrighted material. The idea is that users can know whether they are listening to a real song or one generated by AI.
The problem is that AI-generated songs have begun to proliferate on the platform. They can imitate voices so well that it is very difficult to tell a real song from a fake one, and this is exactly what YouTube's solution aims to address.
The platform is currently working with several partners to refine the tool, with a pilot test planned for next year. If the pilot works as expected, a general, definitive rollout should follow shortly after.
YouTube also highlights that it is working with creators, actors, musicians, and athletes, among others, to detect and manage AI-generated content that uses their likeness. This will allow the platform to detect deepfakes and ensure that a well-known person's identity is not exploited for scams.
As YouTube indicates on its blog:
These two new capabilities build on our track record of developing technology-driven approaches to addressing rights issues at scale. Since 2007, Content ID has provided granular control to rights holders across their entire catalogs on YouTube—billions of claims processed each year—while also generating billions of dollars in new revenue for artists and creators through the reuse of their work. We are committed to bringing this same level of protection and empowerment to the AI era.
What is clear is that YouTube is tired of finding fake videos on its platform. It wants to clean them up, or at least warn viewers when they are dealing with fake content. The company also intends to stop users who generate such content with AI from profiting from it.