Our Mission
As artificial intelligence technology advances, we're seeing more media content that uses A.I.-generated video and voice. While this technology offers incredible possibilities, it also raises serious concerns about authenticity and trust in media.
We believe that all media that uses A.I.-generated video or voice should be required to display a clear disclaimer directly on the content itself, not just in fine print or in descriptions that can be easily missed. Simply placing a chip or pop-up in a sidebar is not enough; if anything, people have been trained to ignore these kinds of notices, because we are constantly bombarded with ads and pop-ups when browsing online. When people are consuming A.I.-generated content, that fact needs to be stated so clearly that it is almost impossible to miss.
Why This Matters
- Transparency: People have the right to know when they're viewing or listening to A.I.-generated content.
- Trust: Clear labeling helps maintain trust in media and prevents deception.
- Education: Proper disclaimers help the public understand the increasing presence of A.I. in everyday media.
- Protection: Vulnerable populations need safeguards against misleading content.
What We're Asking For
We're calling for legislation that requires:
- A clear visual disclaimer displayed directly on any video that contains A.I.-generated content
- An audible disclaimer at the beginning of any audio-only content, such as audio ads, that uses A.I.-generated voices
- These disclaimers to be present regardless of platform or medium
- Penalties for content creators and distributors who fail to properly label A.I. content, as well as a blacklist of those who have tried to pass off A.I.-generated video or audio as authentic
Fighting Misinformation and Deep Fakes
Implementing mandatory A.I. disclaimers creates a powerful tool to combat misleading content and harmful deep fakes.
- Legal Framework: With clear labeling requirements in place, content that deliberately misrepresents A.I.-generated media as authentic can be easily identified and banned.
- Enforcement Mechanism: Authorities can take swift action against unlabeled A.I. content designed to mislead the public.
- Deterrent Effect: Mandatory penalties will discourage bad actors from creating and distributing deceptive deep fakes.
- Public Awareness: As people become accustomed to seeing A.I. disclaimers, they'll be more likely to question content that lacks proper labeling.
This simple regulatory measure provides a practical solution to the growing threat of A.I.-generated misinformation without restricting legitimate creative and educational uses of the technology.
Even if you feel you can easily detect A.I.-generated media today, that won't always be the case. The A.I.-generated video and audio we see now is the worst it will ever be; this content is only going to get better with time. We also need to think about people who are less aware of A.I. and how easily they can fall into the trap of believing that everything they see is real. By signing this petition, you're supporting transparency and honesty in media during this critical time of technological change.
Sign the Petition
Please fill out the form below to add your name to our petition:
We will never sell your data! The purpose of this petition is to promote regulation.