As part of a larger tech industry effort to call out “people and organisations that actively want to deceive people,” Meta says Facebook, Threads, and Instagram users will begin to see labels on AI-generated images, and eventually video and audio, that pop up in their feeds.
In a blog post on Tuesday, the company’s President of Global Affairs, Nick Clegg, announced that the company would begin labelling AI-generated images developed on rival services. He noted that this was important because, “as the difference between human and synthetic content gets blurred, people want to know where the boundary lies.”
Meta generates content using its own AI tools, and Clegg explained that its AI images already contain metadata and invisible watermarks that can tell other organisations the image was developed by AI. The company is also building tools to identify these kinds of markers when they are applied by other companies, such as Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, in their own AI image generators.
Because generative AI tools can produce deceptively lifelike content from simple prompts, the announcement offers a preview of a broader set of rules that tech companies are developing to reduce the risks such content poses.