American multinational technology conglomerate Meta has announced plans to label AI-generated content across its platforms (Facebook, Instagram, and Threads) to help users detect artificially created photo, audio, and video material.
The move comes as AI image and video generation tools continue to grow in popularity, making it increasingly difficult to distinguish between human-made and AI-created content.
Speaking at the World Economic Forum in Switzerland last month, Meta's president of global affairs, Nick Clegg, said the company's effort to detect artificially generated content is the most urgent task facing the tech industry today.
He further disclosed that Meta would promote technological standards that companies across the industry could use to recognize markers in photo, video, and audio material signaling that the content was generated using AI.
“As Americans head to the polls in 2024, tech companies should take action to assure users that they will be able to identify whether or not online content is authentic. In an election year like this, it’s incumbent upon us as an industry to make sure we do as much as the technology allows to provide as much visibility to people so they can distinguish between what’s synthetic and what’s not synthetic,” Clegg said.
Meta will employ various techniques to differentiate AI-generated images from other images. These include visible markers, invisible watermarks, and metadata embedded in the image files. The labels, set to roll out in the coming months, will identify AI-generated images posted on Facebook, Instagram, and Threads.
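Meta has not published the exact metadata format it will read, though industry standards such as C2PA and IPTC photo metadata are the likely vehicles. As a rough, stdlib-only illustration of the general idea, the sketch below shows how a machine-readable provenance label can be embedded in and read back from an image file's metadata, here a PNG tEXt chunk. The `ai_provenance` key and the "Imagined by AI" value are hypothetical, used only to mirror the disclosure language in the article.

```python
import struct
import zlib

def png_text_chunks(data: bytes) -> dict:
    """Parse a PNG byte stream and return its tEXt chunks
    (keyword -> value), one place a provenance label can live."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    chunks = {}
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
        if ctype == b"IEND":
            break
    return chunks

def make_chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, data, CRC-32."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Build a minimal 1x1 grayscale PNG carrying a hypothetical provenance tag.
ihdr = make_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
text = make_chunk(b"tEXt", b"ai_provenance\x00Imagined by AI")
idat = make_chunk(b"IDAT", zlib.compress(b"\x00\x00"))
png = b"\x89PNG\r\n\x1a\n" + ihdr + text + idat + make_chunk(b"IEND", b"")

tags = png_text_chunks(png)
print(tags.get("ai_provenance"))  # prints: Imagined by AI
```

Note that such plain metadata is easy to strip, which is why the article also mentions invisible watermarks: a robust label must survive re-encoding and cropping, not just sit in a removable header field.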
Additionally, Meta is implementing new policies requiring users to disclose when a post is generated by artificial intelligence, with consequences for users who fail to comply.
Meta’s methods follow best practices recommended by the Partnership on AI (PAI), an industry group focused on responsible AI development. Over the next 12 months, Meta will closely monitor user engagement with labeled AI content. These insights will shape the platform’s long-term strategy.
Currently, Meta manually labels images created through its internal AI image generator with disclosures like “Imagined by AI.” Now, the company will leverage its detection tools to label AI content from other providers like Google, Microsoft, Adobe, and leading AI art platforms.
In the interim, Meta advises users to critically evaluate accounts sharing images and watch for visual inconsistencies that may reveal computer generation.
Meta’s plan to label AI-generated content across its platforms is a crucial step, as such content has stoked widespread concern in recent times. While it offers certain benefits, it has also been used to generate false information with significant implications.
With a number of important elections expected around the world this year, Meta is working to ensure its platforms are not used to peddle fake news, both to maintain its credibility and to avoid sanctions. The company has also stressed the importance of enabling users to distinguish what is synthetic from what is not.