The AI boom has been anything but uncontroversial, with copyright lawsuits popping up everywhere. It has also elicited concerns from experts over the technology's rapid growth and its use to spread disinformation. Congress has finally stepped in with a bill that would create a legal definition and framework to address these concerns.
The US Senate just introduced new legislation called the “Content Origin Protection and Integrity from Edited and Deepfaked Media Act” (COPIED Act). The cringeworthy mouthful of a bill looks to outlaw the unethical use of AI-generated content and deepfake technology. It also aims to regulate the use of copyrighted material in training machine learning models.
Proposed and sponsored by Democrats Maria Cantwell of Washington and Martin Heinrich of New Mexico, along with Republican Marsha Blackburn of Tennessee, the legislation aims to establish enforceable transparency standards in AI development. The senators intend to task the National Institute of Standards and Technology (NIST) with developing sensible transparency guidelines should the bill pass.
“The bipartisan COPIED Act, I introduced with Senator Blackburn and Senator Heinrich, will provide much-needed transparency around AI-generated content,” said Senator Cantwell.
The legislation also seeks to curb unauthorized data use in training models. Training data acquisition currently lies in a legal gray area. Over the last several years, we have seen arguments and lawsuits over the legality of scraping public-facing online information. Clearview AI is likely the biggest abuser of this training method, but it is not alone. Others, including OpenAI, Microsoft, and Google, have walked a fine line between the legal and the ethical when training their systems on copyrighted material.
“Artificial intelligence has given bad actors the ability to create deepfakes of every individual, including those in the creative community, to imitate their likeness without their consent and profit off of counterfeit content,” said Senator Blackburn. “The COPIED Act takes an important step to better defend common targets like artists and performers against deepfakes and other inauthentic content.”
The senators argue that clearly defining what is and is not acceptable in AI development is vital to protecting citizens, artists, and public figures from the harm that misuse of the technology could cause, particularly deepfakes. Deepfake porn has been an issue for years now, and the tech has only gotten better. It is telling that Congress has ignored the issue until now. The proposal comes only months after someone used voice-cloning tech to impersonate President Biden in robocalls telling people not to vote.
“Deepfakes are a real threat to our democracy and to Americans’ safety and well-being,” said Senator Heinrich. “I’m proud to support Senator Cantwell’s COPIED Act that will provide the technical tools needed to help crack down on harmful and deceptive AI-generated content and better protect professional journalists and artists from having their content used by AI systems without their consent.”
The usual suspects have spoken out in favor of the bill. The Nashville Songwriters Association International, SAG-AFTRA, the National Music Publishers’ Association, RIAA, and several broadcast and newspaper organizations have issued statements applauding the effort.
Meanwhile, Google, Microsoft, OpenAI, and other AI purveyors have remained oddly silent, suggesting that they do not really want regulation even though they have been calling for it.
The Washington Post reported that a group of whistleblowers at OpenAI has filed a complaint with the Securities and Exchange Commission, arguing that the company blocked its staff from warning regulators about the risks its artificial intelligence technology could pose to humanity.
In a seven-page letter obtained by The Washington Post, the whistleblowers said the company gave its employees restrictive employment, severance, and nondisclosure agreements. The development signals that AI companies are not ready to let the technology's risks be scrutinized by regulators.
As AI technology companies emerge and evolve, there have been concerns that they are doing so for the wrong reasons. Just days ago, Microsoft relinquished its observer seat on OpenAI's board, saying its participation was no longer needed, a move that came amid concerns about antitrust violations.
AI regulation is a subject the US has been reluctant to tackle. While Europe has moved to address concerns about the burgeoning technology with the EU AI Act, the US has been dragging its feet. The COPIED Act may be the comprehensive answer to the calls for AI regulation in the US.