
The artificial intelligence industry, already grappling with sky-high costs and a rocky path to profitability, is now embroiled in a high-stakes legal and policy battle over copyright. AI companies like OpenAI and Google find themselves at the center of lawsuits that could define the limits of AI training and reshape the industry’s future. At the heart of the conflict lies a fundamental question: should AI models have unrestricted access to copyrighted material for training purposes?
Google has thrown its weight behind OpenAI in the intensifying battle, reinforcing the argument that strict copyright enforcement threatens the future of artificial intelligence. Both companies, facing mounting legal challenges, have called on the U.S. government for regulatory changes that would allow AI firms to train on publicly available data, including copyrighted material, without facing legal uncertainty.
Their position has sparked fierce opposition from content creators and media organizations, most notably the New York Times, whose lawsuit against OpenAI could reshape the legal landscape for AI development.
The New York Times lawsuit argues that OpenAI improperly used its copyrighted content to train ChatGPT. Google, another major AI player, is fending off multiple lawsuits of its own accusing it of scraping copyrighted material without permission. These cases could set legal precedents that force AI companies to pay hefty licensing fees or severely limit the datasets available for training.
The Copyright Conundrum
For AI companies, training data is everything. The more data their models consume, the better they perform. But a growing number of content creators, news organizations, and artists argue that AI firms are profiting from their work without permission or compensation.
OpenAI, in its response to the Times lawsuit, has painted stringent copyright enforcement as an existential threat to AI innovation. Google has echoed this sentiment, calling for “balanced copyright rules” that would allow AI firms to use copyrighted data without being bogged down by complex negotiations.
Yet, many believe that Google’s definition of “balance” is heavily skewed in favor of tech companies. The search giant’s latest AI policy proposal suggests that publicly available data—whether copyrighted or not—should be fair game for training. Google insists that AI training does not significantly impact rightsholders, but content creators see it differently, pointing out that AI-generated content could ultimately replace human creators.
The Government’s Role in AI Development vs. Copyright Protection
Amid the ongoing legal battles, the U.S. government is stepping into the fray. The Trump administration has called for a National AI Action Plan to shape the future of the industry, a move that AI companies have seized upon to push for regulatory changes that favor their interests.
Google’s proposal calls for government backing in multiple ways:
- Funding AI Development: Google wants the federal government to subsidize AI research and provide financial incentives for startups.
- Infrastructure Overhaul: The company argues that AI progress requires a modernization of America’s energy grid, citing estimates that global data center power demand will surge by 40 gigawatts between 2024 and 2026.
- Federal AI Adoption: Google urges the government to set an example by integrating AI into public services, advocating for open datasets to be made available for AI training.
AI firms are also pushing for a unified national policy that would override restrictive state laws. California’s recent AI safety bill, SB-1047, which sought to impose stricter regulations on AI developers, was vetoed by Governor Gavin Newsom. But the fear of a fragmented regulatory landscape remains a major concern for companies like Google and OpenAI, which would prefer a more lenient federal framework.
Another contentious issue in the debate is liability. AI companies do not want to be held responsible for the actions of their users. Google, in particular, has strongly opposed any attempt to impose liability on AI developers, arguing that their products are inherently unpredictable.
“In many instances, the original developer of an AI model has little to no visibility or control over how it is being used by a deployer and may not interact with end users,” Google states in its policy document, effectively shifting responsibility to those who deploy or interact with AI models. This stance mirrors that of OpenAI, which has consistently resisted calls for greater accountability.
The EU’s AI Act, which imposes mandatory transparency requirements, including disclosures of training data sources, is seen as a looming threat. Google has warned that such measures could force companies to reveal trade secrets, potentially giving foreign competitors an advantage. The company is now lobbying the U.S. government to oppose stringent AI regulations at the international level and instead promote a “light-touch” approach that aligns with American business interests.