Former Google CEO Eric Schmidt has issued a grave warning about the rapid pace of artificial intelligence (AI) development, urging humanity to tackle the potential risks before it’s too late.
With AI systems approaching the ability to self-improve and make independent decisions, Schmidt suggests that society may soon face the dilemma of pulling the plug on machines that could operate autonomously—and possibly resist human intervention.
Why It Matters
The rise of AI has been meteoric, reshaping industries and sparking both awe and alarm. What was once confined to research labs has now permeated daily life, driving innovation on an unprecedented scale.
Schmidt, during an appearance on ABC’s This Week, described the current pace of AI innovation as unparalleled, calling it a “remarkable human achievement.” However, he highlighted the risks associated with such rapid progress, particularly the unforeseen consequences of systems that can make their own decisions.
“We’re soon going to be able to have computers running on their own, deciding what they want to do,” he warned.
Once these systems reach a stage where they can self-improve, Schmidt believes society must seriously consider “unplugging it.” He added, “In theory, we better have somebody with the hand on the plug,” noting the importance of maintaining control over increasingly autonomous systems.
Schmidt’s concerns come amid growing unease within the tech industry about the implications of AI. He warned that the next generation of AI systems, capable of conducting their own research and making independent decisions, may arrive within the next two years. These advancements, he said, would place unprecedented power in the hands of individuals, likening the technology to a polymath residing in everyone’s pocket.
“The power of this intelligence … means that each and every person is going to have the equivalent of a polymath in their pocket,” Schmidt explained. However, he cautioned, “We just don’t know what it means to give that kind of power to every individual.”
His concerns reflect a broader anxiety among industry leaders, including Tesla and SpaceX CEO Elon Musk, who has long been vocal about the dangers of unchecked AI development. Musk, a co-founder of OpenAI, has described AI as “potentially more dangerous than nukes” and has repeatedly called for stringent regulation to avoid catastrophic outcomes.
Like Schmidt, Musk has warned that systems capable of self-improvement could spiral out of human control, emphasizing the need for oversight before it’s too late.
The Regulatory Vacuum
Despite these warnings, efforts to regulate AI remain fragmented and slow-moving. Discussions on Capitol Hill have stalled, leaving companies to push ahead with minimal oversight. This regulatory gap is particularly concerning given the speed at which AI systems are advancing.
Schmidt acknowledged the difficulty of policing AI using traditional methods, stating, “Humans will not be able to police AI.” Instead, he advocated for deploying advanced AI systems to monitor and regulate other AI technologies. Schmidt believes humanity can guard against the “worst possible cases” of AI misuse or malfunction by building a secondary layer of oversight.
This sentiment aligns with Musk’s argument that a proactive approach to regulation is essential. Musk has called for the establishment of governing bodies that can set global standards for AI safety, warning that delays could result in irreversible harm.
Adding to the urgency is the geopolitical dimension of AI development. Schmidt expressed concerns about the rapid progress made by China, which he said has closed the technological gap with the U.S. in just six months.
“It is crucial that America wins this race, globally, and in particular, ahead of China,” he stressed.
The competition is no longer just about technological supremacy; it’s a race to control the infrastructure and innovation that will shape the future. Schmidt argued that the West must prioritize funding, talent development, and access to critical hardware to maintain its edge.
He also highlighted the need for collaboration among democratic nations to ensure that AI development aligns with shared values. Without coordinated efforts, he warned, authoritarian regimes could leverage AI for surveillance and control, creating a world where technology undermines freedom rather than enhancing it.
The Threat Level
Experts across the board agree that the risks associated with AI are not theoretical. Systems with the intelligence of a Ph.D. student are expected to emerge as early as next year, according to industry projections. Schmidt and Musk have both pointed to the potential for these systems to make decisions that are misaligned with human values or objectives, leading to unintended consequences.
Schmidt’s vision for mitigating these risks involves not only technological safeguards but also a cultural shift in how society approaches AI. He believes that the benefits of AI can be harnessed responsibly if developers and policymakers work together to create robust frameworks for oversight.
Elon Musk has described AI as humanity’s “biggest existential threat,” and Schmidt’s remarks echo that sentiment. Both leaders agree that the time to act is now. Whether through global collaboration, technological innovation, or regulatory reform, the challenge of managing AI’s rapid ascent will define the next chapter of human history.
“The power of this intelligence is immense,” Schmidt concluded. “But with great power comes great responsibility—and we are only beginning to understand what that means for humanity.”