AI Experts Warn of Dangers in Artificial General Intelligence Built Like “Agents”

Two of the world’s most prominent AI scientists, Yoshua Bengio and Max Tegmark, have expressed concerns about the development of “agentic” artificial general intelligence (AGI) that can independently pursue goals and take actions, which they said could lead to unforeseen and potentially dangerous consequences.

In a recent episode of CNBC’s “Beyond The Valley” podcast, they both discussed the risks associated with building AGI systems that can act like assistants or agents, warning of the dangers of uncontrollable AI.

Their fears stem from the fact that the world’s biggest firms are now talking about “AI agents” or “agentic AI,” which the companies claim will allow AI chatbots to act like assistants or agents, helping with work and in everyday life.

Yoshua Bengio, dubbed one of the “godfathers of AI,” highlighted the inspiration drawn from human intelligence in developing machine intelligence, noting the combination of understanding and agentic behavior using knowledge to achieve goals. He explained that current AGI development focuses on creating agents that understand the world and act accordingly, a path he considers dangerous.

He likened it to “creating a new species or a new intelligent entity on this planet” without knowing how it would behave.

In his words,

“Researchers in AI have been inspired by human intelligence to build machine intelligence, and in humans, there’s a mix of both the ability to understand the world like pure intelligence and the agentic behavior, meaning to use your knowledge to achieve goals. Right now, this is how we’re building AGI, we are trying to make them agents that understand a lot about the world and then can act accordingly. But this is actually a very dangerous proposition.”

He further warned that, as AI gets smarter, it could develop a drive for self-preservation, raising the danger of humans having to compete with entities that are smarter than themselves.

Also speaking, Max Tegmark advocated for “tool AI” systems designed for specific, narrowly defined purposes without agency. He suggested that while some agency, as in a self-driving car, might be acceptable, it is crucial to have reliable guarantees of control. He believes that the benefits of AI can be realized while maintaining safety through established standards and by demonstrating control before powerful AI systems are deployed.

It is understood that Tegmark’s Future of Life Institute had, in March 2023, called for a pause in developing AI systems that rival human intelligence. While that hasn’t happened, he stressed the importance of moving beyond discussion to action and establishing guardrails for AGI development. He argued that creating something vastly smarter than humans before establishing control mechanisms is “insane.”

“I think, on an optimistic note here, we can have almost everything that we’re excited about with AI if we simply insist on having some basic safety standards before people can sell powerful AI systems. They have to demonstrate that we can keep them under control. Then the industry will innovate rapidly to figure out how to do that better,” he added.

Yoshua Bengio and Max Tegmark’s concerns come as companies across the globe are increasingly integrating AI agents into their systems, saying the agents can automate complex tasks that would otherwise require human intervention. According to Deloitte, 25% of enterprises are expected to use AI agents by 2025.

While these AI agents can help to navigate the complexities of the modern world, experts say they should be developed only with safeguards in place in order to manage their risks.
