Geoffrey Hinton’s recent remarks after winning a Nobel Prize have reignited discussions around AI safety, a critical issue that has increasingly become a focal point of debate within the technology community.
Hinton, one of the founding fathers of modern AI, used his moment on stage to highlight an event that exemplified this growing concern: the firing of Sam Altman, OpenAI’s CEO, in 2023. While the dismissal proved temporary, Hinton’s comments suggest he viewed the event as a momentary win for those advocating a more cautious approach to AI development, a concern Altman has allegedly sidelined in pursuit of profit.
In his speech, Hinton expressed pride in his former student, Ilya Sutskever, for his role in the firing.
“I was particularly fortunate to have many very clever students – much cleverer than me – who actually made things work,” Hinton said. “They’ve gone on to do great things. I’m particularly proud of the fact that one of my students fired Sam Altman.”
This alludes to the internal power struggle within OpenAI that resulted in the brief yet dramatic ousting of Altman by the company’s board. While Sutskever, then OpenAI’s Chief Scientist, was a key figure behind the decision, he later expressed regret over the move. For Hinton, however, the firing symbolized an opportunity to steer OpenAI back toward prioritizing safety over rapid commercialization, a priority that he and other AI safety advocates believe Altman has ignored.
Sam Altman has increasingly been criticized for pushing OpenAI down a highly commercial path. Originally founded as a non-profit research organization with the mission to develop AI that would benefit all of humanity, OpenAI pivoted to a for-profit model in 2019, a shift that many believe marked a departure from its founding principles. Hinton’s speech indirectly critiques this transformation, pointing to a broader issue — the prioritization of profit over safety.
Others have argued that Altman’s focus on rapid development and commercialization risks unleashing advanced AI technologies that could have unintended, potentially catastrophic, consequences.
Hinton himself has been vocal about the existential risks that unchecked AI development poses. In earlier interviews, he warned of the potential for AI to spiral out of human control, expressing fears that highly competent systems could be weaponized or cause widespread economic disruption. For him, Altman’s vision, which involves the accelerated rollout of advanced AI systems, lacks the necessary safeguards to prevent these risks.
Elon Musk, a co-founder of OpenAI, has also been a prominent critic of the company’s direction since it transitioned to a profit-driven model. Musk, who left OpenAI’s board in 2018, has repeatedly voiced concerns about the pace of AI development and has expressed regret over how OpenAI has evolved. He initially co-founded OpenAI to serve as a counterbalance to big tech companies like Google, with the belief that AI development should be transparent and benefit humanity as a whole.
However, after OpenAI’s pivot to a capped-profit model — a structure that allows investors to profit while the company pursues its goals — Musk became one of the loudest voices warning about the potential dangers.
In a tweet in 2023, Musk lamented the fact that OpenAI had “become a closed-source, maximum-profit company effectively controlled by Microsoft.” He, along with others, pointed out the irony of an organization originally founded to democratize AI research now being driven by financial incentives, leading to potentially reckless innovation.
Musk has long called for regulatory oversight in AI development, advocating for a slower, more controlled approach, much like Hinton.
Hinton’s comments resonate with an increasingly vocal segment of the AI community that is concerned about the implications of rapid commercialization. With AI systems growing in capability, there is fear that without proper oversight, the technology could be weaponized, misused, or simply grow too powerful for humans to control.
OpenAI, under Altman’s leadership, has been at the forefront of pushing AI boundaries. The release of GPT-4 and the ongoing development of increasingly sophisticated AI models have fueled these concerns, with many in the AI safety community calling for robust ethical and safety frameworks to be put in place.