Microsoft and OpenAI warn of Hackers’ Exploitation of AI Tools, Raising Concerns for Future AI Development

Microsoft and OpenAI have revealed a troubling trend: hackers are leveraging advanced AI models such as ChatGPT to refine and intensify their cyber assaults.

The revelation, made today, details how malicious actors from Russia, North Korea, Iran, and China are using cutting-edge technology to bolster their operations.

Recent research indicates that these adversaries are using AI tools like ChatGPT for purposes ranging from target reconnaissance and script optimization to the development of sophisticated social engineering tactics.

Microsoft sounded the alarm in a blog post, cautioning, “Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent.”

Among the identified groups, Strontium, which is affiliated with Russian military intelligence and known for high-profile cyber intrusions, has been observed employing AI models to decode satellite communication protocols and refine technical operations. The group, also known as APT28 or Fancy Bear, gained notoriety for interference in political affairs, including the targeting of Hillary Clinton’s presidential campaign in 2016.

Similarly, North Korean hackers operating under the alias Thallium have been leveraging AI models to exploit publicly reported vulnerabilities, streamline scripting tasks, and craft deceptive content for phishing campaigns. Meanwhile, the Iranian group Curium has been deploying AI to generate convincing phishing emails and evade detection by antivirus software.

Chinese state-affiliated hackers have also joined the fray, employing AI for diverse purposes such as research, scripting, translations, and tool enhancement.

The emergence of AI-powered cyberattacks has sent concern rippling through the cybersecurity community. Tools like WormGPT and FraudGPT have already been used to create malicious emails and cracking tools, posing a significant threat to cybersecurity defenses.

Acknowledging the gravity of the situation, Microsoft and OpenAI have intensified efforts to counteract these malicious activities. While significant attacks leveraging AI models have not yet been detected, the companies remain vigilant, promptly shutting down associated accounts and assets.

In response to the escalating threat, Microsoft is spearheading innovative solutions that leverage AI to fortify cybersecurity defenses. Homa Hayatyfar, principal detection analytics manager at Microsoft, noted, “AI can help attackers bring more sophistication to their attacks, and they have resources to throw at it… we use AI to protect, detect, and respond.”

However, the increasing exploitation of AI tools by hackers raises concerns about the potential ramifications for the future development of artificial intelligence. The misuse of AI for malicious purposes could undermine public trust and support for AI technologies, leading to increased scrutiny and regulatory measures that could stifle innovation in the field.

Microsoft is also developing Security Copilot, an AI assistant designed to help cybersecurity professionals identify breaches quickly and make sense of the deluge of data generated by security tools.

Furthermore, in light of recent Azure cloud attacks and espionage incidents involving Russian hackers, Microsoft is undertaking a comprehensive overhaul of its software security infrastructure, underscoring the imperative of bolstering defenses against evolving cyber threats.

The company has also warned of emerging threats such as voice impersonation, underlining how AI-powered attacks could extend into other areas of technology.

“AI-powered fraud is another critical concern. Voice synthesis is an example of this, where a three-second voice sample can train a model to sound like anyone,” says Microsoft. “Even something as innocuous as your voicemail greeting can be used to get a sufficient sampling.”

Meanwhile, OpenAI is testing a feature that will allow ChatGPT to remember certain information from one conversation to the next, the company said Tuesday. The chatbot will also be able to decide which parts of a conversation it should remember. The memory option will be available to hundreds of thousands of users at first, with OpenAI gathering feedback before rolling it out more widely, the company told Bloomberg. Personalizing the user experience in this way can be an effective way to retain customers, the news site suggested.

Users will be notified if they have access to the memory trial, and will be able to delete some or all of the saved information, or turn off the option entirely, OpenAI said. (LinkedIn News)
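
Conceptually, a memory option like this can be pictured as a small per-user store that the model writes selected facts into and reads back at the start of the next conversation. The sketch below is a hypothetical illustration only; OpenAI has not published its implementation, and every name here (MemoryStore, remember, forget, recall) is an assumption. It simply mirrors the controls described above: selective saving, per-item deletion, and a full opt-out.

```python
# Hypothetical sketch of a per-user conversation memory store.
# NOT OpenAI's implementation; all names are invented to illustrate
# the controls the article describes: selective saving, deletion of
# some or all entries, and turning the option off entirely.

from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    enabled: bool = True                                # user can turn memory off entirely
    entries: dict[str, str] = field(default_factory=dict)

    def remember(self, key: str, fact: str) -> None:
        """Save a fact the model judged worth keeping across conversations."""
        if self.enabled:
            self.entries[key] = fact

    def forget(self, key: str) -> None:
        """Delete a single saved item."""
        self.entries.pop(key, None)

    def forget_all(self) -> None:
        """Delete all saved information at once."""
        self.entries.clear()

    def recall(self) -> list[str]:
        """Facts to inject as context at the start of the next conversation."""
        return list(self.entries.values()) if self.enabled else []


# Example: the model stores a preference, then the user removes it.
memory = MemoryStore()
memory.remember("units", "User prefers metric units")
print(memory.recall())   # ['User prefers metric units']
memory.forget("units")   # user deletes one saved item
memory.enabled = False   # or opts out of memory entirely
print(memory.recall())   # []
```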
