Microsoft and OpenAI Warn of Nation-State Hackers Leveraging AI for Destructive Cyber Attacks

Microsoft and OpenAI have jointly issued a warning about the growing threat of nation-state hackers employing artificial intelligence (AI) and large language models (LLMs) to enhance their cyber attack capabilities. The report identifies nation-state actors linked to Russia, North Korea, Iran, and China as actively experimenting with AI technologies for malicious cyber activities.
Through their collaboration, Microsoft and OpenAI disrupted the efforts of five state-affiliated actors, terminating the accounts and assets those groups used to access AI services for malicious purposes. According to Microsoft, the appeal of large language models lies in their natural language support, making them attractive to threat actors specializing in social engineering and deceptive communication tailored to specific targets.
While no significant or novel attacks utilizing LLMs have been observed thus far, the report notes that these state-affiliated actors are experimenting with AI across multiple stages of the attack chain, with observed activity ranging from reconnaissance and coding assistance to malware development.
The Russian nation-state group Forest Blizzard (APT28) is reported to have used OpenAI services for open-source research on satellite communication protocols and radar imaging technology, as well as for scripting tasks. Similarly, other threat actors such as North Korea's Emerald Sleet (Kimsuky), Iran's Crimson Sandstorm (Imperial Kitten), and China's Charcoal Typhoon (Aquatic Panda) and Salmon Typhoon (Maverick Panda) have utilized LLMs for tasks such as identifying experts, conducting research, generating code snippets, and creating content for phishing campaigns.
In response to the growing threat, Microsoft is taking proactive measures by formulating a set of principles to mitigate risks associated with the malicious use of AI tools and APIs by nation-state actors, advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates. The principles include identifying and taking action against malicious actors, notifying other AI service providers, collaborating with stakeholders, and ensuring transparency in dealing with these threats.
This collaboration between Microsoft and OpenAI underscores the need for a collective effort to establish guardrails and safety mechanisms around AI models, reinforcing the commitment to responsible AI use and security in the face of evolving cyber threats.