Iranian Hackers Misused ChatGPT in Cyberattacks Targeting Critical Infrastructure

In a groundbreaking report, OpenAI revealed how Iranian hackers exploited ChatGPT to enhance cyberattacks on industrial control systems (ICS). The report sheds light on cyber activities carried out by groups like CyberAv3ngers, linked to Iran's Islamic Revolutionary Guard Corps (IRGC), as well as China-linked actors.
These threat actors used ChatGPT for reconnaissance, vulnerability exploitation, and post-compromise actions. While OpenAI emphasizes that the AI didn't offer new capabilities, it did help these hackers conduct attacks more efficiently using publicly available techniques.
Iranian Group CyberAv3ngers and Water Facility Attacks
CyberAv3ngers gained notoriety in late 2023 for targeting water utilities in Ireland and Pennsylvania, causing significant disruptions. The group exploited poorly secured ICS devices that were exposed to the internet and still using default passwords. Its focus was on programmable logic controllers (PLCs), the devices that directly control industrial processes.
The group's use of ChatGPT involved asking the chatbot for information on industrial routers, PLCs, and default passwords for critical infrastructure devices such as Tridium Niagara controllers and Hirschmann RS-series routers.
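The default-credential weakness described above is also straightforward for defenders to audit. Below is a minimal sketch that flags devices in an asset inventory still using a vendor default password; the model keys, default values, and inventory format are illustrative assumptions, not actual vendor defaults.

```python
# Defensive sketch: flag devices in an asset inventory that still use a
# vendor default password. Model keys and default values below are
# illustrative placeholders, not real vendor defaults.

KNOWN_DEFAULTS = {
    "tridium-niagara": {"admin", "default"},
    "hirschmann-rs": {"private"},
}

def find_default_credentials(inventory):
    """Return hostnames of devices whose password matches a known default."""
    flagged = []
    for device in inventory:
        defaults = KNOWN_DEFAULTS.get(device["model"], set())
        if device["password"] in defaults:
            flagged.append(device["host"])
    return flagged

if __name__ == "__main__":
    inventory = [
        {"host": "plc-01.example", "model": "tridium-niagara", "password": "admin"},
        {"host": "rtr-02.example", "model": "hirschmann-rs", "password": "S3cure!x9"},
    ]
    print(find_default_credentials(inventory))  # ['plc-01.example']
```

In practice the inventory would come from a configuration-management database, but the principle is the same: compare stored credentials against published default lists and rotate anything that matches.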
Misuse of AI for Cyberattacks
In addition to ICS-specific targets, CyberAv3ngers sought assistance from ChatGPT to obfuscate malicious code and scan networks for exploitable vulnerabilities. The group also attempted to find ways to access macOS passwords.
However, OpenAI clarified that these activities didn’t provide the hackers with any new or advanced capabilities beyond what is available through non-AI tools. This highlights the dangers of AI misuse, even if the information gained is not revolutionary.
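The network exposure that such scans rely on can be checked from the defender's side with ordinary tooling. The sketch below probes a host for open TCP ports and assumes it is run only against systems you own; the port-to-protocol mapping uses well-known ICS assignments (Modbus/TCP on 502, Niagara Fox on 1911, EtherNet/IP on 44818).

```python
import socket

def exposed_ports(host, ports, timeout=0.5):
    """Return (port, label) pairs on host that accept a TCP connection."""
    open_ports = []
    for port, label in sorted(ports.items()):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append((port, label))
    return open_ports

# Well-known ICS port assignments.
ICS_PORTS = {502: "Modbus/TCP", 1911: "Niagara Fox", 44818: "EtherNet/IP"}

if __name__ == "__main__":
    for port, label in exposed_ports("127.0.0.1", ICS_PORTS):
        print(f"port {port} ({label}) is reachable")
```

Any ICS port reachable from an untrusted network segment is a candidate for firewalling or moving behind a VPN, which addresses the exposure directly rather than the tooling used to find it.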
Other Threat Actors Involved
Besides CyberAv3ngers, another Iranian group known as Storm-0817 misused ChatGPT. It used the chatbot to help develop Android malware, build an Instagram scraper, and translate LinkedIn profiles into Persian. While not as directly damaging as ICS attacks, these activities reflect broader efforts to weaponize AI in cyber operations.
China-linked SweetSpecter was also mentioned in the report. This group used ChatGPT for malware development and vulnerability research. Notably, SweetSpecter attempted to send spear-phishing emails with malicious attachments to OpenAI employees, but the attack was thwarted before it reached its targets.
The Bigger Picture
The report underscores the growing risk of AI misuse in cyberwarfare. Although AI, including ChatGPT, can streamline tasks for legitimate users, it can also serve as a tool for bad actors. OpenAI's proactive approach to detecting and neutralizing these threats is vital to curbing their impact.
This raises questions for the cybersecurity industry: How can AI be safeguarded from misuse? What measures are necessary to prevent threat actors from using such tools to enhance their capabilities?
These revelations should prompt industries and governments to rethink security strategies, especially for critical infrastructure, and prioritize closing the gaps that allow hackers to exploit vulnerable systems.
AI has tremendous potential to advance industries and improve efficiency. But as the report shows, it's also a double-edged sword. When misused, it can amplify the capabilities of cybercriminals and nation-state actors. Staying ahead of these threats requires constant vigilance, stronger defenses, and responsible development of AI technologies.
The role of ChatGPT in these incidents may be limited, but it serves as a wake-up call for the broader implications of AI in the cybersecurity landscape.