
AI-Developed 2FA Exploit

Cybersecurity researchers have uncovered a previously unidentified threat actor leveraging a zero-day exploit believed to have been developed with the assistance of artificial intelligence. This marks the first documented case of AI being actively used in real-world malicious operations for vulnerability discovery and exploit generation.

Investigators attribute the campaign to multiple cybercriminal groups that appear to have collaborated on a large-scale vulnerability exploitation initiative. Analysis of the attack chain uncovered a Python exploit script targeting a zero-day vulnerability that allowed attackers to bypass two-factor authentication (2FA) protections in a widely used open-source, web-based system administration platform.

Although no direct evidence links Google's Gemini AI tool to the operation, researchers concluded with high confidence that an AI model played a significant role in discovering and weaponizing the flaw. The Python code displayed multiple characteristics commonly associated with large language model (LLM)-generated output, including highly structured formatting, extensive educational docstrings, detailed help menus, and a clean ANSI color implementation. The script also contained a fabricated CVSS score, a common example of AI hallucination.
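
For illustration, the sketch below mimics those fingerprints. Everything in it, including the CVSS figure, is invented here and does not come from the actual exploit:

```python
#!/usr/bin/env python3
"""
Illustrative mock-up only -- nothing here is taken from the real exploit.

It reproduces the stylistic fingerprints researchers associate with
LLM-generated tooling: an over-explained docstring, systematically named
ANSI color constants, a polished --help menu, and a confidently cited
CVSS score (9.8 CRITICAL) that is entirely fabricated here, mirroring the
hallucinated score found in the analyzed script.
"""

import argparse

# Tidy, consistently named ANSI color constants -- a common hallmark of
# LLM-formatted console output.
GREEN = "\033[92m"
RED = "\033[91m"
RESET = "\033[0m"


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Demo of LLM-style script scaffolding (no exploit logic).",
        epilog="Example: python demo.py --target https://host.invalid",
    )
    parser.add_argument("--target", required=True, help="Base URL of the target")
    args = parser.parse_args()
    print(f"{GREEN}[+]{RESET} Demo only; would inspect {args.target}")


if __name__ == "__main__":
    main()
```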

How the 2FA Bypass Exploit Worked

The exploit required legitimate user credentials to succeed, meaning it undermined only the second authentication factor rather than authentication as a whole. Researchers determined that the flaw stemmed from a semantic logic weakness: a hard-coded trust assumption within the application's authentication process. Such high-level logic flaws are increasingly within the analytical reach of modern LLM systems.
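
The report does not reproduce the flawed code, but a minimal hypothetical sketch of this class of weakness might look like the following, where the server honors a client-supplied flag when deciding whether to demand the second factor. All names and logic here are invented for illustration:

```python
# Hypothetical sketch of the *class* of flaw described above: a hard-coded
# trust assumption in the 2FA step of a login flow. Illustrative only; the
# affected platform's real code has not been published. All names here
# (check_password, verify_totp, the form fields) are stand-ins.

def login(form: dict, check_password, verify_totp) -> str:
    if not check_password(form.get("username"), form.get("password")):
        return "DENY: bad credentials"

    # FLAW: the server trusts a client-supplied field to decide whether the
    # second factor is even required. An attacker holding valid credentials
    # simply sends trusted_device=true and skips 2FA entirely.
    if form.get("trusted_device") == "true":
        return "ALLOW: session granted (2FA silently skipped)"

    # The OTP check is only reached if the client *chooses* not to assert
    # trust -- the semantic logic weakness an LLM can spot from source alone.
    if not verify_totp(form.get("username"), form.get("otp", "")):
        return "DENY: invalid one-time code"
    return "ALLOW: session granted"


# Demo with stub validators: correct password, no OTP, forged trust flag.
if __name__ == "__main__":
    ok_password = lambda u, p: (u, p) == ("admin", "hunter2")
    ok_totp = lambda u, otp: otp == "123456"
    print(login({"username": "admin", "password": "hunter2",
                 "trusted_device": "true"}, ok_password, ok_totp))
```

Note that in this pattern, as in the reported exploit, valid credentials remain a prerequisite; the bug weakens the second factor rather than replacing authentication outright.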

Security experts warn that AI is dramatically accelerating every stage of the cyberattack lifecycle, from vulnerability discovery to exploit validation and operational deployment. The growing use of AI by threat actors is reducing the time and effort required to identify weaknesses and launch attacks, placing defenders under increasing pressure.

AI Expands the Malware and Exploitation Landscape

Artificial intelligence is no longer limited to assisting vulnerability research. Threat actors are now using AI to build polymorphic malware, automate malicious operations, and conceal attack functionality. One notable example is PromptSpy, an Android malware strain that abuses Gemini to analyze on-screen activity and issue instructions that help the malware remain pinned within the recent applications list.

Researchers have also documented several high-profile cases involving Gemini-assisted malicious activity:

The suspected China-linked cyber espionage group UNC2814 reportedly used persona-driven jailbreaking prompts to force Gemini into assuming the role of a network security expert. The objective was to support vulnerability research targeting embedded devices, including TP-Link firmware and Odette File Transfer Protocol (OFTP) implementations.

The North Korean threat actor APT45 allegedly issued thousands of recursive prompts designed to analyze CVEs and validate proof-of-concept exploits.

The Chinese hacking group APT27 reportedly used Gemini to accelerate development of a fleet management application likely intended to manage an operational relay box (ORB) infrastructure.

Russia-linked intrusion operations targeting Ukrainian organizations deployed AI-assisted malware families known as CANFAIL and LONGSTREAM, both of which incorporated LLM-generated decoy code to disguise malicious behavior.

Weaponized Training Data and Autonomous AI Operations

Threat actors have additionally been observed experimenting with a specialized GitHub repository named 'wooyun-legacy,' packaged as a skill plugin for Claude Code. The repository contains more than 5,000 real-world vulnerability cases originally collected by the Chinese vulnerability disclosure platform WooYun between 2010 and 2016.

By feeding this dataset into AI systems, attackers can use in-context learning to prime models to approach source code analysis with the precision of an experienced security researcher. This significantly improves a model's ability to identify subtle logic flaws that a standard, unprimed model might overlook.
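
Mechanically, this kind of priming is ordinary few-shot prompting. The hypothetical sketch below shows the general pattern of packing prior vulnerability write-ups into a prompt before asking a model to review new code; the file layout and field names are invented, and the same technique is routinely used defensively in AI-assisted code review:

```python
import json
from pathlib import Path

# Hypothetical sketch of in-context priming with a historical vulnerability
# corpus. The directory layout and JSON fields are invented; the technique
# (prepending solved examples so the model imitates expert analysis) is the
# general few-shot pattern, not the actual wooyun-legacy plugin.

def build_review_prompt(case_dir: Path, target_source: str, k: int = 5) -> str:
    examples = []
    for case_file in sorted(case_dir.glob("*.json"))[:k]:
        case = json.loads(case_file.read_text(encoding="utf-8"))
        examples.append(
            f"### Past case: {case['title']}\n"
            f"Vulnerable snippet:\n{case['snippet']}\n"
            f"Root cause: {case['root_cause']}\n"
        )
    return (
        "You are reviewing source code for security flaws. Study the solved "
        "cases, then analyze the new code the same way.\n\n"
        + "\n".join(examples)
        + "\n### New code to analyze:\n"
        + target_source
    )


# Minimal runnable demo with one synthetic case file.
if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as tmp:
        sample = {"title": "Auth bypass via trusted flag",
                  "snippet": "if form.get('trusted') == 'true': grant()",
                  "root_cause": "client-controlled trust decision"}
        Path(tmp, "case1.json").write_text(json.dumps(sample), encoding="utf-8")
        print(build_review_prompt(Path(tmp), "def login(): ...", k=1))
```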

Researchers also revealed that a suspected China-aligned threat actor deployed agentic AI tools such as Hexstrike AI and Strix during attacks against a Japanese technology company and a major East Asian cybersecurity platform. These tools reportedly enabled automated reconnaissance and discovery operations with minimal human intervention.

The Growing Security Implications of Offensive AI

The findings underscore a major shift in the cyber threat landscape. AI is rapidly evolving from a productivity tool into a force multiplier for offensive cyber operations. From discovering zero-day vulnerabilities to automating malware deployment and enhancing operational stealth, artificial intelligence is fundamentally changing how cyberattacks are planned and executed.

As AI-driven cyber capabilities continue to mature, organizations face a future where attacks become faster, more adaptive, and increasingly difficult to detect before damage occurs.
