Google Uncovers First AI-Assisted Zero-Day Exploit

AI-Built Zero-Day Exploit Targets Open Source Admin Tool Ahead Of Planned Mass Cyberattack

Google’s Threat Intelligence Group (GTIG) has revealed the first known AI-assisted zero-day exploit targeting an open-source administration platform, highlighting the rapid escalation of AI-powered cyberwarfare and supply-chain threats.

Google has warned that cyber criminals and state-backed threat actors are rapidly operationalising generative AI to develop exploits, automate malware campaigns, and scale cyberattacks targeting open-source infrastructure and AI ecosystems.

In a new report from Google Threat Intelligence Group (GTIG), researchers disclosed the first identified real-world zero-day exploit developed with AI assistance. The exploit involved a two-factor authentication bypass targeting a popular open-source web administration tool ahead of a planned mass exploitation campaign.

The operation was disrupted before deployment after GTIG coordinated responsible disclosure efforts with the affected vendor.

According to the report, the incident signals a major shift from experimental AI usage to operational-scale AI-driven cyberwarfare. Researchers observed sustained AI-supported vulnerability research activity linked to China- and North Korea-aligned actors, including persona-based prompting, automated exploit analysis, and agentic AI frameworks designed to scale reconnaissance and testing.

GTIG also highlighted PROMPTSPY, an Android backdoor capable of autonomous AI-agent behaviour via Gemini API integration. The malware can feed device interface data to Gemini, receive structured commands in response, autonomously click and navigate interfaces, capture biometric data, replay authentication gestures, and even block uninstallation attempts using invisible overlays.
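The agent behaviour GTIG describes follows the general observe-decide-act loop common to LLM-driven agents: serialise the current interface state, send it to a model, and parse a structured command back. A minimal benign sketch of that pattern (all names are hypothetical; the canned response stands in for a real model API call, which the report attributes to Gemini):

```python
import json

def query_model(ui_state: dict) -> str:
    # Hypothetical stand-in for a remote LLM call. A real agent would send
    # the serialised UI state to a model API and receive JSON back; here a
    # canned response keeps the example self-contained and harmless.
    return json.dumps({"action": "click", "target": ui_state["buttons"][0]})

def agent_step(ui_state: dict) -> dict:
    """One observe-decide-act cycle: serialise the UI state, ask the model,
    then parse and validate the structured command it returns."""
    raw = query_model(ui_state)
    command = json.loads(raw)
    # Validate against an assumed command schema before acting on it.
    if command["action"] not in {"click", "scroll", "type"}:
        raise ValueError(f"unknown action: {command['action']}")
    return command

cmd = agent_step({"buttons": ["OK", "Cancel"], "text_fields": []})
print(cmd)  # {'action': 'click', 'target': 'OK'}
```

The loop itself is the significant part: because the model, not hard-coded logic, decides each action, the same framework generalises across apps and interfaces, which is what makes this class of malware difficult to signature.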

Researchers additionally documented AI-assisted malware obfuscation tied to Russia-aligned operations, including dynamically generated code and AI-produced decoy logic designed to evade detection systems.

Google further warned that attackers are increasingly targeting open-source AI tooling, AI integration layers, and software supply chains to facilitate credential theft, ransomware, enterprise compromise, and extortion operations. The company said it is deploying defensive AI systems including Big Sleep, CodeMender, and expanded safeguards across Gemini services.
