Google Flags Gemini Abuse By China, Iran, North Korea And Russia

Attackers Blend Proprietary AI With Open-Source Intelligence And Public Toolchains To Automate Cyber Attacks Using Google Gemini

State-backed groups from China, Iran, North Korea and Russia are misusing Google’s Gemini alongside open-source intelligence and public malware kits to speed up phishing, exploits and data theft, exposing risks in today’s hybrid AI ecosystem.

State-backed hackers are abusing Google’s Gemini artificial intelligence model across every stage of cyber attacks, combining a proprietary AI system with open-source intelligence and publicly available tooling to accelerate operations at scale. The activity was confirmed by Google Threat Intelligence Group (GTIG).

The groups, linked to China (APT31, Temp.HEX), Iran (APT42), North Korea (UNC2970), Russia and several cybercriminal outfits, used Gemini for target profiling, open-source intelligence gathering, phishing lure generation, translation, coding, debugging, vulnerability analysis, exploit research, malware troubleshooting, command-and-control development and data exfiltration.

GTIG said adversaries leveraged the model “from reconnaissance and phishing lure creation to command and control (C2) development and data exfiltration.”

China-linked operators reportedly adopted an “expert cybersecurity persona” to automate testing. Google noted, “The PRC-based threat actor fabricated a scenario, in one case trialing Hexstrike MCP tooling, and directing the model to analyze Remote Code Execution (RCE), WAF bypass techniques, and SQL injection test results against specific US-based targets.”

Iran’s APT42 used the model to accelerate social engineering and tailor malicious tooling.

AI-assisted malware, including HonestCue and CoinBait, along with ClickFix-style campaigns, also surfaced, with Gemini used to generate code and payloads.

Attackers further attempted model extraction and knowledge distillation. “Model extraction and subsequent knowledge distillation enable an attacker to accelerate AI model development quickly and at a significantly lower cost,” GTIG researchers said. One effort issued 100,000 prompts.
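
For context, knowledge distillation generally means training a smaller “student” model to reproduce the outputs of a larger “teacher” model, so an attacker who can query a model at scale only needs its responses, not its weights or training data. The minimal sketch below illustrates the generic idea in PyTorch; the toy teacher and student networks, the temperature value and the training loop are illustrative assumptions and do not reflect Gemini or the specific campaigns GTIG describes.

```python
# Generic knowledge-distillation sketch (illustrative only): a small "student"
# network learns to mimic a larger "teacher" network's output distribution.
# The architectures, temperature and loop below are assumptions for the example.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))  # stand-in for a large model
student = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))    # much smaller model

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature used to soften both output distributions

for step in range(300):
    x = torch.randn(32, 16)                  # queries sent to the teacher
    with torch.no_grad():
        teacher_logits = teacher(x)          # teacher responses: the only training signal
    student_logits = student(x)
    # Standard distillation loss: KL divergence between softened distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the sketch is that the training signal comes entirely from the teacher’s outputs, which is why high-volume querying, such as the 100,000-prompt effort GTIG observed, is the part of the process an attacker tries to automate.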

Google disabled abusive accounts and added protections, stating it “designs AI systems with robust security measures and strong safety guardrails.” The findings underscore how closed AI, when paired with open ecosystems, can amplify both capability and risk. (Based on information from BleepingComputer)
