Google says it blocked AI-assisted cyberattack plot, warns of North Korean hacking activity

By Lee Jung-woo Posted : May 12, 2026, 13:55 Updated : May 12, 2026, 13:55
A Google logo is seen at a company research facility in Mountain View, California, U.S., May 13, 2025. Reuters-Yonhap
SEOUL, May 12 (AJP) - Google said it had preemptively blocked hackers who were preparing large-scale AI-assisted cyberattacks, and identified North Korean state-linked hacking groups using AI to refine their cyber operations.

In a report published on its Cloud Security blog, Google's Threat Intelligence Group (GTIG) said it had uncovered a threat actor believed to have used AI in preparations for a "zero-day" attack campaign.

Google said the actor appeared to be planning broad operations, but the company’s early intervention likely prevented the attacks from being executed.

A zero-day attack exploits previously unknown software vulnerabilities before developers can issue security patches, making such intrusions especially difficult to defend against.

The disclosure adds to mounting concerns in the cybersecurity industry that rapid advances in AI-assisted vulnerability detection could accelerate the discovery and weaponization of software flaws.

According to the report, the attackers sought to exploit vulnerabilities to bypass two-factor authentication systems. Google stressed there was no evidence its own AI model, Gemini, had been used in the operation.

While Google did not identify the actor behind the attempted attacks, it separately warned that state-backed hacking groups linked to China and North Korea are showing “particular interest” in applying AI to cyber operations.

The company said such groups are adopting increasingly sophisticated AI-assisted techniques for vulnerability discovery and exploitation, including integrating specialized, high-quality security datasets into their workflows.
Google specifically highlighted North Korean hacking group APT45, saying there were indications the group had conducted automated research by repeatedly submitting thousands of prompts to analyze vulnerabilities and validate exploit code.

“Attackers are not hesitating to experiment and innovate, and neither are we,” Google said in the report, adding that it is sharing research findings and defensive measures across the cybersecurity and AI communities to stay ahead of evolving threats.

The warning comes amid broader concern over AI-powered cyber threats following Anthropic's recent announcement of "Claude Mitos," an AI model reportedly capable of expert-level vulnerability discovery. Anthropic said access to the model would initially be restricted to select companies and institutions because of security concerns.

Security experts have also warned that threat actors may be able to assemble comparable cyber capabilities by combining already publicly available AI models.

Meanwhile, OpenAI recently introduced “GPT-5.5-Cyber,” a cybersecurity-focused AI model reportedly accessible only to a limited group of researchers and organizations.

Copyright ⓒ Aju Press All rights reserved.