Anthropic’s Claude Mythos preview accessed without authorization on launch day

by Kim Seong Hyeon | Posted: April 26, 2026, 15:24 | Updated: April 26, 2026, 15:24
Screenshot from Anthropic’s website


Anthropic’s newest AI cybersecurity model, released under tight controls, was accessed without authorization the same day it was made available to a select group of companies, industry sources said.
 
Industry officials said April 26 that Anthropic has opened an investigation after detecting unauthorized access to a preview of its latest model, “Claude Mythos,” through a third-party vendor environment. The access is believed to have occurred on April 7, when Anthropic announced “Project Glasswing” and began providing the Mythos preview to selected companies.
 
Mythos is described as capable of finding software vulnerabilities on its own and autonomously generating attack code based on them. Anthropic has said the model can detect and exploit vulnerabilities “to a degree that could surpass top-tier human hackers.” Citing dual-use risks — it could be used for attacks as well as defense — the company chose an invitation-only, limited research release rather than a broad public launch.
 
The suspected route was a classic insider-risk scenario. A partner-company employee used their authorized access to enter the Mythos environment and, using information about Anthropic’s file system exposed in a hack of the AI evaluation startup Mercer, inferred where the model was hosted online. The employee then shared access with a small Discord group.
 
The group reportedly avoided cybersecurity-related prompts to evade detection, using the model only for simple tasks such as building websites. The group also claimed it accessed other unreleased models, but that claim has not been confirmed.
 
Anthropic said there is “no evidence the intrusion spread beyond the vendor environment,” and no damage to Anthropic systems has been confirmed. Still, security experts said the incident highlights broader structural weaknesses. Gabriel Hempel, a security operations strategist at security firm Exabeam, said the preview was intentionally limited because of dual-use risks, yet it “leaked almost immediately through a partner environment.”
 
Some analysts said Project Glasswing’s design carries an inherent contradiction. Anthropic limited the Mythos preview to companies including Amazon, Apple, Google, Microsoft, Nvidia, Cisco, CrowdStrike and JPMorgan Chase. U.S. Treasury Secretary Scott Bessent recently convened the heads of major investment banks — including Goldman Sachs, Citigroup, Bank of America and Morgan Stanley — to discuss ways to use Mythos, according to reports.
 
The strategy was framed as a nationally designed “controlled release,” but as more institutions participate, the number of people with access inevitably grows, exposing the weakest link.
 
The incident also underscored the limits of even advanced code-focused defenses: a model may detect vulnerabilities in software, but it cannot by itself prevent weaknesses in unvetted third-party tools or social-engineering attacks. The use of Mercer breach information to gain access to Mythos was cited as a real-world example of “AI supply-chain risk,” in which one breach fuels another.
 
The case is also raising questions about whether “only a few get access” remains a workable control strategy in the AI era. Unlike in the export-control regimes used for semiconductors and defense hardware, access to an AI model can itself become the leak path.
 
South Korea is also watching the debate. Ryu Je-myeong, second vice minister of science and ICT, told reporters at the World IT Show on April 22 that the government is exploring official participation in security discussions involving Anthropic’s Glasswing and OpenAI, adding that direct information-sharing is needed “in some form.”
 
The Ministry of Science and ICT has been holding a series of emergency issue-review meetings with domestic IT and security industry officials, including SK Telecom, KT, LG Uplus, Naver and Kakao, to assess cybersecurity readiness as AI capabilities advance.
 



* This article has been translated by AI.