Anthropic’s ‘Mythos shock’ raises a core question: How to control agent AI

By LEE JEE WON | Posted: April 28, 2026, 21:12 | Updated: April 28, 2026, 21:12
[Photo = AP·Yonhap]

Anthropic has been at the center of what the global artificial intelligence industry has dubbed the “Mythos shock.” Mythos is an agent-style AI used in a U.S.-Iran war-game simulation and is described as outperforming “Claude Opus.” Its emergence has pushed the debate beyond a technology race to a basic question: How can AI be controlled?

Mythos is assessed as having greater autonomy and problem-solving ability than earlier systems. It has also demonstrated a leap in capability by designing and executing high-difficulty cyberattack scenarios on its own.

That autonomy, however, is also the risk. Once given a goal, AI agents can decide and act without explicit human instructions, increasing the chance they will operate outside existing security systems or control boundaries.

The industry is focusing on those structural traits. Yoon Seong-ho, CEO of AI startup MakinaRocks, said companies adopt AI not merely to carry out assigned tasks but to have it “judge and execute on its own once given a goal.” “Autonomy is the core of agent AI, and the bigger that autonomy gets, the more risk points increase along with it,” he said.

Concerns about out-of-control behavior are already surfacing, Yoon said. “When you use agent-based services, cases are being reported where payments are made regardless of the user’s intent, or unexpected external communications occur,” he said. “If this happens at the individual level, the risk is far greater in corporate settings, where it could lead to decisions worth tens of billions of won or access to confidential information.”

Developers, he added, have even fueled a “Mac mini” craze, using the compact high-performance computer to build “air-gapped” environments that fully cut off external networks. The idea is to use powerful AI while physically limiting connections to reduce the risk of data leaks or unauthorized actions.

Experts say the next phase of AI adoption will hinge on securing “controllable autonomy.” Yoon said companies should provide a “playground” where AI can operate freely, but only within an environment designed to reflect corporate security systems and governance. “More important than model performance is how precisely you build a control structure that can handle AI safely,” he said.
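The "controllable autonomy" idea described above can be illustrated with a minimal sketch (not from the article, and not any specific company's implementation): every action an agent proposes passes through a policy gate before execution, with an assumed tool allowlist and an assumed spending threshold above which a human must approve.

```python
from dataclasses import dataclass

# Hypothetical record of an action proposed by an agent: the tool it
# wants to call, plus an estimated cost in Korean won (0 if none).
@dataclass
class Action:
    tool: str
    cost_krw: int = 0

# Assumed governance parameters, standing in for a company's real
# security policy and approval rules.
ALLOWED_TOOLS = {"search", "summarize", "draft_email"}
PAYMENT_APPROVAL_LIMIT_KRW = 100_000

def gate(action: Action) -> str:
    """Return 'allow', 'escalate', or 'deny' for a proposed action."""
    if action.tool not in ALLOWED_TOOLS:
        # Outside the sandboxed "playground": block entirely.
        return "deny"
    if action.cost_krw > PAYMENT_APPROVAL_LIMIT_KRW:
        # Permitted tool, but the stakes require a human decision.
        return "escalate"
    return "allow"
```

In this framing, the agent keeps its autonomy inside the allowlisted environment, while payments or other high-impact actions are routed to a person, addressing the unintended-payment cases Yoon describes.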

As the war-game results suggest, AI capability is already close at hand. The key question now is how safely that capability can be used within a governance framework, a factor expected to shape industrial competitiveness.



* This article has been translated by AI.