Anthropic Reveals Claude Mythos but Withholds Release Over Safety Concerns

San Francisco: Anthropic has unveiled Claude Mythos, its most powerful AI model to date, and announced it will not release it to the public. The reason: the model is too capable at finding and exploiting software vulnerabilities to be safely deployed at scale.

Over the past several weeks, Anthropic used a preview version of Mythos to scan critical software infrastructure. The model found thousands of zero-day vulnerabilities (previously unknown flaws) across every major operating system and every major web browser. Many of the bugs are decades old. Over 99% remain unpatched, which is why Anthropic cannot disclose details.

The capabilities were not intentional: Anthropic says it did not train Mythos for cybersecurity work. They emerged as a byproduct of general improvements in coding, reasoning, and autonomy. The same skills that make Mythos better at fixing code make it better at breaking it.

The scale of what Mythos can do is striking. Previous Claude models had a near-zero success rate at autonomous exploit development; Mythos converted 72.4% of known Firefox JavaScript vulnerabilities into working exploits. Engineers with no security training set it the task of finding remote code execution vulnerabilities overnight and woke to complete, working exploits.

In response, Anthropic launched Project Glasswing, a coordinated defensive effort to use Mythos to patch vulnerabilities before bad actors develop similar capabilities. Twelve core partners are involved, including AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, and Nvidia. Forty organisations in total will receive access. Anthropic is backing the effort with USD 100 million in usage credits, along with USD 4 million in donations to open-source security organisations.

The announcement lands as Anthropic faces a complicated backdrop. The company has suffered two major security lapses in recent weeks: the accidental exposure of a draft Mythos blog post and the leak of Claude Code source code via npm. It is also in a legal dispute with the US Department of Defense. Now it is asking the industry to trust it with one of the most dangerous AI capabilities ever disclosed.

Yashika Aneja

Yashika Aneja is a journalist at Tea4Tech with over five years of experience in reporting and editorial writing. Her work spans technology, environment, education, politics, social media, travel, and lifestyle, with a focus on fact-based reporting and explanatory storytelling.

At Tea4Tech, Yashika contributes original reporting and analysis that adheres to the publication's editorial standards for accuracy, originality, and responsible journalism. Her reporting is informed by curiosity-driven research and a multidisciplinary approach to news coverage.
