Anthropic Introduces Mythos AI Model for Cybersecurity Applications with Limited Access
On Tuesday, Anthropic announced Mythos, a new AI model focused on cybersecurity defense. The company is withholding a full release and instead collaborating with more than 40 organizations, including rivals Apple and Google, through Project Glasswing. A preview version is being tested by select customers to explore AI's role in preventing cyberattacks.
Anthropic released details on Tuesday about Mythos, a new AI model designed for cybersecurity purposes. The company stated it is withholding a full public release of the technology. Instead, it is partnering with more than 40 companies to assess Mythos's potential in countering cyberattacks.
Mythos is part of Anthropic's broader efforts in AI safety and application. The model builds on Anthropic's Claude series, with the preview version labeled Claude Mythos Preview. Access is limited to a select group of high-profile customers for defensive cybersecurity testing.
Collaboration Initiative
Anthropic launched Project Glasswing, involving more than 40 organizations, including competitors Apple and Google.
The initiative aims to advance AI capabilities in cybersecurity through collaborative testing of the Mythos preview. Participants will explore how the model can enhance defenses against AI-enabled threats. This partnership includes both tech giants and cybersecurity firms.
Wired reported that the project seeks to prevent AI from being used to hack systems. Anthropic emphasized the model's potential to strengthen protective measures in digital environments.
Access and Testing Details
The Claude Mythos Preview is available only to approved testers.
Ars Technica noted that Anthropic is limiting broader access to ensure controlled evaluation. TechCrunch highlighted that the model will support defensive work among a small number of companies. Anthropic's approach reflects caution in deploying powerful AI tools.
The company is working directly with partners to identify risks and benefits in cybersecurity contexts. No timeline for wider availability has been specified.
Background and Implications
Anthropic positions Mythos as a significant development in AI for security.
The New York Times described it as a potential cybersecurity reckoning, though the company focuses on practical testing outcomes. This initiative occurs amid growing concerns over AI's dual-use potential in offensive and defensive roles. Project Glasswing represents a rare collaboration among AI rivals.
Such efforts could standardize approaches to AI safety in cybersecurity. Outcomes from the testing phase may influence future regulations and industry practices.
Story Timeline
- Tuesday: Anthropic announced the Mythos AI model and the Project Glasswing collaboration with more than 40 companies. (Sources: The New York Times, TechCrunch, Ars Technica, Wired)
- Ongoing since announcement: Select customers began testing Claude Mythos Preview for defensive cybersecurity. (Sources: TechCrunch, Ars Technica, Wired)
- Prior to Tuesday: Anthropic developed Mythos as an advancement of the Claude AI series for security applications. (Sources: Ars Technica, Wired)
Potential Impact
1. AI firms collaborate on standardized cybersecurity testing protocols.
2. Regulators reference Project Glasswing outcomes in AI safety guidelines.
3. Tech companies expand investments in AI security partnerships.
4. Defensive AI tools reduce the effectiveness of AI-driven cyberattacks.
Related Stories
Semafor: Anthropic Co-Founder Warns of Upcoming AI Capabilities for Exploiting Web Vulnerabilities
Anthropic's co-founder stated that powerful AI models capable of exploiting website vulnerabilities will emerge soon. The company's new model, Claude Mythos, identified unknown security flaws in major web browsers and operating systems. Financial authorities have responded by dis…
Los Angeles Times: Gallup Poll Shows Increasing AI Use Among US Workers with Persistent Skepticism
A Gallup poll conducted in February indicates that more American workers are using artificial intelligence in their jobs, with about 3 in 10 using it frequently. However, skepticism remains common, with many non-users citing preferences for traditional methods, ethical concerns,…
AI Assistant Poke Charges Billionaire $136,000 Monthly Fee
Poke, an AI assistant without a price ceiling, charged one billionaire $136,000 a month. Marvin von Hagen disclosed this pricing detail, which highlights Poke's premium service model.