Anthropic Releases Preview of AI Model Mythos for Cybersecurity Initiative Amid Security Concerns
Anthropic has introduced a preview of its new AI model, Mythos, as part of Project Glasswing, a collaboration with over 40 companies including Microsoft, Amazon, Apple, and Google to enhance cybersecurity. The model has identified thousands of vulnerabilities in major operating systems and browsers. Anthropic is limiting the full rollout due to risks of misuse by hackers for cyberattacks.
Anthropic announced on Tuesday the preview release of its new AI model, Mythos, designed to detect software vulnerabilities, CNBC reported. The model is integrated into Project Glasswing, a cybersecurity initiative involving more than 40 organizations, including Microsoft, Amazon, Apple, Google, Nvidia, CrowdStrike, and Palo Alto Networks.
Anthropic stated that Mythos has already identified thousands of bugs and vulnerabilities in popular software, including every major operating system and web browser.
The initiative aims to enable large companies and potentially governments to flag system vulnerabilities with minimal human intervention. Participants will use the Claude Mythos Preview model to test and advance AI-driven cybersecurity capabilities. Anthropic is withholding the full release of Mythos due to concerns that hackers could exploit the model for cyberattacks.
Project Glasswing Collaboration

Project Glasswing brings together rivals in the tech industry to address AI's dual role in cybersecurity threats and defenses.
Companies involved include Amazon Web Services and others focused on preventing cyberattacks through proactive vulnerability detection. The New York Times reported that Anthropic is working with 40 companies to explore Mythos's applications in cyber defense.
This partnership highlights the growing integration of AI in cybersecurity, where advanced models can both accelerate attacks and strengthen protections.
CoinDesk noted that the model has uncovered vulnerabilities across major platforms, demonstrating its potential despite the rollout limitations.
Security Risks and Model Capabilities

Anthropic's decision to limit Mythos stems from fears of its misuse in offensive cyber operations.
The Verge detailed that the model found security problems in every major operating system and web browser during testing. Wired described the collaboration as an effort to keep AI from enabling widespread hacking.
"The company claims that the new model has already identified 'thousands' of bugs and vulnerabilities in popular software programs, including every major operating system and browser." — CoinDesk, date not specified

The New York Times characterized the development as a cybersecurity reckoning, emphasizing the need for defensive AI advancements against faster hacker attacks enabled by similar technologies.
Broader Implications for AI in Defense

As AI systems like Mythos and those from OpenAI evolve, they are expected to upend traditional cybersecurity practices. Defenses will increasingly rely on AI to counter AI-assisted threats. Anthropic's approach balances innovation with caution, prioritizing controlled access through partnerships.
Story Timeline
4 events

- Tuesday: Anthropic announced preview of Mythos AI model and Project Glasswing initiative. (6 sources: CNBC · TechCrunch · NYT · The Verge)
- Prior to announcement: Mythos identified thousands of bugs in major operating systems and browsers. (3 sources: CoinDesk · The Verge · NYT)
- Ongoing: Anthropic limited full Mythos rollout due to hacker misuse risks. (3 sources: CNBC · CoinDesk · NYT)
- Initiative launch: Over 40 companies joined Project Glasswing to test AI cybersecurity tools. (4 sources: NYT · The Verge · Wired · CNBC)
Potential Impact
1. Tech companies integrate Mythos to automate vulnerability scanning in software.
2. Governments explore Mythos for national cybersecurity infrastructure protection.
3. AI-driven defenses counter faster hacker attacks using similar models.
4. Industry partnerships expand to develop shared AI security standards.
5. Delayed full AI releases set precedent for risk assessments in tech.
Related Stories
Semafor: Anthropic Co-Founder Warns of Upcoming AI Capabilities for Exploiting Web Vulnerabilities
Anthropic's co-founder stated that powerful AI models capable of exploiting website vulnerabilities will emerge soon. The company's new model, Claude Mythos, identified unknown security flaws in major web browsers and operating systems. Financial authorities have responded by dis…
Los Angeles Times: Gallup Poll Shows Increasing AI Use Among US Workers with Persistent Skepticism
A Gallup poll conducted in February indicates that more American workers are using artificial intelligence in their jobs, with about 3 in 10 using it frequently. However, skepticism remains common, with many non-users citing preferences for traditional methods, ethical concerns,…
AI Assistant Poke Charges Billionaire $136,000 Monthly Fee
Poke, an AI assistant with no price ceiling, charged one billionaire $136,000 a month, according to Marvin von Hagen. The detail highlights Poke's premium service model.