Anthropic Built Its Best Model Ever. And You Can't Use It.
Anthropic's most capable model ever just launched. Here's why you can't use it yet.
Learn Prompting Newsletter
Your Weekly Guide to Generative AI Development
Hey there,
Anthropic has built its most capable model ever. And immediately locked it away. Its new Mythos model was so powerful that Anthropic decided not to release it publicly, instead launching a new initiative that uses Mythos to improve cybersecurity. The company has partnered with Google, Apple, JP Morgan, and 40 other companies to improve digital security as part of Project Glasswing.
What Is Mythos?
Mythos is Anthropic’s new general-purpose frontier model. What sets Mythos apart from other state-of-the-art models is its powerful agentic coding and reasoning skills. More striking, though, is the fact that you can’t access it. Anthropic explained that the model was able to identify thousands of previously unknown vulnerabilities in major operating systems, web browsers, and backend infrastructure. The crazy part is that Mythos was able to find and develop exploits for most of these vulnerabilities without direct human involvement. In other words, Anthropic created an agent so powerful it can autonomously exploit thousands of products we use every day. Because of these shocking capabilities, Anthropic launched Project Glasswing in an effort to improve cybersecurity.
What is Project Glasswing?
This new initiative brings together some of the largest tech companies in the world to use Mythos to identify and fix vulnerabilities rather than exploit them. The goal is to address as many of these flaws as possible before other AI models reach Mythos’s level and become widely available. In short, it gives the developers behind the most important web infrastructure a head start by arming them with the best AI models.
“AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.”
The Future Precedent
Anthropic’s reasoning here is sound: releasing a model this powerful to the public would be irresponsible. Mythos is powerful enough to find exploits in the platforms and infrastructure we rely on every day, so controlling who can access it is a smart decision. Limiting access to AI tools that can cause harm is the duty of these companies, and it’s good to see that Anthropic isn’t prioritizing profit over safety. Creating Project Glasswing was a great way to put this model to use.
But there is a secondary point to consider in the broader picture. What Anthropic also did this week, probably unintentionally, is establish a precedent that it’s acceptable for frontier labs to grant access to their latest and most capable models exclusively to large enterprises rather than the public.
This is worth pausing on: while the justification is airtight this time, other frontier labs may use the same template to release future models only to enterprise customers. We need to ensure this doesn’t become the new norm for the AI space, but remains a limited measure that keeps our digital spaces secure by giving companies time to adjust to a powerful new model.
My Thoughts
Mythos represents an interesting problem that AI will seemingly continue to encounter. As these models move closer to fulfilling the promises the labs behind them often make, will they remain so accessible? Whenever pressed on disruption and job loss, many AI companies fall back on statements like “Everyone will have their own AI.” But what if these models are too good to be released to the general public? Anthropic’s inability to control what actions Mythos can take without limiting its overall intelligence is a major problem. I’m someone who uses AI daily and always looks forward to the latest models and tools, but I don’t think we can let our optimism blind us to the very real concerns that still need to be addressed. I applaud Anthropic for having the discipline not to release Mythos to its customers and for creating an initiative like Project Glasswing to help improve cybersecurity.