OpenAI has launched GPT-5.4-Cyber, a more permissive version of its flagship model designed for defensive cybersecurity work, just days after Anthropic’s Mythos release. The new model arrives as OpenAI expands its Trusted Access for Cyber program, which the company introduced in February as a pathway for vetted defenders to use frontier models on legitimate cyber tasks. OpenAI said on April 14 that it is now scaling the program to “thousands of verified individual defenders and hundreds of teams responsible for defending critical software.”
OpenAI is framing the move as a deliberate alternative to a more exclusive rollout model. In its announcement, the company said its cyber strategy is built on “democratized access, iterative deployment, and ecosystem resilience,” arguing that it wants to avoid “arbitrarily deciding who gets access for legitimate use and who doesn’t.” OpenAI added that its goal is to make advanced defensive tools available to legitimate actors “large and small,” including organizations protecting critical infrastructure and public services.
Technically, GPT-5.4-Cyber is a version of GPT-5.4 that “lowers the refusal boundary for legitimate cybersecurity work” and enables more advanced defensive workflows. OpenAI said that includes binary reverse engineering, allowing security professionals to analyze compiled software for “malware potential, vulnerabilities and security robustness without needing access to its source code.”
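To make the workflow concrete: analyzing a compiled binary without source code typically begins with simple triage passes long before any model is involved. The sketch below is a toy `strings`-style extractor in Python, purely illustrative of that first step — it is not OpenAI's tooling, and the sample blob and indicator strings are invented for the example.

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Return printable ASCII runs of at least min_len bytes — a classic
    first-pass triage step on compiled software when source is unavailable."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Toy "compiled" blob: opaque bytes with embedded indicators (hypothetical).
blob = (
    b"\x7fELF\x02\x01\x01\x00" + b"\x00" * 8
    + b"http://example.test/beacon\x00GetProcAddress\x00"
)
print(extract_strings(blob))
# → ['http://example.test/beacon', 'GetProcAddress']
```

Indicators like embedded URLs and suspicious API names surfaced this way are the raw material an analyst (or a model with reverse-engineering capabilities) would then investigate further.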
The competitive backdrop is Anthropic’s much tighter Mythos rollout. Axios reported that Anthropic is giving access to Mythos Preview to “only about 40 organizations,” while OpenAI is expanding access to “thousands of individuals and hundreds of security teams” through verification checks. Axios also reported that Anthropic viewed the model as too dangerous for broad release because it was especially adept at “finding and exploiting security flaws.”
OpenAI executives are openly making the case that cyber defense models should not be reserved for a narrow group of insiders. “This is a team sport, we need to make sure that every single team is empowered to secure their systems,” OpenAI cyber researcher Fouad Matin told reporters. He added: “No one should be in the business of picking winners and losers when it comes to cybersecurity.”
OpenAI is also signaling that GPT-5.4-Cyber is part of a broader buildout, not a one-off release. In its latest post, the company said it is preparing for “increasingly more capable models” over the coming months and expects future deployments to pair stronger cyber capabilities with stronger controls. At the same time, it emphasized that GPT-5.4-Cyber will not be fully open: “Because this model is more permissive, we are starting with a limited, iterative deployment to vetted security vendors, organizations, and researchers.”
The stakes are rising well beyond the tech industry. Bloomberg reported on April 10 that “Wall Street banks are starting to test Anthropic PBC’s Mythos model internally as Trump administration officials encourage them to use it to detect vulnerabilities,” with JPMorgan the only bank named publicly at that stage. That highlights how quickly frontier AI cyber tools are moving from lab experiments into critical infrastructure and financial-sector testing.
Why it matters
It is still not clear how GPT-5.4-Cyber will compare with Anthropic’s Mythos on raw benchmark performance. But the larger trend is already visible: the next phase of frontier AI competition is not just about which model is strongest. It is about who gets access to advanced cyber capabilities, under what safeguards, and at what scale. OpenAI is betting that broader, verified access will strengthen the defensive security ecosystem faster than a tightly gated model.
Sources:
- https://openai.com/index/scaling-trusted-access-for-cyber-defense/
- https://openai.com/index/trusted-access-for-cyber
- https://www.axios.com/2026/04/14/openai-model-cyber-program-release
- https://www.bloomberg.com/news/articles/2026-04-10/wall-street-banks-try-out-anthropic-s-mythos-as-us-urges-testing

