Major Funding Milestone
AI security firm Irregular has raised $80 million in fresh funding, marking one of the largest investments in the fast-growing field of AI model security. The round was led by Sequoia Capital and Redpoint Ventures, with participation from Assaf Rappaport, CEO of cloud security unicorn Wiz.
A source familiar with the deal noted that the funding round valued Irregular at $450 million — a clear signal of investor confidence in the startup’s ability to address the emerging risks posed by the next wave of artificial intelligence.

From Pattern Labs to Industry Player
Irregular, formerly known as Pattern Labs, has quickly become a name to watch in the AI security landscape. The company’s evaluations are already used by some of the most important players in the field. Its work has been cited in security reports for Anthropic’s Claude 3.7 Sonnet as well as OpenAI’s o3 and o4-mini models.
One of Irregular’s most influential contributions is SOLVE, a framework designed to score a model’s ability to detect vulnerabilities. Today, SOLVE is widely adopted across the AI industry, serving as a benchmark for how companies measure a system’s defensive capabilities.

Shifting the Focus: Anticipating Emergent Risks
But Irregular is not just focused on today’s challenges. The company is using its new funding to take on a more ambitious goal: identifying emergent risks and behaviors that could arise as AI models grow more capable.
“Our view is that soon, a lot of economic activity will come from both human-on-AI interactions and AI-on-AI interactions,” said co-founder Dan Lahav in an interview with TechCrunch. “That’s going to break the security stack along multiple points, and we need to be ahead of that curve.”
In other words, while many companies are working to secure existing models, Irregular is trying to anticipate tomorrow’s vulnerabilities before they cause real-world damage.
Building Complex AI Testing Environments
To do this, the company has developed an elaborate simulation infrastructure that allows it to test new models under extreme conditions. These simulations are not simple test cases; they are dynamic network environments where AI systems take on both attacker and defender roles.
“When a new model comes out, we can place it into a simulated environment and watch how it performs under stress,” explained co-founder Omer Nevo. “We can see where the defenses hold, and more importantly, where they don’t. This helps us surface potential weak points before the model is deployed in the real world.”
This proactive approach, essentially red-teaming AI with AI, gives Irregular an advantage in identifying hidden risks that human evaluators might miss.
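The article doesn't describe Irregular's infrastructure in any technical detail, but the core idea of pitting an attacker model against a defender model in a simulated network can be sketched in a few lines. The toy Python below is a purely hypothetical illustration: the names (SimulatedHost, attacker_skill, detection_rate) and the probabilistic turn logic are assumptions for the sketch, not Irregular's actual tooling.

```python
import random
from dataclasses import dataclass


@dataclass
class SimulatedHost:
    """One node in the toy network, with a fixed chance that an exploit lands."""
    name: str
    exploit_difficulty: float  # 0.0 = trivial to breach, 1.0 = effectively hardened
    compromised: bool = False


def attacker_turn(host: SimulatedHost, skill: float, rng: random.Random) -> bool:
    """The 'attacker' role probes a host; success depends on skill vs. difficulty."""
    return rng.random() < max(0.0, skill - host.exploit_difficulty)


def defender_turn(host: SimulatedHost, detection_rate: float, rng: random.Random) -> bool:
    """The 'defender' role audits a compromised host and may evict the attacker."""
    return host.compromised and rng.random() < detection_rate


def run_episode(hosts, attacker_skill, detection_rate, rounds=20, seed=0):
    """Alternate attacker and defender turns; record where the defenses gave way."""
    rng = random.Random(seed)
    breaches = []
    for step in range(rounds):
        target = rng.choice(hosts)
        if not target.compromised and attacker_turn(target, attacker_skill, rng):
            target.compromised = True
            breaches.append((step, target.name))
        for host in hosts:
            if defender_turn(host, detection_rate, rng):
                host.compromised = False
    return breaches


if __name__ == "__main__":
    network = [
        SimulatedHost("web-frontend", exploit_difficulty=0.3),
        SimulatedHost("internal-db", exploit_difficulty=0.7),
        SimulatedHost("build-server", exploit_difficulty=0.5),
    ]
    # Sweeping attacker capability shows which hosts only fall to stronger models,
    # the kind of "where the defenses don't hold" signal Nevo describes.
    for skill in (0.4, 0.6, 0.8):
        for h in network:
            h.compromised = False
        breaches = run_episode(network, attacker_skill=skill, detection_rate=0.3)
        print(f"attacker skill {skill}: breaches at {breaches}")
```

Even in this stripped-down form, the design choice is visible: raising the attacker's capability surfaces weaknesses that a weaker adversary would never reach, which is why evaluating each new, more capable model in the same environment is informative.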

Why Security Is Becoming the Center of AI Development
The need for strong AI security is growing rapidly. Over the summer, OpenAI completely restructured its internal security protocols, citing concerns about corporate espionage and model misuse. At the same time, AI systems are themselves becoming adept at identifying software vulnerabilities, a skill that could be weaponized if it falls into the wrong hands.
This dual-use capability is one of the industry’s biggest challenges: the same AI that can help defenders secure systems can also empower attackers to break them.
For Irregular, this is more than just a technical challenge; it's a moving target that requires constant innovation.
Looking Ahead
The founders are under no illusion about the scope of the task ahead. “If the goal of the frontier lab is to create increasingly more sophisticated and capable models, our goal is to secure these models,” said Lahav. “But it’s a moving target, so inherently there’s much, much more work to do in the future.”
With $80 million in new funding and growing recognition in the industry, Irregular is positioning itself at the heart of one of the most pressing issues in AI: how to ensure that frontier models remain powerful but safe.
Source: TechCrunch