On Wednesday, AI security company Irregular announced a new round of funding led by Sequoia Capital and Redpoint Ventures, with Wiz CEO Assaf Rappaport participating. Sources close to the deal put the company's valuation at $450 million.
“Our view is that soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction,” co-founder Dan Lahav told TechCrunch.
Formerly known as Pattern Labs, Irregular has become a significant player in AI evaluations. The company’s work is cited in security evaluations for Claude 3.7 Sonnet, as well as OpenAI’s o3 and o4-mini models. More broadly, its framework for scoring a model’s vulnerability-detection capabilities (called SOLVE) is widely used across the industry.
While Irregular has done significant work on models’ existing risks, the company is raising funds with a more ambitious goal in mind: spotting emergent risks and behaviors before they surface in the wild. The company has built an elaborate system of simulated environments that enable intensive testing of a model before it is released.
“We have complex network simulations where AI takes the role of both attacker and defender,” said co-founder Omer Nevo. “So when a new model comes out, we can see where the defenses hold up and where they don’t.”
Security has become a point of intense focus for the AI industry as more and more risks emerge. OpenAI overhauled its internal security measures this summer, with an eye toward potential corporate espionage.
At the same time, AI models are increasingly adept at finding software vulnerabilities, a capability with serious implications for both attackers and defenders.
For the Irregular founders, this is the first of many security headaches caused by the growing capabilities of large language models.
“If the goal of the frontier labs is to create increasingly sophisticated and capable models, our goal is to secure these models,” Lahav said. “But it’s a moving target, so inherently there’s much, much more work to do in the future.”