Products of the artificial intelligence boom – from deep learning, large language models and generative AI to autonomous bots and agents – bring visions of artificial general intelligence (AGI) closer to reality. But superintelligence optimism is tempered by darker portents, as in the recent detection by Anthropic of a cyber espionage campaign, believed to be the first use of agentic AI “not just as an advisor, but to execute the cyberattacks themselves.”
With successive generations of technology, cybersecurity has only gotten more challenging. “AI attacking AI,” though anticipated, adds more layers of complexity and uncertainty. Presented as a counterforce: CyberAGI.
“We’ll need new guardians” to protect a world reshaped or remade by AI, says a blog by Saket Modi, co-founder and CEO of cyber risk technology company SAFE Security. “The maker always needs a checker. In this new era, CyberAGI must emerge as the balance.”
Assuming that AGI systems will be trained for specific domains, such as “HealthAGI” and “PhysicsAGI,” Modi fits CyberAGI into that pattern “to solve the defining problem of the digital age: making companies and individuals unhackable.”
SAFE’s Saket Modi: “Defining problem of the digital age.”
The “partially autonomous” cyber incident reported by Anthropic led subcommittees of the U.S. House Committee on Homeland Security to schedule a December 17 hearing. Members of the panel chaired by Representative Andrew Garbarino, Republican of New York, stressed in letters to the CEOs of Anthropic, Google Cloud and Quantum Xchange “the urgent need to understand how emerging AI-driven capabilities and the cloud systems that increasingly enable them can be misused against the United States.”
Meredith Whittaker, president of the foundation that oversees the Signal messaging app, which is known for its strong data encryption, has warned of agentic AI as a threat to user privacy. She told Fortune that AI agents are an existential threat to secure messaging.
Whittaker also acknowledged that Signal is dependent on cloud computing; it was affected by the October 19-20 Amazon Web Services outage.
In a November 24 statement, the Cybersecurity and Infrastructure Security Agency (CISA) said that “multiple cyber threat actors [were] actively leveraging commercial spyware to target users of mobile messaging applications,” with tactics including “zero-click exploits which require no direct action from the device user.”
In his July post, Modi described components of his Palo Alto, California-based company’s “Cyber Singularity Platform”: Cyber Risk Quantification (CRQ), Cyber Threat Exposure Management (CTEM) and Third Party Risk Management (TPRM).
SAFE, whose customers include Google, Fidelity and T-Mobile, announced a $70 million Series C funding round on July 31, with Modi noting that CTEM had just joined CRQ and TPRM in being transformed with agentic-AI autonomy. These domains “are critical building blocks in our singular pursuit” of CyberAGI.
In November, SAFE acquired CTEM leader Balbix. Together they unveiled “the ultimate agentic-AI-powered cyber risk platform,” according to the announcement. “For the first time, organizations can run on a single, living source of truth, enabling remediation, reporting, and resource allocation to be driven by a unified, real-time understanding of cyber risk.”
There is a competitive flair to SAFE’s innovations and claims. The company is recognized as a Representative Vendor in the 2025 Gartner Market Guide for TPRM Technology Solutions, and the Forrester Wave evaluation deemed the SAFE One Platform “the most comprehensive CRQ-native risk management solution in the market,” incorporating “the full set of FAIR standards” for cyber risk quantification.
At the same time, Modi insists that SAFE is not “a detection [tool] vendor; we remain uniquely neutral – laser-focused on helping our customers manage cyber risk with clarity, precision and speed.” He maintains that “perfect storm” conditions position SAFE “to be the first company to build true cybersecurity superintelligence.”
The agentic components collaboratively collect, enrich, prioritize, remediate and report on risks continuously and at machine speed. “This means CISOs [chief information security officers] and CIOs [chief information officers] can finally move beyond reactive firefighting and manage cyber risk as a measurable, controllable business function,” Modi explains. “Just as importantly, they can scale their security programs within existing budgets, eliminating the need to constantly add headcount or tools while still keeping pace with the growing volume and complexity of threats.”
“This is about moving from dashboards that show risk, to systems that actually fix risk.”
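The loop Modi describes – collect, enrich, prioritize, remediate, report, repeated continuously at machine speed – can be sketched in a few dozen lines of Python. Everything below is purely illustrative: the names, the scoring rules and the playbook stub are assumptions for the sake of the sketch, not SAFE’s actual code or API.

```python
from dataclasses import dataclass

# Toy stand-ins for real risk models and asset inventories.
SEVERITY_DB = {"open RDP port": 8.5, "stale admin account": 6.0}
ASSET_VALUE_USD = {"payroll-db": 500_000, "build-server": 80_000}

@dataclass
class Finding:
    asset: str
    issue: str
    severity: float = 0.0      # 0-10, filled in by enrichment
    exposure_usd: float = 0.0  # rough business impact, filled in by enrichment
    fixed: bool = False

def collect(sources):
    """Gather raw findings from every telemetry source (scanners, logs, APIs)."""
    return [finding for source in sources for finding in source()]

def enrich(findings):
    """Attach severity and a crude dollar exposure to each finding."""
    for f in findings:
        f.severity = SEVERITY_DB.get(f.issue, 5.0)
        f.exposure_usd = (f.severity / 10) * ASSET_VALUE_USD.get(f.asset, 50_000)
    return findings

def prioritize(findings):
    """Rank by business exposure rather than technical severity alone."""
    return sorted(findings, key=lambda f: f.exposure_usd, reverse=True)

def remediate(findings, budget=3):
    """'Fix' the top findings; a real agent would invoke playbooks here."""
    for f in findings[:budget]:
        f.fixed = True  # stand-in for patching, reconfiguring, isolating...
    return findings

def report(findings):
    """Summarize residual risk in business terms for CISOs and CIOs."""
    residual = sum(f.exposure_usd for f in findings if not f.fixed)
    print(f"{len(findings)} findings; residual exposure ~${residual:,.0f}")

def demo_scanner():  # a fake telemetry source for the demo
    return [Finding("payroll-db", "open RDP port"),
            Finding("build-server", "stale admin account")]

if __name__ == "__main__":
    # A production agent would run this cycle continuously, not once.
    report(remediate(prioritize(enrich(collect([demo_scanner])))))
```

The point of the sketch is the shape of the loop: each pass ends with risk expressed in dollars rather than alert counts, which is what allows the output to feed directly into business decisions.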
On the premise that “AI generates unpredictable risk,” startup DeepKeep contends that “only AI-native security can comprehend and protect the boundless connections and intricate logic of AI/LLM.”
“Using generative AI to secure generative AI sets [DeepKeep] apart from competitors. We leverage genAI to protect large language models and AI systems throughout the entire AI lifecycle,” founder and CEO Rony Ohayon stated in a 2024 interview.
Eric Vaughan of GFI Software
The shift from reactive tools to proactive, reasoning systems defines CyberAGI, observes enterprise software entrepreneur Eric Vaughan, CEO of GFI Software, IgniteTech and Khoros. He compares it to the difference between a smoke detector and a fire-prevention engineer who understands building materials, electrical systems and human behavior.
“The AGI designation is deliberate but specific,” Vaughan goes on. “While we're nowhere near human-level general intelligence, we can achieve superhuman performance in narrow domains. CyberAGI aims to be that domain expert for cybersecurity – understanding context, anticipating threats and making decisions that currently require teams of analysts.”
Modi and Kirsten Bay, co-founder and CEO of Cysurance, look to the “6 levels of autonomous driving” as a proxy for CyberAGI maturity. The scale ranges from Level 0 (no AI, equivalent to standard equipment in autos) to “superhuman” Level 5 (100% outperformance of human analysts).
“Researchers agree we are at Level 1 for AGI,” says SAFE’s Saket Modi, “and we think it’s the same” for CyberAGI.
Level 5 “is what we can reasonably describe as true CyberAGI, which is the ultimate goal,” Bay says. “In this scenario, we are talking about a system capable of reacting to risk and cyberattacks in real time, using human-level reasoning boosted by advanced machine learning to identify, quantify and shut down emerging threats.” Remediation is instantaneous – “and even ahead of incidents on a predictive basis.”
“Of course, we are far from these capabilities,” Bay continues. “But laying the foundation to map the capabilities of CyberAGI to the evolution of AGI is a sound approach.”
Vaughan says that SAFE's approach resonates with GFI Software’s pursuit of sophisticated, democratized cybersecurity through AI.
“Their vendor-neutral orchestration philosophy parallels what we’re achieving with GFI AppManager AI, our ‘single pane of glass’ that unifies management across distributed security deployments,” Vaughan elaborates. The real value derives from understanding patterns across thousands of deployments. “Our platform correlates data from diverse sources – firewalls, mail servers, monitoring tools – creating intelligence that no single organization could develop alone.”
Vaughan believes that CyberAGI aligns well with the National Institute of Standards and Technology (NIST) Cybersecurity Framework. The framework’s Govern function – the most recent addition, joining the longer-standing Identify, Protect, Detect, Respond and Recover functions – emphasizes risk-based decision-making and can be enhanced by quantification of cyber risk in business terms.
For Identify, CyberAGI transcends static points in time, recognizing that attack surfaces constantly change. Predictive capabilities aid Protect and Detect. Automated playbooks that learn from experience support Respond and Recover.
CyberAGI also fits well with the NIST framework’s applicability to supply-chain and third-party risk management, according to Vaughan.
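Condensed into a simple lookup, Vaughan’s framework mapping might look roughly like the sketch below. The six function names are NIST CSF’s own; the capability notes paraphrase his comments and are not drawn from any official NIST mapping.

```python
# NIST CSF function names mapped to the CyberAGI capabilities Vaughan
# describes; the right-hand descriptions paraphrase his remarks.
CSF_TO_CYBERAGI = {
    "Govern":   "risk-based decisions backed by cyber risk quantified in business terms",
    "Identify": "continuous awareness of a constantly changing attack surface",
    "Protect":  "predictive hardening against anticipated threats",
    "Detect":   "predictive analytics that surface attacks as they emerge",
    "Respond":  "automated playbooks that act on incidents and learn from them",
    "Recover":  "playbooks that restore operations and feed lessons back in",
}

for function, capability in CSF_TO_CYBERAGI.items():
    print(f"{function:>8}: {capability}")
```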
Kirsten Bay of Cysurance
How the NIST elements relate to CyberAGI “will depend on the degree to which the system is able to autonomously define and build risk frameworks, monitor environments, react to incidents, and recover” from breaches, Bay says. “Of course, each stage of CyberAGI, assuming we move through the stages outlined by SAFE, will be matched by the threat landscape, as adversaries will develop increasingly more sophisticated techniques for breaching systems.”
Modi envisions multiple capabilities combining to push cybersecurity from ordinary automation to true AGI-level autonomy: generalization (the ability to transfer and apply knowledge across domains); common-sense reasoning; and interdisciplinary intelligence.
“We’ll see convergence between cyber risk and enterprise risk management,” Vaughan predicts. “When CyberAGI can articulate how a technical vulnerability translates to operational disruption, revenue impact and regulatory exposure, cyber risk naturally integrates into broader ERM frameworks.”
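That translation is what quantification standards like FAIR – cited earlier in the Forrester assessment of SAFE – are designed to formalize: risk as the product of how often loss events occur and how much they cost when they do. A minimal Monte Carlo sketch shows the idea; every parameter below is invented for illustration and has no connection to SAFE’s models.

```python
import random

def simulate_annual_loss(events_per_year, loss_low, loss_high, trials=50_000):
    """FAIR-style ALE estimate: loss event frequency x loss magnitude."""
    yearly_losses = []
    for _ in range(trials):
        # Approximate event frequency with monthly Bernoulli draws.
        events = sum(random.random() < events_per_year / 12 for _ in range(12))
        # Draw each event's magnitude from a triangular distribution.
        yearly_losses.append(sum(random.triangular(loss_low, loss_high)
                                 for _ in range(events)))
    yearly_losses.sort()
    return {"mean_ale": sum(yearly_losses) / trials,
            "p95": yearly_losses[int(0.95 * trials)]}  # a 1-in-20 bad year

# Hypothetical scenario: an unpatched internet-facing service.
result = simulate_annual_loss(events_per_year=0.8,
                              loss_low=50_000, loss_high=400_000)
print(f"Mean annualized loss ~${result['mean_ale']:,.0f}; "
      f"95th percentile ~${result['p95']:,.0f}")
```

The output – a mean annualized loss and a “bad year” percentile, both in dollars – is the kind of figure a board or ERM committee can weigh directly against other business risks.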
Superintelligence does not by itself resolve trust and explainability issues. If CyberAGI recommends shutting down a production system, Vaughan says, security teams need to understand and confirm the reasoning.
Vaughan also cautions that “AI systems are only as good as their training data. Many organizations have fragmented, inconsistent security telemetry. Getting CyberAGI to work effectively requires clean, comprehensive data flows.”
CyberAGI will bring about or accelerate a shift from periodic risk assessment to continuous adaptation, Vaughan adds. Real-time risk intelligence fundamentally changes the dynamic of quarterly or less frequent reviews. Risk appetite statements will become more adaptable, precise and granular for different assets, adjusting more readily to changing circumstances.
Cyber risk professionals won’t be eliminated, in Vaughan’s view. Technical roles will become more strategic and advisory. CyberAGI handles data collection, correlation and initial analysis, freeing humans to make judgments that call for business context and stakeholder management.
Jeffrey Kutler of GARP contributed reporting for this article.