
Zenity launches a new open-source tool to assess LLM manipulation risks while enhancing AI security with incident intelligence and enterprise-wide agentic browser governance.
Zenity, provider of a leading end-to-end security and governance platform for AI agents, has expanded its AI security offerings with the launch of a new open-source tool to evaluate emerging large language model (LLM) manipulation risks. The tool enables organizations to assess vulnerabilities in AI agents, detect manipulation attempts, and contribute to a collaborative, transparent security ecosystem.
Alongside the open-source release, Zenity introduced Incident Intelligence capabilities through its new Issues feature, which correlates posture findings, runtime anomalies, identity relationships, and graph-based insights into high-confidence security incidents. This produces a coherent narrative that explains what happened, why it happened, and what the impact was, eliminating the need for manual event reconstruction.
The platform’s Correlation Agent interprets AI agent behavior, surfaces manipulation attempts, explains agent intent, and accelerates investigations, providing visibility into intent, a critical element that traditional detections cannot capture.
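To illustrate the general idea of correlating scattered signals into a single incident narrative, the sketch below groups hypothetical signals by agent and promotes agents with corroborating evidence of different kinds into incidents. This is not Zenity's implementation or API; every name here (the `Signal` type, the signal kinds, the `correlate` helper) is an assumption made purely for illustration.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Signal:
    agent_id: str  # which AI agent the signal relates to (hypothetical field)
    kind: str      # e.g. "posture_finding", "runtime_anomaly", "identity_link"
    detail: str    # human-readable description of the finding

def correlate(signals, min_kinds=2):
    """Group raw signals by agent; agents with at least `min_kinds` distinct
    signal kinds become incidents with a combined narrative, so single,
    uncorroborated findings stay out of the incident queue."""
    by_agent = defaultdict(list)
    for s in signals:
        by_agent[s.agent_id].append(s)

    incidents = []
    for agent_id, group in by_agent.items():
        kinds = {s.kind for s in group}
        if len(kinds) >= min_kinds:
            narrative = "; ".join(f"[{s.kind}] {s.detail}" for s in group)
            incidents.append({"agent": agent_id,
                              "signals": len(group),
                              "narrative": narrative})
    return incidents

signals = [
    Signal("agent-42", "posture_finding", "agent granted broad mailbox access"),
    Signal("agent-42", "runtime_anomaly", "prompt contains hidden instructions"),
    Signal("agent-07", "posture_finding", "agent uses a shared connection"),
]
incidents = correlate(signals)
# Only agent-42 has corroborating signals of different kinds.
```

The key design point this toy example captures is that a single signal in isolation is rarely actionable; the correlation step only raises an incident when independent signal types reinforce one another.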
Zenity also expanded support for agentic browsers across enterprises, allowing governance and monitoring of AI-driven actions in web environments.
Ben Kliger, Co-Founder and CEO of Zenity, said: “With this release we are giving security teams something they have never had before, real visibility into intent. Our new Correlation Agent does not just detect signals, it interprets them. It understands what an agent is trying to do, by connecting every signal, data point and insight that the Zenity platform collects and generates throughout the agent lifecycle into a single coherent story.”
“This is a game changer for AI security, especially as organizations embark on their goals towards 1 billion agents. By transforming scattered signals into high confidence security narratives, we are eliminating guesswork, accelerating investigations and giving teams the clarity they need to operate safely at massive scale.”