
Dataiku has launched one of the first open-source explainability frameworks for enterprise AI agents, with support for NVIDIA Nemotron open models to bring transparency, governance, and sovereign AI readiness to high-stakes deployments.
Dataiku has introduced Kiji Inspector through 575 Lab, its open-source office, positioning it as one of the first open-source explainability frameworks purpose-built for enterprise AI agents. The framework's first supported model family is NVIDIA's Nemotron open models, giving enterprises a way to combine open foundation models with open explainability tooling inside governed AI stacks.
The launch addresses one of the biggest barriers to enterprise agent adoption: limited visibility into how AI agents make decisions, especially in regulated and compliance-sensitive environments. At its core, Kiji Inspector uses a sparse autoencoder to inspect the model at the exact moment it commits to a tool choice, converting internal activation signals into clear, traceable explanations without slowing inference.
This enables stronger validation, trust, auditability, and governance, while supporting enterprise priorities such as sovereign AI, decision traceability, regulatory readiness, risk review, and production-scale deployment.
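The article does not describe Kiji Inspector's internals, but the general technique it names works roughly like this: a sparse autoencoder is trained to reconstruct the agent model's hidden state through a wider, mostly-zero layer of learned features, and at inference it is applied to the hidden state captured at the token where the agent commits to a tool call. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; all names here (SparseAutoencoder, explain_tool_choice, feature_labels) are illustrative assumptions, not Kiji Inspector's actual API.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Decomposes a hidden state into a wider, mostly-zero set of learned features."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # hidden state -> feature activations
        self.decoder = nn.Linear(d_features, d_model)  # features -> reconstructed hidden state

    def forward(self, h: torch.Tensor):
        f = torch.relu(self.encoder(h))  # ReLU keeps activations non-negative and sparse
        h_hat = self.decoder(f)          # reconstruction; trained with an L1 penalty on f
        return f, h_hat

def explain_tool_choice(sae: SparseAutoencoder, hidden_state: torch.Tensor,
                        feature_labels: list[str], top_k: int = 5):
    """Report the strongest features active when the agent picks a tool.

    `hidden_state` is the activation vector captured at the token where the model
    commits to a tool call; `feature_labels` are human-readable names assigned to
    each SAE feature ahead of time (both are assumptions for this sketch).
    """
    with torch.no_grad():  # read-only probe: does not alter or block generation
        features, _ = sae(hidden_state)
    top = torch.topk(features, k=top_k)
    return [(feature_labels[i], v.item())
            for i, v in zip(top.indices.tolist(), top.values)]
```

In a setup like this, the autoencoder would be trained offline on activations collected from the underlying model, with labels assigned to features afterward; at runtime the probe reads activations alongside generation rather than sitting in the generation path, which is how "without slowing inference" is typically achieved.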
“Enterprises are embedding AI agents into decisions that influence revenue, safety, compliance, and customer trust, yet most still lack structural visibility into how those systems reason,” said Hannes Hapke, Director of 575 Lab at Dataiku.
The move also deepens the broader Dataiku–NVIDIA collaboration on production-grade agentic AI, reducing enterprise dependence on closed-source black-box APIs. The combined stack is particularly relevant for BFSI (banking, financial services, and insurance), healthcare, industrial automation, energy, telecom, and the public sector, where explainable reasoning is becoming foundational to trusted AI operations.