AGIBOT Open Sources Real-World Robot Simulation With Genie Sim 3.0

AGIBOT Open Sources Genie Sim 3.0 At CES 2026 As The First Real-World-Based Robot Simulation Platform For Embodied AI

AGIBOT has open-sourced Genie Sim 3.0 at CES 2026, positioning it as a foundational simulation platform for embodied AI. Built on real-world robot data and NVIDIA Isaac Sim, the platform aims to standardise benchmarking while accelerating open, scalable model development.

AGIBOT has unveiled Genie Sim 3.0 at CES 2026, introducing what it claims is the world’s first open-source robot simulation platform built directly on real-world robot operations. Designed as foundational infrastructure for embodied AI, the platform aims to standardise how models are trained, evaluated, and compared at scale.

Genie Sim 3.0 is integrated with NVIDIA Isaac Sim, an open-source reference framework built on NVIDIA Omniverse, enabling interoperability with widely adopted open simulation ecosystems and accelerating community-driven innovation.

The platform delivers a unified, end-to-end open simulation workflow within a single toolchain, bringing together digital asset generation, scene generalisation, data collection, automated evaluation, and physics-based simulation. At its core is Genie Sim Benchmark, a standardised evaluation system intended to establish authoritative and transparent benchmarks for embodied intelligence models.

AGIBOT states that Genie Sim 3.0 provides the largest open-source simulation dataset in embodied AI, comprising more than 10,000 hours of synthetic data derived from real-world robot operation scenarios. The evaluation framework spans over 200 tasks across more than 100,000 scenarios, enabling comprehensive capability profiling of models.

The platform integrates 3D reconstruction with visual generation to create high-fidelity, physics-accurate environments. It also pioneers LLM-driven scene generation, allowing users to create complex simulation scenes and evaluation metrics in minutes using natural language, without manual logic coding. Vision-language models further refine scenes to specification-level requirements, improving model generalisation.

Real-world environments are captured using Skyland Innovation’s MetaCam handheld 3D laser scanner, combining high-resolution RGB imagery, 360-degree LiDAR point clouds, and centimetre-level RTK positioning. Simulation-ready interactive assets can be generated from a single 60-second orbital video, significantly reducing scene creation time.

By open-sourcing the platform, AGIBOT aims to reduce dependence on physical hardware, lower experimentation costs, and democratise access to high-fidelity robot simulation across research, startup, and enterprise ecosystems.
