Open source conversational AI platform TEN Framework completes its first year, empowering developers worldwide to build low-latency, human-like voice systems free from vendor lock-in.
One year after its open source debut, TEN Framework has emerged as a cornerstone for developers building real-time, voice-based AI systems. Supported by Agora and a fast-growing global developer community, TEN was designed to overcome the technical challenges of low-latency, multimodal AI development.
Since its launch in 2024, the framework has seen widespread adoption across use cases including AI companions, language translators, customer support bots, and interactive learning tools. Developers are leveraging TEN to move from prototype to production efficiently, without dependence on proprietary infrastructure.
Built for real-time, production-ready performance, TEN enables full-duplex audio streaming with millisecond-level latency. It remains extensible and vendor-neutral, integrating with any LLM, STT, or TTS service. Developers can build in Python, Node.js, C++, or Go, or use the TMAN Designer for drag-and-drop workflow creation. With multimodal support, TEN combines voice, vision, and context for more human-like interaction.
In 2025, TEN expanded its open source stack with TEN VAD, a high-performance voice activity detector that improves transcription accuracy, and TEN Turn Detection (TTD), which achieves 98% accuracy in natural conversational turn-taking. Together, these components enable fluid, context-aware dialogue.
Further strengthening its ecosystem, TEN released more than 10 open source voice agent templates covering assistants, transcription, and SIP integration, enabling deployment in minutes. Global meetups in San Francisco, Tokyo, Paris, Beijing, and Kyoto, along with an online hackathon, have nurtured a vibrant, collaborative developer base.