Arcee AI has released two fully open-weight MoE models trained end-to-end in the United States, aiming to counter China’s dominance in open source frontier AI and reassert U.S. model sovereignty.
Arcee AI has launched Trinity Mini and Trinity Nano Preview, the first models in its Trinity family, marking a rare U.S.-trained, fully open-weight MoE release under the Apache 2.0 licence. The suite is positioned as a direct response to China’s lead in open-weight innovation, offering enterprises and developers unrestricted commercial use, full modification rights, and on-prem deployment without foreign dependencies.
“We want to add something that has been missing in that picture. A serious open weight model family trained end-to-end in America… that businesses and developers can actually own,” said Arcee CTO Lucas Atkins. He added: “I am experiencing a combination of extreme pride in my team and crippling exhaustion… Especially Mini.”
Trinity Mini, a 26B-parameter model with strong results on MMLU (84.95), Math-500 (92.10), GPQA-Diamond (58.55) and BFCL V3 (59.67), is engineered for long-context reasoning and high-throughput tool use. Trinity Nano Preview, at 6B parameters, is a lighter, chat-focused variant. Both employ Arcee’s AFMoE architecture with sigmoid routing, gated attention, and depth-scaled normalisation for training stability.
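The announcement does not spell out AFMoE’s internals, but the idea behind sigmoid routing can be illustrated with a minimal sketch: each expert is scored independently through a sigmoid rather than competing in a softmax, and the top-k scores are renormalised per token. The class name, layer sizes and top-k below are illustrative assumptions, not Arcee’s actual configuration.

```python
import torch
import torch.nn as nn

class SigmoidRouter(nn.Module):
    """Toy MoE router using sigmoid gating instead of softmax (illustrative only)."""
    def __init__(self, hidden_dim: int, num_experts: int, top_k: int):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, x: torch.Tensor):
        # Each expert gets an independent score in (0, 1); experts do not
        # compete through a softmax, which is the distinguishing feature
        # of sigmoid routing.
        scores = torch.sigmoid(self.gate(x))                    # (tokens, num_experts)
        weights, idx = torch.topk(scores, self.top_k, dim=-1)   # pick top-k experts per token
        # Renormalise the selected weights so they sum to 1 per token.
        weights = weights / weights.sum(dim=-1, keepdim=True)
        return weights, idx

# Hypothetical sizes, purely for demonstration.
router = SigmoidRouter(hidden_dim=64, num_experts=8, top_k=2)
tokens = torch.randn(4, 64)
weights, idx = router(tokens)
print(weights.shape, idx.shape)  # torch.Size([4, 2]) torch.Size([4, 2])
```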
A 420B-parameter model, Trinity Large, is now in training on 20T curated and synthetic tokens, set for release in January 2026. If successful, it would be among the few U.S.-trained, fully open-weight frontier models.
Arcee partnered with DatologyAI for a legally vetted, deduplicated corpus and with Prime Intellect for U.S.-based compute, reinforcing full data and infrastructure sovereignty. The models are available on Hugging Face, OpenRouter and chat.arcee.ai, with early integrations across Benchable.ai, Open WebUI and SillyTavern.
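For developers, the Hugging Face release is the most direct way to try the weights locally. The sketch below uses the standard transformers loading pattern; the repository ID is an assumption about Arcee’s naming and should be checked against the arcee-ai organisation page, and a custom architecture such as AFMoE may need trust_remote_code enabled.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo ID -- verify against huggingface.co/arcee-ai before use.
model_id = "arcee-ai/Trinity-Mini"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # let transformers pick the checkpoint dtype
    device_map="auto",       # requires `accelerate`; places layers across available devices
    trust_remote_code=True,  # custom MoE architectures often ship their own modelling code
)

messages = [{"role": "user", "content": "Summarise the Apache 2.0 licence in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```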