
Arcee AI: Trinity Large Preview (free)


Trinity-Large-Preview is a frontier-scale open-weight language model from Arcee, built as a 400B-parameter sparse Mixture-of-Experts with 13B active parameters per token via 4-of-256 expert routing. It excels at creative writing, storytelling, role-play, chat, and real-time voice assistance, outperforming typical reasoning models in these areas. The preview also introduces Arcee's newer agentic capabilities: the model was trained to navigate agent harnesses such as OpenCode, Cline, and Kilo Code, and to handle complex toolchains and long, constraint-heavy prompts. The architecture natively supports context windows up to 512k tokens; the Preview API currently serves 128k context with 8-bit quantization for practical deployment. Trinity-Large-Preview reflects Arcee's efficiency-first design philosophy: a production-oriented frontier model with open weights and permissive licensing, suitable for real-world applications and experimentation.
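In a 4-of-256 sparse MoE layer, the router activates only the four highest-scoring experts per token, which is how a 400B-parameter model can run with roughly 13B active parameters. A minimal sketch of top-k gating (illustrative only; the function names and the toy 8-expert setup are assumptions, not Arcee's actual implementation):

```python
import math

def top_k_route(logits, k=4):
    # Pick the k highest-scoring experts, then renormalize their
    # gate weights with a softmax over just those k experts.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    peak = max(logits[i] for i in top)
    exps = [math.exp(logits[i] - peak) for i in top]  # shift for stability
    total = sum(exps)
    return {expert: w / total for expert, w in zip(top, exps)}

# Toy example: 8 experts instead of 256; expert 6 has the highest
# router logit, so it receives the largest gate weight.
gates = top_k_route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, 3.0, -0.5], k=4)
```

Only the selected experts' feed-forward weights are multiplied into the token's activations, so compute per token scales with the 4 chosen experts rather than all 256.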

Input Price: Free (per million tokens)

Output Price: Free (per million tokens)

Context Window: 131,000 tokens

Capabilities
Input: TEXT
Output: TEXT
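Since the model is exposed as TEXT-in/TEXT-out, a call would typically take the shape of an OpenAI-style chat completion. A hedged sketch of building such a request body (the endpoint schema and the model slug below are assumptions, not confirmed by this listing):

```python
import json

# Hypothetical model slug; check the provider listing for the real one.
MODEL = "arcee-ai/trinity-large-preview:free"

def build_chat_request(prompt, max_tokens=256):
    # OpenAI-style chat-completions body; field names assumed, not confirmed.
    return json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    })

body = build_chat_request("Summarize 4-of-256 expert routing in one sentence.")
```

The serialized body would be POSTed to the provider's chat-completions endpoint with an API key; both input and output are billed per million tokens (free on this tier).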
Data Collection Notice

The model provider may use your prompts for model training according to their terms of service. Partitura retains no user prompts; data collection happens strictly at the provider level.