Qwen: Qwen-Max

Qwen-Max, based on Qwen2.5, provides the best inference performance among [Qwen models](/qwen), especially on complex multi-step tasks. It is a large-scale Mixture-of-Experts (MoE) model pretrained on over 20 trillion tokens and post-trained with curated Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). Its parameter count has not been publicly disclosed.

Input Price: 32 credits per million tokens
Output Price: 128 credits per million tokens
Context Window: 32,768 tokens
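As a sketch of how the listed rates translate into a per-request cost (the function name and example token counts below are illustrative, not part of the listing):

```python
# Rates from the listing above, in credits per million tokens.
INPUT_CREDITS_PER_M = 32
OUTPUT_CREDITS_PER_M = 128


def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in credits for a single request."""
    return (
        input_tokens * INPUT_CREDITS_PER_M
        + output_tokens * OUTPUT_CREDITS_PER_M
    ) / 1_000_000


# Example: a 10,000-token prompt with a 2,000-token completion
# costs (10_000 * 32 + 2_000 * 128) / 1_000_000 = 0.576 credits.
cost = request_cost(10_000, 2_000)
```

Note that input and output are billed at different rates, so long completions dominate the cost even when the prompt is larger.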

Capabilities
Input: TEXT
Output: TEXT