Google: Gemini 2.5 Flash Lite
Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the [Reasoning API parameter](https://openrouter.ai/docs/use-cases/reasoning-tokens) to selectively trade off cost for intelligence.
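The opt-in described above can be sketched as a request payload. This is a minimal illustration, assuming the OpenRouter model slug `google/gemini-2.5-flash-lite` and a `reasoning` object with an `enabled` flag as described in the linked Reasoning docs; the accepted field values may differ in practice.

```python
import json

def build_request(prompt: str, enable_thinking: bool = False) -> dict:
    """Build a chat-completion payload for Gemini 2.5 Flash Lite."""
    payload = {
        "model": "google/gemini-2.5-flash-lite",  # assumed OpenRouter slug
        "messages": [{"role": "user", "content": prompt}],
    }
    if enable_thinking:
        # Thinking is disabled by default on this model; opt in explicitly
        # via the Reasoning parameter to trade cost for intelligence.
        payload["reasoning"] = {"enabled": True}
    return payload

# POSTing this JSON body (with an Authorization header) to
# https://openrouter.ai/api/v1/chat/completions would run the request.
print(json.dumps(build_request("Summarize this article.", enable_thinking=True)))
```

Omitting `enable_thinking` leaves the payload without a `reasoning` field, so the model keeps its fast default behavior.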
Input Price: 2 credits per million tokens
Output Price: 8 credits per million tokens
Context Window: 1,048,576 tokens
Capabilities
Input: text and image
Output: text
Section Leaderboards
See how Google: Gemini 2.5 Flash Lite ranks against all other models across each section