Mercury 2 is an extremely fast reasoning model and the first reasoning diffusion LLM (dLLM).
Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving >1,000 tokens/sec on standard GPUs. Mercury 2 is 5x+ faster than leading speed-optimized LLMs like Claude 4.5 Haiku and GPT 5 Mini, at a fraction of the cost.
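Mercury 2's actual decoding procedure is not public, so the following is only a toy sketch of the general idea behind masked diffusion decoding: every masked position is predicted in one parallel pass, the most confident proposals are committed, and the rest are refined on the next pass. The vocabulary, scoring function, and schedule here are stand-ins, not the model's.

```python
import random

random.seed(0)
VOCAB = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]
MASK = "<mask>"

def predict(tokens):
    # Stand-in for one parallel model pass: propose a token and a confidence
    # score for every still-masked position at once.
    return {i: (random.choice(VOCAB), random.random())
            for i, tok in enumerate(tokens) if tok == MASK}

def diffusion_decode(length=8, commits_per_step=3):
    tokens = [MASK] * length
    steps = 0
    while MASK in tokens:
        proposals = predict(tokens)
        # Commit the most confident proposals; the rest stay masked and are
        # re-predicted (refined) on the next pass.
        best = sorted(proposals.items(), key=lambda kv: kv[1][1], reverse=True)
        for i, (tok, _conf) in best[:commits_per_step]:
            tokens[i] = tok
        steps += 1
    print(f"decoded in {steps} passes instead of {length}:", " ".join(tokens))

diffusion_decode()
```

Committing several tokens per pass is what lets a dLLM trade a handful of parallel refinement steps for the one-token-at-a-time loop of an autoregressive model.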
Mercury 2 supports tunable reasoning levels, a 128K context window, native tool use, and schema-aligned JSON output. It is built for coding workflows where latency compounds, for real-time voice and search, and for agent loops. The API is OpenAI-compatible. Read more in the blog post.
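Since the API is OpenAI-compatible, a call should look like any other OpenAI SDK request. This is a minimal sketch, assuming the standard OpenAI Python client pointed at a Mercury endpoint; the base URL and model ID are placeholders, and using `reasoning_effort` for the tunable reasoning level and `response_format` for JSON output are assumptions to verify against the provider's docs.

```python
from openai import OpenAI

# Placeholder endpoint, key, and model ID: substitute the real values
# from the provider's documentation.
client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="mercury-2",                         # hypothetical model identifier
    reasoning_effort="medium",                 # assumed knob for the reasoning level
    response_format={"type": "json_object"},   # assumed route to JSON output
    messages=[
        {"role": "user",
         "content": "Answer as JSON {\"summary\": ...}: what is a dLLM?"},
    ],
)
print(response.choices[0].message.content)
```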
Recent activity on Mercury 2
Total usage per day on OpenRouter: 210M prompt tokens, 13.7M reasoning tokens, 10.4M completion tokens.
Prompt tokens measure input size, reasoning tokens count the model's internal thinking before it produces a response, and completion tokens reflect total output length.
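If the API mirrors the OpenAI usage object, the three counters above map onto fields like the ones below. Reasoning-token reporting varies by provider, so the nested detail field is an assumption; `response` here is the object from the earlier call sketch.

```python
# Field names follow the OpenAI usage object; the reasoning-token
# detail field is an assumption, so it is read defensively.
usage = response.usage
print("prompt tokens:    ", usage.prompt_tokens)      # input size
print("completion tokens:", usage.completion_tokens)  # output length
details = getattr(usage, "completion_tokens_details", None)
if details and details.reasoning_tokens is not None:
    print("reasoning tokens: ", details.reasoning_tokens)  # internal thinking
```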