
Xiaomi MiMo-V2-Pro & Flash Now Available on OpenClawUP — Day-1 Support

OpenClawUP Team · 5 min read

Xiaomi Just Dropped a Full AI Model Family — We Already Support Them

Xiaomi released the MiMo-V2 series — four models including MiMo-V2-Pro, MiMo-V2-Omni, MiMo-V2-TTS, and MiMo-V2-Flash. Today, both Pro and Flash are live on OpenClawUP. No waiting, no beta access — deploy your OpenClaw assistant with MiMo right now.

Here's why this matters for OpenClawUP users.

MiMo-V2-Pro: Near-Flagship Intelligence, Built for Agents

MiMo-V2-Pro is Xiaomi's flagship model, purpose-built for the agent era. The benchmarks speak for themselves:

  • ClawEval 61.5 — approaching Claude Opus 4.6 (66.3), far ahead of GPT-5.2 (50.0)
  • SWE-bench Verified 78.0% — strong real-world coding capability
  • 1M token context window with up to 128K output tokens — handles massive conversations and documents
  • Built-in capabilities: deep thinking, function calling, structured output, and web search

The architecture is clever: 1T total parameters with only 42B active per request (sparse MoE with a 7:1 hybrid attention ratio). You get near-top-tier intelligence without the premium price tag.

MiMo-V2-Pro Official Pricing

Context Range    | Input             | Output            | Cache Read
≤ 256K tokens    | $1.00 / 1M tokens | $3.00 / 1M tokens | $0.20 / 1M tokens
256K – 1M tokens | $2.00 / 1M tokens | $6.00 / 1M tokens | $0.40 / 1M tokens

Cache write is currently free (limited-time promotion). For comparison, Claude Opus 4.5 costs $15/$75 per million tokens; MiMo-V2-Pro gives you comparable agent performance at roughly 1/15th the input price (and 1/25th the output price).
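To see how the two context tiers play out in practice, here is a rough cost estimator based on the rates above. It is a sketch only: whether the tier is keyed off input tokens, total tokens, or something else is an assumption on our part, as is billing cache reads at the discounted rate. Check the official pricing page before relying on the numbers.

```python
def mimo_v2_pro_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate a MiMo-V2-Pro request cost in USD from the published per-1M rates.

    Assumption: the tier is chosen by total request size -- at or under 256K
    tokens uses the lower rates, larger requests the higher ones.
    """
    PER_M = 1_000_000
    small_tier = input_tokens + output_tokens <= 256_000
    input_rate, output_rate, cache_rate = (
        (1.00, 3.00, 0.20) if small_tier else (2.00, 6.00, 0.40)
    )
    billable_input = input_tokens - cached_tokens  # cache reads billed at the cheaper cache-read rate
    return (
        billable_input * input_rate / PER_M
        + cached_tokens * cache_rate / PER_M
        + output_tokens * output_rate / PER_M
    )

# 100K in / 8K out, no cache: 0.1 * $1.00 + 0.008 * $3.00 = $0.124
print(f"${mimo_v2_pro_cost(100_000, 8_000):.3f}")
```

A request with 50K cached input tokens drops the same query to about $0.084, which is why prompt caching matters for long-running agents.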

MiMo-V2-Flash: Ultra-Fast, Ultra-Cheap

MiMo-V2-Flash is the lightweight sibling — 309B total parameters, 15B active, with a 256K context window and 64K max output. Don't let the "Flash" name fool you — it packs the same capability set as Pro:

  • Deep thinking and efficient reasoning for complex questions
  • Function calling and structured output for tool-use workflows
  • Web search built in
  • 10x cheaper than Pro when you don't need the 1M context

MiMo-V2-Flash Official Pricing

Input             | Output            | Cache Read
$0.10 / 1M tokens | $0.30 / 1M tokens | $0.01 / 1M tokens

One of the most affordable models on the market — with capabilities that rival models costing 10–50x more. Cache write is also free during the current promotion.
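The "10x cheaper" claim is easy to verify against the two pricing tables. A quick sketch, using the ≤256K-tier Pro rates and a made-up sample workload:

```python
# Per-1M-token rates from the tables above (≤256K tier for Pro).
RATES = {
    "MiMo-V2-Pro":   {"input": 1.00, "output": 3.00},
    "MiMo-V2-Flash": {"input": 0.10, "output": 0.30},
}

def workload_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Uncached cost in USD for a given token volume on a given model."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# A hypothetical month of assistant traffic: 50M input, 5M output tokens.
pro = workload_cost("MiMo-V2-Pro", 50_000_000, 5_000_000)     # $50 + $15 = $65.00
flash = workload_cost("MiMo-V2-Flash", 50_000_000, 5_000_000) # $5 + $1.50 = $6.50
print(f"Pro: ${pro:.2f}  Flash: ${flash:.2f}  ratio: {pro / flash:.0f}x")
```

Since input and output rates both scale by exactly 10x, the ratio holds at any traffic mix.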

The Full MiMo-V2 Family

Xiaomi released three general-purpose models plus one specialty voice model in this series:

Model         | Type                     | Context | Output | Pricing (Input/Output)
MiMo-V2-Pro   | Text generation          | 1M      | 128K   | $1.00 / $3.00
MiMo-V2-Omni  | Multimodal understanding | 256K    | 128K   | $0.40 / $2.00
MiMo-V2-Flash | Text generation          | 256K    | 64K    | $0.10 / $0.30
MiMo-V2-TTS   | Voice synthesis          | 8K      | 8K     | Free (limited-time)

All three main models share the same capability set: deep thinking, function calling, structured output, and web search. Cache write is free across all models during the current promotion.

OpenClawUP currently supports Pro and Flash — the two text generation models best suited for AI assistant workloads.

Why This Matters: 11 Models, One Platform

With MiMo-V2-Pro and Flash, OpenClawUP now offers 11 AI models — the widest selection of any OpenClaw hosting platform:

Model             | Strength
Claude Sonnet 4.6 | Balanced intelligence
Claude Opus 4.5   | Maximum capability (Premium)
GPT-5.4           | Strong all-rounder
Gemini 3 Flash    | Fast and affordable
DeepSeek V3.2     | Deep reasoning
Qwen 3.5          | Multilingual excellence
GLM-5             | Chinese language specialist
MiniMax M2.5      | Creative writing
Kimi K2.5         | Long-context specialist
MiMo-V2-Pro       | Near-flagship, agent-optimized
MiMo-V2-Flash     | Ultra-fast, ultra-cheap

Every model is accessible through the same dashboard, same billing, same one-click deploy. Switch models anytime with zero configuration changes.

The Cost Story

This is where MiMo gets really interesting for OpenClawUP users.

MiMo-V2-Pro delivers agent performance close to Claude Opus 4.6 — at roughly 1/15th the cost. MiMo-V2-Flash goes even further at just $0.10 per million input tokens. Combined with OpenClawUP's included AI credits ($15/month with Pro plan) and QMD's intelligent document search (saving ~92% tokens on document queries), your credits stretch dramatically further with MiMo.
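To put numbers on "stretch dramatically further", here is a simplified input-only estimate. It assumes the $15 monthly credits map 1:1 onto the official token rates and that QMD's ~92% savings applies to document-query tokens; both are simplifications for illustration.

```python
CREDITS = 15.00      # monthly AI credits on the Pro plan
QMD_SAVINGS = 0.92   # fraction of document-query tokens QMD saves (per this post)

def input_tokens_per_budget(rate_per_m: float, budget: float = CREDITS) -> int:
    """How many input tokens a budget buys at a given $/1M-token rate."""
    return int(budget / rate_per_m * 1_000_000)

pro_tokens = input_tokens_per_budget(1.00)    # MiMo-V2-Pro, ≤256K tier
flash_tokens = input_tokens_per_budget(0.10)  # MiMo-V2-Flash

# If QMD cuts document-query tokens by 92%, the same budget covers
# 1 / (1 - 0.92) = 12.5x more document queries.
stretch = 1 / (1 - QMD_SAVINGS)
print(pro_tokens, flash_tokens, round(stretch, 1))
```

That works out to 15M input tokens a month on Pro, or 150M on Flash, before caching or QMD savings are even counted.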

For users who've been using Claude Opus 4.5 and finding the premium credits drain too fast, MiMo-V2-Pro is an excellent alternative — comparable agent capabilities, standard pricing, and cache write is currently free.

How to Use MiMo on OpenClawUP

If you're an existing user:

  1. Go to your Dashboard
  2. Select MiMo-V2-Pro or MiMo-V2-Flash from the model picker
  3. That's it — your next message uses MiMo

If you're new:

  1. Sign up at openclawup.com — free 3-day trial, no credit card
  2. Enter your bot tokens (Telegram, Discord, or WhatsApp)
  3. Choose MiMo-V2-Pro as your model
  4. Deploy in 60 seconds

Day-1 Support Is Our Standard

This isn't a one-off. When a noteworthy new model drops, we integrate it fast. Our users had access to DeepSeek V3.2 on launch day. Qwen 3.5 on launch day. And now MiMo-V2-Pro and Flash on launch day.

Why? Because model choice matters. The best model for your use case might not be the most famous one — and you shouldn't have to wait weeks to try it.

Deploy with MiMo-V2-Pro now →