Meet MiniMax‑M1: a 456B‑parameter (45.9B activated per token) hybrid-attention reasoning model released under Apache 2.0. Its hybrid Mixture‑of‑Experts architecture paired with lightning attention supports contexts up to 1M tokens while consuming roughly 75% fewer FLOPs than comparable reasoning models at long generation lengths—delivering top-tier performance in math, coding, long‑context tasks, and RL‑trained reasoning.