A family of large language models developed by the Chinese company Alibaba. In July 2024, it was ranked as the top Chinese-language model on some benchmarks, and third globally behind models from Anthropic and OpenAI.

The model architecture is based on the Llama architecture developed by Meta AI.

On April 28, 2025, the Qwen 3 model family was released, with all models licensed under the Apache 2.0 license. The family includes both dense models (0.6B, 1.7B, 4B, 8B, 14B, and 32B parameters) and Mixture of Experts (MoE) models (30B total with 3B activated parameters, and 235B total with 22B activated parameters). They were trained on 36 trillion tokens spanning 119 languages and dialects.
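The distinction between total and activated parameters is what makes the MoE variants efficient: only a subset of expert weights is used for each token. A minimal sketch of the arithmetic implied by the figures above (the model names used in comments follow Qwen's public naming, but the function itself is purely illustrative, not an official API):

```python
# Illustrative arithmetic: the share of a Qwen3 MoE model's parameters
# that are activated per token, based on the sizes quoted above.

def activated_fraction(total_b: float, activated_b: float) -> float:
    """Return the fraction of total parameters active per forward pass."""
    return activated_b / total_b

# Qwen3-30B-A3B: 30B total parameters, 3B activated per token
print(f"{activated_fraction(30, 3):.0%}")    # prints "10%"

# Qwen3-235B-A22B: 235B total parameters, 22B activated per token
print(f"{activated_fraction(235, 22):.1%}")  # prints "9.4%"
```

In other words, each token's forward pass touches roughly a tenth of the weights, which is why the 235B model can run with per-token compute closer to that of a ~22B dense model.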

Additionally, in July 2025, Alibaba released Qwen3-Coder, an agentic coding model with an accompanying CLI tool.