DeepSeek’s distilled new R1 AI model can run on a single GPU



DeepSeek’s updated R1 reasoning AI model may be getting the bulk of the AI community’s attention this week. But the Chinese AI lab also released a smaller, “distilled” version of the new R1, DeepSeek-R1-0528-Qwen3-8B, which DeepSeek claims beats comparably sized models on certain benchmarks.

The smaller updated R1, built on Alibaba’s Qwen3-8B model (launched in May) as its foundation, outperforms Google’s Gemini 2.5 Flash on AIME 2025, a collection of challenging math questions.

DeepSeek-R1-0528-Qwen3-8B also nearly matches Microsoft’s recently released Phi 4 reasoning plus model on another math skills test, HMMT.

So-called distilled models like DeepSeek-R1-0528-Qwen3-8B are generally less capable than their full-sized counterparts. On the plus side, they’re far less computationally demanding. According to the cloud platform NodeShift, Qwen3-8B requires a GPU with 40GB-80GB of RAM to run (e.g., an Nvidia H100). The full-sized new R1 needs around a dozen 80GB GPUs.
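The gap between the two hardware requirements follows from model size alone. As a rough back-of-envelope, weights in 16-bit precision take 2 bytes per parameter, and serving overhead (KV cache, activations) can roughly double the footprint. The sketch below illustrates that arithmetic; the 2× overhead factor is an assumption for illustration, not a figure from the article.

```python
# Back-of-envelope VRAM estimate for serving a transformer model.
# Assumptions (illustrative, not from the article): 16-bit weights
# (2 bytes per parameter) and a ~2x overhead factor for KV cache
# and activations during inference.

def estimate_vram_gb(params_billion: float,
                     bytes_per_param: int = 2,
                     overhead_factor: float = 2.0) -> float:
    """Rough serving-memory estimate in GB: weight bytes times overhead."""
    weights_gb = params_billion * bytes_per_param  # billions of params * bytes/param -> GB
    return weights_gb * overhead_factor

# An 8B-parameter model: ~16 GB of weights, ~32 GB with overhead,
# which is consistent with a single 40GB-80GB GPU like an H100.
print(estimate_vram_gb(8))
```

By the same arithmetic, a model in the hundreds of billions of parameters lands in the high hundreds of gigabytes, which is why the full-sized R1 needs on the order of a dozen 80GB GPUs.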

DeepSeek trained DeepSeek-R1-0528-Qwen3-8B by taking text generated by the updated R1 and using it to fine-tune Qwen3-8B. On the model’s page on the AI dev platform Hugging Face, DeepSeek describes DeepSeek-R1-0528-Qwen3-8B as “for both academic research on reasoning models and industrial development focused on small-scale models.”
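That training recipe is a standard form of distillation: the teacher model’s outputs become supervised fine-tuning targets for the student. A minimal sketch of the data-preparation step is below; the function and the stand-in teacher are illustrative assumptions, not DeepSeek’s actual pipeline.

```python
# Toy sketch of the distillation data flow described above: a teacher
# model (here, the updated R1) generates completions, and each
# prompt/completion pair becomes a supervised fine-tuning example
# for the student (Qwen3-8B). All names are illustrative.

def build_distillation_dataset(prompts, teacher_generate):
    """Pair each prompt with the teacher's output for fine-tuning."""
    return [{"prompt": p, "completion": teacher_generate(p)}
            for p in prompts]

# Stand-in teacher for demonstration; in practice this would be a
# call to the full-sized R1.
def fake_teacher(prompt):
    return f"<think>reasoning about: {prompt}</think> final answer"

dataset = build_distillation_dataset(["What is 2+2?"], fake_teacher)
print(dataset[0]["prompt"])
```

The student is then fine-tuned on these pairs with an ordinary supervised objective, which is far cheaper than training a large model from scratch.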

DeepSeek-R1-0528-Qwen3-8B is available under a permissive MIT license, meaning it can be used commercially without restriction. Several hosts, including LM Studio, already offer the model through an API.



