
Microsoft enhances Copilot+ PCs with distilled Deepseek R1 models: Details

Distilled Deepseek R1 models will be available through Azure AI Foundry, Microsoft's platform that allows developers to build, manage, and deploy AI applications

Copilot Plus PC, DeepSeek

Aashish Kumar Shrivastava New Delhi


Microsoft has announced the availability of DeepSeek R1 7B and 14B distilled models for Copilot+ PCs via Azure AI Foundry. This means that developers building experiences for Copilot+ PCs can now access smaller, more efficient AI models that offer similar intelligence to larger models while requiring less computing power. These models will be available through Azure AI Foundry, Microsoft’s platform that allows developers to build, manage, and deploy AI applications.
 
What are distilled models and why do they matter
 
Previously, Microsoft introduced the DeepSeek R1 model, a powerful AI model that can handle complex tasks, on Copilot+ PCs. However, running such a large model requires significant computing power, making it difficult to operate smoothly on everyday devices. To address this, Microsoft is now offering distilled versions of DeepSeek R1, which retain much of the knowledge of the original model while being optimised for faster, more efficient performance on standard hardware.
 
Think of it like a teacher-student relationship: the original DeepSeek R1 model (the “teacher”) trains smaller versions (the “students”), which learn the same concepts but run more efficiently on specific tasks. These distilled models allow AI-powered features to run directly on your PC without always relying on the cloud, making them faster and more accessible.
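 
For readers curious about what this teacher-student training looks like in code, below is a minimal, illustrative sketch of a standard knowledge-distillation loss. It is not DeepSeek's or Microsoft's actual recipe; the function name, temperature and weighting values are hypothetical, chosen only to show how a student model can learn from a teacher's softened outputs as well as from the true labels.
 
# Minimal knowledge-distillation sketch (illustrative only, not DeepSeek's or
# Microsoft's actual training recipe). A small "student" model is trained to
# match the softened output distribution of a large "teacher" model.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: the student mimics the teacher's softened probabilities.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: the student still learns from the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# In a training loop, the teacher runs without gradients and only the student
# is updated:
#     with torch.no_grad():
#         teacher_logits = teacher_model(inputs)
#     loss = distillation_loss(student_model(inputs), teacher_logits, labels)
#     loss.backward()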
 
How will this impact users
 
According to Microsoft, the DeepSeek R1 distilled models will significantly improve AI-powered tasks for both developers and everyday users.
 
For developers: The ability to run AI models directly on a PC means faster, real-time responses without depending on an internet connection. Developers can now build smarter software, including virtual assistants, speech recognition tools, and automation systems that work instantly and without cloud processing delays.
 
For everyday users: AI-powered tools will work more quickly and efficiently. Tasks like writing emails, summarising documents, and managing schedules will be faster and more reliable. Since the AI runs on the device itself, it also allows smoother multitasking, longer battery life, and better privacy, as sensitive information does not need to be transmitted to cloud servers.
 
How Copilot+ PCs run AI models locally
 
The Neural Processing Unit (NPU) is the key technology enabling on-device AI processing. Unlike traditional CPUs, which handle general-purpose sequential processing, and GPUs, which handle highly parallel graphics and compute workloads, NPUs are designed specifically for AI-related workloads.
 
NPUs process AI tasks faster while using less power, which means they can run complex AI models without slowing down your PC or draining the battery. They also prevent overheating, ensuring that AI features work without affecting system performance. Since AI tasks are handled by the NPU, CPUs and GPUs remain free for other tasks, improving the overall efficiency of your PC.
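 
As a rough illustration of how an application can route work to the NPU, the sketch below uses ONNX Runtime's execution-provider mechanism; the QNN execution provider targets Qualcomm NPUs such as those in Snapdragon X Copilot+ PCs. The model path is a placeholder, and provider availability depends on the device and the installed runtime build, so treat this as an assumption-laden example rather than Microsoft's official setup for the DeepSeek R1 distilled models.
 
# Illustrative sketch: directing inference to an NPU via ONNX Runtime.
# "model.onnx" is a placeholder path; the QNN provider is only available
# on supported hardware with the appropriate ONNX Runtime build installed.
import onnxruntime as ort

# List the execution providers available on this machine (for example the
# QNN provider on Snapdragon-based Copilot+ PCs, with CPU as a fallback).
print(ort.get_available_providers())

session = ort.InferenceSession(
    "model.onnx",
    providers=["QNNExecutionProvider", "CPUExecutionProvider"],
)

# Inputs and outputs depend on the specific model; the point is that the same
# session API is used whether the NPU or the CPU ends up running the work.
print([inp.name for inp in session.get_inputs()])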
 
Which Copilot+ PCs will support distilled models
 
The DeepSeek R1 distilled models will first be available on Copilot+ PCs powered by Qualcomm Snapdragon X. Support will later extend to Intel Core Ultra 200V and AMD Ryzen processors.


First Published: Mar 04 2025 | 3:06 PM IST
