Intel on Friday launched the Movidius Neural Compute Stick, a USB-based deep learning inference kit and self-contained artificial intelligence (AI) accelerator that delivers dedicated deep neural network processing capabilities to a wide range of host devices at the edge.
Designed for product developers and researchers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep neural network processing in a small form factor.
As more developers adopt advanced machine learning approaches to build innovative applications and solutions, Intel says it is committed to providing the most comprehensive set of development tools and resources to help developers retool for an AI-centric digital economy.
Whether it is training artificial neural networks on the Intel Nervana cloud, optimising for emerging workloads such as artificial intelligence, virtual and augmented reality, and automated driving with Intel Xeon Scalable processors, or taking AI to the edge with Movidius vision processing unit (VPU) technology, Intel offers a comprehensive AI portfolio of tools, training and deployment options for the next generation of AI-powered products and services.
"The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance - more than 100 gigaflops of performance within a 1W power envelope - to run real-time deep neural networks directly from the device. This enables a wide range of AI applications to be deployed offline," said Remi El-Ouazzane, VP and general manager of Movidius, an Intel company.
Machine intelligence development fundamentally consists of two stages: training an algorithm on large sets of sample data via modern machine learning techniques, and running the algorithm in an end application that needs to interpret real-world data.
This second stage is referred to as "inference," and performing inference at the edge - or natively inside the device - brings numerous benefits in terms of latency, power consumption and privacy.
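To illustrate what edge inference looks like in practice, the sketch below shows how a host application might offload a compiled network to the stick over USB. This is a minimal sketch assuming the Neural Compute SDK's Python bindings (the "mvnc" module with EnumerateDevices, OpenDevice, AllocateGraph, LoadTensor and GetResult calls) and a network already compiled to a binary "graph" file; exact module and call names may differ across SDK versions.

```python
# Minimal sketch: offloading one inference to the Neural Compute Stick.
# Assumes the NCSDK Python bindings ("mvnc") and a pre-compiled "graph"
# file; API names may differ by SDK version.
import numpy as np
from mvnc import mvncapi as mvnc

# Find and open the first attached stick.
devices = mvnc.EnumerateDevices()
if not devices:
    raise RuntimeError("No Neural Compute Stick found")
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load the pre-compiled network onto the device.
with open("graph", "rb") as f:
    graph_blob = f.read()
graph = device.AllocateGraph(graph_blob)

# Run one inference: the VPU expects FP16 input tensors.
input_tensor = np.random.rand(224, 224, 3).astype(np.float16)  # placeholder image
graph.LoadTensor(input_tensor, "user object")
output, user_obj = graph.GetResult()
print("Top class:", int(np.argmax(output)))

# Release device resources.
graph.DeallocateGraph()
device.CloseDevice()
```

Because the whole loop runs on the host and the stick, no network connection is required, which is what allows such applications to be deployed offline.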
The accompanying development kit provides layer-by-layer performance metrics for both industry-standard and custom-designed neural networks, enabling effective tuning for optimal real-world performance at ultra-low power. Validation scripts allow developers to compare the accuracy of the optimised model running on the device with the original PC-based model, along the lines of the check sketched below.
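Such a comparison amounts to running the same input through both models and measuring how far the device's FP16 output drifts from the host reference. The helper below is a hypothetical illustration of that check using NumPy; it is not the SDK's own validation script, and the compare_outputs name is invented for this example.

```python
# Hypothetical accuracy check: compare the stick's FP16 output with the
# original host-side model's output for the same input. This mimics what
# a validation script might report; it is not SDK code.
import numpy as np

def compare_outputs(device_out: np.ndarray, host_out: np.ndarray) -> None:
    device_out = device_out.astype(np.float32).ravel()
    host_out = host_out.astype(np.float32).ravel()
    max_abs_err = np.max(np.abs(device_out - host_out))
    top1_match = int(np.argmax(device_out)) == int(np.argmax(host_out))
    print(f"max |device - host| = {max_abs_err:.5f}")
    print(f"top-1 prediction matches host: {top1_match}")

# Example with synthetic data standing in for real model outputs.
rng = np.random.default_rng(0)
host = rng.random(1000).astype(np.float32)
device = host.astype(np.float16)  # FP16 quantisation on the device
compare_outputs(device, host)
```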
In a capability Intel describes as unique to the Movidius Neural Compute Stick, the device can also act as a discrete neural network accelerator, adding dedicated deep learning inference capabilities to existing computing platforms for improved performance and power efficiency.