AWS Open-sources Neo-AI Project to Accelerate Machine Learning Deployments on Edge Devices

Amazon Web Services (AWS) has decided to release the code behind one of its key machine learning services as an open-source project.

At re:Invent 2018, the company announced Amazon SageMaker Neo, a new machine learning feature that it said could be used to train a machine learning model once and then run it anywhere, in the cloud or at the edge. Amazon is now making the code available as the open-source Neo-AI project under the Apache Software License.

Amazon believes that this release will enable processor vendors, device makers, and deep learning developers to rapidly bring new and independent innovations in machine learning to a wide variety of hardware platforms.

“Neo-AI eliminates the time and effort needed to tune machine learning models for deployment on multiple platforms by automatically optimizing TensorFlow, MXNet, PyTorch, ONNX, and XGBoost models to perform at up to twice the speed of the original model with no loss in accuracy. Additionally, it converts models into an efficient common format to eliminate software compatibility problems,” AWS’s Sukwon Kim and Vin Sharma wrote in a blog post.
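The Neo-AI compiler builds on the open-source Apache TVM stack, and the "common format" in question is TVM's intermediate representation. As a rough, hedged illustration of the compile step the project automates, the Python sketch below imports an ONNX model and compiles it for a target device using TVM's public API; the file names, input name, and shape are hypothetical placeholders, not anything specific to this announcement.

```python
# Rough sketch of the compile step Neo-AI automates, written against the
# Apache TVM Python API that the Neo-AI compiler builds on.
# "model.onnx", the input name "input", and the shape are hypothetical.
import onnx
import tvm
from tvm import relay

# Load a trained model; other supported frameworks (TensorFlow, MXNet,
# PyTorch, XGBoost) have their own importers or can be exported to ONNX.
onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}

# Import into Relay, TVM's common intermediate representation
# (the "efficient common format" the post describes).
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile with aggressive optimizations for a specific target; the target
# string selects the backend, e.g. "llvm" for CPU or "cuda" for NVIDIA GPUs.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Export a self-contained artifact for deployment on the edge device.
lib.export_library("compiled_model.so")
```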

By working with the Neo-AI project, they said, processor vendors can quickly integrate their custom code into the compiler at the point at which it has the greatest effect on improving model performance.

The project will also allow device makers to customize the Neo-AI runtime for the particular software and hardware configuration of their devices.
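On the device side, the Neo-AI runtime is published as DLR. A minimal sketch of loading and running a compiled model might look like the following; the model directory and input name are hypothetical, and the calls reflect DLR's public Python API rather than a device-specific configuration.

```python
# Minimal sketch of on-device inference with DLR, the Neo-AI runtime.
# The model directory, device type, and input name are hypothetical.
import numpy as np
from dlr import DLRModel

# Point DLR at the directory holding the compiled model artifacts;
# the second argument selects the device type (e.g. "cpu" or "gpu").
model = DLRModel("/path/to/compiled_model", "cpu")

# Run inference; inputs are passed as a dict mapping input names to
# NumPy arrays, and run() returns a list of output arrays.
x = np.random.rand(1, 3, 224, 224).astype("float32")
outputs = model.run({"input": x})
print(outputs[0].shape)
```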

Neo-AI project to be steered by contributions from several organizations

Currently, Neo-AI supports platforms from Intel, NVIDIA, and ARM, with support for Xilinx, Cadence, and Qualcomm expected to arrive soon.

Naveen Rao, General Manager of the Artificial Intelligence Products Group at Intel, said, “Using Neo, device makers and system vendors can get better performance for models developed in almost any framework on platforms based on all Intel compute platforms.”

Jem Davies, Fellow, General Manager, and Vice President for the Machine Learning Group at ARM, is also confident that the combination of Neo and the ARM NN SDK will help developers optimize machine learning models to run efficiently on a wide variety of connected edge devices.

Xilinx provides the FPGA hardware and software capabilities that accelerate machine learning inference applications in the cloud and at the edge.

Sudip Nag, Corporate Vice President at Xilinx, said, “We are pleased to support developers using Neo to optimize models for deployment on Xilinx FPGAs. We look forward to enabling Neo-AI to use Xilinx ML Suite to deliver optimal inference performance per watt.”
