Baidu Research, a division of Chinese Internet giant Baidu, has released an updated version of DeepBench, its open source deep learning benchmark tool, which now includes inference measurement.
An upgrade to the initial DeepBench version released in September last year, the new release offers a clearer picture of inference performance across multiple chips and neural networks. It also adds new training kernels drawn from available deep learning models, and results for both training and inference are published across a variety of processors.
“With the addition of the ability to measure inference, researchers will now have a more comprehensive benchmark for the performance of their AI hardware,” said Sharan Narang, a systems researcher at Baidu Research’s Silicon Valley AI Lab.
Addressing the inference benchmarking challenge
DeepBench tackles the inference benchmarking problem by measuring fundamental operations rather than whole applications. “Measuring inference is critical. It covers the operations needed to run neural networks on a device, be it in the cloud, on a phone or a wearable,” said Baidu researcher Dr. Greg Diamos.
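To illustrate the idea of operation-level benchmarking, here is a minimal sketch (not DeepBench code) that times a dense matrix multiply in isolation; GEMM-style kernels of this kind dominate neural-network run time, which is why DeepBench measures them directly. The function name and sizes are illustrative.

```python
import time

def naive_matmul(A, B):
    """Naive dense matrix multiply -- a stand-in for the GEMM
    kernels that an operation-level benchmark would measure."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += A[i][p] * B[p][j]
            C[i][j] = s
    return C

# Time the kernel on its own, the way an operation-level benchmark would,
# instead of timing an entire end-to-end network.
N = 64
A = [[1.0] * N for _ in range(N)]
B = [[1.0] * N for _ in range(N)]
start = time.perf_counter()
C = naive_matmul(A, B)
elapsed = time.perf_counter() - start
print(f"{N}x{N} matmul: {elapsed * 1e3:.2f} ms")
```

A real benchmark such as DeepBench runs vendor-optimized kernels at the sizes used by production networks; the point here is only that the unit of measurement is a single operation.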
Baidu Research conducted an internal study of DeepBench’s performance on real workloads. For training operations, the tool uses 16-bit floating-point multiplication with 32-bit floating-point accumulation; for inference, it uses 8-bit fixed-point multiplication with 32-bit fixed-point accumulation.
You can access the updated DeepBench code on GitHub. The online repository also includes documentation to help you deploy the deep learning benchmark tool for your next project.