Engineers more than doubled neural network speed on CPUs

Israeli artificial intelligence startup Deci has announced that it has achieved “breakthrough deep learning performance” using central processing units (CPUs).

The DeciNets image classification model is optimized for Intel Cascade Lake processors. Built with Deci’s patented Automated Neural Architecture Construction (AutoNAC) technology, it runs more than twice as fast as, and more accurately than, Google’s EfficientNets on comparable hardware.
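For context on what such a comparison measures, the sketch below times CPU inference for an image classifier in Python. It is illustrative only: it assumes PyTorch and torchvision are installed and uses EfficientNet-B0 as a stand-in, since DeciNets itself is not publicly available, so the numbers will not match Deci’s benchmark.

    # Illustrative only: times CPU inference for an image classifier.
    # EfficientNet-B0 from torchvision stands in for DeciNets, which is
    # not publicly available; results will differ from Deci's figures.
    import time

    import torch
    from torchvision.models import efficientnet_b0

    model = efficientnet_b0()   # random weights are fine for timing
    model.eval()
    batch = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image

    with torch.no_grad():
        for _ in range(10):     # warm-up runs so one-time costs don't skew timing
            model(batch)

        runs = 50
        start = time.perf_counter()
        for _ in range(runs):
            model(batch)
        elapsed = time.perf_counter() - start

    print(f"Average CPU latency: {elapsed / runs * 1000:.1f} ms per image")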

Comparison of model speed on different hardware. Data: Deci.

Deci co-founder and CEO Jonathan Geifman said his goal is to develop models that are not only more accurate, but also more resource-efficient.

“AutoNAC builds the best computer vision models available today, and with the new class of DeciNet networks, AI applications can now be deployed and run efficiently on CPUs,” he added.

The company also said it has been working with Intel for almost a year to optimize deep learning on Intel processors. Several Deci customers in manufacturing have already adopted its AutoNAC technology, the company added.

Image classification and object recognition are among the main applications of deep learning. According to experts, closing the performance gap between GPUs and CPUs would not only cut the cost of developing modern AI algorithms, but also ease pressure on the GPU market.

Recall that in April 2021, researchers at Rice University presented a new deep learning training technique that trains neural networks 4 to 15 times faster on CPUs than on GPUs.

In May, scientists used AI to speed up simulations of the universe by a factor of 1,000.
