IBM's new chip concept could speed up AI training 30,000-fold

IBM researchers recently published a paper describing a new chip concept called a resistive processing unit (RPU). Compared with a traditional CPU, the chip is claimed to accelerate the training of deep neural networks by up to 30,000 times.

A deep neural network (DNN) is an artificial neural network with multiple hidden layers. Such networks can be trained with supervision or without it, and the result is a machine that learns on its own, a branch of machine learning (sometimes labeled artificial intelligence) known as deep learning.
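To make "multiple hidden layers" concrete, here is a minimal illustrative sketch (not taken from the paper) of a forward pass through a small network with two hidden layers, using NumPy; the sizes and the `forward` helper are arbitrary choices for the example:

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied after each hidden layer
    return np.maximum(0.0, x)

def forward(x, weights):
    """Pass input x through successive weight matrices (hidden layers)."""
    a = x
    for W in weights[:-1]:
        a = relu(a @ W)       # hidden layers: linear map + nonlinearity
    return a @ weights[-1]    # linear output layer

rng = np.random.default_rng(0)
# A toy "deep" network: 4 inputs, two hidden layers of 8 units, 2 outputs
weights = [rng.standard_normal(s) * 0.1 for s in [(4, 8), (8, 8), (8, 2)]]
y = forward(rng.standard_normal(4), weights)
print(y.shape)
```

Training such a network means adjusting every entry of every weight matrix, which is why the chip count discussed below grows so quickly with network size.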

Not long ago, AlphaGo, the Go program from Google DeepMind (under Alphabet), used similar algorithms to beat Lee Sedol in its man-versus-machine match. AlphaGo consists of a tree-search algorithm and two multi-layer neural networks containing millions of neurons. One network, called the "policy network," computes which move has the highest winning probability. The other, called the "value network," evaluates board positions for the white and black stones, which allows the depth of the search to be reduced.

Given this promising outlook, many machine learning researchers have focused on deep neural networks. However, to achieve a certain level of intelligence, these networks require a great many computing chips; AlphaGo, for example, used on the order of a thousand. Training is therefore both resource-hungry and costly. The new chip concept now proposed by IBM researchers is claimed to outperform a traditional chip by a factor of thousands, and if thousands of such chips were combined, further AI breakthroughs might follow.

The chip, called the RPU, exploits two features of deep learning algorithms: locality and parallelism. To this end, the RPU borrows concepts from next-generation non-volatile memory (NVM) technology, storing the algorithm's weight values locally so as to minimize data movement during training. The researchers say that if RPUs were applied at scale to a deep neural network with 10 million weights, training could be accelerated by up to 30,000 times; in other words, work that normally takes thousands of machines several days could be finished in a few hours on this chip, with much lower energy consumption.
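The locality argument can be sketched in software. In the scheme described, the weights stay resident in the array and each cell applies its own increment in parallel, so only two small vectors (the activations and the error signal) move during an update. The function name `rpu_like_update` and all sizes below are hypothetical, and the real device performs this as an analog operation rather than a NumPy call; this is only a sketch of the data-movement idea:

```python
import numpy as np

def rpu_like_update(W, x, delta, lr=0.01):
    """Weight-stationary update: W never leaves its (simulated) array.

    Only the input activations x and the backpropagated errors delta
    are sent to the array; each weight cell then adds its own
    outer-product increment in place, all cells in parallel.
    """
    W += lr * np.outer(delta, x)   # in-place, cell-local rank-1 update
    return W

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 4)) * 0.1   # weights "resident" in the array
x = rng.standard_normal(4)              # forward activations
delta = rng.standard_normal(3)          # error signal from backpropagation
W_before = W.copy()
rpu_like_update(W, x, delta)
moved = x.size + delta.size             # traffic per update: 7 numbers, not 12 weights
print(moved)
```

On a conventional chip, by contrast, all 12 weights would have to travel between memory and the processor for every update; at 10 million weights the difference in traffic is what the claimed speedup rests on.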

Of course, the paper only presents a concept; the chip is still at the research stage. Moreover, since the non-volatile memory it relies on has not yet entered the mainstream market, such a chip is probably still a few years from commercialization. But if the chip really delivers such advantages in computation and energy efficiency, AI giants such as Google and Facebook will surely take notice. IBM itself is one of the active players in AI and data, so if it does build the chip, finding a market should not be a worry.
