IBM wants to accelerate AI learning with new processor tech

So why does it take so much computing power and time to teach AI? The problem is that modern neural networks like Google’s DeepMind or IBM Watson must perform billions of tasks in parallel. That requires numerous CPU memory calls, which quickly add up over billions of cycles. The researchers debated using new storage tech like resistive RAM, which can permanently store data at DRAM-like speeds. However, they eventually came up with the idea for a new type of chip called a resistive processing unit (RPU) that puts large amounts of resistive RAM directly onto a CPU.

Such chips could fetch the data as quickly as they can process it, dramatically decreasing neural network training times and the power required. “This massively…
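To see why co-locating memory with the processor matters, here is a minimal back-of-envelope sketch (not from the article) of a single training step as the slower of data movement and arithmetic. All of the numbers and the `training_step_time` helper are illustrative assumptions, not IBM's measurements; the point is only that when weights sit in off-chip memory, moving them can dominate the step time, whereas an RPU-style design with on-chip resistive memory shifts the bottleneck back to compute.

```python
# Illustrative sketch: why neural-network training is often memory-bound,
# and how on-chip weight storage (the RPU idea) could change that.
# All figures below are hypothetical, for illustration only.

def training_step_time(num_weights, bytes_per_weight, flops_per_weight,
                       mem_bandwidth_gbs, compute_gflops):
    """Estimate one training step as the max of memory-transfer and compute time."""
    bytes_moved = num_weights * bytes_per_weight        # weights fetched per step
    mem_time = bytes_moved / (mem_bandwidth_gbs * 1e9)  # seconds moving data
    flops = num_weights * flops_per_weight              # multiply-accumulate work
    compute_time = flops / (compute_gflops * 1e9)       # seconds computing
    return max(mem_time, compute_time)                  # the slower side dominates

# Hypothetical network: 1 billion weights, 4 bytes each, ~6 FLOPs per weight per step.
N, B, F = 1_000_000_000, 4, 6

# Conventional chip: weights fetched from off-chip DRAM (assume ~100 GB/s, 10 TFLOP/s).
conventional = training_step_time(N, B, F, mem_bandwidth_gbs=100, compute_gflops=10_000)

# RPU-style chip: weights held in on-chip resistive memory, so the effective
# bandwidth is far higher (assume ~10 TB/s, same compute rate).
rpu_like = training_step_time(N, B, F, mem_bandwidth_gbs=10_000, compute_gflops=10_000)

print(f"conventional step: {conventional*1e3:.1f} ms, RPU-like step: {rpu_like*1e3:.1f} ms")
```

With these made-up figures the conventional step is limited by memory traffic (about 40 ms) while the RPU-like step is limited by arithmetic (under 1 ms), which is the intuition behind the claimed training speedups.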


Link to Full Article: IBM wants to accelerate AI learning with new processor tech