Nvidia packs AI supercomputer into one box

Nvidia’s DGX-1 is said to match the deep-learning performance of a 250-node x86 cluster. Described by Nvidia co-founder and CEO Jen-Hsun Huang as “the world’s first deep-learning supercomputer,” the DGX-1 contains eight of the new Tesla P100 GPUs, which together deliver 170 teraflops of 16-bit (half-precision) performance. Training the AlexNet neural network takes just two hours on a DGX-1, compared with 150 hours on a dual-Xeon server, and because adding nodes yields diminishing returns, more than 250 Xeon servers would be needed to match the DGX-1’s speed, Huang said. At the 2015 GPU Technology Conference he predicted the company would deliver a 10x speed improvement within a year; it has actually delivered a 12x improvement over what a system with four Maxwell GPUs could achieve…
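The headline figures can be sanity-checked with simple arithmetic. The sketch below is a rough back-of-the-envelope calculation, assuming a per-GPU FP16 peak of about 21.2 teraflops for the NVLink variant of the Tesla P100 (a figure not stated in this excerpt); the GPU count and training times come from the article itself.

```python
# Back-of-the-envelope check of the DGX-1 figures quoted above.
# Assumption (not from the article): each Tesla P100 (NVLink variant)
# peaks at roughly 21.2 teraflops of FP16 (half-precision) throughput.

P100_FP16_TFLOPS = 21.2   # assumed per-GPU FP16 peak
NUM_GPUS = 8              # GPUs per DGX-1, per the article

aggregate_tflops = NUM_GPUS * P100_FP16_TFLOPS
print(f"Aggregate FP16 throughput: ~{aggregate_tflops:.0f} TFLOPS")  # ~170, matching the claim

# AlexNet training-time comparison quoted in the article
dual_xeon_hours = 150
dgx1_hours = 2
print(f"Speedup over a dual-Xeon server: {dual_xeon_hours / dgx1_hours:.0f}x")  # 75x
```

The 75x single-server speedup is consistent with the claim that more than 250 Xeon servers are needed to keep pace, once the diminishing returns of scaling out a cluster are taken into account.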


Link to Full Article: Nvidia packs AI supercomputer into one box
