Stanford Team Develops 11 Billion Parameter Deep Learning System Using COTS GPU Hardware

Recently, Adam Coates and colleagues at Stanford developed a deep learning system with more than 11 billion learnable parameters. One of the key drivers of progress in deep learning has been the ability to scale these algorithms up. Andrew Ng's team at Google had previously reported a system that required 16,000 CPU cores to train a network with 1 billion parameters. The Stanford result shows that it is possible to build far larger deep learning systems using only COTS (commercial off-the-shelf) GPU hardware, hopefully making such systems available to significantly more groups.
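To get a feel for the scale involved, a quick back-of-the-envelope calculation shows why a model of this size cannot fit on a single commodity GPU of that era and must be partitioned across hardware. The sketch below is illustrative only: the parameter counts come from the article, while the 4-bytes-per-parameter and ~4 GiB-per-GPU figures are assumptions typical of consumer GPUs circa 2013, not details of the Stanford system.

```python
# Back-of-the-envelope memory estimate for the two systems mentioned above.
# Assumptions (not from the article): 32-bit floats (4 bytes per parameter)
# and ~4 GiB of usable memory per commodity GPU, typical of cards circa 2013.

BYTES_PER_PARAM = 4            # float32
GPU_MEMORY_BYTES = 4 * 2**30   # assumed ~4 GiB per COTS GPU

def min_gpus(n_params: int) -> int:
    """Minimum GPUs needed just to hold the parameters (ignoring
    activations, gradients, and optimizer state, which add more)."""
    total_bytes = n_params * BYTES_PER_PARAM
    return -(-total_bytes // GPU_MEMORY_BYTES)  # ceiling division

for name, n in [("Google system (1B params)", 1_000_000_000),
                ("Stanford system (11B params)", 11_000_000_000)]:
    gib = n * BYTES_PER_PARAM / 2**30
    print(f"{name}: {gib:.1f} GiB of parameters, "
          f">= {min_gpus(n)} GPUs for storage alone")
```

Under these assumptions, the 11-billion-parameter model needs roughly 41 GiB just to store its weights, which is why a multi-GPU cluster is required even before accounting for gradients and activations.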


