Google’s Big Chip Unveil For Machine Learning: Tensor Processing Unit With 10x Better Efficiency

While other companies debate whether GPUs, FPGAs, or VPUs are better suited for machine learning, Google announced that it has been using its own custom-built Tensor Processing Unit (TPU) for over a year, claiming a 10x increase in efficiency. The comparison is most likely against GPUs, which are currently the industry-standard chips for machine learning.

Tensor analysis is an extension of vector calculus, and it forms the basis of Google's TensorFlow machine learning framework, which was recently released as open source. The new Tensor Processing Units, as you might expect, are designed specifically for tensor calculations. Because the chip does only one thing, the company can devote more of its transistors to doing that one thing well, achieving higher efficiency than more general-purpose chips.…
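To make the idea concrete: the tensor calculations such hardware accelerates boil down largely to bulk multiply-accumulate operations, as in a matrix multiplication (the contraction of two rank-2 tensors). The sketch below is purely illustrative plain Python, not Google's hardware or the TensorFlow API; the `matmul` helper is a hypothetical name chosen for this example.

```python
def matmul(a, b):
    """Multiply two matrices (lists of lists) via multiply-accumulate
    operations, the core workload tensor hardware is built to run in bulk."""
    rows, inner, cols = len(a), len(b), len(b[0])
    assert all(len(row) == inner for row in a), "inner dimensions must match"
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

# Contracting a 2x3 tensor with a 3x2 tensor yields a 2x2 result.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7,  8],
     [9, 10],
     [11, 12]]
print(matmul(A, B))  # → [[58, 64], [139, 154]]
```

A chip dedicated to this pattern can spend its transistor budget on many such multiply-accumulate units running in parallel, rather than on the control logic a general-purpose processor needs.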
Link to Full Article: Google’s Big Chip Unveil For Machine Learning: Tensor Processing Unit With 10x Better Efficiency