Why Deep Learning Works – Key Insights and Saddle Points

Tags: Deep Learning, Distributed Representation, Matthew Mayo, Yoshua Bengio

A quality discussion of the theoretical motivations for deep learning, including distributed representation, deep architecture, and the easily escapable saddle point. By Matthew Mayo.

This post summarizes the key points of a recent blog post by Rinu Boney, based on a lecture by Dr. Yoshua Bengio from this year's Deep Learning Summer School in Montreal, which discusses the theoretical motivations for deep learning.

"To generalize locally, we need representative examples for all relevant variations."

Deep learning is about learning multiple levels of representation, corresponding to multiple levels of abstraction. If we are able to learn these multiple levels of representation, we are able to generalize well.

After setting the general tone of the post with the above (paraphrased) statement,…
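To make the "easily escapable saddle point" remark more concrete, here is a minimal sketch (my own illustration, not taken from Boney's post or Bengio's lecture): at a saddle point the gradient is zero but the point is not a minimum, and even a tiny perturbation gives gradient descent a direction along which the loss keeps decreasing, so plain gradient descent tends to slide away rather than get stuck.

```python
# Minimal sketch (illustrative only): gradient descent near a saddle point.
# f(x, y) = x**2 - y**2 has a saddle at the origin: a minimum along x, a maximum along y.
# A tiny perturbation in the y direction is enough for plain gradient descent to escape.
import numpy as np

def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])  # gradient of f(x, y) = x^2 - y^2

rng = np.random.default_rng(0)
# Start almost exactly on the saddle's "stable" axis, with a tiny random offset in y.
p = np.array([0.5, 0.0]) + rng.normal(scale=1e-6, size=2)
lr = 0.1

for _ in range(100):
    p = p - lr * grad(p)

# The x component has been driven toward 0 (the true descent direction),
# while the tiny y component has grown, carrying the iterate away from the saddle.
print(p)
```

The same mechanism is the intuition behind the claim in the full article: in high-dimensional loss surfaces most critical points are saddles rather than poor local minima, and the noise already present in stochastic gradient descent supplies the perturbation needed to escape them.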


Link to Full Article: Why Deep Learning Works – Key Insights and Saddle Points