Google Grants Teaching Machines to See, Hear, and Learn

Google is a global leader in improving how computers take in and sift through huge amounts of data. Aside from its in-house research team, the company’s academic funding program is making several grants for work on how machines perceive and learn from information.

As much as it freaks out Elon Musk, Google is pioneering work in artificial intelligence and related fields of machine learning and machine perception. The company’s research and products are steadily improving the ability of computers to recognize information in all forms, and then learn how to make sense of it. (Some of this work is being done by entities that have been spun off from Google, under the new corporate umbrella, Alphabet.)


While the Googleplex is filled with brainiacs probably working on the next digital assistant that can diagnose your illness before you even catch it, Google also runs an ongoing grant program that funds academic faculty outside the company. It just announced its second and final round of funding for 2015, and two standout topics were machine perception and machine learning.

The grant program doesn’t draw quite as much attention as the big prizes or flashy funding initiatives the tech giant periodically unveils, but this year the company backed 235 projects, with a maximum grant of $150,000 per project. Google received about 1,600 proposals for 2015.


Funding goes toward high-level research at universities around the world, supplementing the company’s R&D program through faculty projects that overlap with Google’s interests. Proposals can be related to a current list of 17 topics that include privacy, security, mobile, human-computer interaction, and maps. 

Giving is pretty spread out, but two topics stood out in 2015, landing 44 grants between them: machine perception and machine learning.

Machine perception involves improving how machines organize and index information that isn’t written text or numbers, but images and sounds. That means giving computing systems the ability to perceive photos, video, voices, music, and the like; recognize complex patterns; and draw meaningful conclusions from what they take in.

The second topic, machine learning and data mining, involves using computing systems to learn from large amounts of data, discovering hidden properties and making predictions. Basically, it’s teaching computers how to think through complex problems. This is really Google’s wheelhouse in research, as the company clearly leads the way in sifting useful conclusions out of massive stores of information.

Examples of 2015 grantees include Tamara Berg of UNC Chapel Hill, whose machine perception work tackles combined visual and textual information, such as captioned photos or video with speech or closed captioning. Katherine Heller of Duke conducts machine learning research that strives to “model human behavior, including human categorization and human social interactions.”

These fields are heavily ingrained in many of Google’s products, including Google Now, Image Search, self-driving cars, facial recognition, and speech recognition. But recent strides are opening up many new possibilities.


For example, recently acquired DeepMind has created AI software that learns to outplay humans at a variety of classic Atari games, improving over time. While pretty cool in itself, researchers say the underlying general-learning algorithm can be applied to many other problems. Another application the Google research team is working on is using its data-crunching power to accelerate the discovery of new chemical compounds that could serve as drug treatments.

Learn more about the latest grantees and the faculty research program here.



