Using Google’s Deep Learning To Model Visual Portrayal In The News

With the debut of the GDELT Visual Global Knowledge Graph (VGKG), which uses Google’s Cloud Vision API deep learning algorithms to catalog global news imagery, we’ve been immensely excited about the ways this technology can be used to understand global visual narratives. This time, Felipe Hoffa came up with an incredible way of combining the GKG and VGKG into a single query: you specify two GKG queries and get back a list of the visual topics that appear more often in images from articles matching the first query than in those matching the second. Applying this to Donald Trump and Hillary Clinton yields the query below, which outputs a list of tags, along with a sample image and article for each, that show…
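To make the comparison concrete, here is a minimal sketch of that kind of query, run through the BigQuery Python client. It is not Felipe Hoffa's actual query: it assumes the public `gdelt-bq.gdeltv2.gkg` and `gdelt-bq.gdeltv2.cloudvision` tables, the `DocumentIdentifier`, `V2Persons`, and `Labels` columns, and a `<RECORD>`/`<FIELD>` delimiter convention for the Cloud Vision labels, any of which may differ from the live schema.

```python
# Sketch: compare Cloud Vision label frequencies between two GKG person queries.
# Table names, column names, and the Labels delimiter encoding are assumptions.
from google.cloud import bigquery

SQL = """
WITH tags AS (
  -- Join articles (GKG) to their image annotations (VGKG) on the article URL,
  -- then split the Cloud Vision label string into individual tags.
  SELECT
    SPLIT(label, '<FIELD>')[OFFSET(0)] AS tag,
    REGEXP_CONTAINS(g.V2Persons, @query_a) AS in_a,
    REGEXP_CONTAINS(g.V2Persons, @query_b) AS in_b
  FROM `gdelt-bq.gdeltv2.gkg` g
  JOIN `gdelt-bq.gdeltv2.cloudvision` v
    ON g.DocumentIdentifier = v.DocumentIdentifier,
  UNNEST(SPLIT(v.Labels, '<RECORD>')) AS label
  WHERE REGEXP_CONTAINS(g.V2Persons, @query_a)
     OR REGEXP_CONTAINS(g.V2Persons, @query_b)
),
per_tag AS (
  SELECT tag, COUNTIF(in_a) AS cnt_a, COUNTIF(in_b) AS cnt_b
  FROM tags
  GROUP BY tag
),
totals AS (
  SELECT SUM(cnt_a) AS total_a, SUM(cnt_b) AS total_b FROM per_tag
)
SELECT
  tag,
  SAFE_DIVIDE(cnt_a, total_a) AS share_a,
  SAFE_DIVIDE(cnt_b, total_b) AS share_b
FROM per_tag, totals
-- Keep only tags that are relatively more prominent in query A's imagery.
WHERE SAFE_DIVIDE(cnt_a, total_a) > SAFE_DIVIDE(cnt_b, total_b)
ORDER BY SAFE_DIVIDE(cnt_a, total_a) - SAFE_DIVIDE(cnt_b, total_b) DESC
LIMIT 50
"""

client = bigquery.Client()
job = client.query(
    SQL,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("query_a", "STRING", "Donald Trump"),
            bigquery.ScalarQueryParameter("query_b", "STRING", "Hillary Clinton"),
        ]
    ),
)
for row in job.result():
    print(f"{row.tag}\t{row.share_a:.4f} vs {row.share_b:.4f}")
```

Note that the sketch normalizes each tag's count by the total number of tags in its result set before comparing, so the output reflects relative prominence of a visual topic rather than the raw size of each article pool.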


Link to Full Article: Using Google’s Deep Learning To Model Visual Portrayal In The News