University of Pittsburgh

Multimodal Communication: Commonsense, Grounding, and Computation

Assistant Professor
Date: Friday, October 23, 2020, 12:50pm - 1:30pm

From the gestures that accompany speech to images in social media posts, humans effortlessly combine words with visual presentations. Communication succeeds even though visual and spatial representations are not necessarily tied to syntax and convention, and do not always replicate appearance. Machines, however, are not equipped to understand and generate such presentations, because people rely pervasively on commonsense and world knowledge when relating words to external presentations. I show the potential of discourse modeling for solving the problem of multimodal communication. I start by presenting a novel framework for modeling and learning a deeper combined understanding of text and images by classifying inferential relations to predict temporal, causal, and logical entailments in context. This enables systems to make inferences with high accuracy while revealing author expectations and social-context preferences. I then design methods for generating text from visual input that use these inferences to provide users with key requested information; the results show a dramatic improvement in the consistency and quality of the generated text, cutting spurious information by half. Finally, I describe the design of two multimodal communicative systems that can reason about the context of interactions in the areas of human-robot collaboration and conversational artificial intelligence, and I describe my research vision: to build human-level communicative systems and grounded artificial intelligence by leveraging the cognitive science of language use.
