Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog

Jesse Thomason
Aishwarya Padmakumar
Jivko Sinapov
Nick Walker
Yuqian Jiang
Harel Yedidsion
Justin Hart
Peter Stone
Raymond J. Mooney

Abstract

In this work, we present methods for using human-robot dialog to improve language understanding for a mobile robot agent. The agent parses natural language to underlying semantic meanings and uses robotic sensors to create multi-modal models of perceptual concepts like "red" and "heavy". The agent can be used for showing navigation routes, delivering objects to people, and relocating objects from one location to another. We use dialog clarification questions both to understand commands and to generate additional parsing training data. The agent employs opportunistic active learning to select questions about how words relate to objects, improving its understanding of perceptual concepts. We evaluated this agent on Amazon Mechanical Turk. After training on data induced from conversations, the agent reduced the number of dialog questions it asked while receiving higher usability ratings. Additionally, we demonstrated the agent on a robotic platform, where it learned new perceptual concepts on the fly while completing a real-world task.
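To illustrate the kind of question selection the abstract describes, the sketch below shows a minimal, hypothetical opportunistic active-learning loop: the agent asks about the (concept, object) pair it is least certain about among the objects currently in view. This is not the authors' implementation; the class names, scores, and object identifiers are illustrative assumptions.

```python
# Hypothetical sketch of opportunistic active-learning question selection.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ConceptModel:
    """Toy stand-in for a multi-modal classifier of a word like 'red' or 'heavy'."""
    name: str
    # Assumed probability that the concept applies to each known object id.
    scores: Dict[str, float] = field(default_factory=dict)

    def uncertainty(self, obj: str) -> float:
        # Maximal when the predicted probability is near 0.5 (model cannot decide).
        p = self.scores.get(obj, 0.5)
        return 1.0 - abs(p - 0.5) * 2.0


def select_question(concepts: List[ConceptModel], objects_in_view: List[str]) -> str:
    """Pick the concept/object query with the highest model uncertainty."""
    concept, obj = max(
        ((c, o) for c in concepts for o in objects_in_view),
        key=lambda pair: pair[0].uncertainty(pair[1]),
    )
    return f"Would you describe object {obj} as '{concept.name}'?"


if __name__ == "__main__":
    concepts = [
        ConceptModel("red", {"obj_1": 0.95, "obj_2": 0.52}),
        ConceptModel("heavy", {"obj_1": 0.10, "obj_2": 0.80}),
    ]
    print(select_question(concepts, ["obj_1", "obj_2"]))
    # Asks about 'red' for obj_2, the pair the agent is least sure of.
```

The answer to such a question would then be used as a new labeled example for the corresponding perceptual concept model.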
