Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP)

Archna Bhatia, Yonatan Bisk, Parisa Kordjamshidi, Jesse Thomason (Editors)


Anthology ID: W19-16
Month: June
Year: 2019
Address: Minneapolis, Minnesota
Venues: NAACL | RoboNLP | SpLU | WS
Publisher: Association for Computational Linguistics
URL: https://aclanthology.org/W19-16
PDF: https://aclanthology.org/W19-16.pdf

Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP)
Archna Bhatia | Yonatan Bisk | Parisa Kordjamshidi | Jesse Thomason

From Virtual to Real: A Framework for Verbal Interaction with Robots
Eugene Joseph

A Natural Language Understanding (NLU) pipeline integrated with a 3D physics-based scene is a flexible way to develop and test language-based human-robot interaction, by virtualizing people, robot hardware, and the target 3D environment. Here, interaction means both controlling robots using language and conversing with them about the user's physical environment and their daily life. Such a virtual development framework was initially developed for the Bot Colony videogame launched on Steam in June 2014, and has been undergoing improvements since. The framework is focused on developing intuitive verbal interaction with various types of robots. Key robot functions (robot vision and object recognition, path planning and obstacle avoidance, task planning and constraints, grabbing and inverse kinematics), the human participants in the interaction, and the impact of gravity and other forces on the environment are all simulated using commercial 3D tools. The framework can be used as a robotics testbed: the results of our simulations can be compared with the output of algorithms running on real robots, to validate such algorithms. A novelty of our framework is support for social interaction with robots: enabling robots to converse about people and objects in the user's environment, as well as learning about human needs and everyday life topics from their owner.

Multi-modal Discriminative Model for Vision-and-Language Navigation
Haoshuo Huang | Vihan Jain | Harsh Mehta | Jason Baldridge | Eugene Ie

Vision-and-Language Navigation (VLN) is a natural language grounding task where agents have to interpret natural language instructions in the context of visual scenes in a dynamic environment to achieve prescribed navigation goals. Successful agents must be able to parse natural language of varying linguistic styles, ground it in potentially unfamiliar scenes, and plan and react to ambiguous environmental feedback. Generalization ability is limited by the amount of human-annotated data; in particular, paired vision-language sequence data is expensive to collect. We develop a discriminator that evaluates how well an instruction explains a given path in the VLN task using multi-modal alignment. Our study reveals that only a small fraction of the high-quality augmented data from Fried et al., as scored by our discriminator, is needed to train VLN agents to similar performance. We also show that a VLN agent warm-started with pre-trained components from the discriminator outperforms the benchmark success rate of 35.5 by 10% in relative terms.
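The abstract only hints at the discriminator's form. As a rough illustration of what an instruction-path alignment scorer can look like, the sketch below encodes the instruction tokens and the path's visual features with two recurrent encoders and compares them by cosine similarity; the architecture, dimensions, and scoring function are assumptions made here for clarity, not the exact model described in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentDiscriminator(nn.Module):
    """Hypothetical sketch: scores how well an instruction matches a navigation path."""

    def __init__(self, vocab_size, visual_dim, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.text_enc = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.path_enc = nn.GRU(visual_dim, hidden_dim, batch_first=True)

    def forward(self, instruction_ids, path_features):
        # instruction_ids: (batch, num_tokens); path_features: (batch, num_steps, visual_dim)
        _, text_h = self.text_enc(self.embed(instruction_ids))
        _, path_h = self.path_enc(path_features)
        # Alignment score in [-1, 1]; higher means the instruction better explains the path.
        return F.cosine_similarity(text_h[-1], path_h[-1], dim=-1)

# Usage example with random inputs: two instruction-path pairs.
model = AlignmentDiscriminator(vocab_size=1000, visual_dim=2048)
scores = model(torch.randint(0, 1000, (2, 12)), torch.randn(2, 7, 2048))
print(scores.shape)  # torch.Size([2])
```

Such a scorer could in principle be trained with a binary or contrastive objective on matched versus mismatched instruction-path pairs, and its scores used to rank augmented data by quality, as the abstract describes.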