Learning Web-based Procedures by Reasoning over Explanations and Demonstrations in Context

Abstract

We explore learning web-based tasks (such as sharing a post on social media or forwarding an email) from a human teacher through natural language explanations and a single demonstration. Our approach investigates a new direction for semantic parsing: modeling how an explanation describes a demonstration in context, rather than mapping explanations directly to demonstrations. By leveraging the idea of inverse semantics from program synthesis to reason backwards from observed demonstrations, we ensure that parsed logical forms are consistent with executable actions in the context. We present a dataset of explanations paired with demonstrations for web-based tasks. Our methods achieve better task completion rates than a supervised semantic parsing baseline (a 40% relative improvement on average) and are competitive with exploration-and-demonstration-based methods while requiring substantially less supervision. In learning to align explanations with demonstrations, basic properties of natural language syntax emerge as learned behavior, providing an interesting example of language acquisition from grounded contexts without any linguistic annotation.