Skipper

Status: Active

Overview

Skipper aims to make interactive spoken dialog technology broadly useful. For this to be possible, the technology cannot be aimed only at the “average” user, the motivated user, or a few stereotyped users: not all users are willing or able to produce scripted utterances or to understand the same messages, so future dialog systems should adjust to individual users dynamically. If the project is successful, its products and findings will inspire and support longer-term research benefiting a large and diverse population of users, both through an improved basic understanding of communication and through guidance for follow-on prototypes. The interdisciplinary methods and results will lay foundations for algorithms and architectures for language adaptation and generation in both stationary and mobile applications, including those accessible to special-needs users.

A second part of the project will collect a large, parameterized set of ‘Walking-Around’ dialogs in which a remotely located partner gives directions to a pedestrian; the variability captured will inform the design of a mobile spoken dialog system. Parameters will include each partner’s degree of prior knowledge about the navigation environment, the common ground the partners share from previous interaction with each other, the degree of visual evidence available about the current state of the task, and other factors selected to support both observational studies and hypothesis testing about variability and adaptive processing in human spoken dialog (a sketch of this parameterization appears below). The parameterized Walking-Around corpus will be made available to other researchers for additional impact on the design of mobile GPS navigation systems.

Collaborative navigation serves as a realistic domain in which at least one partner in each pair is mobile. The corpus will be collected and analyzed in a cascaded fashion, so that it can inform, and provide evaluation criteria for, a spoken dialog prototype that will eventually use (rather than ignore) the natural variability in human speech. The prototype’s platform will combine off-the-shelf components with others developed for this project. The ultimate goal of this exploratory effort is to support the synthesis of entirely new, flexible, and robust spoken dialog systems capable of both adapting and being evaluated on-line (in real time). Key to that effort is determining which potential adaptations are actually functional, that is, beneficial for a particular task or context, and eventually testing those adaptations in human-computer spoken dialog systems.
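To make the parameterization concrete, here is a minimal sketch, in Python, of how the metadata for one Walking-Around dialog might be recorded. The class and field names are hypothetical illustrations of the parameters listed above, not part of the project’s actual tooling or annotation scheme.

    from dataclasses import dataclass, field

    @dataclass
    class WalkingAroundDialog:
        """Hypothetical metadata record for one Walking-Around dialog.

        Fields mirror the parameters described above; the names and
        value sets are assumptions for illustration only.
        """
        dialog_id: str
        # Each partner's degree of prior knowledge of the navigation
        # environment (e.g., "none", "partial", "resident").
        director_prior_knowledge: str
        pedestrian_prior_knowledge: str
        # Common ground: have the two partners interacted before?
        partners_previously_acquainted: bool
        # Degree of visual evidence the remote director has about the
        # current state of the task (e.g., "none", "map_only", "live_video").
        visual_evidence: str
        # Transcript as a list of (speaker, utterance) pairs.
        turns: list[tuple[str, str]] = field(default_factory=list)

    # Example record for a single session.
    example = WalkingAroundDialog(
        dialog_id="wa-001",
        director_prior_knowledge="resident",
        pedestrian_prior_knowledge="none",
        partners_previously_acquainted=False,
        visual_evidence="map_only",
    )
    example.turns.append(("director", "OK, you should see a fountain on your left."))

Recording each session under an explicit schema like this is one way the corpus could support both observational studies (by filtering on parameter values) and controlled hypothesis testing (by contrasting cells of the parameter space).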

Funded By

National Science Foundation Robust Intelligence Program


Team

Kris Liu, Jennifer Sawyer, Zhichao Hu, Natalia Blackwell, Megan Marie Vassey, Carolynn Jimenez

Advisors

Marilyn Walker, Jean Fox Tree (Psychology), Susan Brennan (Psychology/Computer Science, SUNY Stony Brook)

Research Labs

Natural Language and Dialogue Systems
Fox Tree Lab

Posted: Feb 11, 2011
Category: NLDS