New York Tech Journal
Tech news from the Big Apple

More than words: why can’t machines learn to converse

Posted on March 16th, 2015

#NUI Central

3/16/2015 @ WeWork, 69 Charlton St, NY

Rebecca J. Passonneau @Columbia University

Rebecca talked about some ways to increase the efficiency of human-computer spoken dialog. Her main conclusion was that computer systems respond to human queries faster and more accurately when the system concentrates on understanding the information that

  1. is most accessible to the user, and
  2. has the highest diagnostic value when querying the database (see the sketch below)
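
To make that trade-off concrete, here is a minimal Python sketch (my own illustration, not Passonneau's actual system): it picks the next attribute to ask the caller for by weighting how easily users can supply a field against how much that field narrows the search. The accessibility weights and the toy catalog are invented for illustration.

    from typing import Dict, List

    # Hypothetical estimates of how often callers can supply each field.
    ACCESSIBILITY = {"author": 0.9, "title": 0.7, "isbn": 0.05}

    def diagnosticity(field: str, catalog: List[Dict[str, str]]) -> float:
        """Crude proxy for diagnostic value: with d distinct values,
        filtering on one value keeps roughly 1/d of the records."""
        distinct = len({record[field] for record in catalog})
        return 1.0 - 1.0 / distinct if distinct > 1 else 0.0

    def next_attribute_to_ask(catalog: List[Dict[str, str]]) -> str:
        """Balance what the caller can supply (accessibility) against
        how much the field narrows the search (diagnosticity)."""
        return max(ACCESSIBILITY,
                   key=lambda f: ACCESSIBILITY[f] * diagnosticity(f, catalog))

    books = [
        {"author": "Christie", "title": "And Then There Were None", "isbn": "A1"},
        {"author": "Christie", "title": "The A.B.C. Murders", "isbn": "A2"},
        {"author": "King", "title": "Misery", "isbn": "A3"},
    ]
    print(next_attribute_to_ask(books))  # "title": easy to supply, nearly unique

With these made-up weights the title wins: the ISBN is maximally diagnostic but callers rarely know it, while author names are easy to supply but shared across many books.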

Becky started by describing some of the characteristics that separated spoken dialog from text queries:

[photo of a slide listing characteristics of spoken dialog vs. text queries]

She next described two experiments she conducted to better understand the interaction when patrons call a librarian to request material from the “spoken book” collection of the New York Public Library. She used the results to test how book retrieval could be sped up with a better model of the users’ queries and knowledge of the structure of the data in the database.

An example of information the user has at hand would be the author’s last name, as opposed to the ISBN. A database query would be quickest using the ISBN, but the system would consider the book title more diagnostic than the author’s name, since a prolific author matches many records while a title usually matches one.
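
One way to quantify “diagnostic value” (my illustration; the talk did not give a formula) is the expected number of records left after filtering on a single field. On a fabricated mini-catalog, the title collapses the result set to one record while the author’s name does not:

    from collections import Counter

    catalog = [
        {"author": "Christie", "title": "And Then There Were None"},
        {"author": "Christie", "title": "Murder on the Orient Express"},
        {"author": "Christie", "title": "The A.B.C. Murders"},
        {"author": "King", "title": "Misery"},
    ]

    def expected_matches(field: str) -> float:
        """Expected result-set size when a randomly chosen record is
        looked up by `field`: sum over values of (count / N) * count."""
        counts = Counter(rec[field] for rec in catalog)
        return sum(c * c for c in counts.values()) / len(catalog)

    for field in ("author", "title"):
        print(field, expected_matches(field))   # author 2.5, title 1.0

So querying by author leaves 2.5 candidates on average here, while a title pins down a single record.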

In addition to her conclusion that the best dialog is a compromise between what the requestor knows and what the database finds most diagnostic, she talked about how the computer’s responses improve as the program remembers more of the previous parts of the conversation. So, for instance, a mispronounced author name might not be useful at first, but might become the key piece of information once other facts are known (sketched below).
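
As a sketch of that idea (again my own illustration, not the system from the talk), a dialog manager can store low-confidence speech hypotheses and retry them once confidently recognized facts have shrunk the candidate set, where a fuzzy match faces far fewer competitors:

    import difflib

    class DialogState:
        """Remembers facts across turns and keeps low-confidence hypotheses."""

        def __init__(self, catalog):
            self.candidates = list(catalog)
            self.pending_authors = []   # garbled ASR guesses, saved for later

        def add_fact(self, field, value):
            """A confidently recognized attribute narrows the candidate set."""
            self.candidates = [r for r in self.candidates if r[field] == value]

        def retry_pending(self, cutoff=0.6):
            """Fuzzy-match saved author hypotheses against the narrowed set."""
            authors = sorted({r["author"] for r in self.candidates})
            for heard in self.pending_authors:
                match = difflib.get_close_matches(heard, authors, n=1, cutoff=cutoff)
                if match:
                    return match[0]
            return None

Early in the call a garbled “Kristee” may match nothing reliably, but after a couple of confident facts (format, genre) cut the candidates to a handful, the same stored hypothesis can single out Christie.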

The experimental results show how different strategies in the program each improve results, even when the strategies are mutually contradictory.

[photos of slides showing the experimental results]

posted in: AI, NUI Central, UI, UX