Building Open-domain Conversational Agent by Statistical Learning with Various Large-scale Corpora

Hiroaki Sugiyama (1461201)


Conversation is an important and natural activity through which humans develop social ties and form the solidarity of our society. For dialogue agents that talk with people, conversation is likewise important, not only for counseling or entertainment purposes but also for improving the performance of task-oriented dialogues. Despite this usefulness, the development of conversational agents faces difficulties that arise from the considerable variation of user and system utterances in conversation.

In this presentation, I introduce the development of our conversational agents, which leverage various large-scale corpora specially designed to address these difficulties. My presentation consists of four parts. The first is dialogue control, which decides appropriate agent actions for given user and dialogue states. Our conversational agents aim to build relationships with users, but the agent actions appropriate for achieving this objective are not obvious. Our preference-learning-based inverse reinforcement learning estimates an appropriate reward function for reinforcement learning from human-human dialogues with ratings.
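The preference-learning step can be illustrated with a minimal sketch. Here I assume a Bradley-Terry-style model over dialogue-level feature vectors, where a pair (better, worse) means the first dialogue received a higher human rating; the linear reward weights are fit by gradient ascent on the pairwise log-likelihood. The features and data below are purely illustrative, not the actual features used in our work.

```python
# A minimal sketch of preference-based reward learning (Bradley-Terry model),
# assuming each dialogue is already summarized as a small feature vector.
import math

def learn_reward(preference_pairs, dim, lr=0.1, epochs=200):
    """Estimate linear reward weights w from rated preference pairs.

    Each pair (x_better, x_worse) encodes "x_better was rated higher".
    P(better > worse) = sigmoid(w . (x_better - x_worse)).
    """
    w = [0.0] * dim
    for _ in range(epochs):
        for x_b, x_w in preference_pairs:
            diff = [b - v for b, v in zip(x_b, x_w)]
            s = sum(wi * di for wi, di in zip(w, diff))
            p = 1.0 / (1.0 + math.exp(-s))
            # gradient of the pairwise log-likelihood: (1 - p) * diff
            for i in range(dim):
                w[i] += lr * (1.0 - p) * diff[i]
    return w

# Hypothetical dialogue features: [self-disclosure rate, question rate]
pairs = [([0.8, 0.3], [0.2, 0.3]), ([0.7, 0.5], [0.1, 0.4])]
w = learn_reward(pairs, dim=2)
# The first feature consistently separates preferred dialogues, so w[0] > 0.
```

The learned weights then serve as the reward function when optimizing the dialogue-control policy with standard reinforcement learning.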

The second is a novel utterance generation method for conversational agents, which are required to respond to open-domain user utterances. Our method generates utterances that add new information relevant to the current topic, which makes it easier for users to continue the conversation than with conventional methods.
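The idea of balancing topical relevance against new information can be sketched as a simple candidate-ranking rule; the scoring function and candidate pool below are illustrative assumptions, not the method's actual implementation.

```python
# A minimal sketch: prefer candidate utterances that share topic words with
# the user utterance (relevance) while also introducing new content words
# (novelty). All candidates here are illustrative.
def tokens(text):
    return set(text.lower().split())

def select_utterance(user_utt, candidates):
    user_words = tokens(user_utt)
    def score(cand):
        c = tokens(cand)
        relevance = len(c & user_words)   # stays on the current topic
        novelty = len(c - user_words)     # adds new information
        return relevance * novelty
    return max(candidates, key=score)

candidates = [
    "yes",
    "ramen is popular in fukuoka too",
    "the weather is nice today",
]
print(select_utterance("i ate ramen yesterday", candidates))
# → ramen is popular in fukuoka too
```

A purely relevant but uninformative reply (like "yes") scores zero novelty, so the rule favors responses that give the user something new to react to.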

The third is a question-answering system for questions about the agent's specific personality. Such questions frequently appear in conversation and serve as conversation triggers; they must be answered, or the dialogue easily breaks down. We developed the QA system on the basis of the Person DataBase (PDB), built from large-scale personality question-answer pairs gathered from many questioners and a few answerers. I also present a detailed analysis of frequently asked personality questions.
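At its core, answering from such a database amounts to matching the user's question against the stored personality questions and returning the paired answer. The sketch below uses simple token overlap as the matcher; the database entries are illustrative examples, not actual PDB content.

```python
# A minimal sketch of PDB-style question answering: retrieve the stored
# personality question closest to the user's question and return its answer.
def tokenize(text):
    return set(text.lower().split())

def answer(user_question, pdb):
    """Return the answer whose stored question overlaps most with the input."""
    q_tokens = tokenize(user_question)
    best_q = max(pdb, key=lambda q: len(q_tokens & tokenize(q)))
    return pdb[best_q]

# Illustrative question-answer pairs keyed by the stored question text.
pdb = {
    "what is your favorite food": "I love ramen.",
    "where do you live": "I live in Kyoto.",
}
print(answer("what is your favorite food to eat", pdb))
# → I love ramen.
```

A real system would use a stronger matcher than token overlap, but the retrieval structure stays the same: many paraphrased questions mapping onto a consistent set of persona answers.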

Finally, I introduce our automatic evaluation system for conversational agents, whose goals are less obvious than those of task-oriented systems. Previous manual evaluations not only incur huge costs but are also not replicable, which makes it difficult to compare newly proposed approaches with previous ones. Our proposed method leverages large-scale multi-reference responses with ratings to estimate the agents' evaluation scores.
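The evaluation idea can be sketched as follows: if each dialogue context comes with several reference responses that humans have already rated, a system response can inherit the rating of its most similar reference. The similarity measure and data below are illustrative assumptions, not the method's actual components.

```python
# A minimal sketch of rated multi-reference evaluation: a system response is
# scored by the human rating of its most similar reference response.
def similarity(a, b):
    """Jaccard similarity over word sets (an illustrative stand-in)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def score(response, rated_refs):
    """rated_refs: list of (reference_utterance, human_rating) pairs."""
    best_ref, rating = max(rated_refs, key=lambda r: similarity(response, r[0]))
    return rating

refs = [("i like hiking in the mountains", 5), ("i do not know", 2)]
print(score("i really like hiking", refs))
# → 5
```

Because the references and their ratings are collected once, the same evaluation can be rerun on any new agent, which is what makes the comparison replicable.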

I conclude the presentation with future directions for the development of conversational agents.