Teach Machines to Write from Data

Hayate Iso (磯 颯)


In the field of artificial intelligence (AI), automating such language production, a task known as natural language generation (NLG), has been a crucial research area since the dawn of AI. It raises fundamental questions for developing general-purpose AI, such as how to represent non-linguistic information in a form that machines can process and how to translate that representation into a form that humans can understand. Despite the progress of NLG systems over the past decades, traditional systems still struggle to generate fluent and diverse text: many rely on template-based generation, which requires substantial human effort to build a database of templates.
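To make this template bottleneck concrete, below is a minimal sketch of template-based generation in Python; the event types, slot names, and game record are hypothetical, invented here only for illustration. Every distinct way of phrasing a fact needs its own hand-written template, so coverage grows only as fast as the template database does.

\begin{verbatim}
# A minimal sketch of template-based NLG (illustrative only;
# the templates and record below are hypothetical).
templates = {
    "win": "{team} defeated {opponent} {pts}-{opp_pts}.",
    "lead_scorer": "{player} led the team with {points} points.",
}

record = {  # one hypothetical game record
    "team": "Raptors", "opponent": "Celtics",
    "pts": 108, "opp_pts": 99,
    "player": "Kyle Lowry", "points": 32,
}

def realize(event, record):
    """Fill the slots of the template chosen for this event type."""
    return templates[event].format(**record)

print(realize("win", record))
# -> Raptors defeated Celtics 108-99.
print(realize("lead_scorer", record))
# -> Kyle Lowry led the team with 32 points.
\end{verbatim}

Each new phrasing, event type, or domain requires writing and maintaining more such templates by hand, which is precisely the human effort that motivates learned generation systems.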

Recently, advances in neural network-based language generation have made it possible to produce text fluent enough to be indistinguishable from human-written text. On the other hand, neural generation systems pose new challenges: they can generate sentences that are inconsistent with the input, and the input becomes harder to process as its length increases.

This dissertation addresses problems in data-to-text generation and text editing related to the challenges described above. First, we explore how to build a data-to-text generation system that can generate document-scale text from redundant data records~\cite{iso2019learning}. Second, we introduce a new text editing task, referred to as fact-based text editing, in which the goal is to revise a given document to better describe the facts in a knowledge base (e.g., several triples)~\cite{iso2020fact}. We also develop a model for fact-based text editing that is more accurate, efficient, and interpretable than the standard encoder-decoder model.
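To illustrate the fact-based text editing task, the following sketch shows one hypothetical instance; the entities, relations, and sentences are invented here for exposition and are not drawn from the datasets used in this dissertation. Given a set of triples and a draft, the system must keep supported content and revise what the triples contradict.

\begin{verbatim}
# A hypothetical instance of fact-based text editing (illustrative only).
# Input: (subject, relation, object) triples plus a draft that
# contradicts one of them; output: the revised draft.
triples = [
    ("Alice_Smith", "birthPlace", "Dublin"),
    ("Alice_Smith", "occupation", "Architect"),
]

draft = "Alice Smith, who was born in London, works as an architect."

# The revision keeps the supported fact (occupation) and corrects
# the unsupported one (birthplace) to match the triples.
revised = "Alice Smith, who was born in Dublin, works as an architect."
\end{verbatim}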