Syntactic and semantic dependency parsing are fundamental steps in recovering the meaning of text. A variety of techniques have been proposed for improving syntactic and semantic dependency parsers, but considerable room for improvement remains. This thesis describes several methods for improving syntactic and semantic dependency parsing.
To improve syntactic dependency parsing, we design and use supertags. Supertags are lexical templates extracted from dependency-annotated corpora; they encode linguistically rich information that imposes complex constraints in a local context. We present a supertag design framework that allows us to define supertag sets at various levels of granularity. To investigate which granularity and design of supertags improve parsing performance, we build several supertag sets within this framework and use them as features in experiments on multilingual syntactic dependency parsing. The experimental results show that appropriately designed supertags are effective for syntactic dependency parsing. A minimal sketch of one such design appears below.
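As a concrete illustration, the following sketch extracts one simple supertag design from a dependency tree: each token's part-of-speech tag combined with its head direction and markers for whether it takes left or right dependents. The Token fields and this particular template are assumptions chosen for exposition; they represent one illustrative point in the design space the framework covers, not the exact sets used in the experiments.

```python
from dataclasses import dataclass

@dataclass
class Token:
    idx: int      # 1-based position in the sentence
    pos: str      # part-of-speech tag
    head: int     # index of the head token (0 = root)

def supertag(token: Token, sentence: list[Token]) -> str:
    """Extract a coarse supertag: POS, head direction, and markers for
    whether the token takes dependents on its left and/or right."""
    head_dir = ("ROOT" if token.head == 0
                else "L" if token.head < token.idx else "R")
    has_left = any(t.head == token.idx and t.idx < token.idx for t in sentence)
    has_right = any(t.head == token.idx and t.idx > token.idx for t in sentence)
    deps = ("+L" if has_left else "") + ("+R" if has_right else "")
    return f"{token.pos}/{head_dir}{deps}"

# "dogs chased cats": "chased" is the root and heads both nouns.
sent = [Token(1, "NOUN", 2), Token(2, "VERB", 0), Token(3, "NOUN", 2)]
print([supertag(t, sent) for t in sent])
# ['NOUN/R', 'VERB/ROOT+L+R', 'NOUN/L']
```

Coarser or finer sets can be derived by dropping or adding template components (for example, the head's POS tag or dependency labels), which is the granularity dimension the framework makes explicit.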
To improve semantic dependency parsing, we capture and exploit multi-predicate interactions. This approach is based on the linguistic intuition that the predicates in a sentence are semantically related to each other and that capturing these relations is useful for semantic dependency parsing. To capture this information, we propose two distinct methods, one based on bipartite graphs and the other on grid-type recurrent neural networks. In experiments on Japanese predicate-argument structure analysis, we demonstrate that our proposed methods yield considerable improvements.
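To make the grid-type idea concrete, the PyTorch sketch below runs one RNN layer per predicate over the word sequence and feeds each layer's hidden states into the next predicate's layer, so argument decisions for one predicate can condition on the others. The class name GridRNN, the binary predicate-mark feature, and the single shared unidirectional LSTM are illustrative assumptions for this sketch; the architecture used in the thesis may differ (for example, in depth or bidirectionality).

```python
import torch
import torch.nn as nn

class GridRNN(nn.Module):
    """Minimal sketch of a grid-type RNN over (predicate, word) positions.

    One RNN layer is run per predicate; the hidden states of one
    predicate's layer are passed as extra inputs to the next predicate's
    layer, letting information about other predicates' arguments flow
    through the grid. Hyperparameters are illustrative simplifications.
    """

    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        # +1 for a binary feature marking the current predicate's position,
        # +hidden_dim for the states handed over from the previous row.
        self.rnn = nn.LSTM(input_dim + 1 + hidden_dim, hidden_dim,
                           batch_first=True)
        self.hidden_dim = hidden_dim

    def forward(self, words: torch.Tensor, pred_positions: list[int]):
        # words: (batch, seq_len, input_dim); one grid row per predicate.
        batch, seq_len, _ = words.shape
        h_prev = words.new_zeros(batch, seq_len, self.hidden_dim)
        rows = []
        for pos in pred_positions:
            mark = words.new_zeros(batch, seq_len, 1)
            mark[:, pos, 0] = 1.0                       # flag this predicate
            x = torch.cat([words, mark, h_prev], dim=-1)
            h, _ = self.rnn(x)                          # one grid row
            rows.append(h)
            h_prev = h                                  # cross-predicate link
        # rows[p][:, t] represents word t in the context of predicate p
        # and of all previously processed predicates.
        return torch.stack(rows, dim=1)                 # (batch, preds, len, dim)
```

Scoring each word of row p as a candidate argument of predicate p then uses representations that already reflect the other predicates, which is the multi-predicate interaction this paragraph describes.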