In Japanese dependency parsing, Kudo's relative preference-based model outperforms both deterministic and probabilistic CFG-based parsing models. In the relative preference-based model, a log-linear model estimates selectional preferences over all candidate heads, which cannot be considered in deterministic parsing models. We propose a parsing model in which selectional preferences are modeled directly by one-on-one games in a step-ladder tournament. In an evaluation experiment with the Kyoto Text Corpus Version 4.0, the proposed model outperforms previous models, including the relative preference-based model.
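The step-ladder tournament can be sketched as follows. This is a minimal illustration, not the paper's implementation: the comparator `prefers_challenger` is a hypothetical stand-in for the trained classifier that decides each one-on-one game between two candidate heads.

```python
def tournament_head(dependent, candidates, prefers_challenger):
    """Pick a head for `dependent` via a step-ladder tournament.

    `candidates` are candidate heads ordered left to right; the current
    winner plays each later candidate in turn, and the comparator (in the
    real model, an SVM) decides each game.
    """
    winner = candidates[0]
    for challenger in candidates[1:]:
        if prefers_challenger(dependent, winner, challenger):
            winner = challenger
    return winner

# Toy comparator (an assumption for illustration): prefer the candidate
# nearer to the dependent.
head = tournament_head(1, [2, 4, 6],
                       lambda d, w, c: abs(c - d) < abs(w - d))
```

With n candidate heads the tournament needs exactly n-1 games per dependent, so every candidate is compared against the current winner rather than scored in isolation.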
We also investigate the partial parsing accuracy of three SVM-based Japanese dependency parsing models: the shift-reduce model, the cascaded chunking model, and the tournament model. We show coverage-accuracy curves based on the scores produced by the SVMs. Performance evaluation with the Kyoto Text Corpus shows that the partial parsing accuracy of the tournament model is the highest of the three dependency parsing models; the tournament model achieves 99% accuracy at 60% bunsetsu-based coverage. We also explore the use of SVM scores for active sampling. Preliminary experiments show that SVM score-based active sampling can select effective training examples for Japanese dependency parsing with less manual effort.
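The coverage-accuracy trade-off above can be sketched as simple score thresholding: keep only the dependency decisions whose classifier score clears a threshold, then measure what fraction of bunsetsu are covered and how accurate the kept decisions are. The data and function names here are illustrative assumptions, not the paper's evaluation code.

```python
def coverage_accuracy(decisions, threshold):
    """Compute bunsetsu-based coverage and accuracy at a score threshold.

    `decisions` is one (score, is_correct) pair per bunsetsu, where
    `score` stands in for the SVM output for the chosen head.
    """
    kept = [ok for score, ok in decisions if score >= threshold]
    coverage = len(kept) / len(decisions)
    accuracy = sum(kept) / len(kept) if kept else 0.0
    return coverage, accuracy

# Toy data (assumed): higher-scoring decisions tend to be correct.
data = [(2.1, True), (1.5, True), (0.9, False), (0.4, True), (0.2, False)]
cov, acc = coverage_accuracy(data, 1.0)
```

Sweeping the threshold from high to low traces out the coverage-accuracy curve; the same scores can rank unlabeled examples for active sampling, with the lowest-confidence parses selected for manual annotation first.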