An Empirical Study on Reduction of Parameter Redundancy in a Weight Matrix in a Biaffine Parser

Tomoki Matsuno


The biaffine classifier has recently attracted attention as a way to introduce an attention mechanism into the modeling of binary relations. In dependency parsing, for instance, the Deep Biaffine Parser of \cite{deepbiaffine} has achieved state-of-the-art performance as a graph-based dependency parser on the English Penn Treebank and the CoNLL 2017 shared task. In this paper, we show that the $O(n^2)$ parameters of a biaffine classifier, where $n$ is the number of hidden dimensions, are partly redundant: reducing their number actually improves parsing performance. We reduce the number of parameters by constraining the weight matrices to be either symmetric or circulant. In our experiments on the CoNLL 2017 shared task dataset, our models achieve comparable or better accuracy on most treebanks while reducing the number of parameters by more than 16\%.
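
As a minimal sketch of where the $O(n^2)$ parameters arise, assume the standard biaffine arc scorer of \cite{deepbiaffine}; the symbols below are illustrative rather than the exact notation used later in the paper. Given hidden vectors $\mathbf{h}_i, \mathbf{h}_j \in \mathbb{R}^{n}$ for a dependent $i$ and a candidate head $j$, the arc score is
\begin{equation*}
  s_{ij} = \mathbf{h}_j^{\top} U \mathbf{h}_i + \mathbf{h}_j^{\top} \mathbf{u},
  \qquad U \in \mathbb{R}^{n \times n},\ \mathbf{u} \in \mathbb{R}^{n},
\end{equation*}
so the weight matrix $U$ alone contributes $n^2$ parameters. Constraining $U$ to be symmetric ($U = U^{\top}$) leaves $n(n+1)/2$ free parameters, while constraining it to be circulant ($U_{kl} = c_{(l-k) \bmod n}$ for some $\mathbf{c} \in \mathbb{R}^{n}$) leaves only $n$.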