Meanwhile, dependency parsing has been gaining popularity in recent years. One reason for this interest is that, despite their simplicity, dependency structures are useful for a wide variety of applications in NLP and related fields.
In this work we explore the synergy between word representations and dependency parsing. We first investigate the effectiveness of unsupervised word representations as simple additional features for dependency parsing, focusing on two kinds of representations: Brown clusters and continuous word embeddings. We find that they improve parsing performance under some circumstances.
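As a rough illustration of how such representations can serve as extra parser features, the sketch below augments a token's feature set with Brown-cluster bit-string prefixes and raw embedding dimensions. The function name, prefix lengths, and feature format are illustrative assumptions, not the exact feature templates used in this work.

```python
# Illustrative sketch (assumed feature templates): augmenting a token's features
# with Brown-cluster prefixes and continuous embedding dimensions.
# `brown_clusters` maps words to bit strings; `embeddings` maps words to vectors.

def representation_features(word, brown_clusters, embeddings, prefix_lengths=(4, 6, 10)):
    feats = []
    # Brown clusters: bit-string prefixes of several lengths act as discrete features,
    # so words sharing a coarse cluster share a feature even if their full paths differ.
    bits = brown_clusters.get(word)
    if bits is not None:
        for k in prefix_lengths:
            feats.append(f"brown_{k}={bits[:k]}")
    # Continuous embeddings: each dimension becomes a real-valued feature.
    vec = embeddings.get(word)
    if vec is not None:
        for i, v in enumerate(vec):
            feats.append((f"emb_{i}", v))
    return feats

# Toy usage with hypothetical lookup tables:
clusters = {"bank": "0110100", "river": "0110111"}
embs = {"bank": [0.12, -0.40, 0.83]}
print(representation_features("bank", clusters, embs))
```

In a feature-based parser, these features would simply be appended to the standard lexical and part-of-speech templates for each token under consideration.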
We then explore novel methods for learning representations using dependency information. Most existing representation learning methods operate over linear sequences of words; by considering dependency relations instead, we can take into account linguistically richer information about words and sentences. Although the proposed models did not outperform existing representations on our evaluation tasks, we find that they learn interesting and qualitatively different word representations.
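One common way to inject dependency information into representation learning is to replace linear-window contexts with contexts read off dependency arcs, so that a word's context is its syntactic neighbors rather than its linear neighbors. The sketch below shows this context-extraction step under that assumption; it is illustrative and not necessarily the model proposed in this work.

```python
# Illustrative sketch (assumed formulation): dependency-based contexts instead of
# linear-window contexts. Each arc (head, label, dependent) yields two (word, context)
# pairs, which could then feed any skip-gram-style representation learner.

def dependency_contexts(arcs):
    """arcs: list of (head, label, dependent) triples for one sentence."""
    pairs = []
    for head, label, dep in arcs:
        pairs.append((head, f"{label}_{dep}"))      # head sees dependent through the label
        pairs.append((dep, f"{label}^-1_{head}"))   # dependent sees head through the inverse label
    return pairs

# Toy parse of "scientist discovers stars":
arcs = [("discovers", "nsubj", "scientist"), ("discovers", "dobj", "stars")]
print(dependency_contexts(arcs))
# [('discovers', 'nsubj_scientist'), ('scientist', 'nsubj^-1_discovers'),
#  ('discovers', 'dobj_stars'), ('stars', 'dobj^-1_discovers')]
```

Because syntactic neighbors can be far apart in the surface string, representations trained on such contexts tend to capture functional similarity rather than purely topical similarity, which is one reason they can look quite different from window-based embeddings.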