Modeling of Semantic Co-Compositionality and Learning of Word Representations

Masashi Tsubaki (1251067)


We present a novel vector space model for semantic co-compositionality. Inspired by Generative Lexicon Theory, our model allows predicates and arguments to modify each other's meaning representations while composing the overall semantics. This directly addresses major challenges facing current vector space models, notably polysemy and the reliance on a single representation per word type. We implement co-compositionality with an operation we call prototype projection, a matrix operation that projects word vectors onto latent subspaces formed by prototypical predicates/arguments, and we show that it effectively adapts word representations to context. We further cast the model as a neural network for learning word vectors, and propose an unsupervised algorithm that jointly trains word representations with co-compositionality. The model achieves the best result to date (ρ = 0.47) on the transitive verb semantic similarity task.
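To make the prototype projection idea concrete, the following is a minimal sketch, not the paper's actual implementation: it assumes a predicate's prototypical arguments are given as vectors, builds a low-rank latent subspace from them via SVD, and adapts a word vector by projecting it onto that subspace. All names (prototype_projection, k, the example words) are illustrative assumptions.

    # Sketch only: adapt a word vector by projecting it onto the latent
    # subspace spanned by prototypical predicate/argument vectors.
    import numpy as np

    def prototype_projection(word_vec, prototype_vecs, k=2):
        # Stack prototypes into a matrix of shape (num_prototypes, dim).
        P = np.stack(prototype_vecs)
        # Top-k right singular vectors give an orthonormal basis of the
        # latent subspace formed by the prototypes.
        _, _, Vt = np.linalg.svd(P, full_matrices=False)
        U = Vt[:k].T                      # (dim, k) orthonormal basis
        # Project the word vector onto that subspace (adapted representation).
        return U @ (U.T @ word_vec)

    # Hypothetical usage: adapt the vector of a polysemous verb toward the
    # sense evoked by the prototypical arguments of its current context.
    rng = np.random.default_rng(0)
    dim = 50
    verb_vec = rng.normal(size=dim)
    prototypes = [rng.normal(size=dim) for _ in range(5)]
    adapted_vec = prototype_projection(verb_vec, prototypes, k=2)

Under these assumptions, the same word type receives different adapted vectors depending on which prototypes define the subspace, which is how a single stored representation can yield context-specific meanings.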