Word Embedding Interpretation using Co-Clustering

Authors
Zainab Albujasim¹, Diana Inkpen² and Yuhong Guo²; ¹Carleton University, Canada; ²University of Ottawa, Canada

Abstract
Word embeddings are a foundation of modern natural language processing (NLP). Over the last few decades, word representations have evolved remarkably, yielding impressive performance in downstream NLP applications. Yet the interpretability of word embeddings remains a challenge. In this paper, we propose a simple technique for interpreting word embeddings. Our method applies a post-processing step that improves the quality of the embeddings and reveals their hidden structure. We use co-clustering to uncover this structure and detect sub-matrices that link groups of word meanings to specific embedding dimensions. Empirical evaluation on several benchmarks shows that our method achieves competitive results compared to the original word embeddings.

Keywords
Word Embedding, Interpretation, Quantization, Post-processing.
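
To make the idea concrete, the sketch below shows one way co-clustering can be applied to a word embedding matrix so that subsets of words align with subsets of dimensions. It is a minimal illustration, not the authors' exact pipeline: the pre-trained GloVe model, the vocabulary slice, the non-negative shift, and the cluster count are all illustrative assumptions, and scikit-learn's SpectralCoclustering stands in for whichever co-clustering algorithm the paper uses.

```python
# Minimal sketch: co-cluster a (words x dimensions) embedding matrix and
# inspect which words and which dimensions fall into the same co-cluster.
import numpy as np
import gensim.downloader as api
from sklearn.cluster import SpectralCoclustering

vectors = api.load("glove-wiki-gigaword-100")   # illustrative 100-d GloVe vectors
words = vectors.index_to_key[:5000]             # small vocabulary slice for speed
X = np.stack([vectors[w] for w in words])       # rows = words, columns = dimensions

# Shift to non-negative values: spectral co-clustering treats the matrix as a
# bipartite word-dimension graph with non-negative edge weights (an assumption
# of this sketch, not necessarily the paper's post-processing step).
X = X - X.min()

model = SpectralCoclustering(n_clusters=10, random_state=0)
model.fit(X)

# Each co-cluster pairs a subset of words with a subset of dimensions,
# i.e. a sub-matrix that can be inspected for a shared meaning.
for c in range(10):
    cluster_words = [words[i] for i in np.where(model.row_labels_ == c)[0][:10]]
    cluster_dims = np.where(model.column_labels_ == c)[0].tolist()
    print(f"cluster {c}: dims {cluster_dims} words {cluster_words}")
```

Printing a few words per co-cluster alongside their associated dimensions is a quick way to check whether a dimension group corresponds to a recognizable semantic theme.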
