Mathematics of Embeddings: Spillover of Polarities over Financial Texts
in Annual Review in Modern Quantitative Finance: Volume 1
Abstract
In this chapter, we perform a mathematical analysis of the word2vec model. This sheds light on the implicit assumptions about the structure of the language that the decision to use such a model entails. Besides, under Markovian assumptions that we discuss, we provide a clear theoretical understanding of the formation of embeddings and, in particular, of the way they capture what we call frequentist synonyms. These assumptions allow us to conduct an explicit analysis of the loss function commonly used by these NLP techniques, which asymptotically reaches the cross-entropy between the language model and the underlying true generative model. Moreover, we produce synthetic corpora with different levels of structure and show empirically how the word2vec algorithm succeeds, or fails, to learn them. This leads us to empirically assess the capability of such models to capture structure on a corpus of around 42 million financial news articles covering 12 years. To that end, we rely on the Loughran-McDonald Sentiment Word Lists, widely used on financial texts, and we show that embeddings are exposed to mixing terms with opposite polarities, because of the way they can treat antonyms as frequentist synonyms. Besides, we study the non-stationarity of such a financial corpus, which has surprisingly not been documented in the literature. We do so via time series of cosine similarities between groups of polarized words or company names, and show that embeddings indeed capture a mix of English semantics and the joint distribution of words that is difficult to disentangle.
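The cross-group cosine-similarity statistic mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the chapter's actual pipeline: the word lists here stand in for Loughran-McDonald positive and negative terms, and the random vectors stand in for embeddings that would in practice be trained with word2vec on the financial news corpus.

```python
import numpy as np

# Hypothetical toy embeddings: random 50-dimensional vectors playing the
# role of word2vec embeddings trained on a financial corpus.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["gain", "profit", "loss", "decline"]}

positive = ["gain", "profit"]    # stand-ins for Loughran-McDonald positive terms
negative = ["loss", "decline"]   # stand-ins for Loughran-McDonald negative terms

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def group_similarity(group_a, group_b, emb):
    """Average pairwise cosine similarity between two groups of words."""
    return float(np.mean([cosine(emb[a], emb[b])
                          for a in group_a for b in group_b]))

# A high positive-vs-negative value would signal that antonyms are being
# mixed as "frequentist synonyms"; recomputing it on embeddings trained
# over sliding time windows yields a time series of the kind the chapter
# uses to probe non-stationarity.
s = group_similarity(positive, negative, emb)
print(round(s, 3))
```

With random vectors the statistic is near zero; on trained embeddings, comparing it against the within-group similarity is what reveals polarity mixing.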
A preprint is available under this title: Li, Mengda, and Charles-Albert Lehalle. “Do Word Embeddings Really Understand Loughran-McDonald’s Polarities?” arXiv preprint arXiv:2103.09813 (2021).