A Large-scale Text Analysis with Word Embeddings and Topic Modeling

Bibliographic Details
Published in: Journal of Cognitive Science, Vol. 20, No. 1, pp. 147-188
Main Authors: Won-Joon Choi, Euhee Kim
Format: Journal Article
Language: English
Published: Institute for Cognitive Science, March 1, 2019

Summary: This research exemplifies how statistical semantic models and word embedding techniques can play a role in understanding the system of human knowledge. Intuitively, we speculate that when a person is given a piece of text, they first classify its semantic content, group it with semantically similar texts observed previously, and then relate its content to that group. We attempt to model this process of knowledge linking by using word embeddings and topic modeling. Specifically, we propose a model that analyzes the semantic/thematic structure of a given corpus, so as to replicate the cognitive process of knowledge ingestion. Our model attempts to combine the strengths of word embeddings and topic modeling by first clustering documents and then performing topic modeling on the clusters. To demonstrate our approach, we apply our method to the Corpus of Contemporary American English (COCA). In COCA, the texts are first divided by text type and then by subcategory, which represents the specific topics of the documents. To show the effectiveness of our analysis, we focus specifically on texts related to the domain of science. First, we cull science-related texts from various genres, then preprocess the texts into a usable, appropriate format. In our preprocessing steps, we refine the texts with a combination of tokenization, parsing, and lemmatization. Through this preprocessing, we discard words of little semantic value and disambiguate syntactically ambiguous words. Afterwards, using only the nouns from the corpus, we train a word2vec model on the documents and apply K-means clustering to them. The clustering results show that each cluster corresponds to a branch of science, similar to how people relate a new piece of text to semantically related documents. With these results, we proceed to perform topic modeling on each of these clusters, which reveals the latent topics of each cluster and their relationships with one another. Through this research, we demonstrate a way to analyze a mass corpus and highlight the semantic/thematic structure of its topics, which can be thought of as a representation of knowledge in human cognition.
KCI Citation Count: 5
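
The pipeline the abstract describes (noun extraction, word2vec training, K-means clustering of documents, then per-cluster topic modeling) can be illustrated with a minimal sketch. The paper does not publish code, so the libraries chosen below (spaCy, gensim 4.x, scikit-learn), all parameter values, the averaged-noun-vector document representation, and the use of LDA as the concrete topic model are illustrative assumptions, not the authors' implementation.

    import numpy as np
    import spacy
    from gensim.models import Word2Vec, LdaModel
    from gensim.corpora import Dictionary
    from sklearn.cluster import KMeans

    # Hypothetical input: science-related texts culled from COCA.
    raw_texts = ["...", "..."]

    # Preprocessing: tokenize, parse, and lemmatize, keeping only
    # content-bearing nouns, as the abstract describes.
    nlp = spacy.load("en_core_web_sm")
    docs = [
        [tok.lemma_.lower() for tok in nlp(text)
         if tok.pos_ == "NOUN" and not tok.is_stop and tok.is_alpha]
        for text in raw_texts
    ]

    # Train word2vec on the noun sequences (gensim 4.x API;
    # parameter values are guesses, not the paper's settings).
    w2v = Word2Vec(sentences=docs, vector_size=100, window=5,
                   min_count=2, workers=4)

    # Represent each document as the mean of its noun vectors
    # (an assumed choice; the paper does not specify this step).
    def doc_vector(tokens, model):
        vecs = [model.wv[t] for t in tokens if t in model.wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

    X = np.vstack([doc_vector(d, w2v) for d in docs])

    # K-means groups the documents; each cluster is expected to
    # correspond to a branch of science. k is an assumed value.
    k = 5
    labels = KMeans(n_clusters=k, random_state=0, n_init=10).fit_predict(X)

    # Topic modeling on each cluster separately (LDA here, as a
    # stand-in for the unspecified topic model in the abstract).
    for c in range(k):
        cluster_docs = [d for d, lab in zip(docs, labels) if lab == c]
        if not cluster_docs:
            continue
        dictionary = Dictionary(cluster_docs)
        corpus = [dictionary.doc2bow(d) for d in cluster_docs]
        lda = LdaModel(corpus=corpus, id2word=dictionary,
                       num_topics=3, random_state=0)
        print(f"Cluster {c}:")
        for topic_id, words in lda.print_topics(num_words=5):
            print(f"  topic {topic_id}: {words}")

Run on a real corpus, the per-cluster topic listings would surface the latent themes within each science branch, mirroring the two-stage grouping-then-linking process the abstract proposes.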
ISSN: 1598-2327; 1976-6939
DOI: 10.17791/jcs.2019.20.1.147