Bahasa Indonesia pre-trained word vector generation using word2vec for computer and information technology field

Bibliographic Details
Published in: Journal of Physics: Conference Series, Vol. 1898, No. 1, pp. 12007–12016
Main Authors: Putri, Syarifah K.; Amalia, A.; Nababan, E. B.; Sitompul, O. S.
Format: Journal Article
Language: English
Published: Bristol: IOP Publishing, 01.06.2021
Abstract: Word embedding, or distributed representation, is a popular method for representing words. In this method, each word is mapped to a vector of real values with a fixed dimension, which is more effective than the Bag-of-Words (BoW) method. Distributed representations also capture semantic and syntactic information, so that words with similar meanings have similar word vectors. However, distributed representation requires a huge corpus and a long training time. For this reason, many researchers have published pre-trained word vectors that can be reused. The problem is that the available pre-trained word vectors usually cover only a general domain. This study aims to build pre-trained word vectors for a specific domain, namely computer and information technology. The researchers used a dataset of student scientific papers from the Universitas Sumatera Utara (USU) repository and the word2vec model, which has two architectures: Continuous Bag-of-Words (CBOW) and Skip-gram. The result of this research is that the word2vec model with the CBOW architecture is more effective than the Skip-gram architecture.
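As a rough illustration of how such domain-specific vectors can be trained, the following minimal sketch uses the Gensim library (version 4 or later). The corpus file name, the query word, and all hyperparameters are illustrative assumptions, not the authors' actual configuration.

# Minimal sketch of training word2vec vectors on a domain corpus with Gensim.
# Assumptions: gensim >= 4.0 is installed, and "usu_corpus.txt" is a
# hypothetical file with one tokenized Indonesian sentence per line
# (not the authors' actual dataset).
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

sentences = LineSentence("usu_corpus.txt")  # streams the corpus line by line

# CBOW architecture (sg=0), the variant the paper finds more effective.
cbow = Word2Vec(sentences, vector_size=100, window=5, min_count=5, sg=0, workers=4)

# Skip-gram architecture (sg=1), trained the same way for comparison.
skipgram = Word2Vec(sentences, vector_size=100, window=5, min_count=5, sg=1, workers=4)

# Save the trained vectors so they can be reused as pre-trained embeddings.
cbow.wv.save_word2vec_format("cbow_id_it.vec")

# Query the nearest neighbours of a domain term (example word is hypothetical).
print(cbow.wv.most_similar("komputer", topn=5))

The single sg flag is the only difference between the two runs: CBOW predicts a word from its surrounding context, while Skip-gram predicts the context from the word.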
ISSN: 1742-6588
EISSN: 1742-6596
DOI: 10.1088/1742-6596/1898/1/012007