Analysis of sentiment in tweets addressed to a single domain-specific Twitter account: Comparison of model performance and explainability of predictions

Highlights:
• Comparison of selected popular and recent natural language processing methods.
• Use of explainable Artificial Intelligence tools in Twitter sentiment analysis.
• Analysis of sentiment in tweets addressed to a single Twitter account.
• Performance of selected transformer models on the SemEval-2017 data set.

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 186, Article 115771
Main Authors: Fiok, Krzysztof; Karwowski, Waldemar; Gutierrez, Edgar; Wilamowski, Maciej
Format: Journal Article
Language: English
Published: New York: Elsevier Ltd / Elsevier BV, 30 December 2021

Abstract
Highlights: • Comparison of selected popular and recent natural language processing methods. • Use of explainable Artificial Intelligence tools in Twitter sentiment analysis. • Analysis of sentiment in tweets addressed to a single Twitter account. • Performance of selected transformer models on the SemEval-2017 data set.
Many institutions and companies find it valuable to know how people feel about their ventures; hence, scientific research in sentiment analysis has been intensely developed over time. Automated sentiment analysis can be considered a machine learning (ML) prediction task, with classes representing human affective states. Due to the rapid development of ML and deep learning (DL), improvements in automatic sentiment analysis performance are achieved almost every year. Since 2013, Semantic Evaluation (SemEval) has hosted a worldwide community-acknowledged competition that allows for comparisons of recent innovations. The sentiment analysis tasks focus on assessing sentiment in Twitter posts authored by various publishers and addressing multiple subjects. Our study aimed to compare selected popular and recent natural language processing methods using a new data set of Twitter posts sent to a single Twitter account. For improved comparability of our experiments with SemEval, we adopted their metrics and also deployed our models on data published for SemEval-2017. In addition, we investigated whether an unsupervised ML technique applied for the detection of topics in tweets can be leveraged to improve the predictive performance of a selected transformer model. We also demonstrated how a recent explainable artificial intelligence technique can be used in Twitter sentiment analysis to gain a deeper understanding of the models' predictions. Our results show that the most recent DL language modeling approach provides the highest quality; however, this quality comes at the cost of reduced model transparency.
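The abstract states that the authors adopted the SemEval metrics for comparability. For orientation, SemEval-2017 Task 4 (subtask A) ranked systems primarily by macro-averaged recall (AvgRec), the unweighted mean of per-class recall. A minimal sketch follows; the labels below are made-up illustration data, not taken from the paper:

```python
from collections import defaultdict

def macro_recall(y_true, y_pred):
    """Unweighted mean of per-class recall ('AvgRec' in SemEval-2017 Task 4)."""
    correct, total = defaultdict(int), defaultdict(int)
    for gold, pred in zip(y_true, y_pred):
        total[gold] += 1          # tweets that truly belong to this class
        if gold == pred:
            correct[gold] += 1    # of those, the ones the model recovered
    return sum(correct[c] / total[c] for c in total) / len(total)

# Illustrative 3-class tweet sentiment labels: -1 = negative, 0 = neutral, 1 = positive.
gold = [1, 0, -1, 1, 0, -1, 1, 0]
pred = [1, 0, -1, 0, 0, -1, 1, 1]
print(round(macro_recall(gold, pred), 4))  # per-class recalls 2/3, 2/3, 1 -> 0.7778
```

Unlike plain accuracy, this metric weights every class equally, which matters for Twitter data where one class (typically neutral) tends to dominate.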
ArticleNumber 115771
Author_xml – sequence: 1
  givenname: Krzysztof
  surname: Fiok
  fullname: Fiok, Krzysztof
  email: fiok@ucf.edu
  organization: Department of Industrial Engineering and Management Systems, University of Central Florida, Orlando, FL 32816, USA
– sequence: 2
  givenname: Waldemar
  surname: Karwowski
  fullname: Karwowski, Waldemar
  email: wkar@ucf.edu
  organization: Department of Industrial Engineering and Management Systems, University of Central Florida, Orlando, FL 32816, USA
– sequence: 3
  givenname: Edgar
  surname: Gutierrez
  fullname: Gutierrez, Edgar
  email: edfranco@mit.edu
  organization: Department of Industrial Engineering and Management Systems, University of Central Florida, Orlando, FL 32816, USA
– sequence: 4
  givenname: Maciej
  surname: Wilamowski
  fullname: Wilamowski, Maciej
  email: mwilamowski@wne.uw.edu.pl
  organization: University of Warsaw, Faculty of Economic Sciences, Warsaw, Poland
Cites_doi 10.18653/v1/S17-2094
10.1007/s00521-020-05102-3
10.3758/s13428-016-0743-z
10.18653/v1/2020.acl-main.703
10.3390/sym12061054
10.1109/ACCESS.2018.2870052
10.18653/v1/W17-5221
10.1038/s42256-019-0138-9
10.1145/2938640
10.1371/journal.pone.0239441
10.1016/j.eswa.2013.05.057
10.1016/j.ins.2016.06.040
10.18653/v1/2020.acl-main.747
10.18653/v1/P19-3007
10.1016/j.inffus.2019.12.012
10.18653/v1/S17-2088
10.1016/j.procs.2016.06.095
10.18653/v1/N16-1082
10.1002/cpe.5107
10.3115/1220575.1220643
10.18653/v1/D18-2029
10.1162/tacl_a_00051
10.1016/j.cogsys.2018.10.001
ContentType Journal Article
Copyright 2021 Elsevier Ltd
Copyright Elsevier BV Dec 30, 2021
DOI 10.1016/j.eswa.2021.115771
DatabaseName CrossRef
Computer and Information Systems Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Discipline Computer Science
EISSN 1873-6793
ExternalDocumentID 10_1016_j_eswa_2021_115771
S0957417421011428
ISSN 0957-4174
IsPeerReviewed true
IsScholarly true
Keywords Deep learning
Sentiment analysis
Twitter
Natural language processing
Explainability
Machine learning
PublicationDate 2021-12-30
PublicationPlace New York
PublicationTitle Expert systems with applications
PublicationYear 2021
Publisher Elsevier Ltd
Elsevier BV
References Adadi, Berrada (b0005) 2018; 6
Wang, Can, Kazemzadeh, Bar, Narayanan (b0315) 2012
Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., … Zettlemoyer, L. (2019). Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
SemEval-2017 Task 4. (2020). from
Kouloumpis, Wilson, Moore (b0135) 2011
Sklearn.metrics.mean_absolute_error. (2020). from
Ribeiro, Singh, Guestrin (b0240) 2016
Pennebaker, Francis, Booth (b0215) 2001; 71
Rosenthal, S., Farra, N., & Nakov, P. (2019). SemEval-2017 task 4: Sentiment analysis in Twitter. arXiv preprint arXiv:1912.00741.
Si, Mukherjee, Liu, Li, Li, Deng (b0275) 2013
Miller (b0190) 1998
Blei, Ng, Jordan (b0045) 2003; 3
Scipy.stats.wasserstein_distance. (2020) from
Krippendorff, K. (2011). Computing Krippendorff's alpha-reliability.
Bojanowski, Grave, Joulin, Mikolov (b0050) 2017; 5
Ghiassi, Skinner, Zimbra (b0100) 2013; 40
Pagolu, Reddy, Panda, Majhi (b0205) 2016
Saif, He, Alani (b0250) 2012
Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., … Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
Giachanou, Crestani (b0105) 2016; 49
Kumar, Jaiswal (b0145) 2020; 32
Karpathy (b0130) 2015; 21
Schwarz, Theóphilo, Rocha (b0255) 2020
Transformers. (2020). from
Li, J., Chen, X., Hovy, E., & Jurafsky, D. (2015). Visualizing and understanding neural models in nlp. arXiv preprint arXiv:1506.01066.
.
Pak, Paroubek (b0210) 2010; 10
Lundberg, Lee (b0175) 2017
Mishra, Mishra (b0195) 2019
Arras, L., Montavon, G., Müller, K. R., & Samek, W. (2017). Explaining recurrent neural network predictions in sentiment analysis. arXiv preprint arXiv:1706.07206.
Munson, A., Cardie, C., & Caruana, R. (2005, October). Optimizing to arbitrary NLP metrics using ensemble selection. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing (pp. 539-546). Association for Computational Linguistics.
Cer, D., Yang, Y., Kong, S. Y., Hua, N., Limtiaco, N., John, R. S., … Sung, Y. H. (2018). Universal sentence encoder. arXiv preprint arXiv:1803.11175.
Fiok, K., (2020). Analysis of Twitter sentiment with various Language Models. Github
Lundberg, Erion, Chen, DeGrave, Prutkin, Nair, Lee (b0180) 2020; 2
Sousa, Sakiyama, de Souza Rodrigues, Moraes, Fernandes, Matsubara (b0295) 2019
Zhao, S., Fard, M. M., Narasimhan, H., & Gupta, M. (2018). Metric-optimized example weights. arXiv preprint arXiv:1805.10582.
Fiok, Karwowski, Gutierrez, Ahram (b0085) 2020; 12
Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
Gambino, G., & Pirrone, R. (2019). Investigating Embeddings for Sentiment Analysis in Italian.
Go, Huang, Bhayani (b0110) 2009; 17
Ibrahim (b0125) 2019; 1073
González, J. Á., Hurtado, L. F., & Pla, F. (2019). ELiRF-UPV at TASS 2019: Transformer Encoders for Twitter Sentiment Analysis in Spanish.
Gensim Python Package.
XGboost Python Package Introduction. (2020). from
(Accessed May 15, 2020).
Xiang, Zhou (b0320) 2014
Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Polosukhin (b0305) 2017
Vig, J. (2019). A multiscale visualization of attention in the transformer model. arXiv preprint arXiv:1906.05714.
Song, Y., Wang, J., Liang, Z., Liu, Z., & Jiang, T. (2020). Utilizing BERT intermediate layers for aspect based sentiment analysis and natural language inference. arXiv preprint arXiv:2002.04815.
Beel, Langer, Genzmehr, Gipp, Breitinger, Nürnberger (b0035) 2013
Xue, Chen, Chen, Zheng, Li, Zhu (b0325) 2020; 15
Pennington, Socher, Manning (b0220) 2014
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Potamias, R. A., Siolas, G., & Stafylopatis, A. G. (2019). A Transformer-based approach to Irony and Sarcasm detection. arXiv preprint arXiv:1911.10401.
Severyn, Moschitti (b0270) 2015
Crossley, Kyle, McNamara (b0070) 2017; 49
Yang, Dai, Yang, Carbonell, Salakhutdinov, Le (b0330) 2019
Agarwal, Xie, Vovsha, Rambow, Passonneau (b0010) 2011
Hutto, Gilbert (b0120) 2014
Singh, Kumari (b0280) 2016; 89
Language recognition chart. (2019, August). from
Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Akbik, Bergmann, Blythe, Rasul, Schweter, Flair (b0015) 2019
(Accessed June 15, 2020).
Bertviz. (2020). Master branch commit 590c957799c3c09a4e1306b43d9ec10785e53745 from
Alharbi, de Doncker (b0020) 2019; 54
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., … Chatila, R. (2019). Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. arXiv, arXiv-1910.
Accessed November 3, 2020).
Ren, Wang, Ji (b0235) 2016; 369
Cliche, M. (2017). Bb_twtr at semeval-2017 task 4: Twitter sentiment analysis with cnns and lstms. arXiv preprint arXiv:1704.06125.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., … Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
Pennington (10.1016/j.eswa.2021.115771_b0220) 2014
Wang (10.1016/j.eswa.2021.115771_b0315) 2012
Bojanowski (10.1016/j.eswa.2021.115771_b0050) 2017; 5
Sousa (10.1016/j.eswa.2021.115771_b0295) 2019
Akbik (10.1016/j.eswa.2021.115771_b0015) 2019
Alharbi (10.1016/j.eswa.2021.115771_b0020) 2019; 54
Vaswani (10.1016/j.eswa.2021.115771_b0305) 2017
Miller (10.1016/j.eswa.2021.115771_b0190) 1998
10.1016/j.eswa.2021.115771_b0150
10.1016/j.eswa.2021.115771_b0030
10.1016/j.eswa.2021.115771_b0075
Lundberg (10.1016/j.eswa.2021.115771_b0180) 2020; 2
10.1016/j.eswa.2021.115771_b0230
10.1016/j.eswa.2021.115771_b0155
Adadi (10.1016/j.eswa.2021.115771_b0005) 2018; 6
Hutto (10.1016/j.eswa.2021.115771_b0120) 2014
10.1016/j.eswa.2021.115771_b0310
10.1016/j.eswa.2021.115771_b0115
Go (10.1016/j.eswa.2021.115771_b0110) 2009; 17
Kouloumpis (10.1016/j.eswa.2021.115771_b0135) 2011
Pak (10.1016/j.eswa.2021.115771_b0210) 2010; 10
Ribeiro (10.1016/j.eswa.2021.115771_b0240) 2016
Xue (10.1016/j.eswa.2021.115771_b0325) 2020; 15
10.1016/j.eswa.2021.115771_b0080
10.1016/j.eswa.2021.115771_b0160
10.1016/j.eswa.2021.115771_b0040
Ren (10.1016/j.eswa.2021.115771_b0235) 2016; 369
10.1016/j.eswa.2021.115771_b0285
10.1016/j.eswa.2021.115771_b0165
Mishra (10.1016/j.eswa.2021.115771_b0195) 2019
10.1016/j.eswa.2021.115771_b0200
10.1016/j.eswa.2021.115771_b0245
Beel (10.1016/j.eswa.2021.115771_b0035) 2013
Fiok (10.1016/j.eswa.2021.115771_b0085) 2020; 12
Crossley (10.1016/j.eswa.2021.115771_b0070) 2017; 49
10.1016/j.eswa.2021.115771_b0090
Giachanou (10.1016/j.eswa.2021.115771_b0105) 2016; 49
10.1016/j.eswa.2021.115771_b0290
10.1016/j.eswa.2021.115771_b0170
10.1016/j.eswa.2021.115771_b0095
10.1016/j.eswa.2021.115771_b0055
Lundberg (10.1016/j.eswa.2021.115771_b0175) 2017
Saif (10.1016/j.eswa.2021.115771_b0250) 2012
10.1016/j.eswa.2021.115771_b0335
Yang (10.1016/j.eswa.2021.115771_b0330) 2019
Xiang (10.1016/j.eswa.2021.115771_b0320) 2014
Ghiassi (10.1016/j.eswa.2021.115771_b0100) 2013; 40
Singh (10.1016/j.eswa.2021.115771_b0280) 2016; 89
Pennebaker (10.1016/j.eswa.2021.115771_b0215) 2001; 71
Ibrahim (10.1016/j.eswa.2021.115771_b0125) 2019; 1073
Agarwal (10.1016/j.eswa.2021.115771_b0010) 2011
Kumar (10.1016/j.eswa.2021.115771_b0145) 2020; 32
Schwarz (10.1016/j.eswa.2021.115771_b0255) 2020
Blei (10.1016/j.eswa.2021.115771_b0045) 2003; 3
10.1016/j.eswa.2021.115771_b0060
Si (10.1016/j.eswa.2021.115771_b0275) 2013
10.1016/j.eswa.2021.115771_b0260
Severyn (10.1016/j.eswa.2021.115771_b0270) 2015
10.1016/j.eswa.2021.115771_b0140
10.1016/j.eswa.2021.115771_b0185
10.1016/j.eswa.2021.115771_b0065
Karpathy (10.1016/j.eswa.2021.115771_b0130) 2015; 21
10.1016/j.eswa.2021.115771_b0265
10.1016/j.eswa.2021.115771_b0025
10.1016/j.eswa.2021.115771_b0300
Pagolu (10.1016/j.eswa.2021.115771_b0205) 2016
10.1016/j.eswa.2021.115771_b0225
References_xml – start-page: 54
  year: 2019
  end-page: 59
  ident: b0015
  article-title: An easy-to-use framework for state-of-the-art nlp
  publication-title: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)
– start-page: 1345
  year: 2016
  end-page: 1350
  ident: b0205
  article-title: Sentiment analysis of Twitter data for predicting stock market movements
  publication-title: 2016 International Conference on Signal Processing, Communication, Power and Embedded System (SCOPES)
– reference: Krippendorff, K. (2011). Computing Krippendorff's alpha-reliability.
– reference: Cer, D., Yang, Y., Kong, S. Y., Hua, N., Limtiaco, N., John, R. S., … Sung, Y. H. (2018). Universal sentence encoder. arXiv preprint arXiv:1803.11175.
– volume: 21
  start-page: 23
  year: 2015
  ident: b0130
  article-title: The unreasonable effectiveness of recurrent neural networks
  publication-title: Andrej Karpathy Blog
– reference: Potamias, R. A., Siolas, G., & Stafylopatis, A. G. (2019). A Transformer-based approach to Irony and Sarcasm detection. arXiv preprint arXiv:1911.10401.
– reference: > <Accessed November 3, 2020).
– volume: 5
  start-page: 135
  year: 2017
  end-page: 146
  ident: b0050
  article-title: Enriching word vectors with subword information
  publication-title: Transactions of the Association for Computational Linguistics
– volume: 40
  start-page: 6266
  year: 2013
  end-page: 6282
  ident: b0100
  article-title: Twitter brand sentiment analysis: A hybrid system using n-gram analysis and dynamic artificial neural network
  publication-title: Expert Systems with Applications
– reference: Language recognition chart. (2019, August). from <
– reference: Sklearn.metrics.mean_absolute_error. (2020). from <
– reference: Song, Y., Wang, J., Liang, Z., Liu, Z., & Jiang, T. (2020). Utilizing BERT intermediate layers for aspect based sentiment analysis and natural language inference. arXiv preprint arXiv:2002.04815.
– start-page: 4765
  year: 2017
  end-page: 4774
  ident: b0175
  article-title: A unified approach to interpreting model predictions
  publication-title: Advances in neural information processing systems
– volume: 12
  start-page: 1054
  year: 2020
  ident: b0085
  article-title: Predicting the volume of response to tweets posted by a single Twitter account
  publication-title: Symmetry
– reference: Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
– volume: 2
  start-page: 56
  year: 2020
  end-page: 67
  ident: b0180
  article-title: From local explanations to global understanding with explainable AI for trees
  publication-title: Nature Machine Intelligence
– volume: 54
  start-page: 50
  year: 2019
  end-page: 61
  ident: b0020
  article-title: Twitter sentiment analysis with a deep neural network: An enhanced approach using user behavioral information
  publication-title: Cognitive Systems Research
– start-page: 2777
  year: 2020
  end-page: 2781
  ident: b0255
  article-title: EMET: Embeddings from multilingual-encoder transformer for fake news detection
  publication-title: ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
– volume: 32
  year: 2020
  ident: b0145
  article-title: Systematic literature review of sentiment analysis on Twitter using soft computing techniques
  publication-title: Concurrency and Computation: Practice and Experience
– reference: Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., … Zettlemoyer, L. (2019). Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
– start-page: 1597
  year: 2019
  end-page: 1601
  ident: b0295
  article-title: BERT for stock market sentiment analysis
  publication-title: 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)
– volume: 17
  start-page: 252
  year: 2009
  ident: b0110
  article-title: Twitter sentiment analysis
  publication-title: Entropy
– reference: Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
– reference: Transformers. (2020). from <
– year: 2019
  ident: b0195
  article-title: 3Idiots at HASOC 2019: Fine-tuning Transformer Neural Networks for Hate Speech Identification in Indo-European Languages.
  publication-title: Proceedings of the 11th annual meeting of the Forum for Information Retrieval Evaluation (December 2019)
– reference: SemEval-2017 Task 4. (2020). from <
– reference: Cliche, M. (2017). Bb_twtr at semeval-2017 task 4: Twitter sentiment analysis with cnns and lstms. arXiv preprint arXiv:1704.06125.
– reference: González, J. Á., Hurtado, L. F., & Pla, F. (2019). ELiRF-UPV at TASS 2019: Transformer Encoders for Twitter Sentiment Analysis in Spanish.
– reference: Rosenthal, S., Farra, N., & Nakov, P. (2019). SemEval-2017 task 4: Sentiment analysis in Twitter. arXiv preprint arXiv:1912.00741.
– start-page: 959
  year: 2015
  end-page: 962
  ident: b0270
  article-title: August). Twitter sentiment analysis with deep convolutional neural networks
  publication-title: Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval
– reference: Gensim Python Package. <
– start-page: 5998
  year: 2017
  end-page: 6008
  ident: b0305
  article-title: Attention is all you need
  publication-title: Advances in neural information processing systems
– reference: Li, J., Chen, X., Hovy, E., & Jurafsky, D. (2015). Visualizing and understanding neural models in nlp. arXiv preprint arXiv:1506.01066.
– start-page: 1135
  year: 2016
  end-page: 1144
  ident: b0240
  article-title: “Why should i trust you?” Explaining the predictions of any classifier
  publication-title: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining
– volume: 3
  start-page: 993
  year: 2003
  end-page: 1022
  ident: b0045
  article-title: Latent dirichlet allocation
  publication-title: Journal of Machine Learning Research
– year: 2014
  ident: b0120
  article-title: Vader: A parsimonious rule-based model for sentiment analysis of social media text
  publication-title: Eighth international AAAI conference on weblogs and social media
– reference: > (Accessed June 15, 2020).
– reference: Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., … Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
– volume: 10
  start-page: 1320
  year: 2010
  end-page: 1326
  ident: b0210
  article-title: Twitter as a corpus for sentiment analysis and opinion mining
  publication-title: LREc
– start-page: 115
  year: 2012
  end-page: 120
  ident: b0315
  article-title: A system for real-time twitter sentiment analysis of 2012 us presidential election cycle
  publication-title: Proceedings of the ACL 2012 system demonstrations
– year: 1998
  ident: b0190
  article-title: WordNet: An electronic lexical database
– reference: Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., … Chatila, R. (2019). Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. arXiv, arXiv-1910.
– reference: > (Accessed May 15, 2020).
– start-page: 434
  year: 2014
  end-page: 439
  ident: b0320
  article-title: June). Improving twitter sentiment analysis with topic-based mixture modeling and semi-supervised training
  publication-title: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
– volume: 49
  start-page: 1
  year: 2016
  end-page: 41
  ident: b0105
  article-title: Like it or not: A survey of twitter sentiment analysis methods
  publication-title: ACM Computing Surveys (CSUR)
– reference: Vig, J. (2019). A multiscale visualization of attention in the transformer model. arXiv preprint arXiv:1906.05714.
– reference: Munson, A., Cardie, C., & Caruana, R. (2005, October). Optimizing to arbitrary NLP metrics using ensemble selection. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing (pp. 539-546). Association for Computational Linguistics.
– year: 2011
  ident: b0135
  article-title: Twitter sentiment analysis: The good the bad and the omg!
  publication-title: Fifth International AAAI conference on weblogs and social media, Barcelona, Spain
– reference: XGboost Python Package Introduction. (2020). from <
– volume: 49
  start-page: 803
  year: 2017
  end-page: 821
  ident: b0070
  article-title: Sentiment analysis and social cognition engine (SEANCE): An automatic tool for sentiment, social cognition, and social order analysis
  publication-title: Behavior Research Methods
– reference: Fiok, K., (2020). Analysis of Twitter sentiment with various Language Models. Github <
– reference: Scipy.stats.wasserstein_distance. (2020) from <
– volume: 89
  start-page: 549
  year: 2016
  end-page: 554
  ident: b0280
  article-title: Role of text pre-processing in twitter sentiment analysis
  publication-title: Procedia Computer Science
– reference: Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., … Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
– reference: >.
– start-page: 30
  year: 2011
  end-page: 38
  ident: b0010
  article-title: Sentiment analysis of twitter data
  publication-title: Proceedings of the Workshop on Language in Social Media (LSM 2011)
– start-page: 24
  year: 2013
  end-page: 29
  ident: b0275
  article-title: August). Exploiting topic-based twitter sentiment for stock prediction
  publication-title: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
– volume: 1073
  start-page: 428
  year: 2019
  ident: b0125
  article-title: TwitterBERT: Framework for Twitter Sentiment Analysis Based on Pre-trained Language Model Representations
  publication-title: Emerging Trends in Intelligent Computing and Informatics: Data Science, Intelligent Information Systems and Smart Computing
– start-page: 1532
  year: 2014
  end-page: 1543
  ident: b0220
  article-title: Glove: Global vectors for word representation
  publication-title: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)
– volume: 6
  start-page: 52138
  year: 2018
  end-page: 52160
  ident: b0005
  article-title: Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI)
  publication-title: IEEE Accessed
– start-page: 15
  year: 2013
  end-page: 22
  ident: b0035
  article-title: October). Research paper recommender system evaluation: A quantitative literature survey
  publication-title: Proceedings of the International Workshop on Reproducibility and Replication in Recommender Systems Evaluation
– volume: 369
  start-page: 188
  year: 2016
  end-page: 198
  ident: b0235
  article-title: A topic-enhanced word embedding for Twitter sentiment classification
  publication-title: Information Sciences
– start-page: 508
  year: 2012
  end-page: 524
  ident: b0250
  article-title: Semantic sentiment analysis of Twitter
  publication-title: International semantic web conference
– volume: 71
  year: 2001
  ident: b0215
  article-title: Linguistic inquiry and word count: LIWC 2001
  publication-title: Mahwah, NJ: Lawrence Erlbaum Associates
– reference: Gambino, G., & Pirrone, R. (2019). Investigating Embeddings for Sentiment Analysis in Italian.
– reference: Arras, L., Montavon, G., Müller, K. R., & Samek, W. (2017). Explaining recurrent neural network predictions in sentiment analysis. arXiv preprint arXiv:1706.07206.
– reference: Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
– volume: 15
  year: 2020
  ident: b0325
  article-title: Public discourse and sentiment during the COVID 19 pandemic: Using Latent Dirichlet Allocation for topic modeling on Twitter
  publication-title: PLoS ONE
– reference: Zhao, S., Fard, M. M., Narasimhan, H., & Gupta, M. (2018). Metric-optimized example weights. arXiv preprint arXiv:1805.10582.
– reference: Bertviz. (2020). Master branch commit 590c957799c3c09a4e1306b43d9ec10785e53745 from <
– start-page: 5754
  year: 2019
  end-page: 5764
  ident: b0330
  article-title: XLNet: Generalized autoregressive pretraining for language understanding
  publication-title: Advances in neural information processing systems
– year: 2019
  ident: 10.1016/j.eswa.2021.115771_b0195
  article-title: 3Idiots at HASOC 2019: Fine-tuning Transformer Neural Networks for Hate Speech Identification in Indo-European Languages
– start-page: 30
  year: 2011
  ident: 10.1016/j.eswa.2021.115771_b0010
  article-title: Sentiment analysis of Twitter data
– ident: 10.1016/j.eswa.2021.115771_b0060
  doi: 10.18653/v1/S17-2094
– ident: 10.1016/j.eswa.2021.115771_b0150
– ident: 10.1016/j.eswa.2021.115771_b0225
  doi: 10.1007/s00521-020-05102-3
– year: 2011
  ident: 10.1016/j.eswa.2021.115771_b0135
  article-title: Twitter sentiment analysis: The good the bad and the OMG!
– volume: 49
  start-page: 803
  issue: 3
  year: 2017
  ident: 10.1016/j.eswa.2021.115771_b0070
  article-title: Sentiment analysis and social cognition engine (SEANCE): An automatic tool for sentiment, social cognition, and social order analysis
  publication-title: Behavior Research Methods
  doi: 10.3758/s13428-016-0743-z
– ident: 10.1016/j.eswa.2021.115771_b0160
  doi: 10.18653/v1/2020.acl-main.703
– start-page: 434
  year: 2014
  ident: 10.1016/j.eswa.2021.115771_b0320
  article-title: Improving Twitter sentiment analysis with topic-based mixture modeling and semi-supervised training
– start-page: 4765
  year: 2017
  ident: 10.1016/j.eswa.2021.115771_b0175
  article-title: A unified approach to interpreting model predictions
– start-page: 5754
  year: 2019
  ident: 10.1016/j.eswa.2021.115771_b0330
  article-title: XLNet: Generalized autoregressive pretraining for language understanding
– volume: 12
  start-page: 1054
  issue: 6
  year: 2020
  ident: 10.1016/j.eswa.2021.115771_b0085
  article-title: Predicting the volume of response to tweets posted by a single Twitter account
  publication-title: Symmetry
  doi: 10.3390/sym12061054
– volume: 6
  start-page: 52138
  year: 2018
  ident: 10.1016/j.eswa.2021.115771_b0005
  article-title: Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI)
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2018.2870052
– ident: 10.1016/j.eswa.2021.115771_b0140
– ident: 10.1016/j.eswa.2021.115771_b0025
  doi: 10.18653/v1/W17-5221
– start-page: 15
  year: 2013
  ident: 10.1016/j.eswa.2021.115771_b0035
  article-title: Research paper recommender system evaluation: A quantitative literature survey
– volume: 2
  start-page: 56
  issue: 1
  year: 2020
  ident: 10.1016/j.eswa.2021.115771_b0180
  article-title: From local explanations to global understanding with explainable AI for trees
  publication-title: Nature Machine Intelligence
  doi: 10.1038/s42256-019-0138-9
– ident: 10.1016/j.eswa.2021.115771_b0230
– volume: 49
  start-page: 1
  issue: 2
  year: 2016
  ident: 10.1016/j.eswa.2021.115771_b0105
  article-title: Like it or not: A survey of Twitter sentiment analysis methods
  publication-title: ACM Computing Surveys (CSUR)
  doi: 10.1145/2938640
– start-page: 1597
  year: 2019
  ident: 10.1016/j.eswa.2021.115771_b0295
  article-title: BERT for stock market sentiment analysis
– volume: 15
  issue: 9
  year: 2020
  ident: 10.1016/j.eswa.2021.115771_b0325
  article-title: Public discourse and sentiment during the COVID 19 pandemic: Using Latent Dirichlet Allocation for topic modeling on Twitter
  publication-title: PLoS ONE
  doi: 10.1371/journal.pone.0239441
– start-page: 959
  year: 2015
  ident: 10.1016/j.eswa.2021.115771_b0270
  article-title: Twitter sentiment analysis with deep convolutional neural networks
– ident: 10.1016/j.eswa.2021.115771_b0075
– start-page: 54
  year: 2019
  ident: 10.1016/j.eswa.2021.115771_b0015
  article-title: An easy-to-use framework for state-of-the-art NLP
– volume: 17
  start-page: 252
  year: 2009
  ident: 10.1016/j.eswa.2021.115771_b0110
  article-title: Twitter sentiment analysis
  publication-title: Entropy
– volume: 40
  start-page: 6266
  issue: 16
  year: 2013
  ident: 10.1016/j.eswa.2021.115771_b0100
  article-title: Twitter brand sentiment analysis: A hybrid system using n-gram analysis and dynamic artificial neural network
  publication-title: Expert Systems with Applications
  doi: 10.1016/j.eswa.2013.05.057
– volume: 369
  start-page: 188
  year: 2016
  ident: 10.1016/j.eswa.2021.115771_b0235
  article-title: A topic-enhanced word embedding for Twitter sentiment classification
  publication-title: Information Sciences
  doi: 10.1016/j.ins.2016.06.040
– ident: 10.1016/j.eswa.2021.115771_b0170
– ident: 10.1016/j.eswa.2021.115771_b0065
  doi: 10.18653/v1/2020.acl-main.747
– ident: 10.1016/j.eswa.2021.115771_b0115
– volume: 10
  start-page: 1320
  issue: 2010
  year: 2010
  ident: 10.1016/j.eswa.2021.115771_b0210
  article-title: Twitter as a corpus for sentiment analysis and opinion mining
  publication-title: LREc
– ident: 10.1016/j.eswa.2021.115771_b0310
  doi: 10.18653/v1/P19-3007
– ident: 10.1016/j.eswa.2021.115771_b0040
– volume: 1073
  start-page: 428
  year: 2019
  ident: 10.1016/j.eswa.2021.115771_b0125
  article-title: TwitterBERT: Framework for Twitter Sentiment Analysis Based on Pre-trained Language Model Representations
  publication-title: Emerging Trends in Intelligent Computing and Informatics: Data Science, Intelligent Information Systems and Smart Computing
– ident: 10.1016/j.eswa.2021.115771_b0030
  doi: 10.1016/j.inffus.2019.12.012
– start-page: 2777
  year: 2020
  ident: 10.1016/j.eswa.2021.115771_b0255
  article-title: EMET: Embeddings from multilingual-encoder transformer for fake news detection
– ident: 10.1016/j.eswa.2021.115771_b0095
– volume: 21
  start-page: 23
  year: 2015
  ident: 10.1016/j.eswa.2021.115771_b0130
  article-title: The unreasonable effectiveness of recurrent neural networks
  publication-title: Andrej Karpathy Blog
– start-page: 5998
  year: 2017
  ident: 10.1016/j.eswa.2021.115771_b0305
  article-title: Attention is all you need
– ident: 10.1016/j.eswa.2021.115771_b0245
  doi: 10.18653/v1/S17-2088
– ident: 10.1016/j.eswa.2021.115771_b0265
– start-page: 24
  year: 2013
  ident: 10.1016/j.eswa.2021.115771_b0275
  article-title: Exploiting topic-based Twitter sentiment for stock prediction
– ident: 10.1016/j.eswa.2021.115771_b0290
– volume: 3
  start-page: 993
  issue: 1
  year: 2003
  ident: 10.1016/j.eswa.2021.115771_b0045
  article-title: Latent Dirichlet allocation
  publication-title: Journal of Machine Learning Research
– ident: 10.1016/j.eswa.2021.115771_b0185
– start-page: 1532
  year: 2014
  ident: 10.1016/j.eswa.2021.115771_b0220
  article-title: GloVe: Global vectors for word representation
– volume: 89
  start-page: 549
  year: 2016
  ident: 10.1016/j.eswa.2021.115771_b0280
  article-title: Role of text pre-processing in Twitter sentiment analysis
  publication-title: Procedia Computer Science
  doi: 10.1016/j.procs.2016.06.095
– start-page: 1135
  year: 2016
  ident: 10.1016/j.eswa.2021.115771_b0240
  article-title: “Why should I trust you?” Explaining the predictions of any classifier
– ident: 10.1016/j.eswa.2021.115771_b0165
  doi: 10.18653/v1/N16-1082
– year: 1998
  ident: 10.1016/j.eswa.2021.115771_b0190
– volume: 32
  issue: 1
  year: 2020
  ident: 10.1016/j.eswa.2021.115771_b0145
  article-title: Systematic literature review of sentiment analysis on Twitter using soft computing techniques
  publication-title: Concurrency and Computation: Practice and Experience
  doi: 10.1002/cpe.5107
– year: 2014
  ident: 10.1016/j.eswa.2021.115771_b0120
  article-title: VADER: A parsimonious rule-based model for sentiment analysis of social media text
– ident: 10.1016/j.eswa.2021.115771_b0155
– ident: 10.1016/j.eswa.2021.115771_b0200
  doi: 10.3115/1220575.1220643
– ident: 10.1016/j.eswa.2021.115771_b0300
– ident: 10.1016/j.eswa.2021.115771_b0260
– ident: 10.1016/j.eswa.2021.115771_b0285
– ident: 10.1016/j.eswa.2021.115771_b0055
  doi: 10.18653/v1/D18-2029
– start-page: 115
  year: 2012
  ident: 10.1016/j.eswa.2021.115771_b0315
  article-title: A system for real-time Twitter sentiment analysis of 2012 US presidential election cycle
– ident: 10.1016/j.eswa.2021.115771_b0090
– volume: 71
  year: 2001
  ident: 10.1016/j.eswa.2021.115771_b0215
  article-title: Linguistic inquiry and word count: LIWC 2001
  publication-title: Mahwah, NJ: Lawrence Erlbaum Associates
– ident: 10.1016/j.eswa.2021.115771_b0335
– start-page: 1345
  year: 2016
  ident: 10.1016/j.eswa.2021.115771_b0205
  article-title: Sentiment analysis of Twitter data for predicting stock market movements
– volume: 5
  start-page: 135
  year: 2017
  ident: 10.1016/j.eswa.2021.115771_b0050
  article-title: Enriching word vectors with subword information
  publication-title: Transactions of the Association for Computational Linguistics
  doi: 10.1162/tacl_a_00051
– volume: 54
  start-page: 50
  year: 2019
  ident: 10.1016/j.eswa.2021.115771_b0020
  article-title: Twitter sentiment analysis with a deep neural network: An enhanced approach using user behavioral information
  publication-title: Cognitive Systems Research
  doi: 10.1016/j.cogsys.2018.10.001
– ident: 10.1016/j.eswa.2021.115771_b0080
– start-page: 508
  year: 2012
  ident: 10.1016/j.eswa.2021.115771_b0250
  article-title: Semantic sentiment analysis of Twitter
SSID ssj0017007
Score 2.5063388
Snippet •Comparison of selected popular and recent natural language processing methods.•Use of explainable Artificial Intelligence tools in Twitter sentiment...
Many institutions and companies find it valuable to know how people feel about their ventures; hence, scientific research in sentiment analysis has been...
SourceID proquest
crossref
elsevier
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 115771
SubjectTerms Artificial intelligence
Data mining
Deep learning
Explainability
Explainable artificial intelligence
Machine learning
Natural language processing
Performance prediction
Sentiment analysis
Social networks
Twitter
Title Analysis of sentiment in tweets addressed to a single domain-specific Twitter account: Comparison of model performance and explainability of predictions
URI https://dx.doi.org/10.1016/j.eswa.2021.115771
https://www.proquest.com/docview/2599115415
Volume 186
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
link http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwnV1LS8QwEA6iFy--xecyB28Sd5OmL2-yuKwKXlTwFtKmgcraLW5Fvfg7_Llm0nRFQQ8eW5JQOpOZSft93xByFJnEpuWBpjlPIyq0yGlacEGVsMWHNpixHED2Ohrficv78H6BDDsuDMIqfexvY7qL1v5O37_Nfl2W_RtbHNh0aI92zBFCkfArRIxefvI-h3mg_Fzc6u3FFEd74kyL8SpmL6g9xNkJas7E7Lfk9CNMu9wzWiMrvmiEs_a51slCUW2Q1a4hA_j9uUk-OokRmBpAVpGT7oeyAgRjNTOwYcaJhWtopqAAvxNMCtDTR1VWFEmXCByC25cSOT6g2k4SpzCcNyvEhV3zHKi_GAegKg3Faz1xTCwE277huPoJfwI5v94id6Pz2-GY-tYLNA940lCRBbHKEhPbAswwZm02YEanJooSHfIsNGygQ2WLsVRopiLDtK2jImvpUHNuT0nBNlmsplWxQ0BkaRAFQuksTUQR5irLAmXrTsV4niec7RLWvXOZe11ybI8xkR0A7UGinSTaSbZ22iXH8zl1q8rx5-iwM6X85lvSpo0_5x10dpd-Z8-kPS6mKGHEwr1_LrtPlvHKqUUODshi8_RcHNrKpsl6znV7ZOns4mp8_QmfqPnG
linkProvider Elsevier
linkToHtml http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwnV1NT9wwELUoHNoLpdCqUErn0Bsyu3acr96qVdHyUS5dJG6WE8dS0DYbsUHQS39Hf25nHGdRkeDQa-JEUcaeeU7ee8PY58RlWJbHlpcyT7iyquR5JRU3CsGHdVSxPEH2IpleqtOr-GqNTQYtDNEqQ-7vc7rP1uHIKLzNUVvXox8IDrAc4tZOeEFo9oJtKFy-1Mbg6PeK50H-c2lvuJdyGh6UMz3Jq1rekfmQFEdkOpOKp6rTozzti8_xFtsMqBG-9g_2hq1VzTZ7PXRkgLBAd9ifwWMEFg5IVuS9-6FugNhY3RIwz3i3cAvdAgzQh4J5BXbx09QNJ9UlMYdgdleTyAdM30riC0xW3Qrpxr57DrQPkgMwjYXqvp17KRaxbX_RuPaG_gL5if2WXR5_m02mPPRe4GUks46rIkpNkbkUEZgTAoM2Fs7mLkkyG8sidmJsY4NoLFdWmMQJi0AqwVDHVkrcJkXv2HqzaKr3DFSRR0mkjC3yTFVxaYoiMgg8jZBlmUmxy8TwznUZjMmpP8ZcDwy0a01x0hQn3cdplx2urml7W45nR8dDKPU_k0tj3Xj2uv0h7jos7aXG_WJOHkYi3vvP235iL6ez7-f6_OTi7AN7RWe8deR4n613N7fVR4Q5XXHgp_FfTFz7VA
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Analysis+of+sentiment+in+tweets+addressed+to+a+single+domain-specific+Twitter+account%3A+Comparison+of+model+performance+and+explainability+of+predictions&rft.jtitle=Expert+systems+with+applications&rft.au=Fiok%2C+Krzysztof&rft.au=Karwowski%2C+Waldemar&rft.au=Gutierrez%2C+Edgar&rft.au=Wilamowski%2C+Maciej&rft.date=2021-12-30&rft.pub=Elsevier+Ltd&rft.issn=0957-4174&rft.eissn=1873-6793&rft.volume=186&rft_id=info:doi/10.1016%2Fj.eswa.2021.115771&rft.externalDocID=S0957417421011428
thumbnail_l http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/lc.gif&issn=0957-4174&client=summon
thumbnail_m http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/mc.gif&issn=0957-4174&client=summon
thumbnail_s http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/sc.gif&issn=0957-4174&client=summon