Compressing and interpreting word embeddings with latent space regularization and interactive semantics probing

Bibliographic Details
Published in Information Visualization, Vol. 22, No. 1, pp. 52-68
Main Authors Li, Haoyu; Wang, Junpeng; Zheng, Yan; Wang, Liang; Zhang, Wei; Shen, Han-Wei
Format Journal Article
Language English
Published London, England: SAGE Publications, 01.01.2023
Subjects Compressing; Embedding; Machine learning; Natural language processing; Photovoltaic cells; Regularization; Representations; Semantics; Words (language)
Online Access Get full text

Abstract Word embedding, a high-dimensional (HD) numerical representation of words generated by machine learning models, has been used for different natural language processing tasks, for example, translation between two languages. Recently, there has been an increasing trend of transforming the HD embeddings into a latent space (e.g. via autoencoders) for further tasks, exploiting various merits the latent representations could bring. To preserve the embeddings’ quality, these works often map the embeddings into an even higher-dimensional latent space, making the already complicated embeddings even less interpretable and consuming more storage space. In this work, we borrow the idea of β-VAE to regularize the HD latent space. Our regularization implicitly condenses information from the HD latent space into a much lower-dimensional space, thus compressing the embeddings. We also show that each dimension of our regularized latent space is more semantically salient, and validate our assertion by interactively probing the encoding-level of user-proposed semantics in the dimensions. To this end, we design a visual analytics system to monitor the regularization process, explore the HD latent space, and interpret latent dimensions’ semantics. We validate the effectiveness of our embedding regularization and interpretation approach through both quantitative and qualitative evaluations.
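
The regularization described in the abstract follows the β-VAE recipe: an autoencoder maps each word embedding into a latent space, and a β-weighted KL term pushes uninformative latent dimensions toward the prior so that information condenses into a few active dimensions. The record does not include the authors' implementation, so the following PyTorch sketch is only illustrative; the module name, layer sizes, and β value are all assumptions.

```python
# Minimal sketch of beta-VAE-style latent regularization for word embeddings.
# Module name, layer sizes, and beta are illustrative assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn

class RegularizedEmbeddingVAE(nn.Module):
    def __init__(self, embed_dim=300, latent_dim=512):
        super().__init__()
        # Encoder maps an HD word embedding to the mean/log-variance of a
        # (possibly even higher-dimensional) latent code.
        self.encoder = nn.Sequential(nn.Linear(embed_dim, 512), nn.ReLU())
        self.to_mu = nn.Linear(512, latent_dim)
        self.to_logvar = nn.Linear(512, latent_dim)
        # Decoder reconstructs the original embedding from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, embed_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    # Reconstruction keeps embedding quality; the beta-weighted KL term pushes
    # uninformative dimensions toward the N(0, 1) prior, implicitly condensing
    # information into a small number of "active" latent dimensions.
    recon = nn.functional.mse_loss(x_hat, x, reduction="mean")
    kl_per_dim = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp())  # (batch, latent_dim)
    return recon + beta * kl_per_dim.sum(dim=1).mean()
```

Under this reading, dimensions whose average KL stays near zero have collapsed to the prior and carry essentially no information; keeping only the remaining active dimensions is one plausible way to interpret the compression effect the abstract claims.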
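The abstract's "encoding-level" probing can be pictured as follows: a user proposes a semantic as a set of contrasting word pairs, and the system measures how consistently each latent dimension separates those pairs. The sketch below is a hypothetical, non-interactive approximation of that idea (the paper does this through a visual analytics interface); the function name, the `encode` callable, and the scoring formula are all assumptions.

```python
# Hypothetical sketch of probing the encoding level of a user-proposed
# semantic in each latent dimension; `encode` and the word pairs are assumed.
import numpy as np

def semantic_encoding_level(encode, embeddings, word_pairs):
    """Score each latent dimension by how consistently it separates
    contrasting word pairs that define a user-proposed semantic.

    encode: function mapping an embedding vector to its latent (mean) vector.
    embeddings: dict of word -> np.ndarray embedding.
    word_pairs: e.g. [("man", "woman"), ("king", "queen"), ("he", "she")].
    """
    diffs = np.stack([encode(embeddings[a]) - encode(embeddings[b])
                      for a, b in word_pairs])
    # A dimension that shifts by a large, consistently signed amount across
    # all pairs encodes the semantic strongly; near-zero or noisy shifts
    # mean the semantic is not encoded there.
    return np.abs(diffs.mean(axis=0)) / (diffs.std(axis=0) + 1e-8)

# Usage (hypothetical names):
# scores = semantic_encoding_level(model_encode, word_vectors,
#                                  [("man", "woman"), ("king", "queen")])
# top_dims = np.argsort(scores)[::-1][:5]  # most semantically salient dims
```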
Author Contact Li, Haoyu (li.8460@osu.edu)
CitedBy 10.1111/cgf.14859; 10.1186/s12859-024-05643-7; 10.1109/TVCG.2024.3357065
Copyright The Author(s) 2022
DOI 10.1177/14738716221130338
Discipline Engineering
EISSN 1473-8724
ISSN 1473-8716
IsPeerReviewed true
IsScholarly true
Keywords visual analytics; word embedding; high-dimensional data visualization; neural networks
ORCID 0000-0002-7138-8263
URI https://journals.sagepub.com/doi/full/10.1177/14738716221130338
https://www.proquest.com/docview/2755770350