Simplifying approach to node classification in Graph Neural Networks

Bibliographic Details
Published in: Journal of computational science, Vol. 62, p. 101695
Main Authors: Maurya, Sunil Kumar; Liu, Xin; Murata, Tsuyoshi
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.07.2022
Abstract Graph Neural Networks (GNNs) have become indispensable tools for learning from graph-structured data, and their usefulness has been demonstrated in a wide variety of tasks. In recent years, there have been tremendous improvements in architecture design, resulting in better performance on various prediction tasks. In general, these neural architectures combine node feature aggregation and feature transformation using a learnable weight matrix in the same layer. This makes it challenging to analyze the importance of node features aggregated from various hops and the expressiveness of the neural network layers. As different graph datasets show varying levels of homophily and heterophily in feature and class label distributions, it becomes essential to understand which features are important for the prediction task without any prior information. In this work, we decouple the node feature aggregation step from the depth of the graph neural network and empirically analyze how different aggregated features contribute to prediction performance. We show that not all features generated via aggregation steps are useful, and that using these less informative features can be detrimental to the performance of the GNN model. Through our experiments, we show that learning certain subsets of these features can lead to better performance on a wide variety of datasets. Based on our observations, we introduce several key design strategies for graph neural networks. More specifically, we propose to use softmax as a regularizer and "soft-selector" of features aggregated from neighbors at different hop distances, together with L2-normalization over GNN layers. Combining these techniques, we present a simple and shallow model, Feature Selection Graph Neural Network (FSGNN), and show empirically that it achieves comparable or even higher accuracy than state-of-the-art GNN models on nine benchmark datasets for the node classification task, with remarkable improvements of up to 51.1%.
Source code available at https://github.com/sunilkmaurya/FSGNN/.
•Current Graph Neural Networks (GNNs) show inconsistent performance on homophily and heterophily graphs.
•We analyze the importance of feature selection over hops with extensive experiments.
•With a good feature selection strategy, a simple NN model is sufficient for high accuracy.
•We propose FSGNN, a model for the node classification task.
•FSGNN outperforms SOTA on heterophily datasets and is on par on homophily datasets.
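The core design described in the abstract can be sketched in a few lines: hop aggregation is precomputed once (decoupled from network depth), each hop's features are L2-normalized, and a softmax over learnable scores soft-selects among them. The sketch below is a simplified illustration under assumptions, not the authors' implementation: the function names (`hop_features`, `fsgnn_forward`) are hypothetical, mean aggregation stands in for the paper's adjacency normalization, and the per-hop linear transforms and end-to-end training of the scores are omitted.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def hop_features(adj, x, num_hops):
    """Precompute features aggregated from 0..num_hops-1 hop neighborhoods.
    Aggregation is decoupled from learning: it runs once, up front.
    Simple mean aggregation is used here as a stand-in normalization."""
    deg = adj.sum(axis=1, keepdims=True)
    a_hat = adj / np.maximum(deg, 1.0)   # row-normalized adjacency
    feats, h = [x], x
    for _ in range(num_hops - 1):
        h = a_hat @ h                    # one more hop of aggregation
        feats.append(h)
    return feats

def fsgnn_forward(feats, scores):
    """Soft-select hop features: a softmax over learnable scores weights
    each L2-normalized hop representation before concatenation."""
    w = softmax(scores)
    blocks = []
    for wk, f in zip(w, feats):
        norm = np.linalg.norm(f, axis=1, keepdims=True)
        blocks.append(wk * f / np.maximum(norm, 1e-12))
    return np.concatenate(blocks, axis=1)
```

Because the softmax weights sum to one and each hop block is row-normalized, an uninformative hop can be driven toward zero weight while informative hops dominate, which is the "soft-selector" and regularizer role the abstract attributes to the softmax.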
Authors:
– Sunil Kumar Maurya, Department of Computer Science, Tokyo Institute of Technology, Tokyo, Japan (skmaurya@net.c.titech.ac.jp)
– Xin Liu, Artificial Intelligence Research Center, AIST, Tokyo, Japan (xin.liu@aist.go.jp)
– Tsuyoshi Murata, Department of Computer Science, Tokyo Institute of Technology, Tokyo, Japan (murata@c.titech.ac.jp)
ContentType Journal Article
Copyright 2022 Elsevier B.V.
DOI 10.1016/j.jocs.2022.101695
EISSN 1877-7511
ISSN 1877-7503
IsPeerReviewed true
IsScholarly true
Keywords Node classification
Feature selection
Graph Neural Networks
PublicationDate 2022-07-01
PublicationTitle Journal of computational science
PublicationYear 2022
Publisher Elsevier B.V
StartPage 101695
URI https://dx.doi.org/10.1016/j.jocs.2022.101695
Volume 62