A comparison of distributed machine learning methods for the support of “many labs” collaborations in computational modeling of decision making

Bibliographic Details
Published in: Frontiers in Psychology, Vol. 13, p. 943198
Main Authors: Zhang, Lili; Vashisht, Himanshu; Totev, Andrey; Trinh, Nam; Ward, Tomas
Format: Journal Article
Language: English
Published: Frontiers Media S.A., 25.08.2022
ISSN: 1664-1078
DOI: 10.3389/fpsyg.2022.943198

Abstract: Deep learning models are powerful tools for representing the complex learning processes and decision-making strategies used by humans. Such neural network models make fewer assumptions about the underlying mechanisms, thus providing experimental flexibility in terms of applicability. However, this comes at the cost of involving a larger number of parameters, requiring significantly more data for effective learning. This presents practical challenges given that most cognitive experiments involve relatively small numbers of subjects. Laboratory collaborations are a natural way to increase overall dataset size. However, data-sharing barriers between laboratories, as necessitated by data protection regulations, encourage the search for alternative methods of collaborative data science. Distributed learning, especially federated learning (FL), which preserves data privacy, is a promising approach to this problem. To verify the reliability and feasibility of applying FL to train the neural network models used in the characterization of decision making, we conducted experiments on a real-world, many-labs data pool including experimental datasets from ten independent studies. The performance of models trained on single-laboratory datasets was poor. This unsurprising finding supports the need for laboratory collaboration to train more reliable models. To that end, we evaluated four collaborative approaches. The first, conventional centralized learning (CL-based), is optimal but requires the complete sharing of data, which we wish to avoid. Its results, however, establish a benchmark for the other three approaches: federated learning (FL-based), incremental learning (IL-based), and cyclic incremental learning (CIL-based). We evaluated these approaches in terms of prediction accuracy and capacity to characterize human decision-making strategies. The FL-based model achieves performance most comparable to that of the CL-based model. This indicates that FL has value in scaling data science methods to data collected in computational modeling contexts when data sharing is not convenient, practical, or permissible.
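The four collaborative training schemes named in the abstract can be illustrated with a minimal numerical sketch. Everything below is an illustrative assumption, not the paper's method: a linear model on synthetic per-lab data stands in for the article's neural networks, and federated learning is sketched as simple FedAvg-style weight averaging.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-lab behavioral data: K labs, each with a small
# local dataset drawn from the same underlying linear rule plus noise.
K, N, D = 5, 40, 3
true_w = np.array([1.5, -2.0, 0.5])
labs = []
for _ in range(K):
    X = rng.normal(size=(N, D))
    labs.append((X, X @ true_w + 0.1 * rng.normal(size=N)))

def local_fit(w, X, y, lr=0.05, epochs=20):
    """Full-batch gradient descent on squared error, starting from weights w."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def centralized(labs):
    # CL: pool all data in one place -- the benchmark that requires full sharing.
    X = np.vstack([X for X, _ in labs])
    y = np.concatenate([y for _, y in labs])
    return local_fit(np.zeros(D), X, y)

def federated(labs, rounds=10):
    # FL (FedAvg-style): each round, labs train locally on private data,
    # then only the weights are shared and averaged.
    w = np.zeros(D)
    for _ in range(rounds):
        w = np.mean([local_fit(w, X, y, epochs=2) for X, y in labs], axis=0)
    return w

def incremental(labs):
    # IL: a single pass, handing the model from one lab to the next.
    w = np.zeros(D)
    for X, y in labs:
        w = local_fit(w, X, y)
    return w

def cyclic_incremental(labs, cycles=3):
    # CIL: repeat the incremental hand-off for several cycles.
    w = np.zeros(D)
    for _ in range(cycles):
        for X, y in labs:
            w = local_fit(w, X, y, epochs=5)
    return w

def mse(w):
    # Pooled-data error, used here only to compare the four schemes.
    X = np.vstack([X for X, _ in labs])
    y = np.concatenate([y for _, y in labs])
    return float(np.mean((X @ w - y) ** 2))
```

On this near-IID toy data all four schemes improve on an untrained model, and FL tracks CL closely; the paper's contribution is evaluating this comparison on real, heterogeneous many-labs decision-making data with recurrent neural networks.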
Author Affiliations:
1 School of Computing, Dublin City University, Dublin, Ireland
2 Insight Science Foundation Ireland Research Centre for Data Analytics, Dublin, Ireland
3 In the Wild Research Limited, Dublin, Ireland
Copyright: © 2022 Zhang, Vashisht, Totev, Trinh and Ward.
Open Access: Yes (peer-reviewed)
License: This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Notes: Edited by: Baiyuan Ding, National University of Defense Technology, China. Reviewed by: Riccardo Zucca, Pompeu Fabra University, Spain; Alex Jung, Aalto University, Finland. This article was submitted to Health Psychology, a section of the journal Frontiers in Psychology.
Open Access Link: https://doaj.org/article/5351e9fb15654c269ffddcdfb89d2de4
PMID: 36092038
Subjects: data privacy; decision-making; deep learning; distributed learning; federated learning; Psychology
Online Access:
https://www.proquest.com/docview/2713309280
https://pubmed.ncbi.nlm.nih.gov/PMC9453750
https://doaj.org/article/5351e9fb15654c269ffddcdfb85d2de4