Comparing Web-Based and Lab-Based Cognitive Assessment Using the Cambridge Neuropsychological Test Automated Battery: A Within-Subjects Counterbalanced Study

Bibliographic Details
Published in: Journal of Medical Internet Research, Vol. 22, No. 8, p. e16792
Main Authors: Backx, Rosa; Skirrow, Caroline; Dente, Pasquale; Barnett, Jennifer H; Cormack, Francesca K
Format: Journal Article
Language: English
Published: Canada: JMIR Publications (Gunther Eysenbach MD MPH, Associate Professor), 04.08.2020

Abstract Background: Computerized assessments are already used to derive accurate and reliable measures of cognitive function. Web-based cognitive assessment could improve the accessibility and flexibility of research and clinical assessment, widen participation, and promote research recruitment while simultaneously reducing costs. However, differences in context may influence task performance. Objective: This study aims to determine the comparability of an unsupervised, web-based administration of the Cambridge Neuropsychological Test Automated Battery (CANTAB) against a typical in-person lab-based assessment, using a within-subjects counterbalanced design. The study aims to test (1) reliability, quantifying the relationship between measurements across settings using correlational approaches; (2) equivalence, the extent to which test results in different settings produce similar overall results; and (3) agreement, by quantifying acceptable limits to bias and differences between measurement environments. Methods: A total of 51 healthy adults (32 women and 19 men; mean age 36.8, SD 15.6 years) completed 2 testing sessions an average of 1 week apart (SD 4.5 days). Assessments included equivalent tests of emotion recognition (emotion recognition task [ERT]), visual recognition (pattern recognition memory [PRM]), episodic memory (paired associate learning [PAL]), working memory and spatial planning (spatial working memory [SWM] and One Touch Stockings of Cambridge), and sustained attention (rapid visual information processing [RVP]). Participants were randomly allocated to one of two groups, either assessed in person in the laboratory first (n=33) or with unsupervised web-based assessments on their personal computing systems first (n=18). Performance indices (errors, correct trials, and response sensitivity) and median reaction times were extracted. Intraclass and bivariate correlations examined intersetting reliability, linear mixed models and Bayesian paired sample t tests tested for equivalence, and Bland-Altman plots examined agreement. Results: Intraclass correlation (ICC) coefficients ranged from ρ=0.23 to ρ=0.67, with high correlations in 3 performance indices (from the PAL, SWM, and RVP tasks; ρ≥0.60). High ICC values were also seen for reaction time measures from 2 tasks (PRM and ERT; ρ≥0.60). However, reaction times were slower during web-based assessments, which undermined both equivalence and agreement for reaction time measures. Performance indices did not differ between assessment settings and generally showed satisfactory agreement. Conclusions: Our findings support the comparability of CANTAB performance indices (errors, correct trials, and response sensitivity) between unsupervised web-based assessments and in-person laboratory tests. Reaction times are not as easily translatable from in-person to web-based testing, likely due to variations in computer hardware. The results underline the importance of examining more than one index to ascertain comparability, as high correlations can present in the context of systematic differences, which are a product of differences between measurement environments. Further work is now needed to examine web-based assessments in clinical populations and in larger samples to improve sensitivity for detecting subtler differences between test settings.
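Note on the analyses named in the abstract: reliability was assessed with intraclass and bivariate correlations, equivalence with linear mixed models and Bayesian paired t tests, and agreement with Bland-Altman bias and limits of agreement. The Python sketch below is illustrative only, not the authors' analysis code; it shows how an ICC(2,1) and Bland-Altman limits of agreement could be computed for one measure from paired lab and web scores. The arrays lab and web and the simulated offset are hypothetical.

import numpy as np

def bland_altman_limits(lab, web):
    # Bias and 95% limits of agreement between paired lab and web scores.
    diff = np.asarray(web, dtype=float) - np.asarray(lab, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def icc_2_1(lab, web):
    # Two-way random-effects, absolute-agreement, single-measures ICC (ICC(2,1)),
    # computed from the ANOVA mean squares of an n-subjects x 2-settings matrix.
    x = np.column_stack([lab, web]).astype(float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-setting means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subjects mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-settings mean square
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                         # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical paired data (not study data): one score per subject per setting.
rng = np.random.default_rng(0)
lab = rng.normal(10.0, 2.0, 51)
web = lab + rng.normal(0.5, 1.0, 51)   # systematic offset, as reported for reaction times

bias, (lo, hi) = bland_altman_limits(lab, web)
print(f"ICC(2,1) = {icc_2_1(lab, web):.2f}, bias = {bias:.2f}, 95% LoA = ({lo:.2f}, {hi:.2f})")

As the abstract notes, a high ICC alone does not establish comparability: the web and lab scores above correlate strongly yet show a systematic bias that the Bland-Altman limits make visible.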
Author Cormack, Francesca K
Backx, Rosa
Barnett, Jennifer H
Dente, Pasquale
Skirrow, Caroline
AuthorAffiliation 1 Cambridge Cognition Ltd, Cambridge, United Kingdom
3 Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom
2 School of Psychological Science, University of Bristol, Bristol, United Kingdom
AuthorAffiliation_xml – name: 3 Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom
– name: 1 Cambridge Cognition Ltd, Cambridge, United Kingdom
– name: 2 School of Psychological Science, University of Bristol, Bristol, United Kingdom
Author_xml – sequence: 1
  givenname: Rosa
  orcidid: 0000-0002-1685-863X
  surname: Backx
  fullname: Backx, Rosa
– sequence: 2
  givenname: Caroline
  orcidid: 0000-0001-8692-7787
  surname: Skirrow
  fullname: Skirrow, Caroline
– sequence: 3
  givenname: Pasquale
  orcidid: 0000-0002-2652-1512
  surname: Dente
  fullname: Dente, Pasquale
– sequence: 4
  givenname: Jennifer H
  orcidid: 0000-0002-4851-5949
  surname: Barnett
  fullname: Barnett, Jennifer H
– sequence: 5
  givenname: Francesca K
  orcidid: 0000-0002-4413-177X
  surname: Cormack
  fullname: Cormack, Francesca K
BackLink https://www.ncbi.nlm.nih.gov/pubmed/32749999 (View this record in MEDLINE/PubMed)
CitedBy_id crossref_primary_10_1007_s10433_021_00667_x
crossref_primary_10_3389_fdgth_2024_1294222
crossref_primary_10_3758_s13428_024_02377_5
crossref_primary_10_14283_jpad_2023_117
crossref_primary_10_1017_jns_2025_10
crossref_primary_10_2196_32922
crossref_primary_10_2196_53623
crossref_primary_10_1093_arclin_acae070
crossref_primary_10_1177_09612033231168477
crossref_primary_10_3758_s13421_022_01345_8
crossref_primary_10_1097_WCO_0000000000001192
crossref_primary_10_3389_fpsyg_2021_723063
crossref_primary_10_3390_brainsci11050529
crossref_primary_10_1080_13803395_2024_2353945
crossref_primary_10_3390_nu14010071
crossref_primary_10_1016_j_ymgme_2024_108541
crossref_primary_10_3389_fpsyt_2023_1227261
crossref_primary_10_1016_j_scog_2024_100302
crossref_primary_10_1038_s41746_024_01347_7
crossref_primary_10_1080_00038628_2024_2370435
crossref_primary_10_1080_08039488_2024_2434601
crossref_primary_10_1080_23279095_2023_2279208
crossref_primary_10_4236_health_2022_145043
crossref_primary_10_1080_17483107_2024_2405894
crossref_primary_10_1097_EE9_0000000000000374
crossref_primary_10_3389_fpsyg_2024_1171873
crossref_primary_10_1186_s13023_023_02842_y
crossref_primary_10_3390_brainsci11050660
crossref_primary_10_1093_schbul_sbae051
crossref_primary_10_1016_j_scog_2021_100230
crossref_primary_10_1017_S0272263124000214
crossref_primary_10_1038_s41598_023_41900_0
crossref_primary_10_3233_JAD_215240
crossref_primary_10_1080_09658211_2021_1995435
crossref_primary_10_1016_j_neurobiolaging_2023_05_009
crossref_primary_10_2196_28368
crossref_primary_10_1016_j_actpsy_2023_104115
crossref_primary_10_1177_23312165211025941
crossref_primary_10_1038_s41537_022_00219_x
crossref_primary_10_1093_arclin_acad039
crossref_primary_10_1177_13623613231221928
crossref_primary_10_1007_s10508_023_02676_6
crossref_primary_10_1080_07448481_2023_2299414
crossref_primary_10_2196_34688
crossref_primary_10_1080_13854046_2025_2469340
crossref_primary_10_1177_10731911231159369
crossref_primary_10_3390_ijerph19095531
crossref_primary_10_14283_jpad_2021_9
crossref_primary_10_1186_s12883_024_03609_z
crossref_primary_10_2196_23384
crossref_primary_10_3390_brainsci11050669
crossref_primary_10_1002_hup_2885
crossref_primary_10_1002_alz_13401
crossref_primary_10_1186_s13195_021_00872_x
crossref_primary_10_1016_j_bbr_2023_114601
crossref_primary_10_2196_28233
crossref_primary_10_1016_j_neubiorev_2025_106067
crossref_primary_10_3389_fnagi_2021_800126
crossref_primary_10_3389_frvir_2023_1138240
crossref_primary_10_1007_s40520_023_02343_9
crossref_primary_10_1007_s12207_025_09532_z
crossref_primary_10_3389_fpsyg_2021_684307
crossref_primary_10_1016_j_tjpad_2025_100081
crossref_primary_10_1038_s41591_024_03475_9
crossref_primary_10_1177_17470218231220578
crossref_primary_10_15869_itobiad_1606146
crossref_primary_10_3389_fneur_2024_1363513
crossref_primary_10_1080_13803395_2023_2259042
crossref_primary_10_2196_46675
crossref_primary_10_1007_s10865_022_00385_4
crossref_primary_10_1186_s13195_024_01641_2
crossref_primary_10_1080_13803395_2024_2376839
crossref_primary_10_1038_s41598_024_72749_6
crossref_primary_10_2196_26004
Cites_doi 10.1080/13854046.2017.1337932
10.3758/s13423-012-0296-9
10.3758/BRM.42.1.273
10.1080/13854046.2018.1523468
10.1177/096228029900800204
10.1080/13854046.2012.663001
10.1375/twin.10.4.554
10.1037/0003-066X.59.2.105
10.1007/7854_2015_5001
10.1146/annurev.psych.57.102904.190048
10.3758/bf03192989
10.1038/srep19114
10.1080/13803395.2017.1339017
10.1073/pnas.0913053107
10.1080/13803395.2015.1038220
10.1016/j.psychres.2019.01.033
10.1519/15184.1
10.1080/13854046.2012.680913
10.1016/s0028-3932(98)00036-0
10.1037/1089-2680.7.4.331
10.1016/j.chb.2016.05.025
10.1037/1082-989X.1.1.30
10.1093/arclin/acq060
10.1016/0028-3932(90)90137-d
10.1192/bjp.148.1.1
10.1080/13854046.2016.1190405
10.1093/geronb/61.3.p144
10.1080/13854046.2013.809795
10.1192/bjp.154.6.797
10.3758/s13423-017-1323-7
10.1002/sim.5466
10.3758/bf03193146
10.1007/s004269900009
10.1016/j.jmp.2016.01.002
10.1016/S0140-6736(86)90837-8
10.1136/bmj.312.7039.1153
10.1037/1040-3590.6.4.284
10.7771/1932-6246.1167
10.1371/journal.pone.0105825
10.1080/13854046.2015.1054437
10.1016/j.jalz.2008.07.003
10.2196/jmir.4862
10.1037//0033-2909.86.2.420
10.1371/journal.pone.0073990
10.1016/0028-3932(94)00098-a
10.1177/1094428112470848
10.3389/fpsyg.2015.01652
10.1037/0003-066X.59.2.93
10.1016/j.jcm.2016.02.012
10.1016/j.jml.2007.12.005
10.11613/BM.2015.015
ContentType Journal Article
Copyright Rosa Backx, Caroline Skirrow, Pasquale Dente, Jennifer H Barnett, Francesca K Cormack. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.08.2020.
2020. This work is licensed under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Rosa Backx, Caroline Skirrow, Pasquale Dente, Jennifer H Barnett, Francesca K Cormack. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.08.2020. 2020
Copyright_xml – notice: Rosa Backx, Caroline Skirrow, Pasquale Dente, Jennifer H Barnett, Francesca K Cormack. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.08.2020.
– notice: 2020. This work is licensed under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
– notice: Rosa Backx, Caroline Skirrow, Pasquale Dente, Jennifer H Barnett, Francesca K Cormack. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.08.2020. 2020
DBID AAYXX
CITATION
CGR
CUY
CVF
ECM
EIF
NPM
3V.
7QJ
7RV
7X7
7XB
8FI
8FJ
8FK
ABUWG
AFKRA
ALSLI
AZQEC
BENPR
CCPQU
CNYFK
DWQXO
E3H
F2A
FYUFA
GHDGH
K9.
KB0
M0S
M1O
NAPCQ
PHGZM
PHGZT
PIMPY
PKEHL
PPXIY
PQEST
PQQKQ
PQUKI
PRINS
PRQQA
7X8
5PM
DOA
DOI 10.2196/16792
DatabaseName CrossRef
Medline
MEDLINE
MEDLINE (Ovid)
MEDLINE
MEDLINE
PubMed
ProQuest Central (Corporate)
Applied Social Sciences Index & Abstracts (ASSIA)
Nursing & Allied Health Database
ProQuest Health & Medical Collection (NC LIVE)
ProQuest Central (purchase pre-March 2016)
Hospital Premium Collection
Hospital Premium Collection (Alumni Edition)
ProQuest Central (Alumni) (purchase pre-March 2016)
ProQuest Central (Alumni)
ProQuest Central UK/Ireland
ProQuest Social Science Premium Collection
ProQuest Central Essentials
ProQuest Central
ProQuest One Community College
Library & information science collection.
ProQuest Central Korea
Library & Information Sciences Abstracts (LISA)
Library & Information Science Abstracts (LISA)
Health Research Premium Collection (UHCL Subscription)
Health Research Premium Collection (Alumni)
ProQuest Health & Medical Complete (Alumni)
Nursing & Allied Health Database (Alumni Edition)
ProQuest Health & Medical Collection
Library Science Database
Nursing & Allied Health Premium
ProQuest Central Premium
ProQuest One Academic
ProQuest - Publicly Available Content Database
ProQuest One Academic Middle East (New)
ProQuest One Health & Nursing
ProQuest One Academic Eastern Edition (DO NOT USE)
ProQuest One Academic
ProQuest One Academic UKI Edition
ProQuest Central China
ProQuest One Social Sciences
MEDLINE - Academic
PubMed Central (Full Participant titles)
DOAJ Directory of Open Access Journals
DatabaseTitle CrossRef
MEDLINE
Medline Complete
MEDLINE with Full Text
PubMed
MEDLINE (Ovid)
Publicly Available Content Database
ProQuest One Academic Middle East (New)
Library and Information Science Abstracts (LISA)
ProQuest Central Essentials
ProQuest Health & Medical Complete (Alumni)
ProQuest Central (Alumni Edition)
ProQuest One Community College
ProQuest One Health & Nursing
Applied Social Sciences Index and Abstracts (ASSIA)
ProQuest Central China
ProQuest Central
ProQuest Library Science
Health Research Premium Collection
Health and Medicine Complete (Alumni Edition)
ProQuest Central Korea
Library & Information Science Collection
ProQuest Central (New)
Social Science Premium Collection
ProQuest One Social Sciences
ProQuest One Academic Eastern Edition
ProQuest Nursing & Allied Health Source
ProQuest Hospital Collection
Health Research Premium Collection (Alumni)
ProQuest Hospital Collection (Alumni)
Nursing & Allied Health Premium
ProQuest Health & Medical Complete
ProQuest One Academic UKI Edition
ProQuest Nursing & Allied Health Source (Alumni)
ProQuest One Academic
ProQuest One Academic (New)
ProQuest Central (Alumni)
MEDLINE - Academic
DatabaseTitleList
MEDLINE - Academic
MEDLINE
Publicly Available Content Database
Database_xml – sequence: 1
  dbid: DOA
  name: DOAJ Directory of Open Access Journals
  url: https://www.doaj.org/
  sourceTypes: Open Website
– sequence: 2
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 3
  dbid: EIF
  name: MEDLINE
  url: https://www.webofscience.com/wos/medline/basic-search
  sourceTypes: Index Database
– sequence: 4
  dbid: BENPR
  name: ProQuest Central
  url: https://www.proquest.com/central
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Medicine
Library & Information Science
EISSN 1438-8871
ExternalDocumentID oai_doaj_org_article_5445ee6731974653805565fc3e9488ad
PMC7435628
32749999
10_2196_16792
Genre Research Support, Non-U.S. Gov't
Journal Article
GeographicLocations United Kingdom--UK
GeographicLocations_xml – name: United Kingdom--UK
GroupedDBID ---
.4I
.DC
29L
2WC
36B
53G
5GY
5VS
77K
7RV
7X7
8FI
8FJ
AAFWJ
AAKPC
AAWTL
AAYXX
ABDBF
ABIVO
ABUWG
ACGFO
ADBBV
AEGXH
AENEX
AFKRA
AFPKN
AIAGR
ALIPV
ALMA_UNASSIGNED_HOLDINGS
ALSLI
AOIJS
BAWUL
BCNDV
BENPR
CCPQU
CITATION
CNYFK
CS3
DIK
DU5
DWQXO
E3Z
EAP
EBD
EBS
EJD
ELW
EMB
EMOBN
ESX
F5P
FRP
FYUFA
GROUPED_DOAJ
GX1
HMCUK
HYE
KQ8
M1O
M48
NAPCQ
OK1
OVT
P2P
PGMZT
PHGZM
PHGZT
PIMPY
PQQKQ
RNS
RPM
SJN
SV3
TR2
UKHRP
XSB
CGR
CUY
CVF
ECM
EIF
NPM
3V.
7QJ
7XB
8FK
ACUHS
AZQEC
E3H
F2A
K9.
PKEHL
PPXIY
PQEST
PQUKI
PRINS
PRQQA
7X8
5PM
PUEGO
ID FETCH-LOGICAL-c457t-fbebc1af7d4adb365ac77ebb658588b8545d0b60b3ba3f3e861fd6a36481ea3a3
IEDL.DBID M48
ISSN 1438-8871
1439-4456
IngestDate Wed Aug 27 00:35:44 EDT 2025
Thu Aug 21 13:32:58 EDT 2025
Mon Jul 21 10:31:45 EDT 2025
Fri Jul 25 19:44:02 EDT 2025
Thu Jan 02 22:57:35 EST 2025
Tue Jul 01 02:05:49 EDT 2025
Thu Apr 24 23:06:13 EDT 2025
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 8
Keywords CANTAB
neuropsychological tests
cognition
mobile health
reliability
Language English
License Rosa Backx, Caroline Skirrow, Pasquale Dente, Jennifer H Barnett, Francesca K Cormack. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.08.2020.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c457t-fbebc1af7d4adb365ac77ebb658588b8545d0b60b3ba3f3e861fd6a36481ea3a3
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
content type line 23
ORCID 0000-0002-4413-177X
0000-0001-8692-7787
0000-0002-2652-1512
0000-0002-1685-863X
0000-0002-4851-5949
OpenAccessLink http://journals.scholarsportal.info/openUrl.xqy?doi=10.2196/16792
PMID 32749999
PQID 2512750641
PQPubID 2033121
ParticipantIDs doaj_primary_oai_doaj_org_article_5445ee6731974653805565fc3e9488ad
pubmedcentral_primary_oai_pubmedcentral_nih_gov_7435628
proquest_miscellaneous_2430658687
proquest_journals_2512750641
pubmed_primary_32749999
crossref_primary_10_2196_16792
crossref_citationtrail_10_2196_16792
ProviderPackageCode CITATION
AAYXX
PublicationCentury 2000
PublicationDate 2020-08-04
PublicationDateYYYYMMDD 2020-08-04
PublicationDate_xml – month: 08
  year: 2020
  text: 2020-08-04
  day: 04
PublicationDecade 2020
PublicationPlace Canada
PublicationPlace_xml – name: Canada
– name: Toronto
– name: Toronto, Canada
PublicationTitle Journal of medical Internet research
PublicationTitleAlternate J Med Internet Res
PublicationYear 2020
Publisher Gunther Eysenbach MD MPH, Associate Professor
JMIR Publications
Publisher_xml – name: Gunther Eysenbach MD MPH, Associate Professor
– name: JMIR Publications
References ref13
ref57
ref12
ref56
ref15
ref59
ref14
ref58
ref53
ref11
ref55
ref10
ref54
ref17
ref16
ref19
ref18
ref51
ref50
ref46
ref45
ref48
ref47
ref42
Bland, JM (ref52) 1986; 1
ref41
ref44
ref43
ref49
ref8
ref7
ref9
ref4
ref3
ref6
ref5
ref40
ref35
ref34
ref37
ref36
ref31
ref30
ref33
ref32
ref2
ref1
ref39
ref38
ref24
ref23
ref26
ref25
ref20
ref22
ref21
ref28
ref27
ref29
References_xml – ident: ref12
  doi: 10.1080/13854046.2017.1337932
– ident: ref13
  doi: 10.3758/s13423-012-0296-9
– ident: ref6
  doi: 10.3758/BRM.42.1.273
– ident: ref11
  doi: 10.1080/13854046.2018.1523468
– ident: ref24
– ident: ref53
  doi: 10.1177/096228029900800204
– ident: ref8
  doi: 10.1080/13854046.2012.663001
– ident: ref2
  doi: 10.1375/twin.10.4.554
– ident: ref4
  doi: 10.1037/0003-066X.59.2.105
– ident: ref22
  doi: 10.1007/7854_2015_5001
– ident: ref34
– ident: ref10
  doi: 10.1146/annurev.psych.57.102904.190048
– ident: ref19
  doi: 10.3758/bf03192989
– ident: ref42
  doi: 10.1038/srep19114
– ident: ref18
  doi: 10.1080/13803395.2017.1339017
– ident: ref14
  doi: 10.1073/pnas.0913053107
– ident: ref15
  doi: 10.1080/13803395.2015.1038220
– ident: ref38
  doi: 10.1016/j.psychres.2019.01.033
– ident: ref45
  doi: 10.1519/15184.1
– ident: ref58
  doi: 10.1080/13854046.2012.680913
– ident: ref59
  doi: 10.1016/s0028-3932(98)00036-0
– ident: ref30
  doi: 10.1037/1089-2680.7.4.331
– ident: ref40
– ident: ref5
  doi: 10.1016/j.chb.2016.05.025
– ident: ref44
  doi: 10.1037/1082-989X.1.1.30
– ident: ref9
  doi: 10.1093/arclin/acq060
– ident: ref35
  doi: 10.1016/0028-3932(90)90137-d
– ident: ref23
  doi: 10.1192/bjp.148.1.1
– ident: ref33
– ident: ref3
  doi: 10.1080/13854046.2016.1190405
– ident: ref56
  doi: 10.1093/geronb/61.3.p144
– ident: ref57
  doi: 10.1080/13854046.2013.809795
– ident: ref39
  doi: 10.1192/bjp.154.6.797
– ident: ref51
  doi: 10.3758/s13423-017-1323-7
– ident: ref26
  doi: 10.1002/sim.5466
– ident: ref28
  doi: 10.3758/bf03193146
– ident: ref37
  doi: 10.1007/s004269900009
– ident: ref25
– ident: ref48
– ident: ref50
  doi: 10.1016/j.jmp.2016.01.002
– volume: 1
  start-page: 307
  issue: 8476
  year: 1986
  ident: ref52
  publication-title: Lancet
  doi: 10.1016/S0140-6736(86)90837-8
– ident: ref32
– ident: ref54
  doi: 10.1136/bmj.312.7039.1153
– ident: ref27
  doi: 10.1037/1040-3590.6.4.284
– ident: ref31
  doi: 10.7771/1932-6246.1167
– ident: ref29
  doi: 10.1371/journal.pone.0105825
– ident: ref17
  doi: 10.1080/13854046.2015.1054437
– ident: ref20
  doi: 10.1016/j.jalz.2008.07.003
– ident: ref16
  doi: 10.2196/jmir.4862
– ident: ref47
  doi: 10.1037//0033-2909.86.2.420
– ident: ref21
– ident: ref43
  doi: 10.1371/journal.pone.0073990
– ident: ref36
  doi: 10.1016/0028-3932(94)00098-a
– ident: ref41
  doi: 10.1177/1094428112470848
– ident: ref1
  doi: 10.3389/fpsyg.2015.01652
– ident: ref7
  doi: 10.1037/0003-066X.59.2.93
– ident: ref46
  doi: 10.1016/j.jcm.2016.02.012
– ident: ref49
  doi: 10.1016/j.jml.2007.12.005
– ident: ref55
  doi: 10.11613/BM.2015.015
SSID ssj0020491
Score 2.5562644
SourceID doaj
pubmedcentral
proquest
pubmed
crossref
SourceType Open Website
Open Access Repository
Aggregation Database
Index Database
Enrichment Source
StartPage e16792
SubjectTerms Access
Acknowledgment
Adult
Agreements
Automation
Bayesian analysis
Bias
Clinical assessment
Clinical research
Clinical trials
Cognition - physiology
Cognitive ability
Cognitive functioning
Computer peripherals
Computerization
Costs
Emotion recognition
Episodic memory
Equivalence
Female
Flexibility
Humans
Interactive computer systems
Internet
Internet - standards
Laboratories
Laboratories - standards
Male
Measurement
Neuropsychological assessment
Neuropsychological Tests - standards
Neuropsychology
Original Paper
Pattern recognition
Power
Quantitative genetics
Reaction time
Recruitment
Reliability
Reproducibility of Results
Software
Sustained attention
Within-subjects design
Title Comparing Web-Based and Lab-Based Cognitive Assessment Using the Cambridge Neuropsychological Test Automated Battery: A Within-Subjects Counterbalanced Study
URI https://www.ncbi.nlm.nih.gov/pubmed/32749999
https://www.proquest.com/docview/2512750641
https://www.proquest.com/docview/2430658687
https://pubmed.ncbi.nlm.nih.gov/PMC7435628
https://doaj.org/article/5445ee6731974653805565fc3e9488ad
Volume 22
hasFullText 1
inHoldings 1
isFullTextHit
isPrint