Facial Recognition Using Simulated Prosthetic Pixelized Vision
Published in | Investigative Ophthalmology & Visual Science, Vol. 44, No. 11, pp. 5035-5042 |
Main Authors | Thompson, Robert W., Jr; Barnett, G. David; Humayun, Mark S; Dagnelie, Gislin |
Format | Journal Article |
Language | English |
Published | Rockville, MD: ARVO (Association for Research in Vision and Ophthalmology), 01.11.2003 |
Subjects | |
ISSN | 0146-0404; 1552-5783 |
DOI | 10.1167/iovs.03-0341 |
Abstract | Purpose: To evaluate a model of simulated pixelized prosthetic vision using noncontiguous circular phosphenes and to test the effects of phosphene and grid parameters on facial recognition.
Methods: A video headset was used to view a reference set of four faces, followed by a partially averted image of one of those faces viewed through a square pixelizing grid containing 10 × 10 to 32 × 32 dots separated by gaps. Grid size, dot size, gap width, dot dropout rate, and gray-scale resolution were varied separately about a standard test condition, for a total of 16 conditions. All tests were first performed at 99% contrast and then repeated at 12.5% contrast.
Results: Discrimination speed and performance were influenced by all stimulus parameters. The subjects achieved highly significant facial recognition accuracy for all high-contrast tests except for grids with 70% random dot dropout and two gray levels. In low-contrast tests, significant facial recognition accuracy was achieved for all but the most adverse grid parameters: total grid area less than 17% of the target image, 70% dropout, four or fewer gray levels, and a gap of 40.5 arcmin. For difficult test conditions, a pronounced learning effect was observed during high-contrast trials, and a more subtle practice effect on timing was evident during subsequent low-contrast trials.
Conclusions: These findings suggest that reliable face recognition with crude pixelized grids can be learned and may be possible even with a crude visual prosthesis. |
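The Methods paragraph describes the pixelizing transform precisely enough to sketch it in code. The following Python fragment is a minimal illustration under assumed parameter names (grid_size, dot_diameter, gap, dropout, gray_levels), not the authors' stimulus software: it samples a grayscale image on a square grid of circular phosphenes, quantizes each dot to a fixed number of gray levels, and randomly removes a fraction of dots.

```python
# Minimal sketch of a pixelized-vision simulation of the kind described in the
# Methods; the image is a 2-D NumPy array of luminances in [0, 1]. All names
# and default values are illustrative assumptions, not the authors' software.
import numpy as np


def pixelize(image, grid_size=25, dot_diameter=9, gap=3,
             dropout=0.0, gray_levels=8, rng=None):
    """Render `image` as a grid_size x grid_size array of circular phosphenes."""
    rng = np.random.default_rng() if rng is None else rng
    cell = dot_diameter + gap                       # pixel pitch of one grid cell
    out = np.zeros((grid_size * cell, grid_size * cell))

    # Boundaries that map each grid cell to a patch of the source image.
    ys = np.linspace(0, image.shape[0], grid_size + 1).astype(int)
    xs = np.linspace(0, image.shape[1], grid_size + 1).astype(int)

    # Circular mask for one dot, centered in its cell.
    yy, xx = np.mgrid[0:cell, 0:cell]
    r = dot_diameter / 2.0
    mask = (yy - (cell - 1) / 2) ** 2 + (xx - (cell - 1) / 2) ** 2 <= r ** 2

    keep = rng.random((grid_size, grid_size)) >= dropout
    for i in range(grid_size):
        for j in range(grid_size):
            if not keep[i, j]:
                continue                            # simulated dead phosphene
            patch = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            level = patch.mean() if patch.size else 0.0
            # Quantize the dot luminance to the requested number of gray levels.
            level = np.round(level * (gray_levels - 1)) / (gray_levels - 1)
            out[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell][mask] = level
    return out


if __name__ == "__main__":
    # Synthetic test target: a luminance gradient with a bright disc.
    y, x = np.mgrid[0:256, 0:256]
    target = 0.3 + 0.4 * (x / 255.0)
    target[(y - 128) ** 2 + (x - 100) ** 2 < 40 ** 2] = 0.9

    dot_d, gap_px = 9, 3
    sim = pixelize(target, grid_size=16, dot_diameter=dot_d, gap=gap_px,
                   dropout=0.3, gray_levels=4)

    # Fraction of each grid cell covered by its circular dot (dot area / cell
    # area); the study varied the analogous dot-size and gap parameters.
    coverage = np.pi * (dot_d / 2) ** 2 / (dot_d + gap_px) ** 2
    print(sim.shape, f"dot coverage per cell = {coverage:.0%}")
```

Varying grid_size, dropout, and gray_levels in this sketch reproduces, qualitatively, the stimulus manipulations listed in the Methods.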
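The Results refer to recognition accuracy significantly above chance in a four-alternative identification task (chance = 25%). Purely as an illustration with made-up trial counts, a one-sided binomial test against that chance level can be computed as follows; this is not the statistical procedure reported in the paper.

```python
# Hedged illustration: one-sided binomial test of accuracy against the 25%
# chance level of a 4-alternative face-identification task. The trial and
# correct counts below are hypothetical, not data from the study.
from math import comb


def binom_tail(k, n, p=0.25):
    """P(X >= k) for X ~ Binomial(n, p): probability of doing this well by chance."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))


if __name__ == "__main__":
    n_trials, n_correct = 40, 22            # hypothetical example values
    print(f"{n_correct}/{n_trials} correct, p = {binom_tail(n_correct, n_trials):.2e}")
```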
Author | Thompson, Robert W., Jr; Barnett, G. David; Humayun, Mark S; Dagnelie, Gislin |
CODEN | IOVSDA |
ContentType | Journal Article |
Copyright | 2004 INIST-CNRS |
DOI | 10.1167/iovs.03-0341 |
DatabaseName | CrossRef Pascal-Francis Medline MEDLINE MEDLINE (Ovid) MEDLINE MEDLINE PubMed MEDLINE - Academic |
DatabaseTitle | CrossRef MEDLINE Medline Complete MEDLINE with Full Text PubMed MEDLINE (Ovid) MEDLINE - Academic |
DatabaseTitleList | MEDLINE MEDLINE - Academic |
Database_xml | – sequence: 1 dbid: NPM name: PubMed url: https://proxy.k.utb.cz/login?url=http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed sourceTypes: Index Database – sequence: 2 dbid: EIF name: MEDLINE url: https://proxy.k.utb.cz/login?url=https://www.webofscience.com/wos/medline/basic-search sourceTypes: Index Database |
DeliveryMethod | fulltext_linktorsrc |
Discipline | Medicine |
EISSN | 1552-5783 |
EndPage | 5042 |
Genre | Research Support, U.S. Gov't, Non-P.H.S.; Research Support, U.S. Gov't, P.H.S.; Research Support, Non-U.S. Gov't; Journal Article
GrantInformation | NEI NIH HHS, grant R01 EY012843
ISSN | 0146-0404 1552-5783 |
Issue | 11 |
Keywords | Face; Prosthesis; Vision; Recognition
Language | English |
License | CC BY 4.0 |
OpenAccessLink | https://iovs.arvojournals.org/arvo/content_public/journal/iovs/933433/7g1103005035.pdf |
PMID | 14578432 |
PageCount | 8 |
PublicationDate | 2003-11-01 |
PublicationPlace | Rockville, MD |
PublicationTitle | Investigative ophthalmology & visual science |
PublicationTitleAlternate | Invest Ophthalmol Vis Sci |
PublicationYear | 2003 |
Publisher | ARVO (Association for Research in Vision and Ophthalmology)
StartPage | 5035 |
SubjectTerms | Adult; Biological and medical sciences; Computer Simulation; Contrast Sensitivity; Face; Female; Form Perception - physiology; Humans; Male; Medical sciences; Ophthalmology; Pattern Recognition, Visual - physiology; Phosphenes - physiology; Prostheses and Implants; Sensory Aids; Vision disorders
Title | Facial Recognition Using Simulated Prosthetic Pixelized Vision |
URI | http://www.iovs.org/cgi/content/abstract/44/11/5035 https://www.ncbi.nlm.nih.gov/pubmed/14578432 https://www.proquest.com/docview/71301321 |
Volume | 44 |