Same-different conceptualization: a machine vision perspective
| Published in | Current Opinion in Behavioral Sciences, Vol. 37, pp. 47–55 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Elsevier Ltd, 01.02.2021 |
Summary:

- The state of machine vision in modeling same-different discrimination is reviewed.
- Recent computational evidence implicating attentional and mnemonic processes in the representation of visual relations is provided.
- Connections are drawn between same-different reasoning and the more general machine learning discipline of 'visual question answering' (VQA). We also note the similarities between this recent line of modeling and older psychophysics work investigating connections between linguistic and visual representations of relations.
- Future directions for the computational modeling of same-different discrimination are outlined.
The goal of this review is to bring together material from cognitive psychology with recent machine vision studies to identify plausible neural mechanisms for visual same-different discrimination and relational understanding. We highlight how developments in the study of artificial neural networks provide computational evidence implicating attention and working memory in ascertaining visual relations, including same-different relations. We review recent attempts to incorporate these mechanisms into flexible models of visual reasoning, giving particular attention to models jointly trained on visual and linguistic information. These systems are promising, but they still fall short of the biological standard in several ways, which we outline in a final section.
ISSN: 2352-1546, 2352-1554
DOI: 10.1016/j.cobeha.2020.08.008