From Sensory Signals to Modality-Independent Conceptual Representations: A Probabilistic Language of Thought Approach


Bibliographic Details
Published in: PLoS Computational Biology, Vol. 11, No. 11, p. e1004610
Main Authors: Erdogan, Goker; Yildirim, Ilker; Jacobs, Robert A.
Format: Journal Article
Language: English
Published: United States: Public Library of Science (PLoS), 01.11.2015
ISSN: 1553-734X (print); 1553-7358 (electronic)
DOI: 10.1371/journal.pcbi.1004610


More Information
Summary: People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models, that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model's percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects' ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception.
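The three components named in the summary (a representational language, sensory-specific forward models, and an inference algorithm that inverts them) can be illustrated with a deliberately toy sketch. This is not the authors' model: the one-dimensional "visual" and "haptic" features, the part-list shape representation, and the independence Metropolis-Hastings sampler below are illustrative stand-ins for the probabilistic grammar, graphics/hand-simulator forward models, and Bayesian inference procedure the abstract describes.

```python
import math
import random

random.seed(0)

# Toy modality-independent representation: a list of part sizes drawn from a
# simple stochastic "grammar": each part spawns another part with prob 0.5.
def sample_shape():
    parts = [random.uniform(0.5, 2.0)]
    while random.random() < 0.5 and len(parts) < 5:
        parts.append(random.uniform(0.5, 2.0))
    return parts

# Sensory-specific forward models (stand-ins for a graphics toolkit and a
# hand simulator): deterministic feature maps from shape to sensory signals.
def visual_features(parts):   # e.g. projected silhouette area
    return sum(p ** 2 for p in parts)

def haptic_features(parts):   # e.g. total graspable extent
    return sum(parts)

def log_likelihood(parts, obs, sigma=0.1):
    ll = 0.0
    if "visual" in obs:
        ll -= (visual_features(parts) - obs["visual"]) ** 2 / (2 * sigma ** 2)
    if "haptic" in obs:
        ll -= (haptic_features(parts) - obs["haptic"]) ** 2 / (2 * sigma ** 2)
    return ll

# Inference by inverting the forward models: independence Metropolis-Hastings
# with proposals drawn from the grammar prior, so the acceptance ratio
# reduces to the likelihood ratio.
def infer(obs, steps=5000):
    current = sample_shape()
    cur_ll = log_likelihood(current, obs)
    for _ in range(steps):
        proposal = sample_shape()
        prop_ll = log_likelihood(proposal, obs)
        if random.random() < math.exp(min(0.0, prop_ll - cur_ll)):
            current, cur_ll = proposal, prop_ll
    return current

# Observe a "true" shape through both modalities, then recover it.
true_shape = [1.0, 1.5]
obs = {"visual": visual_features(true_shape),
       "haptic": haptic_features(true_shape)}
estimate = infer(obs)
```

Because the same latent shape is inferred whether `obs` carries visual features, haptic features, or both, the recovered representation is modality independent in the sense the summary describes; the paper's model realizes the same scheme with a far richer shape grammar and realistic forward models.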
Author Contributions: Conceived and designed the experiments: GE IY RAJ. Performed the experiments: GE. Analyzed the data: GE. Contributed reagents/materials/analysis tools: GE IY. Wrote the paper: GE IY RAJ.
The authors have declared that no competing interests exist.