The Dynamic Ebbinghaus: motion dynamics greatly enhance the classic contextual size illusion

Bibliographic Details
Published in: Frontiers in Human Neuroscience, Vol. 9, p. 77
Main Authors: Mruczek, Ryan E. B.; Blair, Christopher D.; Strother, Lars; Caplovitz, Gideon P.
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Research Foundation (Frontiers Media S.A.), 18.02.2015
Summary: The Ebbinghaus illusion is a classic example of the influence of a contextual surround on the perceived size of an object. Here, we introduce a novel variant of this illusion called the Dynamic Ebbinghaus illusion, in which the size and eccentricity of the surrounding inducers modulate dynamically over time. Under these conditions, the size of the central circle is perceived to change in opposition to the size of the inducers. Interestingly, this illusory effect is relatively weak when participants fixate a stationary central target, less than half the magnitude of the classic static illusion. However, when the entire stimulus translates in space, requiring a smooth pursuit eye movement to track the target, the illusory effect is greatly enhanced, almost twice the magnitude of the classic static illusion. A variety of manipulations, including target motion, peripheral viewing, and smooth pursuit eye movements, all lead to dramatic illusory effects, with the largest effect nearly four times the strength of the classic static illusion. We interpret these results in light of the fact that motion-related manipulations lead to uncertainty in the image size representation of the target, specifically due to added noise at the level of the retinal input. We propose that the neural circuits integrating visual cues for size perception, such as retinal image size, perceived distance, and various contextual factors, weight each cue according to the level of noise or uncertainty in its neural representation. Thus, more weight is given to the influence of contextual information in deriving perceived size in the presence of stimulus and eye motion. Biologically plausible models of size perception should be able to account for the reweighting of different visual cues under varying levels of certainty.
Reviewed by: Daniele Zavagno, University of Milano-Bicocca, Italy; Dejan Todorovic, University of Belgrade, Serbia
Edited by: Baingio Pinna, University of Sassari, Italy
This article was submitted to the journal Frontiers in Human Neuroscience.
ISSN: 1662-5161
DOI: 10.3389/fnhum.2015.00077