Mice use robust and common strategies to discriminate natural scenes


Bibliographic Details
Published in: bioRxiv
Main Authors: Yu, Yiyi; Hira, Riichiro; Stirman, Jeffrey N; Yu, Waylin; Smith, Ikuko T; Smith, Spencer L
Format: Paper
Language: English
Published: Cold Spring Harbor: Cold Spring Harbor Laboratory Press, 13.07.2017

Summary: Mice use vision to navigate and avoid predators in natural environments. However, the spatial resolution of mouse vision is poor compared to primates, and mice lack a fovea. Thus, it is unclear how well mice can discriminate ethologically relevant scenes. Here, we examined natural scene discrimination in mice using an automated touch-screen system. We estimated the discrimination difficulty using the computational metric structural similarity (SSIM), and constructed psychometric curves. However, the performance of each mouse was better predicted by the population mean than SSIM. This high inter-mouse agreement indicates that mice use common and robust strategies to discriminate natural scenes. We tested several other image metrics to find an alternative to SSIM for predicting discrimination performance. We found that a simple, primary visual cortex (V1)-inspired model predicted mouse performance with fidelity approaching the inter-mouse agreement. The model involved convolving the images with Gabor filters, and its performance varied with the orientation of the Gabor filter. This orientation dependence was driven by the stimuli, rather than an innate biological feature. Together, these results indicate that mice are adept at discriminating natural scenes, and their performance is well predicted by simple models of V1 processing.
DOI: 10.1101/156653
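The V1-inspired approach mentioned in the summary, convolving images with oriented Gabor filters and comparing the filter responses, can be sketched as follows. This is a minimal illustration of the general technique, not the authors' actual model; the filter parameters (size, wavelength, bandwidth, set of orientations) and the `gabor_dissimilarity` helper are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, theta=0.0, lam=6.0, sigma=3.0, gamma=0.5):
    """Real-valued Gabor filter: a Gaussian envelope times a cosine grating.

    theta is the filter orientation in radians; lam is the grating
    wavelength in pixels. All defaults are illustrative, not from the paper.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the grating is oriented at angle theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam)
    return envelope * carrier

def gabor_dissimilarity(img_a, img_b,
                        thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Distance between two images in a crude Gabor-response feature space.

    For each orientation, convolve the image with the corresponding Gabor
    filter and keep the mean rectified response; the dissimilarity is the
    Euclidean distance between the resulting response vectors.
    """
    feats = []
    for img in (img_a, img_b):
        responses = [
            np.abs(convolve2d(img, gabor_kernel(theta=t), mode="valid")).mean()
            for t in thetas
        ]
        feats.append(np.array(responses))
    return float(np.linalg.norm(feats[0] - feats[1]))
```

Because each orientation contributes its own feature, dropping or reweighting entries of `thetas` makes the metric orientation-dependent, which is the kind of dependence the summary attributes to the stimuli themselves.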