Online decoding of object-based attention using real-time fMRI

Bibliographic Details
Published in: The European Journal of Neuroscience, Vol. 39, No. 2, pp. 319-329
Main Authors: Niazi, Adnan M.; van den Broek, Philip L. C.; Klanke, Stefan; Barth, Markus; Poel, Mannes; Desain, Peter; van Gerven, Marcel A. J.
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.01.2014

Summary: Visual attention is used to selectively filter relevant information depending on current task demands and goals. Visual attention is called object-based attention when it is directed to coherent forms or objects in the visual field. This study used real-time functional magnetic resonance imaging (fMRI) for moment-to-moment decoding of attention to spatially overlapping objects belonging to two different object categories. First, a whole-brain classifier was trained on pictures of faces and places. Subjects then saw transparently overlapped pictures of a face and a place, and attended to only one of them while ignoring the other. The category of the attended object, face or place, was decoded on a scan-by-scan basis using the previously trained decoder. The decoder performed at 77.6% accuracy, indicating that, despite competing bottom-up sensory input, object-based visual attention biased neural patterns towards those of the attended object. Furthermore, a comparison between different classification approaches indicated that the representation of faces and places is distributed rather than focal. This implies that real-time decoding of object-based attention requires a multivariate decoding approach that can detect these distributed patterns of cortical activity. The study used real-time fMRI to decode attention to objects belonging to two distinct categories, faces and places. Subjects saw superimposed pictures of a face and a place and attended to one of them. The category of the attended object was decoded in real time and used to provide neurofeedback to the subject by enhancing the attended picture. The attended object was decoded with high accuracy.
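
The pipeline described in the summary — train a whole-brain multivariate classifier on face and place volumes, then classify each incoming volume during the attention runs — can be illustrated with a minimal sketch. The data shapes, the scikit-learn logistic-regression decoder, and all variable names below are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of scan-by-scan multivariate decoding (not the authors' code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic "training" data: n_scans x n_voxels whole-brain patterns from the
# face/place training runs. Shapes are illustrative only.
n_scans, n_voxels = 200, 5000
X_train = rng.standard_normal((n_scans, n_voxels))
y_train = rng.integers(0, 2, n_scans)          # 0 = face, 1 = place

decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoder.fit(X_train, y_train)

# During the attention runs, each incoming volume (one TR) is classified online.
new_volume = rng.standard_normal((1, n_voxels))
p_place = decoder.predict_proba(new_volume)[0, 1]
attended = "place" if p_place > 0.5 else "face"
print(f"Decoded attended category: {attended} (P(place) = {p_place:.2f})")
```

In a real-time setting, the block classifying `new_volume` would run once per acquired volume, with the decoded label driving the experiment logic; here random data stand in for preprocessed fMRI patterns.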
Bibliography: ark:/67375/WNG-VLB83G7S-B
Article ID: EJN12405
Supplementary material:
Fig. S1. A basis set of 15 face-place pairs used in the decoding phase. Each pair was used twice in each condition, once with the face picture set as target and once with the place picture set as target. Note: copyrighted pictures used in the original experiment have been replaced in the graphic by non-copyrighted look-alikes.
Fig. S2. A graph-based visual saliency algorithm was used to select the face-place pairs. Saliency of the 50/50 hybrid and of each of its constituents was assessed, and only those pairs were selected for which the 50/50 hybrid had an equal number of salient points for the face and the place picture.
Fig. S3. Stimulus timeline. (A) Example of an attend-face trial in the non-feedback condition. (B) Example of an attend-place trial in the feedback condition. After the cues had been presented, the face-place hybrid image was updated every TR depending on the classification result of the preceding TR (see the sketch after this list).
Fig. S4. List of all brain regions from which voxels were selected by the MVA-W classifier for training. Only regions that were activated across three or more subjects were used for further analyses.
Fig. S5. (A) Absolute number of voxels selected in the regions used by the classifier for training, averaged across the group. (B) Percentage of voxels used per region, averaged across the group. Error bars show standard error of the mean.
Fig. S6. (A) Decoding accuracy as a function of TR for the feedback and non-feedback conditions, and for the attend-face and attend-place trials that constitute these two conditions. Filled round markers represent significantly above-chance decoding (P < 0.05), whereas empty markers represent decoding not significantly above chance (P > 0.05). (B) Mean decoding accuracy. Error bars indicate standard error of the mean.
Fig. S7. Comparison of percent signal change in the feedback and non-feedback conditions. (A) Percent signal change for attend-face trials in the feedback and non-feedback conditions. The top plots show percent signal change at every TR during a trial (including the 12-s rest period); the bottom plot shows the percent signal change aggregated over the 12 TRs. (B) Percent signal change for attend-place trials in the feedback and non-feedback conditions. Error bars represent standard error of the mean.
Fig. S8. Comparison of the decoder's prediction probabilities for the feedback and non-feedback conditions. (A) Prediction probability for the feedback and non-feedback conditions, including both successful and failed trials; no significant difference was found. (B) Prediction probability for successful trials only; the prediction probability was significantly higher for feedback trials than for non-feedback trials. (C) Prediction probability for failed trials only; the prediction probability was significantly lower for feedback trials than for non-feedback trials. Error bars represent standard error of the mean.
Fig. S9. (A) Average decoding performance for classifiers trained on the feedback and non-feedback conditions. The classifier trained on the feedback condition decoded with significantly higher accuracy than the classifier trained on the non-feedback condition. (B) Anatomical regions recruited by the classifiers trained on the feedback and non-feedback conditions.
Movie S1. The movie demonstrates an example of a trial in the feedback and non-feedback conditions. Furthermore, it shows the actual performance of one particular subject for all attend-face and attend-place trials in the feedback condition.
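
Fig. S3 notes that the face-place hybrid shown to the subject was re-blended every TR according to the classification result of the preceding TR. The following is a minimal sketch of such a per-TR feedback update, assuming a simple linear blend with a fixed step size; the blending rule, step size, and bounds are illustrative assumptions rather than the authors' exact procedure.

```python
# Hypothetical per-TR neurofeedback update: nudge the hybrid towards the
# decoded (attended) category. All numbers and images are stand-ins.
import numpy as np

def update_hybrid(face_img, place_img, face_weight, decoded_face, step=0.1):
    """Adjust the mixing weight towards the decoded category and re-blend."""
    face_weight += step if decoded_face else -step
    face_weight = float(np.clip(face_weight, 0.0, 1.0))
    hybrid = face_weight * face_img + (1.0 - face_weight) * place_img
    return hybrid, face_weight

rng = np.random.default_rng(1)
face = rng.random((64, 64))      # toy grayscale images standing in for the stimuli
place = rng.random((64, 64))
w = 0.5                          # start from the 50/50 hybrid

for tr in range(5):                              # one update per TR
    decoded_face = bool(rng.integers(0, 2))      # stand-in for the online decoder output
    hybrid, w = update_hybrid(face, place, w, decoded_face)
    print(f"TR {tr}: face weight = {w:.2f}")
```

In the actual experiment, the decoder output from the preceding TR would replace the random stand-in, so that sustained attention to one category progressively enhances that picture in the hybrid.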
University of Twente
istex:AC58585FC9B75C0C5AD67257B78EA6C26788243E
ISSN: 0953-816X, 1460-9568
DOI: 10.1111/ejn.12405