Neuroimage. 2010 Apr 1;50(2):818-25. doi: 10.1016/j.neuroimage.2009.11.084. Epub 2009 Dec 11.

Reading the mind's eye: decoding category information during mental imagery

Leila Reddy et al. Neuroimage. 2010.

Abstract

Category information for visually presented objects can be read out from multi-voxel patterns of fMRI activity in ventral-temporal cortex. What is the nature and reliability of these patterns in the absence of any bottom-up visual input, for example, during visual imagery? Here, we first ask how well category information can be decoded for imagined objects and then compare the representations evoked during imagery and actual viewing. In an fMRI study, four object categories (food, tools, faces, buildings) were either visually presented to subjects, or imagined by them. Using pattern classification techniques, we could reliably decode category information (including for non-special categories, i.e., food and tools) from ventral-temporal cortex in both conditions, but only during actual viewing from retinotopic areas. Interestingly, in temporal cortex when the classifier was trained on the viewed condition and tested on the imagery condition, or vice versa, classification performance was comparable to within the imagery condition. The above results held even when we did not use information in the specialized category-selective areas. Thus, the patterns of representation during imagery and actual viewing are in fact surprisingly similar to each other. Consistent with this observation, the maps of "diagnostic voxels" (i.e., the classifier weights) for the perception and imagery classifiers were more similar in ventral-temporal cortex than in retinotopic cortex. These results suggest that in the absence of any bottom-up input, cortical back projections can selectively re-activate specific patterns of neural activity.
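The cross-decoding result described above (train on perception, test on imagery) can be illustrated with a minimal synthetic sketch. Everything here is an assumption for illustration: the voxel counts, noise levels, and the nearest-centroid decoder (a simple stand-in for the pattern classifier used in the study) are not from the paper; only the four category names and the P/I train-test logic are.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 20
categories = ["food", "tools", "faces", "buildings"]

# Hypothetical voxel "templates": each category evokes a distinct
# multi-voxel pattern; imagery re-activates a noisier version of it.
templates = {c: rng.normal(size=n_voxels) for c in categories}

def make_trials(noise):
    X, y = [], []
    for c in categories:
        for _ in range(n_trials):
            X.append(templates[c] + rng.normal(scale=noise, size=n_voxels))
            y.append(c)
    return np.array(X), np.array(y)

X_perc, y_perc = make_trials(noise=1.0)   # perception: stronger signal
X_imag, y_imag = make_trials(noise=2.5)   # imagery: noisier re-activation

# Nearest-centroid decoder (a stand-in for the paper's classifier):
# train on perception trials, test on imagery trials (the P/I case).
centroids = {c: X_perc[y_perc == c].mean(axis=0) for c in categories}

def classify(x):
    return min(categories, key=lambda c: np.linalg.norm(x - centroids[c]))

accuracy = np.mean([classify(x) == label for x, label in zip(X_imag, y_imag)])
print(f"P/I cross-decoding accuracy: {accuracy:.2f} (chance = 0.25)")
```

If imagery re-activates the same spatial pattern as perception (as the abstract argues for ventral-temporal cortex), a classifier trained only on perception generalizes to imagery well above the 25% four-way chance level.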


Figures

Figure 1
Experimental Design. The experiment consisted of two conditions. (A) In the visual perception (P) condition, subjects viewed different exemplars of 4 categories of objects: tools, food (common fruits and vegetables), famous faces, and famous buildings. (B) In the visual imagery (I) condition, subjects were given auditory instructions with the names of the stimuli and asked to generate vivid and detailed mental images corresponding to these names. Note that in the actual experiment colored stimuli were used (see Figure S2).
Figure 2
Confusion matrices for classification in (A) the intact object-responsive (OR) ROI, (B) the OR ROI with the FFA and PPA excluded, and (C) the retinotopic voxels. Each confusion matrix shows the probability with which an input pattern presented along the rows would be classified as one of the 4 alternative outcomes (along the columns). P/P and I/I correspond to classification performance when both training and testing were performed on the visual presentation conditions or the mental imagery conditions, respectively. P/I corresponds to training on visual presentation and testing on imagery (and vice versa for I/P).
Figure 3
Classification performance for the 4 types of classification pooled over categories in (A) object-responsive voxels, (B) OR voxels with the FFA and PPA excluded (OR−FFA&PPA), and (C) retinotopic voxels. “Scrambled voxels” corresponds to scrambling the voxel order of the test data relative to the training data, and “Shuffle labels” corresponds to shuffling the labels of the training examples. The performance values plotted here correspond to the mean of the diagonal values in the corresponding matrices in Figures 2 and S3 (* = p < 0.005). Note that since a 4-way classification was performed, chance performance is 25%. Error bars represent S.E.M.
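The two control analyses named in this caption can be sketched on synthetic data. This is a hedged illustration, not the paper's pipeline: the data are simulated, and a nearest-centroid decoder stands in for the actual classifier; only the logic of the “Shuffle labels” and “Scrambled voxels” controls follows the caption.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_per_cat, n_cats = 40, 24, 4

# Hypothetical data: one fixed pattern per category plus trial noise.
templates = rng.normal(size=(n_cats, n_voxels))
X = np.vstack([t + rng.normal(scale=1.5, size=(n_per_cat, n_voxels))
               for t in templates])
y = np.repeat(np.arange(n_cats), n_per_cat)

def centroid_accuracy(X_tr, y_tr, X_te, y_te):
    # Nearest-centroid decoder (stand-in for the paper's classifier).
    cents = np.stack([X_tr[y_tr == c].mean(axis=0) for c in range(n_cats)])
    dists = ((X_te[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return (dists.argmin(axis=1) == y_te).mean()

train = np.arange(len(y)) % 2 == 0        # simple split into halves
acc = centroid_accuracy(X[train], y[train], X[~train], y[~train])

# Control 1 ("Shuffle labels"): shuffled training labels carry no
# category information, so accuracy should drop toward 25% chance.
acc_shuf = centroid_accuracy(X[train], rng.permutation(y[train]),
                             X[~train], y[~train])

# Control 2 ("Scrambled voxels"): permuting the voxel order of the test
# data destroys the voxel correspondence the decoder relies on.
perm = rng.permutation(n_voxels)
acc_scram = centroid_accuracy(X[train], y[train],
                              X[~train][:, perm], y[~train])

print(f"intact: {acc:.2f}, shuffled labels: {acc_shuf:.2f}, "
      f"scrambled voxels: {acc_scram:.2f} (chance = 0.25)")
```

Both controls should hover near the 25% four-way chance level, while intact decoding stays well above it, mirroring the pattern the figure reports.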
Figure 4
Correlation of the SVM weight maps of the P and I classifiers for all pairs of categories in OR (A) and the retinotopic voxels (B).
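The weight-map comparison in this figure can also be sketched. Assumptions to flag: the data are synthetic; the weight vector here is the difference of class means (a simple linear-discriminant stand-in, since reproducing SVM training without extra libraries is beyond this sketch); only the idea of correlating a perception-trained and an imagery-trained weight map for one category pair is taken from the figure.

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels, n_trials = 60, 30

# Hypothetical fixed category patterns shared by perception and imagery;
# imagery runs are simulated as noisier re-activations of the same patterns.
t_faces, t_buildings = rng.normal(size=(2, n_voxels))

def weight_map(noise):
    # For a simple linear discriminant the weight vector is proportional
    # to the difference of class means -- a rough stand-in for SVM weights.
    faces = t_faces + rng.normal(scale=noise, size=(n_trials, n_voxels))
    builds = t_buildings + rng.normal(scale=noise, size=(n_trials, n_voxels))
    return faces.mean(axis=0) - builds.mean(axis=0)

w_P = weight_map(noise=1.0)   # "perception" classifier weights
w_I = weight_map(noise=2.5)   # "imagery" classifier weights

# Pearson correlation between the two weight maps (Figure 4's measure).
r = np.corrcoef(w_P, w_I)[0, 1]
print(f"correlation of P and I weight maps: r = {r:.2f}")
```

When both conditions draw on the same underlying voxel pattern, the two independently estimated weight maps correlate strongly, which is the signature the figure reports for ventral-temporal (OR) cortex.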
