A computational model of the brain for decoding mental states from fMRI images

Alkan, Sarper
Brain decoding from brain images obtained using functional magnetic resonance imaging (fMRI) techniques is an important task for the identification of mental states and illnesses, as well as for the development of brain-machine interfaces. The most commonly used decoding methods employ multi-voxel pattern analysis (MVPA), which relies on selecting voxels (volumetric pixels) whose activity is relevant to the experimental tasks or stimuli of the fMRI experiments. While voxel-selection-based MVPA has proven to be an effective approach, we argue that an alternative approach exists, one that more closely resembles the processing hierarchy of the human brain for the processing and representation of mental states. In this study, we propose a hierarchical brain model for brain decoding. The proposed model first clusters a brain image into sets of voxels, which we call supervoxels, such that voxels with highly correlated activity fall into the same set. Using the supervoxels, we aim to capture the nervous activity of specialized brain regions, each assumed to process a distinct aspect of a given stimulus or mental task, such as the color, texture, or shape of a given visual object. We then combine the brain activity represented by each supervoxel using a method that we call Brain Region Ensembles (BRE) in order to decode mental states from fMRI images. Our analyses on multiple fMRI datasets show that BRE is much better suited to the classification of mental states from fMRI images than the classical voxel-selection methodology. Additionally, we show that BRE can be used to identify brain regions relevant to the experimental tasks or stimuli when the aim is to find regions with discriminative activity with respect to two different mental states.
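The supervoxel-plus-ensemble pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the thesis's exact algorithm: the greedy correlation-threshold clustering, the nearest-centroid per-region classifier, and the majority vote are all simplifying assumptions chosen to keep the example self-contained.

```python
import numpy as np

def make_supervoxels(X, threshold=0.8):
    """Greedily group voxels whose activity is highly correlated.

    X: (n_samples, n_voxels) array of fMRI responses.
    Returns a list of voxel-index arrays, one per supervoxel.
    (Simplified stand-in for the clustering step in the abstract.)
    """
    corr = np.corrcoef(X, rowvar=False)      # voxel-by-voxel correlations
    unassigned = set(range(X.shape[1]))
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        members = [seed]
        for v in list(unassigned):
            if corr[seed, v] >= threshold:   # correlated with the seed voxel
                members.append(v)
                unassigned.remove(v)
        clusters.append(np.array(members))
    return clusters

def nearest_centroid_fit(X, y):
    """Per-region classifier: one centroid per mental-state label."""
    labels = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in labels])
    return labels, centroids

def nearest_centroid_predict(X, labels, centroids):
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return labels[d.argmin(axis=1)]

def bre_fit(X, y, threshold=0.8):
    """Fit one classifier per supervoxel (the 'ensemble' of brain regions)."""
    clusters = make_supervoxels(X, threshold)
    return [(c, *nearest_centroid_fit(X[:, c], y)) for c in clusters]

def bre_predict(X, models):
    """Combine the region-level predictions by majority vote."""
    votes = np.stack([nearest_centroid_predict(X[:, c], lab, cen)
                      for c, lab, cen in models])   # (n_regions, n_samples)
    out = np.empty(X.shape[0], dtype=votes.dtype)
    for i in range(X.shape[0]):
        vals, counts = np.unique(votes[:, i], return_counts=True)
        out[i] = vals[counts.argmax()]
    return out

# Synthetic demo: 9 voxels in 3 latent "regions" of 3 voxels each.
# Two regions carry class information (in opposite directions); one is noise.
rng = np.random.default_rng(0)
n_per = 20
base = rng.normal(size=(2 * n_per, 3))           # one driving signal per region
X = np.repeat(base, 3, axis=1) + 0.05 * rng.normal(size=(2 * n_per, 9))
y = np.array([0] * n_per + [1] * n_per)
X[y == 1, 0:3] += 6.0                            # region 1 activates for class 1
X[y == 0, 3:6] += 6.0                            # region 2 activates for class 0

models = bre_fit(X, y, threshold=0.8)
acc = (bre_predict(X, models) == y).mean()
```

In this toy setup the correlation threshold recovers the three voxel groups as supervoxels, and the majority vote lets the two informative region classifiers outvote the uninformative one, which is the intuition behind combining region-level evidence rather than selecting individual voxels.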