Neural Decoding of Visual Object Position from Human fMRI Data

Paul Sukhanov (1051207)


Visual information in the early primate visual system is known to be represented retinotopically, such that adjoining points in the visual field are represented by neural activity at adjoining points on the cortical sheet. Although a rough form of retinotopy persists into higher visual cortex, how spatial information is encoded in higher areas has not been thoroughly explored. In particular, it is not well understood how the degree of retinotopy and the sizes of receptive fields in higher visual cortex constrain the accuracy of spatial encoding. In the current study we investigated these questions by decoding the spatial position of three-dimensional moving objects from fMRI activity recorded while human subjects observed the objects. This was achieved by training sparse linear regression models on neural activity from different areas of visual cortex and measuring the accuracy of the predicted object trajectories on a held-out test set of fMRI data. Notably, accurate predictions were obtained even from higher-level visual cortex (both ventral and dorsal). We then interpret these decoding accuracies in terms of the receptive field sizes known to exist in each area, and use a computational model to demonstrate how wide receptive fields can be compatible with accurate spatial encoding.
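
As a rough illustration of the decoding procedure described above, the sketch below shows how a sparse linear regression model might be trained to predict object position from voxel activity and then evaluated on held-out data. It is a minimal sketch, not the study's actual pipeline: it assumes scikit-learn's Lasso as the sparse regression, and the array shapes, regularization strength, train/test split, and synthetic data standing in for real fMRI recordings are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical data: X holds fMRI voxel activity (n_volumes x n_voxels)
# from one visual area; y holds the object's position (e.g., horizontal
# screen coordinate) at the corresponding time points.
rng = np.random.default_rng(0)
n_volumes, n_voxels = 600, 2000
X = rng.standard_normal((n_volumes, n_voxels))
true_w = np.zeros(n_voxels)
true_w[:50] = rng.standard_normal(50)   # only a few voxels carry signal
y = X @ true_w + 0.5 * rng.standard_normal(n_volumes)

# Train on the earlier volumes, hold out the remainder as a test set.
split = 480
model = Lasso(alpha=0.1)                # L1 penalty enforces a sparse weight map
model.fit(X[:split], y[:split])

# Decode the object trajectory from test-set activity and score it
# by correlation with the true trajectory.
y_pred = model.predict(X[split:])
r = np.corrcoef(y_pred, y[split:])[0, 1]
print(f"trajectory correlation on held-out data: r = {r:.2f}")
```

In practice one model per visual area (and per spatial coordinate) would be fit in this way, and comparing the held-out trajectory correlations across areas gives the area-by-area decoding accuracies the study reports.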