
Supplementary Figure 1. Localization of face patches. (a) Sagittal slice showing the location of fMRI-identified face patches in one monkey targeted for recording; the dark black line indicates the electrode. Stereotactic coordinates for the location of ML were as follows: M1, AP 4.50, ML 25.5 (right hemisphere); M2, AP 4.50, ML 27.43 (right hemisphere); M3, AP 4.00, ML -26.2 (left hemisphere). (b) Neuronal responses (baseline-subtracted, averaged from 60 to 220 ms) to images of different categories recorded from ML. (c) Distribution of FSI (see Methods) across all visually responsive neurons. Dotted lines indicate FSI = 0.33.
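
If FSI denotes a standard face-selectivity index of the form FSI = (R_face - R_object) / (R_face + R_object), where R_face and R_object are the mean responses to faces and to non-face objects (an assumption for illustration only; see Methods for the exact definition used), then the dotted lines at FSI = 0.33 correspond to neurons whose face response is roughly twice their object response, since (2R - R) / (2R + R) = 1/3.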

Supplementary Figure 2. Stimuli for the face-object pair experiment. The face images are from the FERET face database. The object images shown here are similar to the objects actually used, but for copyright reasons we cannot show the originals. The grill image is adapted from an image on flickr.com (https://www.flickr.com/photos/fictures/14808488890/).

Supplementary Figure 3. Main results shown separately for three monkeys. (a) Results of face-object experiment for monkeys M1, M2, M3. Conventions as in Figure 2a, d, g, j. (b) Results of face-face experiment for monkeys M1, M2, M3. Conventions as in Figure 4b, d.

Supplementary Figure 4. The integration rule for face-object pairs does not depend on face selectivity. (a) The distribution of face selectivity, defined as FSI, across the population in the horizontal configuration. Neurons were classified into three groups: (1) low face selectivity (FSI < 0.33, black), (2) medium face selectivity (0.33 < FSI < 0.66), and (3) high face selectivity (FSI > 0.66, white). (b) The population mean response of each group defined in (a). Conventions as in Figure 2a, d. (c) Same as (a), for the vertical configuration. (d) The population mean response of each group defined in (c). Conventions as in Figure 2g, j.

Supplementary Figure 5. Response time course for face-object pairs. (a) The population mean response of neurons in ML to face-object pairs with the face presented in the contralateral visual field and the object presented in the ipsilateral visual field. Black lines represent the responses to the face-object pairs across time. Red (blue) lines represent the responses across time to the constituent face (object) of the corresponding face-object pair when presented in isolation. The 5 rows represent 5 different contrast levels. (b) Same as (a), for the condition in which faces were presented in the ipsilateral visual field and objects were presented in the contralateral visual field. (c) Same as (a), for the condition in which faces were presented in the upper visual field and objects were presented in the lower visual field. (d) Same as (a), for the condition in which faces were presented in the lower visual field and objects were presented in the upper visual field. (e) The face weight (see Methods) computed across time at five relative contrast levels (from top to bottom) with the face presented in the contralateral visual field and the object presented in the ipsilateral visual field. Note that some time points are missing because the difference between the responses to the face and to the object was too small (< 0.04), making the weight estimation meaningless. (f) Same as (e), for the condition in which faces were presented in the ipsilateral visual field and objects were presented in the contralateral visual field. (g) Same as (e), for the condition in which faces were presented in the upper visual field and objects were presented in the lower visual field. (h) Same as (e), for the condition in which faces were presented in the lower visual field and objects were presented in the upper visual field.
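
As a point of reference for (e)-(h): under a weighted-average description of the pair response, R_pair = w * R_face + (1 - w) * R_object, the face weight can be recovered as w = (R_pair - R_object) / (R_face - R_object). This is only a sketch of the presumed computation (see Methods for the actual procedure), but it makes clear why time points with |R_face - R_object| < 0.04 were excluded: the denominator approaches zero and the weight estimate becomes unstable.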

Supplementary Figure 6. The correlation between response magnitudes to pairs of faces and to isolated faces. (a) Response magnitudes to pairs of faces versus to isolated faces (left: contralateral face, right: ipsilateral face) for one example cell, with faces in the horizontal configuration. (b) Response magnitudes to pairs of faces versus to isolated faces (left: upper face, right: lower face) for one example cell, with faces in the vertical configuration. (c) Scatter plot of the correlation between responses to pairs of faces and to contralateral isolated faces versus the correlation between responses to pairs of faces and to ipsilateral isolated faces, across all neurons. Blue indicates neurons that showed a significant correlation between the responses to pairs of faces and to contralateral isolated faces; black indicates neurons that showed no significant correlation of either kind (p < 0.05, Bonferroni corrected). (d) Scatter plot of the correlation between responses to pairs of faces and to isolated faces above fixation versus the correlation between responses to pairs of faces and to isolated faces below fixation, across all neurons. Blue indicates neurons that showed a significant correlation only between the responses to pairs of faces and isolated faces above fixation; red indicates neurons that showed a significant correlation only between responses to pairs of faces and isolated faces below fixation; magenta indicates neurons that showed significant correlations of both kinds (p < 0.05, Bonferroni corrected).
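
The following Python fragment is a minimal sketch of the significance test used in (c) and (d), for illustration only; the function and variable names are assumptions, not the authors' code. Each per-neuron correlation is computed with a Pearson test and its p-value Bonferroni-corrected for the number of comparisons.

from scipy.stats import pearsonr

def corr_with_bonferroni(pair_resp, isolated_resp, n_tests, alpha=0.05):
    # Correlate a neuron's responses to face pairs with its responses to the
    # corresponding isolated faces; correct the p-value for n_tests comparisons.
    r, p = pearsonr(pair_resp, isolated_resp)
    return r, min(p * n_tests, 1.0) < alpha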

Supplementary Figure 7. The normalization model using LFP amplitudes as weights predicts neural responses. (a) The normalization model fit using LFP amplitudes as weights for the face-object experiment when faces and objects were presented horizontally. Lines indicate the model fit; circles indicate observed data (data same as Figure 2b, e). (b) Same as (a), for faces and objects presented vertically (data same as Figure 2h, k). (c) The correlation between the observed responses and the normalization model fit when faces and objects were presented horizontally. We added a free LFP baseline parameter to obtain the model fits (i.e., the weights in the normalization model were calculated as the LFP amplitude minus the baseline). (d) Same as (c), for faces and objects presented vertically.
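
A minimal Python sketch of an LFP-weighted normalization fit of the kind described in (c), for illustration only; the array names (r_face, r_obj, r_pair, lfp_face, lfp_obj) and the use of scipy's curve_fit are assumptions, not the authors' code. The single free parameter is the LFP baseline subtracted before the amplitudes are used as weights.

import numpy as np
from scipy.optimize import curve_fit

def norm_model(x, baseline):
    # Weighted-average (normalization) prediction of the pair response:
    # the weights are the LFP amplitudes minus a free baseline parameter.
    r_face, r_obj, lfp_face, lfp_obj = x
    w_face = lfp_face - baseline
    w_obj = lfp_obj - baseline
    return (w_face * r_face + w_obj * r_obj) / (w_face + w_obj)

# Example use, assuming 1-d arrays over conditions (relative contrast levels):
# x = np.vstack([r_face, r_obj, lfp_face, lfp_obj])
# (baseline_fit,), _ = curve_fit(norm_model, x, r_pair, p0=[0.0])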

Supplementary Figure 8. Integration rule for the subset of face cells showing a greater response to contralateral objects than to ipsilateral faces. The black line represents the response to the face-object pair at different relative contrasts. The red (blue) line represents the response to the constituent face (object) of the corresponding face-object pair when presented in isolation.

Supplementary Figure 9. Simulation of object decoding as a function of integration rule (winner-take-all or averaging), readout strategy (sparse or full), and noise level. (a) Hypothetical one-dimensional object space. (b) Decoding accuracy (averaged across three classifiers: A/not A, B/not B, and C/not C) as a function of noise level, for a winner-take-all rule (red curves) and an averaging rule (blue curves). Filled circles indicate performance when reading out all neurons; open circles indicate performance when reading out only neurons selective for the object being detected. The winner-take-all rule always outperformed the averaging rule, with an especially pronounced difference for the sparse readout.
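
The following Python fragment is a minimal sketch of the kind of simulation summarized here, for illustration only; the tuning curves, noise model, and parameter values are assumptions rather than the authors' settings. It shows how single-object responses can be combined into a pair response under a winner-take-all versus an averaging rule before noise is added and a classifier (e.g., A/not A) is trained.

import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100
pref = rng.uniform(0, 1, n_neurons)                # preferred position in a 1-d object space
obj_pos = np.array([0.2, 0.5, 0.8])                # hypothetical objects A, B, C
resp = np.exp(-(pref[:, None] - obj_pos[None, :])**2 / 0.02)   # isolated-object responses

def pair_response(i, j, rule, noise_sd):
    # Combine responses to objects i and j under the chosen integration rule,
    # then add Gaussian noise with the given standard deviation.
    if rule == "wta":
        r = np.maximum(resp[:, i], resp[:, j])     # winner-take-all
    else:
        r = 0.5 * (resp[:, i] + resp[:, j])        # averaging
    return r + rng.normal(0.0, noise_sd, n_neurons)

# A "sparse" readout would restrict the classifier to neurons selective for the
# detected object (e.g., pref near obj_pos[0] for the A/not A classifier);
# a "full" readout uses all n_neurons.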

Supplementary Figure 10. Decoding face features in pairs of faces across different spatial configurations. (a) For PC1 (i.e., dimension 1 of our 6-d face space), we plot the predicted versus actual feature value, for a face presented contralaterally or ipsilaterally in a horizontal pair, and for a face presented above or below fixation in a vertical pair. The predicted values were determined according to the decoding model described in the Methods section ("A model of face decoding for pairs of faces"). (b) The slope of the best-fit line between predicted and actual feature values for each of the six dimensions of our 6-d face space, for all four spatial conditions.
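
A minimal Python sketch of how the slopes in (b) could be computed, assuming a generic linear decoder trained on responses to isolated faces and applied to responses to faces within pairs; this is for illustration only and is not the authors' decoding model (see Methods). All arrays below are random placeholders.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
resp_isolated = rng.normal(size=(200, 50))   # placeholder: (trials, neurons) responses to isolated faces
feat_isolated = rng.normal(size=(200, 6))    # placeholder: 6-d face-space coordinates of those faces
resp_pairs    = rng.normal(size=(200, 50))   # placeholder: responses to pairs (one face decoded per pair)
feat_actual   = rng.normal(size=(200, 6))    # placeholder: actual features of the decoded face

dec = LinearRegression().fit(resp_isolated, feat_isolated)                     # neurons -> face features
pred = dec.predict(resp_pairs)                                                 # predicted features for faces in pairs
slopes = [np.polyfit(feat_actual[:, d], pred[:, d], 1)[0] for d in range(6)]   # slope of best-fit line per dimension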