The distributed representation of random and meaningful object pairs in human occipitotemporal cortex: The weighted average as a general rule

Annelies Baeck a,b, Johan Wagemans b, Hans P. Op de Beeck a
a Laboratory of Biological Psychology, University of Leuven (KU Leuven), Tiensestraat 102, 3000 Leuven, Belgium
b Laboratory of Experimental Psychology, University of Leuven (KU Leuven), Tiensestraat 102, 3000 Leuven, Belgium
Corresponding author. E-mail addresses: Annelies.baeck@ppw.kuleuven.be (A. Baeck), Johan.wagemans@ppw.kuleuven.be (J. Wagemans), Hans.OpdeBeeck@ppw.kuleuven.be (H.P. Op de Beeck).

NeuroImage 70 (2013). Accepted 13 December 2012; available online 22 December 2012. Keywords: fMRI, LOC, object recognition, object relations.

Abstract

Natural scenes typically contain multiple visual objects, often in interaction, such as when a bottle is used to fill a glass. Previous studies disagree about the representation of multiple objects and the role of object position therein, and they have not pinpointed the effect of potential interactions between the objects. In an fMRI study, we presented four single objects in two different positions, as well as object pairs consisting of all possible combinations of the single objects. Object pairs could form either a meaningful action configuration, in which the objects interact with each other, or a non-meaningful configuration. We found that for both single objects and object pairs, identity and position were represented in multi-voxel activity patterns in LOC. The response patterns of object pairs were best predicted by a weighted average of the response patterns of the constituent objects, with the strongest single-object response (the max response) weighted more than the min response. The difference in weight between the max and the min object was larger for familiar action pairs than for other pairs when participants attended to the configuration. A weighted average thus relates the response patterns of object pairs to the response patterns of single objects, even when the objects interact. © 2012 Elsevier Inc. All rights reserved.

Introduction

Object identity is considered to be extracted through hierarchical processing along the ventral object vision pathway. Ultimately, it is represented in distributed patterns of neural activity in the highest stages of that pathway, namely the inferior temporal cortex in monkeys and object-selective regions in human occipitotemporal cortex (e.g., DiCarlo and Cox, 2007; Haxby et al., 2001). Many studies have investigated neural responses to isolated objects (e.g., Carlson et al., 2003; Cichy et al., 2011; Kravitz et al., 2010; Logothetis and Sheinberg, 1996; Spiridon and Kanwisher, 2002; Tanaka, 1996), a situation atypical of the real world, where we usually see multiple objects at the same time. However, an increasing number of studies have revealed that the representation of a particular object is altered by the presence of other objects (e.g., Chelazzi et al., 1998; Miller et al., 1993; Reddy and Kanwisher, 2007; Rolls and Tovee, 1995; Zoccolan et al., 2005, 2007). The exact nature of the coding of displays containing multiple objects is not clear, because of discrepancies between existing studies and a lack of investigation of potentially relevant factors. Here we focus upon the simplest situation, namely displays containing two objects. We investigated one discrepancy, namely the exact relationship between the responses to object pairs and the
responses to their constituent objects, and two potentially relevant factors: the existence of an action relationship between the two objects of a pair and the task context.

Theoretically, the relationship between the representation of single objects and of object pairs could be captured by many functions, including the following four possibilities that describe how the response to the object pair relates to the responses to the constituent objects when presented in isolation: (i) a simple average of the responses to the two objects, (ii) a weighted average with more weight given to the object that elicits the strongest response in a neuron (the max object, as opposed to the min object), (iii) a nonlinear max operator, which is an extreme version of a weighted average with the weight for the min object set to 0, and (iv) any of many possible models in which the response to the object pair cannot easily be predicted from the responses to the two single objects.

At the moment we can only draw limited conclusions from the studies available in the literature, for the following reasons. First, studies are not consistent. There is evidence for nonlinear relationships (e.g., Heuer and Britten, 2002), with the max operator as the most prominent proposal. A couple of studies comparing responses to single objects and object pairs argued in favor of simple averaging (MacEvoy and Epstein, 2009; Zoccolan et al., 2005), while other studies found evidence for a weighted average with more weight given to the max object (Agam et al., 2010; Reddy et al., 2009). Zoccolan et al. (2007) provided one explanation for the discrepancy between simple averaging and weighted averaging by showing a negative relationship between clutter tolerance and stimulus selectivity, so that minimally selective neurons will tend to implement a max operator while very highly selective neurons (which were mostly studied by Zoccolan et al., 2005) implement a simple average.
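
To make the first three candidate rules concrete, here is a minimal illustrative sketch (Python; not the authors' code, and the function names are ours) of how a single unit's response to a pair would be predicted from its responses to the two constituent objects under each rule. The weights in the weighted-average example are the group-level LOC estimates reported later in the Results (.66 and .28); any other pair of weights would illustrate the same idea.

```python
def simple_average(r_a, r_b):
    # (i) the pair response is the unweighted mean of the two single-object responses
    return (r_a + r_b) / 2.0

def weighted_average(r_a, r_b, w_max=0.66, w_min=0.28):
    # (ii) the preferred (max) object contributes more than the non-preferred (min) object
    r_max, r_min = max(r_a, r_b), min(r_a, r_b)
    return w_max * r_max + w_min * r_min

def max_operator(r_a, r_b):
    # (iii) an extreme weighted average with zero weight for the min object
    return max(r_a, r_b)

# example unit responding 2.0 to object A and 0.5 to object B in isolation
print(simple_average(2.0, 0.5), weighted_average(2.0, 0.5), max_operator(2.0, 0.5))
```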

Second, all previous studies investigating the relationship between single-object and paired-object responses have focused upon random object pairs, in which there is no meaningful relationship between the two composing objects. The stimuli used were either geometric forms (Zoccolan et al., 2005, 2007), for which no particular configuration is preferred, or randomly chosen complex objects (Agam et al., 2010; MacEvoy and Epstein, 2009; Reddy et al., 2009). In the latter case, the objects were combined so that neither the configuration nor the relative size corresponded to real-world experience. However, behavioral, fMRI, and TMS studies (e.g., Green and Hummel, 2006; Kim and Biederman, 2010; Kim et al., 2011; Riddoch et al., 2003, 2006) have shown that two objects can often interact meaningfully, for example by forming a so-called action pair (e.g., a corkscrew on top of a bottle of wine), and that this interaction is relevant for the representation of these object pairs. For example, Riddoch et al. (2006) presented neglect patients with objects that were or were not co-located for action. They found a reduction of extinction for object pairs positioned to perform a familiar action. An effect of such interacting objects on the overall strength of the response in object-selective regions was also found with fMRI in normal subjects (Roberts and Humphreys, 2010). In sum, these studies suggest that action pairs are represented as a whole (Humphreys and Riddoch, 2007) and that the overall activity elicited by object pairs in the object vision pathway is modulated by such an action relationship.

As these findings suggest that action pairs are coded differently, we wondered whether the relationship between single-object representations and pair representations found for random object pairs would still apply to action pairs. Stated differently, we do not know whether the findings from previous studies, namely that the whole is equal to the average (MacEvoy and Epstein, 2009) or the weighted average (Agam et al., 2010; Reddy et al., 2009) of the parts in the case of random object pairs, can be extrapolated to action pairs.

In the present study, we compared the multi-voxel patterns of response in the object vision pathway between single objects and object pairs. Our methods are similar to those used in a previous experiment (MacEvoy and Epstein, 2009), so that we can compare the results and better evaluate the effect of our additional manipulations. We compared the three most frequently proposed models in the literature, namely simple averaging, the max model, and weighted averaging. In addition, the position of the objects within the pairs was manipulated, and meaningful, action-related configurations were included, to test whether these factors would alter the relationship between the response patterns of pairs and their constituent objects. We found that this relationship between the response patterns of single objects and pairs could be most reliably described by a weighted average of the patterns of the single objects, with the maximum response weighted more than the minimum response. The data suggested that the maximum and the minimum response were weighted differently for the different types of object pairs when participants attended to the configuration.
Method

Participants

Ten naive students of the University of Leuven (KU Leuven) with normal or corrected-to-normal vision participated in this study as paid volunteers (ages between 20 and 26 years, two male, all reported being right-handed). Data from one participant were excluded due to excessive head movement. The experiments were approved by the ethical committee of the Faculty of Psychology and Educational Sciences and by the Medical Ethical Committee of the KU Leuven. Participants signed an informed consent form at the start of each imaging session.

Stimuli

The stimuli included four pairs of greyscale pictures of objects with all background elements removed (Table 1). During the experiment the individual object pictures were shown in all possible pairings; in the description here we refer to each familiar action pair as an original pair. The stimulus size of the largest object of each pair was approximately 4 visual degrees; the smaller object of each original pair was scaled relative to the size of the larger object within that pair. Within each original pair, one object was depicted as the active partner, while the other object was the recipient of the action. To depict the actions correctly, the active object had to be presented on top of the passive object. The four pairs were divided into two sets of two pairs, and within each set all possible combinations of the four single objects were used in the experiment. Each participant was presented with only one stimulus set. Different sets were used to allow for generalization of the results over different stimuli. The results did not differ between the two stimulus sets, so we report the results for both sets together.

Pairs could be divided into four categories (Fig. 1): familiar action pairs, unfamiliar action pairs (objects with a non-associative relationship, with an active object presented on top), familiar non-action pairs (an associative relationship between the objects, but with a passive object presented on top), and unfamiliar non-action pairs.

For each object, twelve exemplars were created. In an independent pilot study (six participants, ages between 21 and 28), we collected similarity ratings for all combinations of the exemplars within each object. Participants were asked to rate the perceptual similarity of the stimuli on a scale from 1 (not similar at all) to 7 (very similar). From each object, five exemplars were then selected for the experiment, such that the average similarity between exemplars was equal for all objects (average similarity rating between 4 and 4.1 for every possible pair formed with the five exemplars) and the standard deviation (between pairs of exemplars) of the similarity ratings was as small as possible (standard deviation smaller than 0.5 for all objects).

An independent group of 13 participants (ages between 22 and 34, four male) rated each of the object pairs on a 7-point scale of familiarity ("How well can these objects be used together?") and of how well the objects were positioned for action ("Are these objects in the correct position to be used together?"). Familiar pairs received average familiarity ratings of 6.8 (action pairs) and 6.88 (non-action pairs), while unfamiliar pairs received average ratings of 1.86 (action pairs) and 2.07 (non-action pairs). Familiarity ratings were significantly higher for familiar than for unfamiliar pairs (F(1,23) = , p < .001), and this effect did not interact with action versus non-action (F(1,23) = .106, p = .748).
Objects in the action pairs were considered to be better positioned for action (average ratings of 6.8 for familiar pairs and 3.0 for unfamiliar pairs) than objects in the non-action pairs (average ratings of 1.62 for familiar pairs and 1.88 for unfamiliar pairs) (F(1,23) = 63.889, p < .001), and this was influenced by whether the objects formed a familiar or unfamiliar pair (interaction effect: F(1,23) = 26.78, p < .001).

Table 1. Two sets of object stimuli. Within each set, objects were presented in their original pairs and re-paired with all other objects. Different bottles were used in the first and second stimulus set.

Stimulus set   Active object   Passive object
1              Bottle          Glass
1              Saw             Plank
2              Corkscrew       Wine bottle
2              Pen             Paper
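
The exemplar selection described above can be viewed as a small combinatorial search. The sketch below is only illustrative (Python; the paper does not describe an implementation, and the rating matrix here is simulated): for one object it picks the 5-of-12 exemplar subset whose pairwise similarity ratings have a mean closest to a target value while keeping their standard deviation below 0.5.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# simulated symmetric 12 x 12 matrix of pairwise similarity ratings (scale 1-7) for one object
ratings = np.clip(rng.normal(4.0, 0.4, size=(12, 12)), 1, 7)
ratings = (ratings + ratings.T) / 2

def subset_stats(subset, ratings):
    """Mean and SD of the ratings over all exemplar pairs within the subset."""
    vals = [ratings[i, j] for i, j in combinations(subset, 2)]
    return np.mean(vals), np.std(vals)

def select_exemplars(ratings, n_keep=5, target_mean=4.05, max_sd=0.5):
    """Pick the subset whose mean rating is closest to the target, subject to the SD criterion."""
    best, best_dev = None, np.inf
    for subset in combinations(range(ratings.shape[0]), n_keep):
        mean, sd = subset_stats(subset, ratings)
        if sd < max_sd and abs(mean - target_mean) < best_dev:
            best, best_dev = subset, abs(mean - target_mean)
    return best

print(select_exemplars(ratings))
```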

In independent localizer runs, grayscale pictures of scenes, faces, objects and phase-scrambled images were presented. Participants performed a size judgment task during these runs: they had to press a button when the presented picture was smaller than the previous one. In both experimental and localizer runs, participants were asked to maintain central fixation.

Procedure

Apparatus

Imaging data were acquired using a 3 T Philips Intera scanner. Each functional run consisted of 113 T2*-weighted echoplanar images (EPIs) (48 slices, mm in-plane voxel size, slice thickness 2 mm, interslice gap .1 mm, TR = 3000 ms, TE = 30 ms, flip angle = 90°, matrix). In addition, we collected a high-resolution T1-weighted anatomical scan for each participant (182 slices, resolution 0.98 by 0.98 by 1.2 mm, TR = 9.6 ms, TE = 4.6 ms, acquisition matrix). Stimuli were presented using Psychtoolbox 3 (Brainard, 1997). We used a Barco 6400i LCD projector (resolution , refresh rate 60 Hz) to project the stimuli on a vertical screen, which was made visible via a mirror attached to the head coil. Viewing distance was approximately 35 cm. Movements of the left pupil were recorded with an MR-compatible eye-tracking device (Applied Science Laboratories system 5000).

Design

Fig. 1. Example stimuli from the four categories.

Experimental runs consisted of 45 blocks of 7.5 s, including five fixation blocks as a baseline (one at the start of the run, one at the end, and three after every tenth experimental block) and two blocks of each stimulus condition. A red fixation dot was always present. The order of the blocks was counterbalanced over runs. Eight conditions with single objects (the four objects presented approximately 0.02 visual degrees below or above the fixation point) and 12 conditions with all possible configurations (pairs) of the single objects were presented. Each run included two blocks of each of these 20 stimulus conditions, and each block comprised ten trials of 750 ms. In every trial, a random exemplar of the object or object pair was shown for 300 ms. Exemplars changed every trial, with the exception of two trials in each block in which the previous exemplar was repeated. In the case of an object pair, the exemplar of only one object was repeated.

Participants had to perform one of two tasks during the experimental runs. The first task was a 1-back task at the exemplar level: participants were instructed to press a button whenever they noticed an immediate repeat of an exemplar. When objects were presented in a pair, participants had to attend equally to both objects, since they could not predict for which object of a pair the exemplar would be repeated. In the second task, participants had to judge the configuration of the stimulus block and respond with one of four keys: single object, object pair not correctly positioned for action, object pair correctly positioned and likely to perform an action, or an in-between category in which an action was possible but not very likely.

Before the first imaging session, participants performed one training run outside the scanner to get acquainted with the stimuli and the task. Every imaging session also started with a training run in the scanner. At the start of this training run, the fixation performance of the subjects was often not yet sufficient, but the run sufficed to obtain proper fixation by its end. Data of this training run were not included in the analysis. Each subject participated in three imaging sessions.
In each of the first two sessions, data from two localizer runs and eight experimental runs were collected, and participants performed the 1-back task. In the third session, participants performed the action-related task in ten experimental runs. Eye movements were recorded during the experimental runs. Due to various technical difficulties (often related to a low contrast of the infrared images), eye movements were only properly recorded during 69% of the runs.

Analysis

Eye movements

Fixations, calculated with Eyenal, were defined as periods of at least 100 ms during which the gaze did not change by more than one visual degree. Coordinate values were corrected for drift per run by means of a regression analysis. For each block, the mean and standard deviation of the coordinate on the vertical axis during fixation were calculated, both weighted by fixation duration. Values were first calculated separately for each run and then averaged per subject. Coordinate values during stimulus conditions were compared over subjects with values during the fixation condition.

For runs in which participants performed the 1-back task, no difference in mean fixation location relative to the fixation condition was found for stimulus blocks with a single object presented above the fixation point (t(7) = .062, p = .952), a single object shown below the fixation point (t(7) = 1.08, p = .316), or object pairs (t(7) = .448, p = .668). When comparing the weighted standard deviations as an index of the variability in fixation position, we even found less variation for the stimulus blocks than for the fixation blocks (single objects above the fixation point: t(7) = 5.769, p = .001; single objects below the fixation point: t(7) = 3.908, p = .006; pairs: t(7) = 5.178, p = .001). During runs in which the action-related task was performed, again no difference in mean fixation location was found (single objects above the fixation point: t(8) = .489, p = .638; single objects below the fixation point: t(8) = 1.155, p = .281; pairs: t(8) = .488, p = .638). With this task, there was also no difference in variability compared with the fixation blocks (single objects above the fixation point: t(8) = .021, p = .984; single objects below the fixation point: t(8) = .021, p = .984; pairs: t(8) = .229, p = .825). Overall, there was only limited variation in gaze position (mean value of the weighted standard deviation per block was 0.52 visual degrees), indicating good fixation.
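
As an illustration of the fixation summary just described, the following sketch (Python with NumPy; the original analysis used Eyenal and custom code, so the data and names here are placeholders) removes a linear drift from the vertical gaze coordinate of a run and then computes a duration-weighted mean and standard deviation per block.

```python
import numpy as np

def drift_correct(t, y):
    """Remove a linear drift over the run by regressing the vertical coordinate on time."""
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept) + np.mean(y)

def weighted_block_stats(y, durations):
    """Duration-weighted mean and SD of the vertical gaze coordinate of one block's fixations."""
    w = np.asarray(durations, dtype=float)
    mean = np.average(y, weights=w)
    sd = np.sqrt(np.average((y - mean) ** 2, weights=w))
    return mean, sd

# toy data: fixation onsets (s), vertical positions (degrees) and durations (ms) for one block
onsets = np.array([0.1, 1.2, 2.4, 3.9, 5.5])
positions = np.array([0.05, -0.10, 0.02, 0.08, -0.04])
print(weighted_block_stats(drift_correct(onsets, positions), durations=[300, 450, 220, 600, 380]))
```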

fMRI preprocessing

Imaging data were analyzed using the Statistical Parametric Mapping software package (SPM8, Wellcome Department of Cognitive Neurology, London), as well as custom Matlab code. Preprocessing involved slice timing correction, spatial realignment, co-registration of functional and anatomical images, segmentation, and spatial normalization to an MNI (Montreal Neurological Institute) template. During normalization, functional images were re-sampled to a voxel size of mm. Finally, functional images were smoothed using Gaussian kernels of 4 mm full-width at half maximum (FWHM).

Statistical analyses

Data were modeled at the individual level with regressors for each condition and six covariates (the translation and rotation parameters needed for realignment). Further analyses were performed using the parameter estimates ("beta values") per run obtained after fitting the general linear model.

Regions of interest

Regions of interest (ROIs) were defined by a combination of functional data from the localizer scans and anatomical landmarks. Ventral face-selective regions (face ROI) were defined by the faces minus objects contrast. The lateral occipital complex (LOC) was the result of the objects minus scrambled contrast, with face-selective regions excluded. Only lateral occipital and occipitotemporal regions were included. LOC was further divided into a lateral, posterior part (posterior LOC or pLOC) and an anterior, ventral part (anterior LOC or aLOC). The same contrast was used to define a region containing object-selective voxels in the intraparietal sulcus (IPS). The parahippocampal place area (PPA) was defined as the voxels in the medial occipitotemporal region significantly more activated by scenes than by faces. All contrasts were thresholded at p < .0001 (uncorrected for multiple comparisons).

Pattern classification

Linear support vector machines (SVMs) were implemented using the OSU SVM Matlab toolbox as described before (e.g., Op de Beeck et al., 2010). The data were first divided into two random, equally sized subsets of runs. We constructed lists as long as the number of voxels in an ROI, with each list containing the standardized response (standard-normal transformation of the beta values) of all voxels for a particular condition in one subset. For each pair of conditions, a linear SVM was trained using the lists from these two conditions from half of the runs to construct the hyperplane that best separates the data from the two conditions. The performance of the classifier on this pairwise classification was then calculated for the average data from the remaining half of the runs. This procedure was repeated 100 times per pair of conditions, with a random assignment of runs to the training and test set. The higher the decoding accuracy, the better the classifier is able to discriminate between the two conditions. For the generalization analyses, a different pair of conditions was used for training versus testing.
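
A minimal sketch of this pairwise decoding scheme is given below (Python with scikit-learn rather than the OSU SVM Matlab toolbox used for the actual analyses; the data are simulated and all names are ours). Runs are split at random into two halves, each run's beta pattern is standardized across voxels, a linear SVM is trained on the patterns from one half, and it is tested on the run-averaged patterns of the other half; the random split is repeated 100 times.

```python
import numpy as np
from scipy.stats import zscore
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

def pairwise_decoding(betas_a, betas_b, n_splits=100):
    """Pairwise SVM decoding of two conditions.

    betas_a, betas_b: (n_runs, n_voxels) arrays with the beta pattern of each run
    for condition A and condition B, respectively."""
    n_runs = betas_a.shape[0]
    # standard-normal transformation of each run's pattern across voxels
    za, zb = zscore(betas_a, axis=1), zscore(betas_b, axis=1)
    accuracies = []
    for _ in range(n_splits):
        order = rng.permutation(n_runs)
        train, test = order[: n_runs // 2], order[n_runs // 2:]
        X_train = np.vstack([za[train], zb[train]])
        y_train = np.r_[np.zeros(len(train)), np.ones(len(train))]
        clf = LinearSVC(C=1.0).fit(X_train, y_train)
        # test on the run-averaged patterns of the held-out half of the runs
        X_test = np.vstack([za[test].mean(axis=0), zb[test].mean(axis=0)])
        accuracies.append(np.mean(clf.predict(X_test) == np.array([0.0, 1.0])))
    return float(np.mean(accuracies))

# toy example: 16 runs, 200 voxels, with a small identity effect added to condition B
n_runs, n_voxels = 16, 200
effect = 0.5 * rng.normal(size=n_voxels)
cond_a = rng.normal(size=(n_runs, n_voxels))
cond_b = rng.normal(size=(n_runs, n_voxels)) + effect
print(pairwise_decoding(cond_a, cond_b))
```
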
Searchlight and regression analyses

These analyses are based upon the methods of MacEvoy and Epstein (2009), unless noted otherwise. For each voxel within LOC, a local cluster containing all other LOC voxels within a radius of 5 mm was defined. SVM performance for the object pairs was calculated for all clusters (see the previous section for a description of our SVM methods), resulting in a measure of the distinctiveness of the multi-voxel response patterns of each cluster that can be used to rank the clusters ("cluster classification rank"). Then, two linear regression analyses were conducted for each voxel. First, a linear regression of the responses to pairs against the sum of the responses to their constituent objects was performed. In the other regression, which was not included by MacEvoy and Epstein (2009), two predictors were included in the model: the first was the response to the object that evoked the highest response in that voxel (the max response), and the second was the response to the other object of the pair (the min response). For both models, the position of the objects was taken into account: for any given pair, only the responses to the single objects presented in the same position as in that particular pair were used. Regression analyses were performed on unstandardized beta values, because standardization would abolish differences in mean responsiveness. For both regression analyses, median parameter values (R², intercept and regression coefficients) were calculated per cluster. These values were then related to the cluster classification rank. The reasoning behind this is that clusters that most accurately differentiate between object pairs should also contain the voxels that are most informative about the true relationship between the responses to pairs and to their constituent objects. Further analyses (ANOVA) were done using the mean values of the 20 best-ranked clusters.

SVM analysis with synthetic patterns

These analyses are also based upon the methods of MacEvoy and Epstein (2009), unless noted otherwise. First, synthetic data were constructed for the pairs. The synthetic mean data of a pair were generated per voxel as the average of the patterns evoked by the corresponding single objects. Synthetic max and synthetic min responses were created by taking the higher and the lower of the two responses to the two single objects of the pair. The regression coefficients of the aforementioned regression analysis with two predictors were used to create synthetic weighted mean response patterns. The synthetic min and synthetic weighted mean were not included by MacEvoy and Epstein (2009). For these analyses as well, the position of the objects was taken into account. The classifier was first trained with the synthetic data and then tested on the actual responses to the pair stimuli. The other methodological details of these SVM analyses on synthetic patterns are the same as described above in the section on pattern classification.
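
The two-predictor regression and the construction of the synthetic patterns can be sketched as follows for a single voxel (illustrative Python on simulated data; the function names and toy weights are ours, not the paper's code). The single-object responses belonging to each pair are split into a max and a min predictor, the pair responses are regressed on both, and the resulting coefficients are used to build the synthetic weighted-mean pattern alongside the synthetic mean, max and min patterns.

```python
import numpy as np

rng = np.random.default_rng(2)

def max_min_split(singles):
    """singles: (n_pairs, 2) position-matched single-object responses of one voxel."""
    return singles.max(axis=1), singles.min(axis=1)

def fit_weighted_average(pair_resp, singles):
    """Regress one voxel's pair responses on its max and min single-object responses.

    Returns (intercept, coefficient_max, coefficient_min)."""
    r_max, r_min = max_min_split(singles)
    X = np.column_stack([np.ones_like(r_max), r_max, r_min])
    coefs, *_ = np.linalg.lstsq(X, pair_resp, rcond=None)
    return coefs

def synthetic_patterns(singles, coef_max, coef_min):
    """Synthetic mean, max, min and weighted-mean pair responses for one voxel."""
    r_max, r_min = max_min_split(singles)
    return {"mean": (r_max + r_min) / 2.0,
            "max": r_max,
            "min": r_min,
            "weighted": coef_max * r_max + coef_min * r_min}

# toy voxel: 12 pairs whose responses follow a weighted average plus noise
singles = rng.normal(1.0, 0.5, size=(12, 2))
pair_resp = 0.66 * singles.max(axis=1) + 0.28 * singles.min(axis=1) + rng.normal(0, 0.05, 12)
intercept, b_max, b_min = fit_weighted_average(pair_resp, singles)
print(round(intercept, 2), round(b_max, 2), round(b_min, 2))
print(synthetic_patterns(singles, b_max, b_min)["weighted"][:3])
```
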
Results

Representation of single objects

Before turning to the representation of the object pairs, we first investigated the properties of the representation of the single object images. In particular, we checked what information about the single objects is present in different parts of LOC: position and/or identity. The results are summarized in Fig. 2. Both pLOC and aLOC discriminated better than chance between different objects when stimulus position was held constant (pLOC: F(1,8) = , p < .001; aLOC: F(1,8) = 16.941, p = .003) and also between different positions when the object was held constant (pLOC: F(1,8) = , p < .001; aLOC: F(1,8) = , p < .001). In pLOC, information about the position was more strongly coded than information about the identity of the object (F(1,8) = 20.476, p = .002), while no significant difference in discrimination accuracy was found in aLOC (F(1,8) = 0.430, p = .53).

Next, we investigated the invariance of the representation of one property, either identity or position, across a change in the other property. If LOC contains useful representations of identity, one would expect these representations to generalize across different positions. For this, we trained the classifier to discriminate between two different objects in one position (for example, both above the fixation dot) and tested the performance of the classifier on discriminating between the same two objects in a different position (i.e., both presented below the fixation dot). Generalization across different positions was significantly above chance in both pLOC (F(1,8) = 19.776, p < .001) and aLOC (F(1,8) = 65.821, p < .001). The same held for generalization of position information over different objects (pLOC: F(1,8) = 51.552, p < .001; aLOC: F(1,8) = 17.598, p = .003). When comparing the decoding accuracy of the two generalizations (over objects and over positions), no difference was found in aLOC (F(1,8) = 0.174, p = .685), but in pLOC more generalization was found over objects than over positions (F(1,8) = 13.809, p = .006). Thus, the data show that both identity and position are represented in activity patterns in pLOC and aLOC, but that in pLOC position information is more strongly coded than identity.
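
The position generalization analysis described above amounts to training the classifier on one pair of conditions and testing it on a different pair. A minimal sketch in the same style as the decoding example in the Methods (Python on simulated data; not the authors' code) is shown below; above-chance accuracy here requires an identity effect that is shared across the two positions.

```python
import numpy as np
from scipy.stats import zscore
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)

def generalization_decoding(train_a, train_b, test_a, test_b):
    """Train on one condition pair and test on another.

    Each argument is an (n_runs, n_voxels) array of beta patterns, e.g. objects 1 and 2
    above fixation for training and the same two objects below fixation for testing."""
    za, zb = zscore(train_a, axis=1), zscore(train_b, axis=1)
    X_train = np.vstack([za, zb])
    y_train = np.r_[np.zeros(len(za)), np.ones(len(zb))]
    clf = LinearSVC(C=1.0).fit(X_train, y_train)
    X_test = np.vstack([zscore(test_a, axis=1).mean(axis=0),
                        zscore(test_b, axis=1).mean(axis=0)])
    return float(np.mean(clf.predict(X_test) == np.array([0.0, 1.0])))

# toy data: an identity effect shared across the two stimulus positions
n_runs, n_voxels = 16, 200
identity_effect = 0.5 * rng.normal(size=n_voxels)
obj1_above = rng.normal(size=(n_runs, n_voxels))
obj2_above = rng.normal(size=(n_runs, n_voxels)) + identity_effect
obj1_below = rng.normal(size=(n_runs, n_voxels))
obj2_below = rng.normal(size=(n_runs, n_voxels)) + identity_effect
print(generalization_decoding(obj1_above, obj2_above, obj1_below, obj2_below))
```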

Fig. 2. Results of SVM and generalization analyses for single objects in pLOC and aLOC. Error bars represent the standard error of the mean (SEM). (**p < .01).

Representation of object pairs

Next we investigated what happens to this representation of object position and object identity when objects are shown in pairs (Fig. 3). SVM accuracy when discriminating between two pairs containing the same objects but placed differently (for example, the first pair is a bottle above a glass, the second pair is a glass presented on top of a bottle) was significantly above chance (pLOC: F(1,8) = , p < .001; aLOC: F(1,8) = 17.192, p = .035). This means that not only the identity of the objects forming the pair is represented in the activity patterns, but also information about the positions of both elements. Likewise, when comparing pairs with one object in common, SVM accuracy was lower when the object the two pairs had in common was presented in the same position (for example, both pairs contain a bottle and both times it is presented above the fixation point) than when it was presented in a different position (in the example: the bottle is presented above the fixation dot in one pair, and below the fixation dot in the other pair) (pLOC: F(1,8) = , p < .001; aLOC: F(1,8) = 10.569, p = .012). Finally, the classifier was less able to discriminate between two pairs with one object in common, presented at a different position, than between two pairs that had no objects in common (pLOC: F(1,8) = 14.623, p = .005; aLOC: F(1,8) = , p < .001). In sum, for object pairs too, activity patterns in LOC contain information about both the identity and the location of the presented objects.

Relationship between responses to object pairs and responses to their constituent objects presented in isolation: regression analyses

Here we are mainly interested in the relationship between the representation of single objects and the representation of object pairs. First, we compared the mean beta values of the single-object conditions with those of the conditions in which object pairs were presented (Fig. 4). A significant difference between the two types of conditions was found in LOC: a stronger response was elicited during conditions in which an object pair was shown (t(8) = 4.385, p = .002). The same effect was found in the face-selective areas (t(8) = 2.426, p = .041) and IPS (t(8) = 5.95, p < .001), but not in PPA (t(8) = .865, p = .412). No differences were found between pLOC and aLOC for this analysis or any of the following analyses, so from here on we report results for the entire LOC. We also investigated the effect of the different types of configuration on the mean beta values (Fig. 4 for LOC). In none of the ROIs was there an effect of familiarity, of co-location for action, or an interaction between the two variables (p > .05 for all comparisons).

Next we investigated more closely the nature of the relationship between the response patterns of object pairs and their constituent objects. MacEvoy and Epstein (2009) argued in favor of a simple averaging relationship and supported this conclusion with a linear regression analysis. In addition to the simple averaging model, we implemented a second regression analysis, namely weighted averaging, to be able to compare both models.

Fig. 3. Mean decoding accuracies when discriminating between object pairs in pLOC and aLOC. Error bars represent the standard error of the mean (SEM).

Fig. 4. Mean beta values for the single objects and the four categories of object pairs in LOC. Error bars represent the standard error of the mean (SEM).

As done by MacEvoy and Epstein (2009), we base our conclusions upon the results from the voxel clusters in LOC with the highest classification accuracy for object pairs. For both models, R² increased with better classification rank (Figs. 5A and 6A), indicating that more accurate predictions were made for clusters with more distinct response patterns. For the simple averaging model, the intercept decreased (Fig. 5B) and the slope increased (Fig. 5C) as classification accuracy improved. The mean slope value for the best searchlight clusters was .44, not significantly different from .50 (t(8) = .557, p = .580), thus converging to averaging (and not, for instance, to summation). The mean intercept value for the best clusters was .45, which is significantly higher than zero (t(8) = 2.999, p = .017). This is an important deviation from the results of MacEvoy and Epstein (2009), where the intercept converged to a value around zero. Thus, in our study, the linear regression analysis that assumes a simple average resolves the discrepancy with mean response strength (which should be the same for single objects and pairs under the assumption of simple averaging) by raising the intercept.

For the weighted averaging model, two regression coefficients were estimated, namely for the variables representing the response to the object that evoked the highest versus the lowest response. The regression coefficient for the maximum response increased with higher accuracy rank (Fig. 6C), while there seemed to be no specific relationship between the coefficient of the minimum response and the accuracy rank (Fig. 6D). The two regression coefficients were significantly different from each other (mean value for the max response = .66, mean value for the min response = .28, t(8) = 6.902, p < .001), indicating that this model differs from the simple averaging model. The regression coefficient of the minimum response was higher than zero (t(8) = 7.051, p < .001), which also distinguishes the model from a pure max operator. The intercept for the weighted averaging model decreased as classification accuracy improved (Fig. 6B), and in this model the end point at the best accuracy was not significantly different from zero (mean value = .18, t(8) = 1.325, p = .222). R² for the simple averaging model (mean value = .41) was significantly lower than R² for the weighted averaging model (mean value = .55) (t(8) = 3.091, p = .015), but since the second model has one extra parameter, this does not necessarily mean that it is the better model. We therefore compared the predictions of both models more directly in the next section.

Relationship between responses to object pairs and responses to their constituent objects presented in isolation: pattern classification and generalization results

The responses to the object pairs as predicted by the three candidate models (weighted average, simple average or max operator) were compared to the actual pair responses (Fig. 7). In addition, we also tested the classification accuracy based on the minimum response, to compare the contributions of the maximum and the minimum response. For this, we performed an SVM analysis in which one half of the data (the training data) was replaced by synthetic patterns.
In LOC, classification based on the synthetic weighted mean was not significantly different from classification based on the actual pair responses (t(8) = 1.713, p = .125, Fig. 7), while classification based on the synthetic mean (t(8) = 2.359, p = .046), the synthetic max response patterns (t(8) = 5.603, p = .001) and the synthetic min response patterns (t(8) = 8.592, p < .001) was less accurate than classification based on the measured pair responses. Direct comparison of the classification accuracies of the synthetic patterns confirmed this pattern: classification accuracy based on the synthetic weighted mean was better than classification based on the synthetic mean (t(8) = 4.438, p = .002), the synthetic max response patterns (t(8) = 9.141, p < .001) and the synthetic min response patterns (t(8) = 11.34, p < .001). When one half of the data was replaced by the synthetic mean response patterns, classification accuracy was better than when one half of the data was replaced by the synthetic min response patterns (t(8) = 11.792, p < .001). When comparing classification based on the synthetic mean and synthetic max responses, we found that classification based on the synthetic mean response patterns was significantly better (t(8) = 4.764, p = .001), replicating the effect found by MacEvoy and Epstein (2009).

We also investigated this relationship in the other regions of interest (Fig. 7). In all these areas, classification accuracy based on the actual pair responses was above chance (face-selective regions: t(8) = 3.741, p = .006; PPA: t(8) = 3.456, p = .009; IPS: t(8) = 5.674, p < .001). Hence, this serves as a good baseline against which to compare classification results when the training data are replaced with synthetic patterns. We did not include the synthetic weighted mean condition for these ROIs, because the size of these regions (on average, the number of voxels in the face-selective regions, PPA and IPS was respectively 15%, 9%, and 12% of the number of voxels in LOC) and the reliability of the best voxels/clusters in these regions were not sufficient to perform the regression analyses needed to estimate the weights for the min and max responses. The average decoding accuracy for the object pairs of the best clusters in the face-selective regions, PPA and IPS corresponded on average to that of the lowest 18%, 10% and 26% of clusters in LOC. As can be seen in Fig. 6, at this level of accuracy the outcomes of the regression analyses have not yet converged to their final values and thus do not contain enough information to apply this method.

In both the face-selective regions and IPS, classification based on the synthetic mean responses was better than classification based on the synthetic max responses (face-selective regions: t(8) = 8.63, p < .001; IPS: t(8) = 6.183, p < .001) or the synthetic min responses (face-selective regions: t(8) = 7.854, p < .001; IPS: t(8) = 9.624, p < .001). In PPA, no significant differences were found when one half of the data was replaced by the synthetic mean compared to the synthetic max (t(8) = .557, p = .580) or the synthetic min responses (t(8) = 1.8, p = .110). When comparing the actual pair responses in PPA to classification results based on the synthetic data patterns, we did find a significant difference between the actual pair responses and the classification accuracy based on the synthetic max (t(8) = 2.437, p = .041) and the synthetic min responses (t(8) = 2.567, p = .033), but not for the synthetic mean response patterns (t(8) = 1.604, p = .147). Overall, we consistently found that the response patterns to object pairs were best described by a linear combination of the response patterns of their constituent objects.

Fig. 5. Results of the regression analyses in LOC for the simple averaging model. Median (A) R², (B) intercept and (C) slope are plotted as a function of the cluster accuracy rank. Data were smoothed with a 20-bin mean filter.

Effect of the type of configuration

Next, we investigated whether the type of configuration (a familiar action configuration (A pairs) versus the other three configurations (NA pairs)) had an effect on the relationship between pair responses and the responses to their constituent objects. Therefore, we compared the SVM accuracies for the different combinations of configuration types in LOC (Fig. 8). No difference in classification accuracy based on the actual pair responses was found between the two types of pairs (t(8) = 1.518, p = .167), indicating that both types of configuration were represented equally well. We performed a 2 (SVM training data: synthetic weighted mean vs synthetic mean) × 2 (configuration type: A vs NA pairs) repeated measures ANOVA to investigate whether the type of configuration altered the representation of the object pair. We found a main effect of training data (F(1,8) = 10.868, p = .011), but not of configuration type (F(1,8) = 2.037, p = .191). No interaction effect was found (F(1,8) = .533, p = .486): for both configuration types, classification was more accurate when based on the synthetic weighted mean responses than when based on the synthetic mean responses.

Since both the regression analyses and the SVM analysis based on synthetic data patterns indicate that the weighted averaging model best reflects the data, we further examined whether the configuration type would alter the weight given to the different coefficients in this model. For this, we tested whether the configuration type influenced the classification accuracy based on the synthetic max and min response patterns, by means of a 2 (SVM training data: synthetic max vs synthetic min data patterns) × 2 (configuration type: A vs NA pairs) repeated measures ANOVA. Again, a main effect of training data (F(1,8) = 26.77, p < .001), but not of configuration (F(1,8) = .776, p = .001), was found. More importantly, no interaction effect was found (F(1,8) = 2.453, p = .156). Thus, the relationship between the response patterns to single objects and object pairs was not affected by the type of configuration formed by the object pair.

Effect of action-related task

Finally, we investigated whether there was any effect of the task context. During the 1-back task, no specific effect related to the configuration of the object pairs was found. But due to the nature of that task, participants did not have to attend to the differences in configuration.
Therefore, in the second task, participants were asked to judge how well the objects in a configuration were positioned to perform an action together, making the configuration task-relevant. Behavioral data from this task showed a main effect of configuration type (F(3,24) = 58.489, p < .001). Familiar action pairs were judged to be more correctly positioned for, and more likely to perform, an action than unfamiliar action pairs (t(8) = 12.376, p < .001), familiar non-action pairs (t(8) = 12.744, p < .001) and unfamiliar non-action pairs (t(8) = 29.182, p < .001). No significant differences in behavioral judgments were found between the scores of these last three kinds of pairs (p > .05 for all comparisons).

Even with this new task, in which the type of configuration was relevant for the subjects' behavior, no effect of familiarity or co-location for action was found on the mean activity levels (p > .05 for all comparisons, Fig. 9). Mean responses to object pairs were again significantly higher than responses to single objects in all areas (LOC: F(1,8) = 75.586, p < .001; face-selective regions: F(1,8) = 5.342, p = .001; PPA: F(1,8) = 2.550, p = .034; IPS: F(1,8) = 5.555, p = .001).

Again with this new task, the weighted average model seemed best suited to describe the data when fitted using data from all types of configurations.

Fig. 6. Results of the regression analyses in LOC for the weighted averaging model. Median (A) R², (B) intercept, (C) regression coefficient of the max response (coefficient max) and (D) regression coefficient of the min response (coefficient min) are plotted as a function of the cluster accuracy rank. Data were smoothed with a 20-bin mean filter.

The model did not significantly differ from the one estimated when participants were performing the 1-back task: the values of the regression coefficients for the maximum (mean value = .68, t(8) = .076, p = .941) and the minimum response (mean value = .17, t(8) = 1.297, p = .231) did not change significantly, the coefficient for the maximum was again significantly higher than the coefficient for the minimum response (t(8) = 4.602, p = .002), and the intercept was not significantly different from zero (mean value = .22, t(8) = 1.325, p = .222).

Next, we compared classification results when one half of the data was replaced with synthetic patterns. The same pattern of results was found as with the 1-back task: classification accuracy in LOC was better when based on the synthetic weighted mean than when based on the synthetic mean (t(8) = 3.043, p = .016), the synthetic max (t(8) = 5.013, p = .001) or the synthetic min response patterns (t(8) = 6.471, p < .001). Classification accuracy when one half of the data was replaced by the synthetic mean responses was significantly better than when it was replaced by the synthetic min data patterns (t(8) = 8.015, p < .001), but no significant difference was found between accuracies based on the synthetic mean and max patterns (t(8) = .521, p = .617).

Fig. 7. Results of the SVM analyses: the classifier was first trained with the synthetic data and then tested on the actual responses to the pair stimuli. Data are plotted as a function of the region of interest.

Fig. 8. Results of SVM analyses in LOC for different kinds of object pairs (A = familiar action pairs, NA = all other pairs) when one half of the data is replaced by synthetic data patterns.

In the other regions of interest, SVM accuracy when discriminating between the object pairs failed to reach significance in PPA (t(8) = 1.303, p = .229), making it impossible to compare the classification accuracies when one half of the data was replaced by synthetic data patterns. Pair classification in the face-selective regions (t(8) = 3.447, p = .009) and IPS (t(8) = 8.186, p < .001) was significantly better than chance. Classification based on the synthetic mean data was better than classification based on the synthetic min data patterns in both the face-selective regions (t(8) = 2.874, p = .021) and IPS (t(8) = 4.17, p = .003). When one half of the data was replaced by the synthetic max responses, classification accuracy in IPS was less accurate than when it was replaced by the synthetic mean responses (t(8) = 2.91, p = .02), but no significant difference was found in the face-selective regions (t(8) = 2, p = .081).

Finally, we investigated whether there was any difference between the types of configuration in the action-related task in terms of the relative reliance upon the max and min object, as indicated by the generalization performance of a classifier that is trained using either the max or the min object responses and tested using the pair responses. We directly contrasted the results from training with the max versus the min object in a repeated measures ANOVA with training data (synthetic max and min responses) and configuration type as independent variables. A main effect of training data (F(1,8) = 20.89, p = .002), but no main effect of configuration (F(1,8) = .034, p = .859), was found. Importantly, we did find an interaction effect between the two variables (F(1,8) = 7.681, p = .024), with a larger difference between the max and the min object for the familiar action pairs. It thus seems that the max and min responses are weighted differently for the different types of object pairs when participants attend to the configuration. Thus, with the action task, we find a larger role of the max object relative to the min object for the representation of familiar action pairs compared to the representation of other object pairs (Fig. 10). Note that this interaction effect was not present in the 1-back task. However, a small trend in the same direction was present, and when testing the task effect explicitly in a three-way repeated-measures ANOVA, we did not find a significant three-way interaction between task, training data (synthetic max and min data patterns) and configuration (F(1,8) = .124, p = .734). Thus, while we can say that there is a significant difference between the representation of familiar action pairs and the representation of other object pairs in the action task, and while this effect was only significant in this action task, our findings do not provide conclusive evidence that this effect critically depends upon the task that subjects are performing.

Discussion

In the present study we presented single objects and object pairs while participants performed either an exemplar-level 1-back task or a task in which they had to judge the action quality of the configuration. We found that for both individual objects and object pairs, information about the identity and the position of the objects is represented in the response patterns in LOC.
The relationship between the response patterns of pairs and single objects was best described by a weighted average of the responses to the single objects. Specifically, the response to the object that elicits the strongest response in a voxel (the max object) was weighted more than the response to the other object. Furthermore, the difference in weight between the max and the min object was larger for familiar action pairs than for other pairs when participants attended to the configuration.

Fig. 9. Mean beta values for the single objects and the four categories of object pairs in LOC when subjects were performing the action-related task. Error bars represent the standard error of the mean (SEM).

Position coding in LOC

For both single objects and object pairs, identity and position were represented in activity patterns in LOC. Position was coded more strongly than identity in pLOC, but not in aLOC. This pattern of results is in accordance with previous findings. For example, Hung et al. (2005) found that information about both object identity and position could be read out from the same neuronal population in the macaque temporal cortex. Human fMRI data confirmed the representation of category- and position-specific information in object-selective regions (Cichy et al., 2011; Kravitz et al., 2010; Sayres and Grill-Spector, 2008). Schwarzlose et al. (2008) extended the search for identity and position information to more regions of interest, namely regions characterized by a strong selectivity for faces, scenes, bodies and objects. In general, they found that position and identity information are represented in all category-selective regions, and that more location information is stored in regions on the lateral (in contrast to the ventral) surface of the brain, as we also found in our study.

Relationship between responses to object pairs and their constituent objects: a comparison with previous studies

In line with some previous studies (e.g., Agam et al., 2010; Reddy et al., 2009), we found that the response to an object pair could be best predicted by a weighted average of the responses to the single objects. However, other studies have found evidence for other forms of linear combination (simple averaging: e.g., MacEvoy and Epstein, 2009; Zoccolan et al., 2005) and for nonlinearities (e.g., Gawne and Martin, 2002; Heuer and Britten, 2002). There are several possible explanations for these differences.

First, there are differences in the methods used. Studies with monkeys as subjects can measure responses at the level of individual neurons, which has not been done in humans. Looking at single neurons, Zoccolan et al. (2007) found that the exact relationship depends on how selective a neuron is for the given stimuli: for highly selective neurons, the relationship tended to be simple averaging, as found by Zoccolan et al. (2005), while a max operator was implemented when the neuron was poorly selective. On average, an IT neuron will thus be best described by a weighted average, since this lies in between the simple average and the max. This is consistent with our study, since in any given voxel many neurons are pooled together, both highly and poorly selective ones.

Furthermore, studies have used different definitions of weighted averaging. We calculated the weighted average in the same manner as Agam et al. (2010): first, the objects in each pair that elicited the stronger and the weaker response were determined for every smallest possible unit (in our study, each separate voxel). For each pair, a max and min variable was then defined in each voxel. Reddy et al. (2009) used the entire pattern to define the maximum and minimum object. Both methods resulted in the same conclusions. In the weighted average model of Gawne and Martin (2002), the weighted variables are the responses to the objects in a certain position: the first variable is the response to object i in one location and the second variable is the response to stimulus j in the other location.
When comparing this model to the MAX model, they found that the MAX operator was a better model for a substantial fraction of V4 visual cortical neurons, but for many neurons they could not determine any clear relationship.

Still, differences remain between our study and the study of MacEvoy and Epstein (2009), which were unexpected given the resemblance in design and analyses. Both studies found that the response to the pair was best predicted by a linear combination of the responses to the single objects, but in our experiment the most comprehensive model was weighted averaging rather than the simple averaging model MacEvoy and Epstein (2009) argued for. To some degree the differences can be explained by the fact that we explicitly modeled the weighted average, which is necessary to find positive evidence differentiating a weighted from a simple average. Nevertheless, some discrepancies were also present when we repeated the exact same analyses, such as the significantly positive offset in the linear regression analysis with a simple average, which was only present in our study and which suggests that single-object responses in the study of MacEvoy and Epstein (2009) were higher relative to the pair responses. These discrepancies could be related to several differences between the two studies, including the exact stimulus and task characteristics or the possible presence of saccades towards the objects in the study of MacEvoy and Epstein (2009).

Representation of configuration type: weighted averaging as a general mechanism?

In the present study, the relationship between the response patterns of object pairs and their constituent objects was always best described by a weighted average, regardless of the type of configuration. We also did not find any difference in mean activation between the different types of configuration. This last result contrasts with the findings of Roberts and Humphreys (2010): they found that the action relationship influenced the strength of the response to object pairs, and this was regardless of whether the objects were attended

Fig. 10. Results of SVM analyses when subjects are performing the action-related task. Data from LOC are plotted separately for different kinds of object pairs (A = familiar action pairs, NA = all other pairs).


More information

Nature Neuroscience: doi: /nn Supplementary Figure 1. Task timeline for Solo and Info trials.

Nature Neuroscience: doi: /nn Supplementary Figure 1. Task timeline for Solo and Info trials. Supplementary Figure 1 Task timeline for Solo and Info trials. Each trial started with a New Round screen. Participants made a series of choices between two gambles, one of which was objectively riskier

More information

Spatial coding and invariance in object-selective cortex

Spatial coding and invariance in object-selective cortex available at www.sciencedirect.com journal homepage: www.elsevier.com/locate/cortex Research report Spatial coding and invariance in object-selective cortex Thomas Carlson a,b,c,d, *, Hinze Hogendoorn

More information

SUPPLEMENTARY INFORMATION. Table 1 Patient characteristics Preoperative. language testing

SUPPLEMENTARY INFORMATION. Table 1 Patient characteristics Preoperative. language testing Categorical Speech Representation in the Human Superior Temporal Gyrus Edward F. Chang, Jochem W. Rieger, Keith D. Johnson, Mitchel S. Berger, Nicholas M. Barbaro, Robert T. Knight SUPPLEMENTARY INFORMATION

More information

Functional topography of a distributed neural system for spatial and nonspatial information maintenance in working memory

Functional topography of a distributed neural system for spatial and nonspatial information maintenance in working memory Neuropsychologia 41 (2003) 341 356 Functional topography of a distributed neural system for spatial and nonspatial information maintenance in working memory Joseph B. Sala a,, Pia Rämä a,c,d, Susan M.

More information

Neural Systems for Visual Scene Recognition

Neural Systems for Visual Scene Recognition Neural Systems for Visual Scene Recognition Russell Epstein University of Pennsylvania Where (dorsal) What (ventral) Cortical specialization for scenes?? (Epstein & Kanwisher, 1998) Scenes > Faces &

More information

Resistance to forgetting associated with hippocampus-mediated. reactivation during new learning

Resistance to forgetting associated with hippocampus-mediated. reactivation during new learning Resistance to Forgetting 1 Resistance to forgetting associated with hippocampus-mediated reactivation during new learning Brice A. Kuhl, Arpeet T. Shah, Sarah DuBrow, & Anthony D. Wagner Resistance to

More information

Quiroga, R. Q., Reddy, L., Kreiman, G., Koch, C., Fried, I. (2005). Invariant visual representation by single neurons in the human brain, Nature,

Quiroga, R. Q., Reddy, L., Kreiman, G., Koch, C., Fried, I. (2005). Invariant visual representation by single neurons in the human brain, Nature, Quiroga, R. Q., Reddy, L., Kreiman, G., Koch, C., Fried, I. (2005). Invariant visual representation by single neurons in the human brain, Nature, Vol. 435, pp. 1102-7. Sander Vaus 22.04.2015 The study

More information

Nature Neuroscience: doi: /nn Supplementary Figure 1. Behavioral training.

Nature Neuroscience: doi: /nn Supplementary Figure 1. Behavioral training. Supplementary Figure 1 Behavioral training. a, Mazes used for behavioral training. Asterisks indicate reward location. Only some example mazes are shown (for example, right choice and not left choice maze

More information

Multivariate Patterns in Object-Selective Cortex Dissociate Perceptual and Physical Shape Similarity

Multivariate Patterns in Object-Selective Cortex Dissociate Perceptual and Physical Shape Similarity Multivariate Patterns in Object-Selective Cortex Dissociate Perceptual and Physical Shape Similarity Johannes Haushofer 1,2*, Margaret S. Livingstone 1, Nancy Kanwisher 2 PLoS BIOLOGY 1 Department of Neurobiology,

More information

Dissociation between Dorsal and Ventral Posterior Parietal Cortical Responses to Incidental Changes in Natural Scenes

Dissociation between Dorsal and Ventral Posterior Parietal Cortical Responses to Incidental Changes in Natural Scenes Dissociation between Dorsal and Ventral Posterior Parietal Cortical Responses to Incidental Changes in Natural Scenes Lorelei R. Howard 1, Dharshan Kumaran 2, H. Freyja Ólafsdóttir 1, Hugo J. Spiers 1

More information

Supporting online material. Materials and Methods. We scanned participants in two groups of 12 each. Group 1 was composed largely of

Supporting online material. Materials and Methods. We scanned participants in two groups of 12 each. Group 1 was composed largely of Placebo effects in fmri Supporting online material 1 Supporting online material Materials and Methods Study 1 Procedure and behavioral data We scanned participants in two groups of 12 each. Group 1 was

More information

Experience-Dependent Sharpening of Visual Shape Selectivity in Inferior Temporal Cortex

Experience-Dependent Sharpening of Visual Shape Selectivity in Inferior Temporal Cortex Cerebral Cortex Advance Access published December 28, 2005 Cerebral Cortex doi:10.1093/cercor/bhj100 Experience-Dependent Sharpening of Visual Shape Selectivity in Inferior Temporal Cortex David J. Freedman

More information

Supplemental Information. Neural Representations Integrate the Current. Field of View with the Remembered 360 Panorama. in Scene-Selective Cortex

Supplemental Information. Neural Representations Integrate the Current. Field of View with the Remembered 360 Panorama. in Scene-Selective Cortex Current Biology, Volume 26 Supplemental Information Neural Representations Integrate the Current Field of View with the Remembered 360 Panorama in Scene-Selective Cortex Caroline E. Robertson, Katherine

More information

Reporting Checklist for Nature Neuroscience

Reporting Checklist for Nature Neuroscience Corresponding Author: Manuscript Number: Manuscript Type: Alex Pouget NN-A46249B Article Reporting Checklist for Nature Neuroscience # Main Figures: 7 # Supplementary Figures: 3 # Supplementary Tables:

More information

Decoding the future from past experience: learning shapes predictions in early visual cortex

Decoding the future from past experience: learning shapes predictions in early visual cortex J Neurophysiol 113: 3159 3171, 2015. First published March 5, 2015; doi:10.1152/jn.00753.2014. Decoding the future from past experience: learning shapes predictions in early visual cortex Caroline D. B.

More information

Representational similarity analysis

Representational similarity analysis School of Psychology Representational similarity analysis Dr Ian Charest Representational similarity analysis representational dissimilarity matrices (RDMs) stimulus (e.g. images, sounds, other experimental

More information

Introduction to MVPA. Alexandra Woolgar 16/03/10

Introduction to MVPA. Alexandra Woolgar 16/03/10 Introduction to MVPA Alexandra Woolgar 16/03/10 MVP...what? Multi-Voxel Pattern Analysis (MultiVariate Pattern Analysis) * Overview Why bother? Different approaches Basics of designing experiments and

More information

Group-Wise FMRI Activation Detection on Corresponding Cortical Landmarks

Group-Wise FMRI Activation Detection on Corresponding Cortical Landmarks Group-Wise FMRI Activation Detection on Corresponding Cortical Landmarks Jinglei Lv 1,2, Dajiang Zhu 2, Xintao Hu 1, Xin Zhang 1,2, Tuo Zhang 1,2, Junwei Han 1, Lei Guo 1,2, and Tianming Liu 2 1 School

More information

Identification of Neuroimaging Biomarkers

Identification of Neuroimaging Biomarkers Identification of Neuroimaging Biomarkers Dan Goodwin, Tom Bleymaier, Shipra Bhal Advisor: Dr. Amit Etkin M.D./PhD, Stanford Psychiatry Department Abstract We present a supervised learning approach to

More information

A Stable Topography of Selectivity for Unfamiliar Shape Classes in Monkey Inferior Temporal Cortex

A Stable Topography of Selectivity for Unfamiliar Shape Classes in Monkey Inferior Temporal Cortex Cerebral Cortex doi:10.1093/cercor/bhm196 Cerebral Cortex Advance Access published November 21, 2007 A Stable Topography of Selectivity for Unfamiliar Shape Classes in Monkey Inferior Temporal Cortex Hans

More information

An Integrated Face Body Representation in the Fusiform Gyrus but Not the Lateral Occipital Cortex

An Integrated Face Body Representation in the Fusiform Gyrus but Not the Lateral Occipital Cortex An Integrated Face Body Representation in the Fusiform Gyrus but Not the Lateral Occipital Cortex Michal Bernstein, Jonathan Oron, Boaz Sadeh, and Galit Yovel Abstract Faces and bodies are processed by

More information

Comparing event-related and epoch analysis in blocked design fmri

Comparing event-related and epoch analysis in blocked design fmri Available online at www.sciencedirect.com R NeuroImage 18 (2003) 806 810 www.elsevier.com/locate/ynimg Technical Note Comparing event-related and epoch analysis in blocked design fmri Andrea Mechelli,

More information

Prediction of Successful Memory Encoding from fmri Data

Prediction of Successful Memory Encoding from fmri Data Prediction of Successful Memory Encoding from fmri Data S.K. Balci 1, M.R. Sabuncu 1, J. Yoo 2, S.S. Ghosh 3, S. Whitfield-Gabrieli 2, J.D.E. Gabrieli 2 and P. Golland 1 1 CSAIL, MIT, Cambridge, MA, USA

More information

Rules of apparent motion: The shortest-path constraint: objects will take the shortest path between flashed positions.

Rules of apparent motion: The shortest-path constraint: objects will take the shortest path between flashed positions. Rules of apparent motion: The shortest-path constraint: objects will take the shortest path between flashed positions. The box interrupts the apparent motion. The box interrupts the apparent motion.

More information

Supplemental Information

Supplemental Information Current Biology, Volume 22 Supplemental Information The Neural Correlates of Crowding-Induced Changes in Appearance Elaine J. Anderson, Steven C. Dakin, D. Samuel Schwarzkopf, Geraint Rees, and John Greenwood

More information

Procedia - Social and Behavioral Sciences 159 ( 2014 ) WCPCG 2014

Procedia - Social and Behavioral Sciences 159 ( 2014 ) WCPCG 2014 Available online at www.sciencedirect.com ScienceDirect Procedia - Social and Behavioral Sciences 159 ( 2014 ) 743 748 WCPCG 2014 Differences in Visuospatial Cognition Performance and Regional Brain Activation

More information

Sum of Neurally Distinct Stimulus- and Task-Related Components.

Sum of Neurally Distinct Stimulus- and Task-Related Components. SUPPLEMENTARY MATERIAL for Cardoso et al. 22 The Neuroimaging Signal is a Linear Sum of Neurally Distinct Stimulus- and Task-Related Components. : Appendix: Homogeneous Linear ( Null ) and Modified Linear

More information

How do individuals with congenital blindness form a conscious representation of a world they have never seen? brain. deprived of sight?

How do individuals with congenital blindness form a conscious representation of a world they have never seen? brain. deprived of sight? How do individuals with congenital blindness form a conscious representation of a world they have never seen? What happens to visual-devoted brain structure in individuals who are born deprived of sight?

More information

Just One View: Invariances in Inferotemporal Cell Tuning

Just One View: Invariances in Inferotemporal Cell Tuning Just One View: Invariances in Inferotemporal Cell Tuning Maximilian Riesenhuber Tomaso Poggio Center for Biological and Computational Learning and Department of Brain and Cognitive Sciences Massachusetts

More information

Supplemental Information: Task-specific transfer of perceptual learning across sensory modalities

Supplemental Information: Task-specific transfer of perceptual learning across sensory modalities Supplemental Information: Task-specific transfer of perceptual learning across sensory modalities David P. McGovern, Andrew T. Astle, Sarah L. Clavin and Fiona N. Newell Figure S1: Group-averaged learning

More information

A possible mechanism for impaired joint attention in autism

A possible mechanism for impaired joint attention in autism A possible mechanism for impaired joint attention in autism Justin H G Williams Morven McWhirr Gordon D Waiter Cambridge Sept 10 th 2010 Joint attention in autism Declarative and receptive aspects initiating

More information

The functional organization of the ventral visual pathway and its relationship to object recognition

The functional organization of the ventral visual pathway and its relationship to object recognition Kanwisher-08 9/16/03 9:27 AM Page 169 Chapter 8 The functional organization of the ventral visual pathway and its relationship to object recognition Kalanit Grill-Spector Abstract Humans recognize objects

More information

Experimental Design. Thomas Wolbers Space and Aging Laboratory Centre for Cognitive and Neural Systems

Experimental Design. Thomas Wolbers Space and Aging Laboratory Centre for Cognitive and Neural Systems Experimental Design Thomas Wolbers Space and Aging Laboratory Centre for Cognitive and Neural Systems Overview Design of functional neuroimaging studies Categorical designs Factorial designs Parametric

More information

Visual Context Dan O Shea Prof. Fei Fei Li, COS 598B

Visual Context Dan O Shea Prof. Fei Fei Li, COS 598B Visual Context Dan O Shea Prof. Fei Fei Li, COS 598B Cortical Analysis of Visual Context Moshe Bar, Elissa Aminoff. 2003. Neuron, Volume 38, Issue 2, Pages 347 358. Visual objects in context Moshe Bar.

More information

Summary. Multiple Body Representations 11/6/2016. Visual Processing of Bodies. The Body is:

Summary. Multiple Body Representations 11/6/2016. Visual Processing of Bodies. The Body is: Visual Processing of Bodies Corps et cognition: l'embodiment Corrado Corradi-Dell Acqua corrado.corradi@unige.ch Theory of Pain Laboratory Summary Visual Processing of Bodies Category-specificity in ventral

More information

Reporting Checklist for Nature Neuroscience

Reporting Checklist for Nature Neuroscience Corresponding Author: Manuscript Number: Manuscript Type: Simon Musall NNA47695 Article Reporting Checklist for Nature Neuroscience # Main Figures: 6 # Supplementary Figures: 14 # Supplementary Tables:

More information

Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition

Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition Charles F. Cadieu, Ha Hong, Daniel L. K. Yamins, Nicolas Pinto, Diego Ardila, Ethan A. Solomon, Najib

More information

Incorporation of Imaging-Based Functional Assessment Procedures into the DICOM Standard Draft version 0.1 7/27/2011

Incorporation of Imaging-Based Functional Assessment Procedures into the DICOM Standard Draft version 0.1 7/27/2011 Incorporation of Imaging-Based Functional Assessment Procedures into the DICOM Standard Draft version 0.1 7/27/2011 I. Purpose Drawing from the profile development of the QIBA-fMRI Technical Committee,

More information

NeuroImage xxx (2010) xxx xxx. Contents lists available at ScienceDirect. NeuroImage. journal homepage: www. elsevier. com/ locate/ ynimg

NeuroImage xxx (2010) xxx xxx. Contents lists available at ScienceDirect. NeuroImage. journal homepage: www. elsevier. com/ locate/ ynimg NeuroImage xxx (2010) xxx xxx YNIMG-07283; No. of pages: 15; 4C: 4, 5, 7, 8, 10 Contents lists available at ScienceDirect NeuroImage journal homepage: www. elsevier. com/ locate/ ynimg Sparsely-distributed

More information

SUPPLEMENT: DYNAMIC FUNCTIONAL CONNECTIVITY IN DEPRESSION. Supplemental Information. Dynamic Resting-State Functional Connectivity in Major Depression

SUPPLEMENT: DYNAMIC FUNCTIONAL CONNECTIVITY IN DEPRESSION. Supplemental Information. Dynamic Resting-State Functional Connectivity in Major Depression Supplemental Information Dynamic Resting-State Functional Connectivity in Major Depression Roselinde H. Kaiser, Ph.D., Susan Whitfield-Gabrieli, Ph.D., Daniel G. Dillon, Ph.D., Franziska Goer, B.S., Miranda

More information

Impaired face discrimination in acquired prosopagnosia is associated with abnormal response to individual faces in the right middle fusiform gyrus

Impaired face discrimination in acquired prosopagnosia is associated with abnormal response to individual faces in the right middle fusiform gyrus Impaired face discrimination in acquired prosopagnosia is associated with abnormal response to individual faces in the right middle fusiform gyrus Christine Schiltz Bettina Sorger Roberto Caldara Fatima

More information

arxiv: v1 [q-bio.nc] 12 Jun 2014

arxiv: v1 [q-bio.nc] 12 Jun 2014 1 arxiv:1406.3284v1 [q-bio.nc] 12 Jun 2014 Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition Charles F. Cadieu 1,, Ha Hong 1,2, Daniel L. K. Yamins 1,

More information

Functional MRI Mapping Cognition

Functional MRI Mapping Cognition Outline Functional MRI Mapping Cognition Michael A. Yassa, B.A. Division of Psychiatric Neuro-imaging Psychiatry and Behavioral Sciences Johns Hopkins School of Medicine Why fmri? fmri - How it works Research

More information

Changing expectations about speed alters perceived motion direction

Changing expectations about speed alters perceived motion direction Current Biology, in press Supplemental Information: Changing expectations about speed alters perceived motion direction Grigorios Sotiropoulos, Aaron R. Seitz, and Peggy Seriès Supplemental Data Detailed

More information

VIII. 10. Right Temporal-Lobe Contribution to the Retrieval of Family Relationships in Person Identification

VIII. 10. Right Temporal-Lobe Contribution to the Retrieval of Family Relationships in Person Identification CYRIC Annual Report 2009 VIII. 10. Right Temporal-Lobe Contribution to the Retrieval of Family Relationships in Person Identification Abe N. 1, Fujii T. 1, Ueno A. 1, Shigemune Y. 1, Suzuki M. 2, Tashiro

More information

THE ENCODING OF PARTS AND WHOLES

THE ENCODING OF PARTS AND WHOLES THE ENCODING OF PARTS AND WHOLES IN THE VISUAL CORTICAL HIERARCHY JOHAN WAGEMANS LABORATORY OF EXPERIMENTAL PSYCHOLOGY UNIVERSITY OF LEUVEN, BELGIUM DIPARTIMENTO DI PSICOLOGIA, UNIVERSITÀ DI MILANO-BICOCCA,

More information

Decline of the McCollough effect by orientation-specific post-adaptation exposure to achromatic gratings

Decline of the McCollough effect by orientation-specific post-adaptation exposure to achromatic gratings *Manuscript Click here to view linked References Decline of the McCollough effect by orientation-specific post-adaptation exposure to achromatic gratings J. Bulthé, H. Op de Beeck Laboratory of Biological

More information

11/18/2013. Correlational Research. Correlational Designs. Why Use a Correlational Design? CORRELATIONAL RESEARCH STUDIES

11/18/2013. Correlational Research. Correlational Designs. Why Use a Correlational Design? CORRELATIONAL RESEARCH STUDIES Correlational Research Correlational Designs Correlational research is used to describe the relationship between two or more naturally occurring variables. Is age related to political conservativism? Are

More information

Common Neural Substrates for Ordinal Representation in Short-Term Memory, Numerical and Alphabetical Cognition

Common Neural Substrates for Ordinal Representation in Short-Term Memory, Numerical and Alphabetical Cognition Common Neural Substrates for Ordinal Representation in Short-Term Memory, Numerical and Alphabetical Cognition Lucie Attout 1 *, Wim Fias 2, Eric Salmon 3, Steve Majerus 1 1 Department of Psychology -

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION doi: doi:10.1038/nature08103 10.1038/nature08103 Supplementary Figure 1. Sample pictures. Examples of the natural scene pictures used in the main experiment. www.nature.com/nature 1 Supplementary Figure

More information

NeuroImage 61 (2012) Contents lists available at SciVerse ScienceDirect. NeuroImage. journal homepage:

NeuroImage 61 (2012) Contents lists available at SciVerse ScienceDirect. NeuroImage. journal homepage: NeuroImage 61 (2012) 1113 1119 Contents lists available at SciVerse ScienceDirect NeuroImage journal homepage: www.elsevier.com/locate/ynimg The advantage of brief fmri acquisition runs for multi-voxel

More information

What do you think of the following research? I m interested in whether a low glycemic index diet gives better control of diabetes than a high

What do you think of the following research? I m interested in whether a low glycemic index diet gives better control of diabetes than a high What do you think of the following research? I m interested in whether a low glycemic index diet gives better control of diabetes than a high glycemic index diet. So I randomly assign 100 people with type

More information

Involvement of both prefrontal and inferior parietal cortex. in dual-task performance

Involvement of both prefrontal and inferior parietal cortex. in dual-task performance Involvement of both prefrontal and inferior parietal cortex in dual-task performance Fabienne Collette a,b, Laurence 01ivier b,c, Martial Van der Linden a,d, Steven Laureys b, Guy Delfiore b, André Luxen

More information

Using confusion matrices to estimate mutual information between two categorical measurements

Using confusion matrices to estimate mutual information between two categorical measurements 2013 3rd International Workshop on Pattern Recognition in Neuroimaging Using confusion matrices to estimate mutual information between two categorical measurements Dirk B. Walther (bernhardt-walther.1@osu.edu)

More information

Differential Viewing Strategies towards Attractive and Unattractive Human Faces

Differential Viewing Strategies towards Attractive and Unattractive Human Faces Differential Viewing Strategies towards Attractive and Unattractive Human Faces Ivan Getov igetov@clemson.edu Greg Gettings ggettin@clemson.edu A.J. Villanueva aaronjv@clemson.edu Chris Belcher cbelche@clemson.edu

More information

The Role of Working Memory in Visual Selective Attention

The Role of Working Memory in Visual Selective Attention Goldsmiths Research Online. The Authors. Originally published: Science vol.291 2 March 2001 1803-1806. http://www.sciencemag.org. 11 October 2000; accepted 17 January 2001 The Role of Working Memory in

More information

Reporting Checklist for Nature Neuroscience

Reporting Checklist for Nature Neuroscience Corresponding Author: Manuscript Number: Manuscript Type: Wu Li NNA48469A Article Reporting Checklist for Nature Neuroscience # Main Figures: 6 # Supplementary Figures: 0 # Supplementary Tables: 0 # Supplementary

More information

Enhanced Brain Correlations during Rest Are Related to Memory for Recent Experiences

Enhanced Brain Correlations during Rest Are Related to Memory for Recent Experiences Article Enhanced Brain Correlations during Are Related to Memory for Recent Experiences Arielle Tambini, 1 Nicholas Ketz, 2 and Lila Davachi 1,2, * 1 Center for Neural Science, New York University, 4 Washington

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 5: Data analysis II Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis II 6 Single

More information

FAILURES OF OBJECT RECOGNITION. Dr. Walter S. Marcantoni

FAILURES OF OBJECT RECOGNITION. Dr. Walter S. Marcantoni FAILURES OF OBJECT RECOGNITION Dr. Walter S. Marcantoni VISUAL AGNOSIA -damage to the extrastriate visual regions (occipital, parietal and temporal lobes) disrupts recognition of complex visual stimuli

More information

Neuropsychologia ] (]]]]) ]]] ]]] Contents lists available at SciVerse ScienceDirect. Neuropsychologia

Neuropsychologia ] (]]]]) ]]] ]]] Contents lists available at SciVerse ScienceDirect. Neuropsychologia Neuropsychologia ] (]]]]) ]]] ]]] Contents lists available at SciVerse ScienceDirect Neuropsychologia journal homepage: www.elsevier.com/locate/neuropsychologia The response of face-selective cortex with

More information

Preliminary MEG decoding results Leyla Isik, Ethan M. Meyers, Joel Z. Leibo, and Tomaso Poggio

Preliminary MEG decoding results Leyla Isik, Ethan M. Meyers, Joel Z. Leibo, and Tomaso Poggio Computer Science and Artificial Intelligence Laboratory Technical Report MIT-CSAIL-TR-2012-010 CBCL-307 April 20, 2012 Preliminary MEG decoding results Leyla Isik, Ethan M. Meyers, Joel Z. Leibo, and Tomaso

More information

Brain activity related to integrative processes in visual object recognition: bottom-up integration and the modulatory influence of stored knowledge

Brain activity related to integrative processes in visual object recognition: bottom-up integration and the modulatory influence of stored knowledge Neuropsychologia 40 (2002) 1254 1267 Brain activity related to integrative processes in visual object recognition: bottom-up integration and the modulatory influence of stored knowledge C. Gerlach a,,

More information

Experimental design of fmri studies

Experimental design of fmri studies Experimental design of fmri studies Kerstin Preuschoff Computational Neuroscience Lab, EPFL LREN SPM Course Lausanne April 10, 2013 With many thanks for slides & images to: Rik Henson Christian Ruff Sandra

More information

Chapter 5: Perceiving Objects and Scenes

Chapter 5: Perceiving Objects and Scenes Chapter 5: Perceiving Objects and Scenes The Puzzle of Object and Scene Perception The stimulus on the receptors is ambiguous. Inverse projection problem: An image on the retina can be caused by an infinite

More information

Show Me the Features: Regular Viewing Patterns. During Encoding and Recognition of Faces, Objects, and Places. Makiko Fujimoto

Show Me the Features: Regular Viewing Patterns. During Encoding and Recognition of Faces, Objects, and Places. Makiko Fujimoto Show Me the Features: Regular Viewing Patterns During Encoding and Recognition of Faces, Objects, and Places. Makiko Fujimoto Honors Thesis, Symbolic Systems 2014 I certify that this honors thesis is in

More information

Classifying instantaneous cognitive states from fmri data

Classifying instantaneous cognitive states from fmri data Carnegie Mellon University From the SelectedWorks of Marcel Adam Just 2003 Classifying instantaneous cognitive states from fmri data Tom M. Mitchell Rebecca Hutchinson Marcel Adam Just, Carnegie Mellon

More information

Voxel-based Lesion-Symptom Mapping. Céline R. Gillebert

Voxel-based Lesion-Symptom Mapping. Céline R. Gillebert Voxel-based Lesion-Symptom Mapping Céline R. Gillebert Paul Broca (1861) Mr. Tan no productive speech single repetitive syllable tan Broca s area: speech production Broca s aphasia: problems with fluency,

More information

Frank Tong. Department of Psychology Green Hall Princeton University Princeton, NJ 08544

Frank Tong. Department of Psychology Green Hall Princeton University Princeton, NJ 08544 Frank Tong Department of Psychology Green Hall Princeton University Princeton, NJ 08544 Office: Room 3-N-2B Telephone: 609-258-2652 Fax: 609-258-1113 Email: ftong@princeton.edu Graduate School Applicants

More information

The representation of perceived shape similarity and its role for category learning in monkeys: A modeling study

The representation of perceived shape similarity and its role for category learning in monkeys: A modeling study Available online at www.sciencedirect.com Vision Research 48 (2008) 598 610 www.elsevier.com/locate/visres The representation of perceived shape similarity and its role for category learning in monkeys:

More information

Modeling the Deployment of Spatial Attention

Modeling the Deployment of Spatial Attention 17 Chapter 3 Modeling the Deployment of Spatial Attention 3.1 Introduction When looking at a complex scene, our visual system is confronted with a large amount of visual information that needs to be broken

More information

Topic 11 - Parietal Association Cortex. 1. Sensory-to-motor transformations. 2. Activity in parietal association cortex and the effects of damage

Topic 11 - Parietal Association Cortex. 1. Sensory-to-motor transformations. 2. Activity in parietal association cortex and the effects of damage Topic 11 - Parietal Association Cortex 1. Sensory-to-motor transformations 2. Activity in parietal association cortex and the effects of damage Sensory to Motor Transformation Sensory information (visual,

More information

Supplementary Online Content

Supplementary Online Content Supplementary Online Content Green SA, Hernandez L, Tottenham N, Krasileva K, Bookheimer SY, Dapretto M. The neurobiology of sensory overresponsivity in youth with autism spectrum disorders. Published

More information

DATA MANAGEMENT & TYPES OF ANALYSES OFTEN USED. Dennis L. Molfese University of Nebraska - Lincoln

DATA MANAGEMENT & TYPES OF ANALYSES OFTEN USED. Dennis L. Molfese University of Nebraska - Lincoln DATA MANAGEMENT & TYPES OF ANALYSES OFTEN USED Dennis L. Molfese University of Nebraska - Lincoln 1 DATA MANAGEMENT Backups Storage Identification Analyses 2 Data Analysis Pre-processing Statistical Analysis

More information

Classification. Methods Course: Gene Expression Data Analysis -Day Five. Rainer Spang

Classification. Methods Course: Gene Expression Data Analysis -Day Five. Rainer Spang Classification Methods Course: Gene Expression Data Analysis -Day Five Rainer Spang Ms. Smith DNA Chip of Ms. Smith Expression profile of Ms. Smith Ms. Smith 30.000 properties of Ms. Smith The expression

More information

Supplementary Note Psychophysics:

Supplementary Note Psychophysics: Supplementary Note More detailed description of MM s subjective experiences can be found on Mike May s Perceptions Home Page, http://www.senderogroup.com/perception.htm Psychophysics: The spatial CSF was

More information

Framework for Comparative Research on Relational Information Displays

Framework for Comparative Research on Relational Information Displays Framework for Comparative Research on Relational Information Displays Sung Park and Richard Catrambone 2 School of Psychology & Graphics, Visualization, and Usability Center (GVU) Georgia Institute of

More information

Reporting Checklist for Nature Neuroscience

Reporting Checklist for Nature Neuroscience Corresponding Author: Manuscript Number: Manuscript Type: Bernhard Staresina A51406B Article Reporting Checklist for Nature Neuroscience # Main Figures: 5 # Supplementary Figures: 10 # Supplementary Tables:

More information

Supporting Information

Supporting Information Revisiting default mode network function in major depression: evidence for disrupted subsystem connectivity Fabio Sambataro 1,*, Nadine Wolf 2, Maria Pennuto 3, Nenad Vasic 4, Robert Christian Wolf 5,*

More information

Supplementary Material for

Supplementary Material for Supplementary Material for Selective neuronal lapses precede human cognitive lapses following sleep deprivation Supplementary Table 1. Data acquisition details Session Patient Brain regions monitored Time

More information

Reporting Checklist for Nature Neuroscience

Reporting Checklist for Nature Neuroscience Corresponding Author: Manuscript Number: Manuscript Type: Rutishauser NNA57105 Article Reporting Checklist for Nature Neuroscience # Main Figures: 8 # Supplementary Figures: 6 # Supplementary Tables: 1

More information