SUPPLEMENTARY INFORMATION. Predicting visual stimuli based on activity in auditory cortices
Kaspar Meyer, Jonas T. Kaplan, Ryan Essex, Cecelia Webber, Hanna Damasio & Antonio Damasio
Brain and Creativity Institute, University of Southern California, 3641 Watt Way, Los Angeles, California, USA
Supplementary figures and figure legends

[Supplementary Figure 1: timeline schematic of alternating video clips (5 s) and image acquisitions (2 s) within the 11-s repetition time]

Supplementary Figure 1 Sparse-sampling scanning paradigm. All video clips were presented during the silent interval between two image acquisitions. The image acquisition started seven seconds after the beginning and two seconds after the end of a video clip.
Supplementary Figure 2 Auditory mask. Horizontal (a) and coronal (b) cuts through the brain of Subject 1 showing the bilateral auditory mask on the supratemporal plane. The left hemisphere is displayed on the right side of the images. The horizontal slice does not display the full antero-posterior extent of the right-hemispheric mask. The abrupt anterior cutoff of the left-hemispheric mask is due to the conservative criteria we applied when determining the borders of the mask (see Supplementary methods).
Supplementary Figure 3 Subjective ratings of the video clips. Subjects were asked the following question: "Did the video clip you just watched evoke sounds in your mind?" They were requested to rate each clip on a scale from 1 ("not at all") to 7 ("very much so"). The bars represent average ratings across all subjects. The error bars represent the standard error of the mean.
Supplementary Figure 4 Classifier performance in control regions. The bars represent classifier performance averaged across all subjects and across all three pair-wise categorical discriminations for the target mask in early auditory cortices (green bar) and for control masks in primary visual cortex, anterior cingulate cortex, middle temporal gyrus, and middle frontal gyrus (blue bars). The control masks were adjusted in size to the auditory target mask (see Supplementary methods).
Supplementary Figure 5 Classifier performance on novel stimuli. Blue bars represent classifier performance when two stimuli of each category were used as training trials and the remaining stimulus from each category was used as test trial. Performance was significantly above chance for the two discriminations involving the object stimuli. For comparison, green bars represent classifier performance when the classifier was trained and tested on the same two stimuli from each category.
Supplementary Figure 6 Classifier performance for scrambled categories. The x-axis represents different ways in which the nine video clips were rearranged from the original categories into novel groups of three. The scores from 3 to 9 indicate how many stimuli remained in their original groups. A score of 9 thus refers to the original categorical arrangement, whereas a score of 7 indicates that two stimuli had been switched between the original groups. A score of 3 indicates that each group contained one stimulus from each of the original categories. The y-axis represents classifier performance on three-way discriminations averaged across all possible arrangements corresponding to a certain score. Prediction power was best for the original arrangement and worst for arrangements in which all three groups contained one stimulus from each of the original categories. Chance level is 0.333.
Supplementary Figure 7 Classifier performance on auditory stimuli. Bars represent classifier performance on each of the 36 pair-wise discriminations among individual auditory stimuli (the figure is analogous to Fig. 1a in the main manuscript, which displays the same information for the visual stimuli). Stimulus key: 1, rooster; 2, cow; 3, dog; 4, violin; 5, piano; 6, bass; 7, vase; 8, chainsaw; 9, coins.
Supplementary Figure 8 Classifier performance on novel stimuli (auditory condition). Blue bars represent classifier performance when two stimuli of each category were used as training trials and the remaining stimulus from each category was used as test trial. For comparison, green bars represent classifier performance when the classifier was trained and tested on the same two stimuli from each category.
Supplementary methods

Stimuli

Please refer to the video files (Supplementary video clips 1-3) provided online for examples of our video clips. We recorded five of the nine clips ourselves; the remaining four were downloaded from the internet, making sure that image quality and resolution were sufficient. Supplementary Fig. 3 displays the subjective ratings of the video clips by the subjects, in terms of how much sound the clips evoked in their mind. Note that the subjects rated the clips after the end of the scanning session; during the experiment, they were naïve to the purpose of the study, i.e. they were not told that the video clips they saw were aimed at evoking sounds in their mind. The rationale behind the categorical selection of the stimuli was two-fold. First, we were interested to see whether the auditory activity patterns induced by visual stimuli pertaining to a conceptual category would share certain features with one another and could be told apart from the patterns induced by stimuli pertaining to different categories. Second (and more importantly with respect to the main purpose of the study), if categorical representations existed, we would be able to use this to our advantage from a methodological point of view. The auditory activity patterns we aimed to demonstrate presumably had a very low signal-to-noise ratio (SNR) because they were not induced by direct auditory input (but rather by signals from the visual cortex) and because we employed a sparse-sampling scanning paradigm that only permitted us to acquire a single image per stimulus presentation. Given the low SNR, a high number of training trials for the classifier appeared crucial. If the auditory activity patterns induced by stimuli from within a category shared certain features, this would permit us to collapse them to train the classifier and thus increase the number of trials three-fold.
Attaining the same number of training trials using a single stimulus would have been extremely monotonous for the subjects and might have resulted in decreased attention and undesired habituation effects.
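The three-fold gain from collapsing a category follows directly from the presentation counts reported in the next section; as a trivial sanity check (variable names are ours, not the authors'):

```python
# Presentation counts from the scanning protocol: 8 functional runs,
# each of the 9 clips shown 3 times per run.
runs = 8
presentations_per_run = 3
per_stimulus = runs * presentations_per_run   # 24 presentations per clip
per_category = 3 * per_stimulus               # 72 when a category's clips are collapsed
total = 9 * per_stimulus                      # 216 presentations overall

assert (per_stimulus, per_category, total) == (24, 72, 216)
```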
Stimulus presentation and image acquisition

Timing and presentation of the video clips was controlled with MATLAB (The MathWorks) using the freely available Psychophysics Toolbox Version 3 software (ref. 16). The clips were projected onto a rear-projection screen at the end of the scanner bore, which subjects viewed through a mirror mounted on the head coil. A sparse-sampling scanning paradigm was used to ensure that the video clips were presented during scanner silence (Supplementary Fig. 1): a single whole-brain volume was acquired starting seven seconds after the beginning (and thus two seconds after the end) of each video clip. This delay was chosen based on observations in a separate set of subjects, which had shown that the hemodynamic response recorded from auditory cortex in response to sound stimuli reached its peak relatively early, around four seconds post stimulus presentation. As the sound-implying events occurred towards the end of the five-second video clips, the above stimulus paradigm was aimed at capturing auditory activity at its maximum. During each functional run, the nine video clips were presented three times each in random order, for a total of 27 clips per run. Each subject completed eight functional runs, yielding a total of 216 clip presentations (72 per category, 24 per individual stimulus). Images were acquired with a 3-Tesla Siemens MAGNETOM Trio system. Echo-planar volumes were acquired with the following parameters: TR = 11,000 ms, TA = 2,000 ms, TE = 25 ms, flip angle = 90°, 64 x 64 matrix, in-plane resolution 3.0 mm x 3.0 mm, 41 transverse slices, each 2.5 mm thick, covering the whole brain. To precisely define the auditory mask, we also acquired a structural T1-weighted MPRAGE in each subject (TR = 2,530 ms, TE = 3.09 ms, flip angle = 10°, 256 x 256 matrix, 208 coronal slices, 1 mm isotropic resolution).
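The sparse-sampling timing just described can be sketched as a small calculation (all values are taken from the text; function and variable names are ours, not from the authors' presentation scripts):

```python
# Trial timing for the sparse-sampling paradigm (Supplementary Fig. 1).
TR = 11.0            # volume repetition time, s
TA = 2.0             # acquisition time per volume, s
CLIP_DURATION = 5.0  # length of each video clip, s
ACQ_DELAY = 7.0      # acquisition onset relative to clip onset, s

def trial_events(clip_onset):
    """Return (clip_end, acq_start, acq_end) for a trial starting at clip_onset."""
    clip_end = clip_onset + CLIP_DURATION
    acq_start = clip_onset + ACQ_DELAY
    return clip_end, acq_start, acq_start + TA

# Acquisition begins 2 s after the clip ends, so every clip plays in scanner
# silence, and the 11-s TR leaves the next clip in silence as well.
clip_end, acq_start, acq_end = trial_events(0.0)
assert acq_start - clip_end == 2.0
```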
The functional scans were co-registered to each subject's anatomical scan using FSL's (FMRIB Software Library; ref. 17) FLIRT linear registration tool (ref. 18) to perform a six-degree-of-freedom rigid-body transformation. The resulting transformation matrix was then used to
warp the anatomical masks into the functional data space, where all subsequent analyses were performed. Studies applying multivariate pattern analysis often use voxel sizes smaller than the one employed in the present study. However, decreasing voxel size to 1.5 mm x 1.5 mm x 2.0 mm resulted in a decrease, rather than an increase, of classifier performance. This was likely due to the fact that the SNR, presumably low in the first place (see above), was further decreased by the reduction of voxel size.

Regions of interest

The main objective in defining our target mask in auditory cortices was to ensure that it would not include any multimodal cortices, as it was our aim to predict visual stimuli based on the activity of areas not targeted by ascending visual pathways. We therefore opted for a restrictive anatomical mask, rather than a functional localizer, as the latter would have borne the risk of labeling multimodal areas. In the antero-posterior dimension, our auditory mask (Supplementary Fig. 2) was restricted to the extent of Heschl's gyrus (or the anterior-most transverse temporal gyrus if there was more than one). We are aware that unimodal auditory cortex is considered to extend beyond Heschl's gyrus to include a larger extent of the planum temporale, especially in the posterior direction. However, the only unambiguous anatomical criterion for the posterior limit of our mask behind Heschl's gyrus would have been the end of the Sylvian fissure; had our mask been defined accordingly, it likely would have included multimodal areas around the temporo-parietal junction. Therefore, we opted for a very restrictive definition of the mask even though this lowered prediction performance with respect to a mask that included more of the anatomical extent of the planum temporale. Heschl's gyrus is located on the supratemporal plane and runs in a postero-medial-to-antero-lateral direction.
Its postero-medial end is usually an unambiguous eminence that starts to appear at the medial border of the supratemporal plane when one moves through coronal slices in a postero-anterior
direction. Its antero-lateral end is often less well demarcated. We chose the following conservative definition to determine the anterior border of Heschl's gyrus: we included only slices on which both the first transverse temporal sulcus (bounding Heschl's gyrus anteriorly) and Heschl's sulcus (bounding the gyrus posteriorly) were discernible. In some cases, this meant that we discarded slices on which the gyrus still seemed apparent but was not well-defined in terms of the bordering sulci. Medio-laterally, the mask extended beyond Heschl's gyrus to include the whole width of the supratemporal plane. Medially, the border of the mask was clearly defined by the transition point between the supratemporal plane and the insular cortex; laterally, we made sure to include no more than half of the superior temporal gyrus, as we wanted to avoid any gray matter in the superior temporal sulcus, which is known to contain multimodal areas. To illustrate classifier performance outside unimodal auditory cortex, we established control masks in primary visual cortex, anterior cingulate cortex, middle frontal gyrus, and anterior middle temporal gyrus. These masks were obtained by choosing the corresponding probabilistic masks from the Jülich (primary visual cortex) and Harvard-Oxford (anterior cingulate cortex, middle frontal gyrus, and anterior middle temporal gyrus) cortical atlases included with FSL and then warping them from the standard space into each individual's functional space. For each control mask, we selected the probability threshold so that the average number of voxels it occupied in the subjects' functional spaces did not differ significantly from the average number of voxels occupied by the auditory target mask.

Multivariate pattern analysis

Multivariate pattern analysis (MVPA) of fMRI data uses pattern classification algorithms to identify distributed neural representations associated with specific stimuli or classes of stimuli.
As opposed to traditional (univariate) fMRI analysis techniques, in which each voxel is analyzed separately, MVPA can find information in the spatial pattern of activation across multiple voxels. For example,
whereas the average activation in a region of interest may not differ significantly between two stimulus conditions, information may still be contained within the spatial profile of the activity pattern in that region. MVPA is often used for decoding, i.e. it aims at identifying a specific perceptual representation based on the pattern of neural activation in a region of interest. For example, MVPA has been used to predict, from the activity pattern in primary visual cortex, which of several visual stimuli a subject was exposed to. Typically, a computer algorithm (referred to as a "classifier") is trained for this prediction task using a certain proportion of all trials, and the performance of the algorithm is then tested using the remaining trials. For example, we would train the algorithm by providing it with the auditory activity patterns recorded during dog and chainsaw trials of seven functional runs, telling it each time whether the pattern corresponded to a dog or a chainsaw trial. We would then test the algorithm by providing it with dog and chainsaw trials from the last run and asking it, each time, to guess whether the pattern corresponded to a dog or a chainsaw trial. If the algorithm identified the correct stimulus at a higher-than-chance level in the test trials, this would imply that the area within our auditory mask contains information about the video clips presented to the subject. MVPA was performed using the PyMVPA software package (ref. 19), in combination with LibSVM's implementation of the linear support vector machine. Data from the eight functional scans were concatenated and motion-corrected to the middle volume of the entire time series using FSL's MCFLIRT tool (ref. 20) and then linearly de-trended and converted to z-scores by run. Due to the sparse-sampling scanning paradigm (Supplementary Fig. 1), each trial corresponded to a single functional volume.
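A minimal sketch of the per-run z-scoring and leave-one-run-out cross-validation used in this kind of analysis, assuming trials arrive as (voxel-pattern, label) pairs. The study used LibSVM's linear support vector machine via PyMVPA; the nearest-mean classifier below is only a stand-in so the sketch stays self-contained, and the linear detrending step is omitted:

```python
from statistics import mean, pstdev

def zscore_run(run):
    """Z-score each voxel across the trials of one run.

    `run` is a list of (pattern, label) pairs; each pattern is the list of
    voxel values from the single volume acquired on that trial."""
    n_vox = len(run[0][0])
    cols = [[pattern[v] for pattern, _ in run] for v in range(n_vox)]
    stats = [(mean(c), pstdev(c) or 1.0) for c in cols]
    return [([(x - m) / s for x, (m, s) in zip(pattern, stats)], label)
            for pattern, label in run]

def nearest_mean_predict(train, pattern):
    """Stand-in classifier: pick the label whose mean training pattern is
    closest (squared Euclidean distance). Not the authors' SVM."""
    def dist(label):
        pats = [p for p, l in train if l == label]
        centroid = [mean(col) for col in zip(*pats)]
        return sum((a - b) ** 2 for a, b in zip(pattern, centroid))
    return min({l for _, l in train}, key=dist)

def leave_one_run_out(runs):
    """Train on all runs but one, test on the held-out run, and average the
    proportion of correct guesses over all folds."""
    runs = [zscore_run(r) for r in runs]
    correct = total = 0
    for i, test_run in enumerate(runs):
        train = [t for j, r in enumerate(runs) if j != i for t in r]
        for pattern, label in test_run:
            correct += nearest_mean_predict(train, pattern) == label
            total += 1
    return correct / total

# Toy demonstration: two runs of four trials with two voxels each;
# high voxel 0 codes "dog", high voxel 1 codes "chainsaw".
def make_run(eps):
    return [([1.0 + eps, 0.0], "dog"), ([0.0, 1.0 + eps], "chainsaw"),
            ([0.8, 0.2], "dog"), ([0.2, 0.8], "chainsaw")]

assert leave_one_run_out([make_run(0.0), make_run(0.1)]) == 1.0
```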
We performed pair-wise discriminations both among all individual stimuli (n = 36, given there were nine stimuli) and among all categories (n = 3, given there were three categories). In either case, training and testing was performed with a cross-validation approach: for each cross-validation step, the classifier was trained on seven functional runs and tested on the eighth. This procedure was repeated eight times, using each run as test run once. In each cross-validation step,
performance was calculated as the number of correct guesses of the classifier divided by the number of test trials. Overall performance was obtained by averaging the results from the eight cross-validation steps. The algorithm classified a total of 48 trials for each individual pair-wise discrimination and a total of 144 trials for each categorical pair-wise discrimination in each subject. Support vector machines were designed for binary classification, i.e. to distinguish between two stimuli only. However, there are various ways to combine multiple binary classifiers to perform multiclass discriminations, i.e. to distinguish among more than two stimuli. LibSVM uses the so-called "one-against-one" method (see reference 21 for further information).

Statistical analyses

All p-values referred to in the main text are the results of two-tailed t-tests across all eight subjects. Whenever parametric testing was performed, we first verified whether the data points were compatible with the assumption of an underlying normal distribution by performing a Lilliefors test. Among the results of the 36 pair-wise discriminations among individual stimuli (Fig. 1a), this assumption only had to be rejected in a single case (which is to be expected when testing 36 data samples at p = 0.05). The only other set of results that did not pass the normality test was the one for the categorical discrimination between animals and objects (dark gray bars in Fig. 2). However, when the remarkably high performance value of Subject 1 was eliminated from the statistic, the p-value resulting from the remaining seven performance values, which now passed the normality test, was actually even lower than the one indicated in the text (1.2 × 10^-5 vs. 6.4 × 10^-5).
Supplementary results and discussion

Is prediction performance due to differences in the overall activity level evoked by different video clips within the auditory mask?

As the subjective ratings of our video clips indicated that they were not identical in terms of how much sound they evoked in the subjects' minds (Supplementary Fig. 3), it is conceivable that prediction power could have resulted from differences in the overall level of activity within the auditory mask, rather than from information contained within the spatial activity profile. We assessed this possibility by repeating all pair-wise analyses presented earlier (both among individual stimuli and among categories) but training the classifier only on the averaged activity across the whole auditory mask, thus depriving it of the information contained in the spatial activity profile. Of the 26 pair-wise discriminations among individual stimuli that had yielded prediction performances significantly above chance, only two remained significant when this method was applied (note that at p = 0.05, one would on average expect 1.3 out of the 26 discriminations to fulfill the significance criterion purely by chance). All three pair-wise discriminations among categories were no longer significant. This indicates that the information that allowed the classifier to successfully predict the visual stimuli was indeed contained in the spatial profile, rather than the overall level, of auditory activity.

Classifier performance outside early auditory cortices

To illustrate classifier performance outside early auditory cortices, we established size-matched control masks in primary visual cortex, anterior cingulate cortex, middle frontal gyrus, and middle temporal gyrus (see Supplementary methods). Within each of these masks, we assessed classifier performance averaged across all subjects and across all three categorical discrimination tasks.
Given the visual nature of our stimuli, we expected the analysis of V1 activity to yield high prediction performance.
Moreover, any brain region targeted by forward projections from the visual cortex might also contain information about the video clips, leading to above-chance prediction performance. The results are in keeping with these predictions (Supplementary Fig. 4 online): performance in primary visual cortex was higher than in the auditory target mask, the averaged performance across all three categorical discriminations approaching 0.8. Prediction performance in the anterior cingulate cortex was modest but significantly different from chance, while performance in the middle frontal gyrus and the middle temporal gyrus was indistinguishable from chance. Thus, classifier performance in the auditory target mask exceeded performance in the latter three control masks even though the control masks were located in areas which are considered multimodal association cortices and potentially receive (indirect) forward projections from the visual cortices.

More evidence for categorical representations

The fact that the classifier successfully differentiated categories when we collapsed all three stimuli of a group suggests that the auditory activity patterns induced by those stimuli shared certain features. However, it would be conceivable that the classifier could be trained to differentiate any set of three stimuli from any other set of three stimuli. Were the auditory representations induced by video clips from within a category indeed more similar than the representations induced by clips from different categories? We addressed this question in two ways. First, we tested whether the classifier was able to assign entirely novel stimuli to the correct categories. To this end, we again performed the three categorical discriminations described earlier but, in each case, only used two stimuli from each category for training and the remaining stimulus from each category for testing.
Thus, the classifier was asked to assign stimuli it had never encountered to categories it had learned based on different stimuli. For each pair-wise categorical discrimination, there were nine ways (3 x 3) to pull out one test stimulus from each
category. Averaged across all nine permutations and across all subjects, prediction performance was (p = 0.073) for animals vs. instruments, (p = 6.4 x 10-4 ) for animals vs. objects, and (p = 6.5 x 10-3 ) for instruments vs. objects (blue bars in Supplementary Fig. 5). Predictably, these results were lower than those obtained when we used the same two stimuli of each category for training and testing (0.600, 0.693, and 0.608, respectively; green bars in Supplementary Fig. 5). However, they were still significantly above chance for two of the three categorical discriminations and close to significance for the third one. Second, we performed three-way discriminations (see Online methods) both among the three original categories and among scrambled groups of stimuli. If categorical representations existed, prediction performance should be higher when discriminating among the original categories (i.e. [cow, dog, rooster] vs. [bass, piano, violin] vs. [chainsaw, coins, vase]) than when discriminating among randomly arranged groups (e.g. [cow, bass, chainsaw] vs. [dog, piano, coins] vs. [rooster, violin, vase]). We grouped our nine stimuli in all possible ways (n = 280) and then trained and tested the classifier on each of these arrangements. Averaged across subjects, the original arrangement yielded the highest prediction accuracy out of all 280 permutations (0.505 at a chance level of 0.333). Prediction accuracy decreased monotonically as a function of the number of stimuli that were switched with respect to the original categories, the lowest prediction accuracies resulting from arrangements in which each group contained one stimulus of each original category, as in the example given earlier (Supplementary Fig. 6). Both analyses just presented suggest that there were more similarities among the auditory representations induced by video clips from the same category than among the patterns induced by clips from different categories.
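Both combinatorial counts used in this section, the nine (3 x 3) train/test splits per pair of categories and the 280 ways of regrouping the nine stimuli into three triples, can be verified with a short enumeration (a sketch of the counting only; the stimulus names follow the key in Supplementary Fig. 7):

```python
from itertools import combinations, product

# Nine (3 x 3) leave-one-stimulus-out splits for one pair of categories:
# one held-out test clip per category, the remaining two used for training.
animals = ("rooster", "cow", "dog")
objects_ = ("vase", "chainsaw", "coins")
splits = list(product(animals, objects_))
assert len(splits) == 9

# 280 ways to partition nine stimuli into three unordered groups of three.
def triple_partitions(items):
    """Yield every split of `items` into unordered triples."""
    items = sorted(items)
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for pair in combinations(rest, 2):
        remaining = [x for x in rest if x not in pair]
        for sub in triple_partitions(remaining):
            yield [(first,) + pair] + sub

assert sum(1 for _ in triple_partitions(range(9))) == 280
```

The count 280 is just 9! / (3!^3 x 3!): nine stimuli arranged, divided by the orderings within each triple and the ordering of the triples themselves.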
Does this finding imply that early auditory cortices represent conceptual categories of sound stimuli, or does it simply imply that the sounds associated with the clips from a given category share acoustic features, which in turn leads to similar neural representations? A recent study (ref. 2) found categorical representations in auditory association cortices for acoustic stimuli (cats, singers, and
guitars). Notably, those categorical representations were present even though the stimuli from the three categories were closely matched in terms of duration, root-mean-square power, temporal envelope, and temporal profile of their harmonic structure. The authors concluded that the auditory association cortices represent acoustic stimuli categorically, irrespective of their acoustic features. It is tempting to speculate that the categorical representations we observe while sounds are perceived in the mind's ear would be of the same nature as those evidenced during auditory perception, i.e. that they would reflect conceptual affiliation rather than acoustic features. However, our study cannot answer this question with certainty because our stimuli were not designed to evoke sounds that would closely match with respect to their acoustic features. Naturally, even if such an attempt had been made, sounds evoked in a subject's mind by visual stimuli could not be controlled as effectively as sounds evoked by auditory stimuli.

Classifier performance on auditory stimuli

In four subjects (the ones for whom we had obtained the highest average performance during the main experiment, except for one participant who could not partake in a second scanning session due to claustrophobia), we conducted an additional control experiment in which the subjects were exposed to the actual sounds implied by the video clips used in the main experiment (the sounds were presented alone, without the corresponding video clip). In some instances, the audio clips represented directly the audio traces of the video clips; in other instances (when the sound quality of the video clips was insufficient), we recorded separate audio clips or searched for high-quality clips on the internet. In all cases, the auditory stimuli were approximately matched to the corresponding visual stimuli in terms of temporal profile.
All clips were matched in terms of root-mean-square (RMS) power and presented at a comfortable listening level adjusted individually for each subject. The clips were presented according to the same sparse-sampling paradigm used in the main experiment (Supplementary Fig. 1).
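Matching clips for RMS power amounts to rescaling each waveform so that its root-mean-square amplitude hits a common target. A minimal sketch (the authors' actual audio tooling is not specified, and the function names are ours):

```python
from math import sqrt

def rms(samples):
    """Root-mean-square amplitude of an audio signal (a list of samples)."""
    return sqrt(sum(s * s for s in samples) / len(samples))

def match_rms(samples, target):
    """Rescale a clip so its RMS power equals `target`."""
    gain = target / rms(samples)
    return [s * gain for s in samples]

clip = [0.2, -0.4, 0.1, -0.3]
matched = match_rms(clip, 0.1)
assert abs(rms(matched) - 0.1) < 1e-9
```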
We first assessed prediction performance when the classifier algorithm was both trained and tested on audio trials. As in the main experiment, each stimulus was presented three times per run, and each subject completed eight runs, for a total of 24 presentations per stimulus and 72 presentations per category. The same cross-validation paradigm was used to test the classifier as in the video sessions. Classification performance was high for the discriminations both among individual sound clips and among categories. Performance on categorical discriminations, averaged across the four subjects, was for animals vs. instruments, for animals vs. objects, and for instruments vs. objects (dark gray bars in Fig. 3). These values are slightly above those found in a previous study (ref. 2), probably due to the higher number of training trials used here and to the fact that a deliberate effort was made in the mentioned study to match the auditory stimuli with respect to a number of acoustic features (see preceding section). We assessed the significance of the categorical discriminations in all four subjects individually using a cumulative binomial distribution function. All results were highly significant (p < 0.01), except for the animals vs. instruments discrimination in Subject 7 (p = 0.075). It is interesting to note that, just as on visual trials, the classifier on average performed best on the discrimination between animal and object stimuli (black and dark gray bars in Fig. 3). Performance on pair-wise discriminations among individual clips, averaged across the four subjects, ranged from to 0.889, averaging (Supplementary Fig. 7). An interesting observation could be made from these results. In the main experiment (when video clips were used as stimuli), average classifier performance on discriminations among categories had been higher than on discriminations among individual stimuli.
This trend was evident at the group level (0.660 vs ) and in seven of eight individual subjects. In the auditory control experiment, the relationship was reversed both at the group level (0.729 vs ) and in all four individual subjects. A possible explanation for this observation is that the auditory representations of sounds that are perceived are more differentiated than the representations of sounds experienced in the mind's ear. On
the one hand, this would lead to higher discrimination accuracy at the level of individual stimuli; on the other hand, it would complicate the classifier's task on categorical discrimination tasks, as the activity patterns induced by stimuli from within a category would contain more variability ("noise", from the classifier's perspective). Additional support for this conclusion comes from Supplementary Figs. 5 and 8. Recall that we had assessed the algorithm's ability to classify entirely novel stimuli in order to verify whether the auditory activity patterns induced by video clips from within a certain category were indeed more similar than patterns induced by clips from different categories (see section "More evidence for categorical representations" and Supplementary Fig. 5). An analogous control experiment was carried out for the auditory stimuli as well (Supplementary Fig. 8). While we found that, just as had been the case for the visual stimuli, the classifier successfully assigned novel auditory stimuli to the correct categories, it can be gleaned by comparing the two figures that the drop in performance when the algorithm was tested on novel stimuli was larger in the auditory condition. This again suggests that, for auditory stimuli, there was more variability among the activity patterns induced by the stimuli from within a category, so that the classifier was less successful at guessing the third stimulus of a category from the first two. Thus, the evidence presented in this and the preceding paragraphs suggests that the auditory activity patterns instantiated while sounds are experienced in the mind's ear, while preserving the core features that permit categorical discrimination, are somewhat degraded with respect to the features that distinguish the stimuli within the categories.
Again, however, the data do not provide a conclusive answer to the question of whether these core features characterize conceptual representations or simply reflect acoustic features common to the stimuli from within a category. We also asked whether the classifier, after being trained on video trials, would be able to successfully discriminate the corresponding audio trials, and vice versa. If this were the case, we could conclude that the auditory activity pattern induced by a video clip implying a certain sound would bear
similarities with the pattern induced by the sound itself. We obtained comparable results when training the algorithm on the visual trials and testing it on the auditory trials as when following the reverse procedure; we therefore only report the results obtained according to the former paradigm. At the categorical level, while the animals vs. instruments and the instruments vs. objects discriminations yielded performances only slightly above chance (0.526 and 0.529, respectively), performance in the animals vs. objects discrimination was (results averaged across all four subjects; light gray bars in Fig. 3). While the animals vs. objects discrimination was significant in two out of four subjects, both the animals vs. instruments and the instruments vs. objects discriminations were significant in one out of four subjects (p < 0.05, cumulative binomial distribution function). A similar pattern was observed for the discriminations among individual stimuli: while the classifier performed above 0.6 in five out of the nine discriminations between animal and object stimuli, no other discrimination reached this value, with many results fluctuating around chance level. Several methodological and conceptual issues should be considered when interpreting these results. First, when training and testing the classifier on video trials, performance on the animals vs. instruments and instruments vs. objects discriminations had exceeded 0.6 only by a small margin. It was to be expected that these values would be lower yet when testing the algorithm on audio instead of video trials. Second, it is difficult to decide upon standard audio clips that would match as closely as possible the very personal sound experience each of our subjects had for each of the video clips.
In fact, given this dissimilarity in subjective sound experience between the video trials and the standardized audio trials, an analogous dissimilarity would be expected at the level of the respective neural representations in early auditory cortices, and this should inevitably lower prediction performance. A way of avoiding this problem would have been to present the auditory stimuli before the video clips, in order to control more precisely the subjects' experience when seeing the sound-implying events.
However, presenting auditory and visual stimuli in this order would have meant establishing a cross-modal association in the very course of the experiment, and this, in our opinion, would have limited the degree to which our results could be generalized. After all, we intended to show that perceiving sound-implying visual stimuli automatically leads to content-specific representations in early auditory cortices, i.e., without the subjects being aware of the purpose of the study or making a deliberate effort to imagine sounds. Third, and most importantly, when we averaged the activity level across the whole auditory mask on both the visual training trials and the auditory test trials (in the same way as for the control experiment described earlier), the classifier often performed considerably below chance level (yielding values as low as 0.35). This indicates that the sounds that evoked the highest overall activity level in the auditory condition were often not the ones that had been experienced most vividly in the visual condition. (This occurred even though the audio clips were matched for RMS power and the control experiment described earlier had indicated that the overall activity levels during the visual condition were too similar to yield significant prediction power.) Thus, it appears important to consider at least two dimensions, content and vividness, when referring both to the subjective experience of sound and to its presumed neural counterpart in early auditory cortices. While watching a video clip implying a certain sound and actually hearing that sound may lead to a (somewhat) similar auditory experience in terms of content, the experience may still differ in terms of its vividness; we would thus be looking for similarities in the neural counterparts of mental events that were not so similar in the first place.
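The mean-activity control just described, which collapses each trial's voxel pattern to a single overall activity level before classification, can be sketched as follows. This is an illustrative reimplementation, not our actual PyMVPA pipeline; the nearest-class-mean decision rule and the variable names are assumptions made for the example:

```python
def mean_signal(voxels):
    """Collapse a trial's voxel pattern to its overall activity level."""
    return sum(voxels) / len(voxels)

def train_class_means(train_trials, train_labels):
    """Average the collapsed signal per category (e.g. on video trials)."""
    sums, counts = {}, {}
    for voxels, label in zip(train_trials, train_labels):
        sums[label] = sums.get(label, 0.0) + mean_signal(voxels)
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(voxels, class_means):
    """Assign a test trial (e.g. an audio trial) to the category whose
    training-set mean activity level is closest."""
    level = mean_signal(voxels)
    return min(class_means, key=lambda label: abs(class_means[label] - level))
```

When this single-feature classifier performs below chance, the categories that showed the highest overall activity in the training condition are systematically not the ones showing it in the test condition, which is the dissociation between the visual and auditory conditions reported above.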
Specifically, while the sound experience produced by a certain auditory stimulus may be similar, in terms of content, to the experience produced by the corresponding video clip, it may also be similar, in terms of vividness, to the experience produced by a different video clip. Assuming that the neural representations in early auditory cortices reflect the corresponding mental events faithfully, this could account for some of the incorrect categorical attributions we observed. In a more general
way, it is important to remember that the subjective sound experiences induced when watching a video clip implying a certain sound and when actually perceiving that sound are by no means identical (as evidenced by our awareness of whether we hear a sound in reality or just in our mind's ear). We should not expect the neural activity patterns observed in these two situations to be any more similar than the mental events they presumably reflect. In summary, the present data do not allow the conclusion that the auditory activity patterns evoked by sound-implying video clips are replicas of those evoked by the corresponding sounds. However, given the above methodological and conceptual qualifications, it would be no less premature to claim that the patterns are entirely distinct. Also, the finding that the patterns are not identical does not rule out a correlation between neural activity in early auditory cortices and the subjective experience of sound.
Supplementary References
16. Brainard, D. H. The psychophysics toolbox. Spatial Vision 10, (1997).
17. Smith, S. M. et al. Advances in functional and structural MR image analysis and implementation as FSL. NeuroImage 23 (S1), (2004).
18. Jenkinson, M. & Smith, S. A global optimisation method for robust affine registration of brain images. Med. Image Anal. 5, (2001).
19. Hanke, M. et al. PyMVPA: a Python toolbox for multivariate pattern analysis of fMRI data. Neuroinformatics 7, (2009).
20. Jenkinson, M., Bannister, P., Brady, M. & Smith, S. Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage 17, (2002).
21. Hsu, C.-W. & Lin, C.-J. A comparison of methods for multi-class support vector machines. IEEE Transactions on Neural Networks 13, (2002).
Supplementary video clips
Supplementary video clip 1: This clip displays a howling dog and serves as an example of the animal video clips.
Supplementary video clip 2: This clip displays a piano key being struck and serves as an example of the musical instrument video clips.
Supplementary video clip 3: This clip displays a handful of coins being dropped into a glass and serves as an example of the object video clips.