Dynamic field theory applied to fMRI signal analysis


University of Iowa, Iowa Research Online: Theses and Dissertations, Summer 2016. Dynamic field theory applied to fMRI signal analysis. Joseph Paul Ambrose, University of Iowa. Copyright 2016 Joseph Paul Ambrose. This dissertation is available at Iowa Research Online. Recommended Citation: Ambrose, Joseph Paul. "Dynamic field theory applied to fMRI signal analysis." PhD (Doctor of Philosophy) thesis, University of Iowa, 2016. Part of the Applied Mathematics Commons.

DYNAMIC FIELD THEORY APPLIED TO FMRI SIGNAL ANALYSIS by Joseph Paul Ambrose. A thesis submitted in partial fulfillment of the requirements for the Doctor of Philosophy degree in Applied Mathematical and Computational Sciences in the Graduate College of The University of Iowa, August 2016. Thesis Supervisors: Rodica Curtu, Associate Professor; John Spencer, Professor

Copyright by JOSEPH PAUL AMBROSE 2016. All Rights Reserved

Graduate College, The University of Iowa, Iowa City, Iowa. CERTIFICATE OF APPROVAL. This is to certify that the Ph.D. thesis of Joseph Paul Ambrose has been approved by the Examining Committee for the thesis requirement for the Doctor of Philosophy degree in Applied Mathematical and Computational Sciences at the August 2016 graduation. Thesis Committee: Rodica Curtu, Thesis Supervisor; John Spencer, Thesis Supervisor; Bruce Ayati; Colleen Mitchell; Michelle Voss

ACKNOWLEDGEMENTS This material is based upon work supported by the National Science Foundation under Grant Number HSD and Grant Number BCS

ABSTRACT In the field of cognitive neuroscience, there is a need for theory-based approaches to fMRI data analysis. The dynamic neural field model-based approach has been developed to meet this demand. This dissertation describes my contributions to this approach. The methods and tools were demonstrated through a case study experiment on response selection and inhibition. The experiment was analyzed via both the standard behavioral approach and the new model-based method, and the two methods were compared head to head. The methods were quantitatively comparable at the individual level of the analysis. At the group level, the model-based method reveals distinct functional networks localized in the brain. This validates the dynamic neural field model-based approach in general as well as my recent contributions.

PUBLIC ABSTRACT This dissertation describes the development of a recent approach to explaining neural data. The approach uses neural network models based on theories of cognitive function to predict patterns in brain activity during experiments. I have created new tools to support the use of this approach. The tools were demonstrated through the analysis of fMRI data from a recent response selection experiment. The same experiment was analyzed with both the standard method used in the fMRI literature and the new model-based approach to compare their effectiveness. Results show that the model-based method outperforms the standard fMRI analysis method by identifying the functional roles of specific brain areas, shedding new light on how populations of neurons work together in the brain to solve particular tasks.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
CHAPTER 1: INTRODUCTION
CHAPTER 2: BACKGROUND AND OVERVIEW
  Prior Work Leading to the Current Approach
  Generating Predictions
  LFP Toolkit
CHAPTER 3: CASE STUDY
  Introduction to the Experiment
  The DF Model
  Model Specifications
    Equation for the Visual Field
    Equation for the Spatial Attention Field
    Equation for the Feature Attention Field
    Equation for the Contrast Field
    Equation for the Working Memory Field
    Equations for the Go and Nogo Subnetworks
    Local Field Interactions
    Long Range (Inter-Network) Coupling
    Stimulus Functions
  Materials and Methods
    Participants
    Stimuli and Experimental Design
    fMRI Image Acquisition
    Description of Behavioral Simulations
  Results and Discussion
CHAPTER 4: THE MODEL-BASED FMRI APPROACH
  Overview
  Materials and Methods
    Hemodynamic Simulations from the DF Model
    Data Analyses for the Standard and DF Models
  Results and Discussion
CHAPTER 5: DISCUSSION
  General Discussion of the Case Study Results
  Future Considerations
REFERENCES

LIST OF TABLES

Table 1 Local field interaction parameters
Table 2 Long range coupling parameters
Table 3 Stimulus function parameters
Table 4 Voxel count of overlapping effects
Table 5 Details of overlapping effects between DF model and Type Effects
Table 6 Details of overlapping effects between DF model and Other Effects

LIST OF FIGURES

Figure 1 Example LFPs from two components
Figure 2 Example LFPs from two trial conditions
Figure 3 Example canonical LFPs
Figure 4 Construction of long-form LFPs
Figure 5 Hemodynamic response function
Figure 6 Example BOLD predictions from two model components
Figure 7 Example normalized BOLD predictions
Figure 8 Downsampled BOLD predictions
Figure 9 Architecture of the GnG DF model
Figure 10 Neural field dynamics during Go tasks
Figure 11 Neural field dynamics during Nogo tasks
Figure 12 Experimental design for the GnG task
Figure 13 Reaction times for the behavioral and model data
Figure 14 Model-based fMRI approach overview
Figure 15 Functional maps generated by the DF model
Figure 16 Overlap between DF and standard group-level activation maps

CHAPTER 1: INTRODUCTION The current state of fMRI analysis in cognitive neuroscience has limitations. By running general linear models based on event timings, one can identify which brain regions are involved in each behavioral condition of a task. However, such an approach is not able to explain how or why these regions produce the behavior. That is often done after the fact using a verbal/conceptual theory. In response, there has been a call for theory-driven approaches to fMRI that can discern the specific roles played by active brain regions. One recently emerging direction is that of model-based approaches, where theory-based models of cognitive processes are employed. In a recent overview of the field of model-based cognitive neuroscience, Turner et al. (2016) described three categories of approaches. The first type of approach uses neural data to guide and inform a behavioral model, that is, a model that mimics features of responses such as reaction times and accuracy. One example of this approach is the Leaky Competing Accumulator model by Usher and McClelland (2001). This model used evidence accumulation mechanisms to explain individual differences in performance in k-alternative tasks and demonstrate possible mechanics of the primacy and recency effects. This model falls under the data-informed category because they included specific mechanics inspired by the attenuation of firing rates due to within-system inhibition. In a simple accumulation model, the degree of evidence

for a given choice alternative can reach negative values through inhibition. However, firing rates of individual neurons cannot become negative even when attenuated by inhibition. To account for this effect, their model resets the degree of evidence to zero in any case where it would become negative. The second type of approach first takes an existing behavioral model and applies it to the prediction of neural data. One example of this approach is Rescorla and Wagner's (1972) model of learning conditioned responses. In this model, the value of a conditioned stimulus is updated over successive trials according to a learning rate parameter. The model produces trial-by-trial estimates of the error between the conditioned and unconditioned stimuli. This measure can then be used in general linear models to detect patterns matching the model predictions within fMRI data. This potentially allows one to identify neural processes which are not directly measurable through behavioral results. The last and most difficult approach is to develop a model to simultaneously predict behavioral and neural data from a single source. The biggest limitation of this approach is its inherent complexity, as an approach of this type has the most constraints. However, in previous work, Spencer and colleagues have been progressing toward this final type of approach using dynamic neural fields. Based on their initial successes, the goal of this dissertation is to formalize the dynamic field theory approach to model-based fMRI.
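To make the second, behavioral-model-first approach concrete, here is a minimal Python sketch of the delta-rule logic just described. The function name, the binary reward coding, and the learning rate value are illustrative choices, not taken from Rescorla and Wagner's paper; only the update rule itself follows the description above.

```python
def rescorla_wagner(rewards, alpha=0.1, v0=0.0):
    """Delta-rule updates V <- V + alpha * (r - V); returns the series of
    trial-by-trial prediction errors, usable as a GLM regressor."""
    v, errors = v0, []
    for r in rewards:
        delta = r - v           # prediction error on this trial
        errors.append(delta)
        v += alpha * delta      # learning-rate-weighted value update
    return errors

# four conditioning trials: reward present, present, present, omitted
errors = rescorla_wagner([1, 1, 1, 0], alpha=0.5)
```

The prediction errors shrink as the stimulus value is learned, then spike negative when the expected reward is omitted; it is exactly this error series that would be convolved and entered into the GLM.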

In the next chapter, I will discuss standardizing the method of producing BOLD signal predictions from our dynamic field models. I will cover the rationale for the process we arrived at, from model construction to fMRI analysis. Following that, I will cover the additions I developed for the COSIVINA framework, which serve as a toolbox for implementing the process. In Chapter 3, I will introduce a case study fMRI experiment on response selection which was used to develop the process and tools. This chapter will cover the details of the experiment as well as the structure and parameters of the specific model created to capture its results. Chapter 4 will cover the results of the experiment. The experiment was analyzed with standard timing-based methods as well as our model-based approach. I then compare the results of the two approaches through metrics of captured variance. Finally, Chapter 5 will discuss the results of both approaches, the comparisons, and future directions in which our approach could be developed.

CHAPTER 2: BACKGROUND AND OVERVIEW The goal of this thesis is to develop an interactive model-based approach to fMRI. In this chapter, I will review the path leading to the present approach. Then I will step through the process we have determined for bringing model data into the analysis of fMRI data. First, we develop a model to represent the hypothesized neural processes. Next, we record predictions of local neural activity patterns from the model as it responds to different conditions in a task. Finally, we use the activity predictions to calculate functional patterns of fMRI-measured BOLD signal. These predictions are then used as regressors for statistical analysis of fMRI data. 2.1 Prior work leading to the current approach Recent work developing theory-based neural models by Deco, Rolls and Horwitz (2004) has led to the current approach. These researchers used integrate-and-fire attractor networks to simulate neural activity. In particular, their model employed a fine-grained representation of spiking activity and connectivity based on single-neuron recordings. They constructed several populations of simulated neurons to reflect networks tuned to specific objects, positions, or combinations thereof. Their investigation resulted in dynamics that qualitatively match those of single-neuron recordings. They later derived simulated fMRI signals from the model by sampling the

total synaptic activity over time and convolving it with a standard hemodynamic response function. In particular, they first measured the temporal evolution of total synaptic activity from their model by averaging the synaptic flow of each population of neurons at each time step. To generate a BOLD response from the measured neural activity, they convolved the LFP measure with an impulse response function defined as: h(t) = c1 t^n1 exp(−t/t1) − a2 c2 t^n2 exp(−t/t2), where c1, c2, a2, t1, t2, n1, and n2 are parameters adjusted to fit the experimentally measured hemodynamic response. This resulted in some qualitative matches to a measured fMRI pattern. The integrate-and-fire network approach had some challenges. Although one version of the model was able to approximate single neuron recordings from a prior study as well as a measured fMRI pattern in dorsolateral prefrontal cortex (PFC) from another prior study, other patterns in the ventrolateral PFC went uncaptured. Moreover, the comparisons that were made were purely qualitative, via visual inspection. No attempt was made to quantitatively connect the measures. Another issue is that the fine-grained resolution of the simulation led to large computational costs. Each of the 2000 neurons simulated necessitated six differential equations to govern the specific dynamics of multiple receptors and overall membrane potential, with a time step size of 0.1 milliseconds. For the purpose of simulating fMRI signals, one can question whether this degree of biophysical detail is needed. Measured fMRI signals have significantly lower spatial and temporal resolution. Current fMRI

methods have a spatial resolution of one to five millimeters, which is large enough to contain hundreds of thousands of individual neurons. The temporal resolution is normally on the order of seconds. The temporal resolution is further impacted by the low-pass nature of the blood flow response, which spreads out the effect of small-scale variations in neural activity. Due to these limitations, we hypothesize that models which capture population-level activity might be sufficient to capture the macro-scale patterns observed in fMRI. Our alternative approach uses dynamic field theory (DFT). Recent work (Hasegawa 2008; Renart, Brunel & Wang 2003; Treves 1993) shows that mean field theory approaches can be applied to neuronal dynamics. That is to say, populations of neurons can be represented by higher-order systems in ways that accurately preserve the aggregate behavior of individual neurons, instead of individually simulating many neurons. Our goal with DF modeling is to directly simulate this level of neural population dynamics to match the scale of patterns observed in fMRI data. To represent the population-level dynamics, we turn to a representation based on the idea of distribution of population activation. Individual sensory, motor, and cognitive neurons among a population are selectively sensitive to particular features - for instance, color, orientation, spatial frequency, spatial position, or movement direction and distance. When considered together as a population, they describe a meaningful representation of stimuli observed and/or remembered. Our simulations seek to model this level of representation with a dynamic field governed by interactions

reflective of the underlying neuronal organization. This is comparatively computationally efficient because it simulates neural population dynamics at a more macro scale relative to integrate-and-fire networks. Even so, as with the Deco approach, we can similarly convolve the activity of the model with a hemodynamic response function to simulate an fMRI signal. In that way, we should be able to capture the macro-scale signals measured by fMRI with simulations on the same level of complexity. The overall goal of the process is not just behavioral matches with the model or qualitative similarity in predicted and measured fMRI signals, but predictions specific enough to quantitatively analyze alongside experimentally measured fMRI BOLD signals. The initial attempt at this process was explored by Buss, Wifall, Hazeltine and Spencer (2009). In this paper, they examined a dual-task paradigm. That is, they studied the costs associated with executing two tasks as compared to one, as well as the reductions in these costs achieved through practice. They first constructed a DF model of the task used in the experimental paradigm. Using a model with memory trace elements, they were able to capture the behavioral patterns of reaction times, which improve with practice, for several versions of the task. Root-mean-square error (RMSE) analysis determined that the predicted reaction times significantly fit those observed experimentally. They then adapted Deco et al.'s method of creating a model-based hemodynamic response prediction (detailed later in chapter 3). RMSE analysis between the simulated and observed hemodynamics again showed a significant fit.
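The convolution step adapted from Deco et al. can be sketched as follows in Python. The parameter values, time base, and toy activity burst below are illustrative placeholders, not the fitted values used in the original studies; only the difference-of-gammas form of h(t) follows the formula given earlier.

```python
import numpy as np

def impulse_response(t, c1=1.0, n1=3, t1=1.0, a2=0.03, c2=1.0, n2=3, t2=2.0):
    """Difference-of-two-gammas impulse response
    h(t) = c1*t^n1*exp(-t/t1) - a2*c2*t^n2*exp(-t/t2).
    Parameter values here are illustrative, not fitted."""
    return c1 * t**n1 * np.exp(-t / t1) - a2 * c2 * t**n2 * np.exp(-t / t2)

def bold_from_activity(activity, dt):
    """Convolve a synaptic-activity time series with h(t)."""
    t = np.arange(0.0, 30.0, dt)        # 30 s of kernel support
    h = impulse_response(t)
    return np.convolve(activity, h, mode="full")[:len(activity)] * dt

dt = 0.1                                # 100 ms steps, in seconds
activity = np.zeros(300)                # 30 s of simulated activity
activity[10:20] = 1.0                   # a 1 s burst of synaptic flow
bold = bold_from_activity(activity, dt)
```

The predicted BOLD response rises and peaks seconds after the burst of activity, reflecting the sluggish hemodynamics that h(t) encodes.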

They later tried models with various alterations to the memory trace mechanisms that were not able to strongly capture either the behavioral or hemodynamic patterns of the observed data. This supported the original version of the model and also demonstrated that not just any model will capture the observed results. It is important to highlight what was achieved in the Buss 2009 paper. The DF model simulated neural dynamics in real time. The dynamics create peaks of activation that conceptually drive a motor response in the model. This mimics the responses made by participants in the experiment. Furthermore, the same neural dynamics that give rise to the response are also used to simulate hemodynamic predictions in real time. Therefore, the same dynamics are used to capture both brain and behavior simultaneously. This attempt was one of the first models to successfully capture both sets of data using a single model with the same set of parameters. Another recent exploration by Buss et al. (2016) further demonstrated the effectiveness of the DF approach to fMRI. In conventional fMRI analysis, a model of brain activity that has been parameterized for each stimulus condition is estimated via linear regression. From this, a set of parametric maps is constructed for each condition and then used to infer brain regions where the model coefficients are statistically nonzero or vary between conditions. Buss et al.'s study proposed to use a DF model to directly parametrize a general linear model (GLM) approach. They then used linear regression to map the real-time neural dynamic patterns from model components onto cortical regions. Their results were striking: not only did the model-predicted patterns

show up in the data, the model-based approach also out-performed the standard GLM analyses on the metrics of adjusted R^2 and number of voxels with significant effects. Further, the model-based results provide explanations as to the specific functions of brain regions by identifying which model components' patterns are present in each brain region. For example, some regions indicated a role in working memory maintenance while others indicated patterns of change detection. In summary, recent work exploring the application of DF model-based predictions to the analysis of fMRI data appears promising. We have examples of models that can capture results at both behavioral and hemodynamic levels, suggesting that the approach is valid and merits further exploration. The goal of the present project is to standardize the methods of DF model-based analysis and construct tools to facilitate implementation. I also test whether the DF approach generalizes to recent fMRI data from a response selection task. 2.2 Generating Predictions Next, I will go over the method by which we translate model-derived activity patterns into fMRI predictions and carry out the subsequent analysis steps. Consolidating and standardizing this process has been a primary focus of my research. In brief, we start by reading out a simulated local field potential (LFP) over time as the model processes the task. The LFP is adjusted relative to baseline or resting state readings from the model. Based on the simulated LFP, we then directly generate the hemodynamic response (HDR) that we expect to occur in the fMRI measurements of

BOLD signal change. Finally, we normalize these predicted HDRs to a percent max signal scale. The resulting time series data are used as regressors in GLM-based analysis of the existing fMRI measurements. Our models are created using the COSIVINA framework. COSIVINA stands for compose, simulate, and visualize neurodynamic architectures. More specifically, it is an object-oriented environment in Matlab that facilitates the creation and operation of DF models. The models are composed of large systems of differential equations, and COSIVINA allows us to standardize and organize the computation involved. Specific information on COSIVINA can be found in the book Dynamic Thinking: A Primer on Dynamic Field Theory, and the framework itself along with usage documentation is available online. In DF models, neural populations with distinct functions are represented with fields. The general equation governing the state of these fields can be represented by the following: τ ∂u_j(x, t)/∂t = −u_j(x, t) + h_j + [c_j * g_j(u_j)](x, t) + Σ_k [c_jk * g_k(u_k)](x, t) + η_j(x, t) + s_j(x). In this formulation, u represents the activation level of the given field. Fields track information over certain dimensions, which can be such things as color or spatial dimensions, and x represents the field dimension(s). The parameter τ describes the time constant that determines the relaxation time in the model or, conceptually, the relative rate at which activation patterns build in the field. The field state evolves dynamically

over time, described with the variable t. (Note that in the model proposed in this thesis, time t is assumed to be measured in milliseconds, if not otherwise particularly mentioned.) The attractor state of the equation is determined by the resting level of the field, h, plus the summed contribution of connections from all fields within the model, beginning with u_j itself. The function g, a sigmoid function, determines whether a given location in the contributing field has enough activity to meet the threshold of contribution. The function c_jk describes the connectivity kernel between fields u_j and u_k, showing the contribution of field u_k to the dynamics of field u_j. The functions g(u_k(x)) and c_jk(x) are convolved to determine the full contribution from field u_k to u_j. The equation for convolution is (f * g)(x) = ∫ f(y) g(x − y) dy. This connection can take positive values for excitatory connections, negative values for inhibitory connections, or zero values if the fields are unconnected or the locations are too far apart. The functions η and s describe the noise and stimulus inputs to the field, respectively. Noise functions are included to represent realistic neural conditions and are typically spatially correlated. Stimulus inputs represent influences from systems not included in the model, such as sensory signals. As with Deco's integrate-and-fire model, the ultimate goal of the model is to generate predictions of synaptic activity. DF equations specify the rate of change in activation of specific neural populations that, by hypothesis, are believed to be located in specific brain regions. Therefore, we can use the field equations to estimate a local field potential. To do this, we sum all activity contributing to the rate of change of each

field, ignoring the stability terms in the equation (the −u term and the resting level term, h). In keeping with the previous definitions, the predicted LFP is calculated by the following: LFP_j(t) = Σ_x ( |[c_j * g_j(u_j)](x, t)| + Σ_k |[c_jk * g_k(u_k)](x, t)| ). There are some considerations that need to be made in the process of this calculation. First, since both excitatory and inhibitory communication require active neurons and, biophysically, generate positive ion flow, we need to sum both in a positive way toward predictions of local activity; thus, we take the absolute value of all excitatory and inhibitory contributions. DF models contain various components with individual functions, such as stimulus detection, holding memory patterns, and identifying novel stimuli. Each component in the model has a different network of interactions that drives a different response pattern. Consequently, a separate LFP is created for each component to predict that component's specific function. Figure 1 depicts example LFP simulations from two model components (feature attention and contrast, see Chapter 3) over three trials. The same conditions are used in each trial. The variance between the repetitions is a consequence of the stochastic nature of the model. Since noise is employed in each field, there will be both small-scale variations in the signal (especially evident in the second component) as well as overall variation in reaction timing.
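The field equation and the LFP readout above can be illustrated with a small numerical sketch. This assumes a single one-dimensional field with self-interaction only, integrated with the Euler method; all parameter values, the difference-of-Gaussians kernel, and the stimulus shape are illustrative choices, not the case-study model's.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u, beta=4.0):
    # threshold function g(u): which field sites are active enough to contribute
    return 1.0 / (1.0 + np.exp(-beta * u))

def make_kernel(size, a_exc=1.5, s_exc=3.0, a_inh=0.6, s_inh=8.0):
    # local excitation with surround inhibition (difference of Gaussians)
    x = np.arange(size) - size // 2
    return (a_exc * np.exp(-x**2 / (2 * s_exc**2))
            - a_inh * np.exp(-x**2 / (2 * s_inh**2)))

def step_and_lfp(u, h, tau, c, s, dt=1.0, noise_amp=0.1):
    """One Euler step of tau*du/dt = -u + h + c*g(u) + s + noise, plus the
    LFP sample: the interaction term summed over x in absolute value
    (the -u and h stability terms are excluded, as in the text)."""
    interaction = np.convolve(sigmoid(u), c, mode="same")
    noise = noise_amp * rng.standard_normal(u.shape)
    u_new = u + dt * (-u + h + interaction + s + noise) / tau
    return u_new, np.sum(np.abs(interaction))

size, h, tau = 101, -5.0, 20.0
u = np.full(size, h)                                    # start at resting level
x = np.arange(size)
stim = 6.0 * np.exp(-(x - 50)**2 / (2 * 3.0**2))        # localized input
c = make_kernel(size)
lfp = []
for _ in range(500):                                    # 500 ms at 1 ms steps
    u, sample = step_and_lfp(u, h, tau, c, stim)
    lfp.append(sample)
```

After the stimulus drives the field above threshold, a self-sustained peak forms at the stimulated location, and the LFP trace grows with it; in the full models this readout is recorded per component, per trial.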

Figure 1 Example LFPs from feature attention (upper panel) and contrast (lower panel) fields. Each panel depicts three repetitions of the same trial condition. Each repetition is individually calculated and 1500 milliseconds long. Different trial types involve individual conditions that affect response patterns. Further, the response patterns undergo different changes depending on the model component in question. Figure 2 depicts examples of differences that can arise in

model subnetworks based on changing trial conditions. In some components, as with feature attention in the first panel, the activity level is similar across conditions, with minor differences in timing. In others, as with contrast in the second panel, different conditions lead to large differences in overall activity observed. This contrast is key to the model-based approach because it allows components to have unique signatures on both the scale of the individual trial as well as larger-scale signatures across task conditions.

Figure 2 Example LFPs from feature attention and contrast under varied conditions. Condition A is a go trial; condition B is a nogo trial. The next step is to use our measurements of model activity to establish a canonical hemodynamic prediction. Our neural models employ noise in most fields to better reflect real brain activity. However, as shown earlier, one consequence of stochastic noise is that no two runs of the model will have the exact same pattern of

activity. To account for this variance, we run many repetitions of each condition. The number of repetitions is commonly chosen to reflect the number of trials undertaken by the subjects in the actual experiment. We then average the measured time series from each repetition to determine our canonical predicted LFP signal. Figure 3 depicts examples of canonical trial predictions created by averaging trials such as those shown in Figure 1. Figure 3 Example canonical LFPs averaged over many repetitions, generated by feature attention (green) and contrast (blue) for the same trial condition (go). Another concern is putting the measured values in the appropriate context. Much like the measurement of fMRI data, we take a baseline measurement from the model. Using the same calculations, measuring the activity in the model when no trial

stimuli are given simulates a resting state. We average these readings (across all time points and conditions) to obtain a single average resting value, which is then subtracted out of our predictions in order to express change in activity relative to resting state. There remains the question of how model units compare to those in the fMRI data. This is addressed below. Once we have calculated an LFP prediction for each model component and trial type, we can then construct long-scale LFP predictions for each subject in the experiment based on the order and timing of trials they experienced. These are minutes-long, numerically generated, canonical LFPs that model the subject's neural network state induced during each experimental block. To do this, we create a zero-valued time series the length of the entire experiment. We then use trial onset timings from the experiment to anchor the trial LFP prediction for the corresponding trial type. For example, if a trial of a certain condition has an onset time of 4500 ms after the start of the experiment, then the canonical LFP for that trial is added to the long-scale LFP starting at the same onset time (see Figure 4). Once this process is completed for each trial, we have constructed an experiment-based LFP template for each subject. This process is repeated for each model component in order to construct individual LFP subnetwork predictions associated, presumably, with different cognitive functions.
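The averaging, baseline subtraction, and onset-anchoring steps above can be sketched as follows. The array sizes, onset times, and resting value are toy numbers for illustration; in practice the repetitions are full 1 ms resolution trial simulations and the series spans the whole experiment.

```python
import numpy as np

def canonical_lfp(repetitions, resting_mean):
    """Average many stochastic repetitions of one condition and express
    the result relative to the resting-state baseline."""
    return np.mean(repetitions, axis=0) - resting_mean

def long_form_lfp(total_ms, onsets_by_condition, canonical):
    """Anchor each condition's canonical trial LFP at its trial onsets
    in an experiment-length, zero-valued series (1 ms resolution)."""
    series = np.zeros(total_ms)
    for condition, onsets in onsets_by_condition.items():
        trial = canonical[condition]
        for onset in onsets:
            series[onset:onset + len(trial)] += trial
    return series

# toy canonical LFP for "go" built from three noisy repetitions
reps_go = np.array([[1.0, 2.0, 3.0, 2.0],
                    [1.2, 2.1, 2.9, 1.9],
                    [0.8, 1.9, 3.1, 2.1]])
canonical = {"go": canonical_lfp(reps_go, resting_mean=1.0),
             "nogo": np.array([0.0, 0.5, 0.5, 0.0])}
lfp = long_form_lfp(20, {"go": [2, 10], "nogo": [6]}, canonical)
```

Repeating this per model component yields one experiment-length template per component per subject, exactly as described for Figure 4.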

Figure 4 Example construction of long-form LFPs for feature attention (upper panel) and contrast (lower panel). In this example, the experiment has two conditions: A (go) and B (nogo). First, a LFP is calculated for each condition and component. Then the individual trial canonical LFPs are combined into time series which represent a sequence of trials. In this case, the experiment has A trials at 500, and ms, and B trials at 7500 and ms. fMRI does not measure neural activity directly. It measures changes in blood flow as the neurovascular system responds to resource demands of active neurons. Consequently, there is a delay between neural activity and the measured BOLD signal. To account for this, we use a standard hemodynamic response function. This describes the expected response pattern in the BOLD signal for a given amount of neural activity. By convolving this hemodynamic response function with our neural activity predictions in the form of LFPs, we are able to generate predicted BOLD activity patterns that are directly comparable to the measured data. The equation used

for the response function is given below and depicted in Figure 5. Examples of resulting BOLD predictions from the convolution between long-scale LFPs and the hemodynamic response function are shown in Figure 6. HRF(t) = t^(n−1) exp(−t/λ) / [λ^n (n−1)!], with λ = 1.3 and n = 4. Figure 5 The hemodynamic response function used in the present project
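With λ = 1.3 and n = 4 as given, the response function can be transcribed directly into Python and its peak latency checked numerically; the 20 s support and 1 ms grid are convenience choices for illustration.

```python
import math
import numpy as np

def hrf(t, lam=1.3, n=4):
    # HRF(t) = t^(n-1) * exp(-t/lam) / [lam^n * (n-1)!], t in seconds
    return t**(n - 1) * np.exp(-t / lam) / (lam**n * math.factorial(n - 1))

t = np.arange(0.0, 20.0, 0.001)      # 20 s support, 1 ms resolution
kernel = hrf(t)
peak = t[np.argmax(kernel)]          # gamma peak occurs at (n - 1) * lam s
```

The peak lands at (n − 1)λ ≈ 3.9 s, a plausible hemodynamic lag; convolving this kernel with a long-form LFP produces the BOLD predictions shown in Figure 6.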

Figure 6 Example BOLD predictions from feature attention (upper panel) and contrast (lower panel). Starting time point matches that of Figure 5.

It is at this stage that we address the aforementioned question of comparing model units for the numerically generated BOLD signal to those derived from the fMRI data. Similar to the step when we introduced the resting level baseline for the canonical LFP in the DF model, we again take guidance from the treatment of fMRI data. In this case, we normalize each predicted BOLD signal by its average value over time across the entire experiment. This takes us away from model-based units to a more abstract percentage scale relative to the mean. The results of applying this process to the time series in Figure 6 are depicted in Figure 7.
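A sketch of this normalization, reading "normalize by its average value" as percent signal change about the series mean (my interpretation of the step; the exact scaling in the toolkit may differ):

```python
import numpy as np

def to_percent_of_mean(bold):
    # rescale a predicted BOLD series to percent change about its own mean
    mean = bold.mean()
    return 100.0 * (bold - mean) / mean

signal = np.array([9.0, 10.0, 11.0, 10.0])
pct = to_percent_of_mean(signal)     # mean is 10 -> [-10, 0, 10, 0] percent
```

After this step the model-unit scale drops out entirely, which is what makes the predictions comparable across components and across subjects.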

Figure 7 Example BOLD predictions from feature attention (upper panel) and contrast (lower panel) normalized to a percentage mean scale.

We then turn these BOLD signal predictions into regressors for statistical analysis of the fMRI data. This requires matching the sampling rate of the time series to that of the data. Figure 8 depicts regressors from the present project after downsampling to match the repetition time (TR) of the fMRI data.
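Matching the 1 ms model resolution to a 2 s scanner TR reduces to taking every 2000th sample; a minimal sketch of this decimation (without any anti-alias filtering the full pipeline might apply):

```python
import numpy as np

def downsample_to_tr(series_ms, tr_seconds=2.0):
    # sample a 1 ms resolution prediction at the scanner's TR
    step = int(tr_seconds * 1000)        # 2000 samples per TR
    return series_ms[::step]

series = np.arange(10000, dtype=float)   # 10 s of 1 ms samples
regressor = downsample_to_tr(series)     # values at t = 0, 2, 4, 6, 8 s
```

Because the hemodynamic response has already low-pass filtered the prediction, little information is lost at this step.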

Figure 8 Example downsampled time points from BOLD signal predictions for feature attention (upper panel) and contrast (lower panel). The solid lines depict the predictions at the full 1 millisecond resolution from the model. The circles indicate the 2 second resolution which matches the sampling rate of the fMRI data.

With regressors for each subject, we can run general linear models (GLMs) at the subject level to ask whether the model-based predictions capture a significant proportion of variance in the fMRI data from particular clusters in the brain. From this, we draw some key measures such as R-squared values and voxel counts. R-squared values measure how much of the observed variance in fMRI patterns is explained by a given set of regressors. Voxel counts identify how many voxels in the data show significant evidence of the given patterns. Both of these measures can be compared against traditional analysis methods to determine if our methods account for less, more, or as much variance. When results from the individual-level analyses are promising, we can also investigate results at the group level. At this stage, we find and categorize clusters of voxels where the regressors capture a significant proportion of variance at the group level, meaning across participants. Again, the different components in our model each generate distinct patterns that can identify individual functions. Thus, when an effect is present, we can make specific inferences about the function of the brain area represented. This is a key advantage of the model-based approach in general. I illustrate this approach in Chapter 4 in the case study of a DF model of response selection and inhibition. 2.3 LFP Toolkit To standardize the process of generating model predictions and running the GLM analyses, it is critical to have the tools to do this work. Thus, I created tools within the COSIVINA framework to facilitate this process. I created customizable new

37 model elements to record LFP signals in a flexible way from any kind of field. Since LFP signals require totaling all contributions to a field, it can be tedious and difficult to ensure every detail is included in larger and more involved models. To this end, I have also implemented functions to construct basic LFP elements in a systematic and reliable way. These functions recursively search for every relevant field connection in the model structure to ensure nothing is left out of the calculations. With these functions, it is possible to quickly add LFP tracking to any new or existing COSIVINA model. Between automatic LFP creation and manual LFP customization, these tools pave the way forward for further model-based fmri research. As an example implementation of these LFP tools, refer to the following section of code: 1. sim.addelement(neuralfield('field u', fieldsize, tau_u, h_u, b_u)); 2. sim.addelement(gausskernel1d('field v -> field u', fieldsize, sigma_vu, Amp_vu, true, true, kcf), 'field v', [], 'field u'); 3. sim.addelement(lfp('lfp1 for field u', 'field u', historyduration, interval), 'field v -> field u'); 4. sim.createlfp('lfp2 for field u', 'field u', historyduration, interval); Line 1 constructs a new field in a model with given parameters. Line 2 is an example excitatory connection between two model fields. Line 3 implements an LFP for field u 26

38 which measures input from the connection created in line 2 and tracks the evolution of the field activity shared. I constructed this model element as an addition to the COSIVINA to track LFPs generically from any model using the framework. The element automatically normalizes each contribution according to the size of the connecting field. However, using this method, one must manually list each connection in the model under the appropriate LFP initialization. In small models, this is trivial, but as the number of fields and connections increase, this process becomes more difficult. Line 4 demonstrates the automatic tool for LFP creation. This command searches the model for each connection that provides input to the given field and constructs the LFP element itself. I also investigated creating similar tools for the process of creating long-form LFPs and convolving BOLD predictions. Unfortunately, this process resists standardization due to the sheer number of inputs required. In particular, managing the number and value of trial onset timings is difficult to accomplish generically. Another major complication is that fmri experiments often involve multiple sessions of trials which must be stitched together. In order to counter-balance the experiment, different participants may experience the sessions in different orders. Ensuring the sessions are stitched together in the right order for each participant is a nontrivial process. Future work will be needed to optimize this part of the process. Example code from the present project is available upon request. 27
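The subject-level GLM step described above can be sketched as follows (an illustrative Python example on synthetic data; the regressor names and values are hypothetical stand-ins, not the project's regressors or its MATLAB/AFNI pipeline):

```python
import numpy as np

# Sketch of the subject-level GLM: regress a voxel time series on
# model-based regressors plus an intercept and compute r-squared.
rng = np.random.default_rng(0)
n_vols = 150
fatn_reg = rng.standard_normal(n_vols)      # e.g., feature-attention regressor
con_reg = rng.standard_normal(n_vols)       # e.g., contrast-field regressor
X = np.column_stack([np.ones(n_vols), fatn_reg, con_reg])

# Synthetic voxel time series built from the regressors plus noise.
y = 0.5 * fatn_reg - 0.3 * con_reg + 0.2 * rng.standard_normal(n_vols)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r_squared = 1 - resid.var() / y.var()
print(round(float(r_squared), 3))
```

Repeating this per voxel and thresholding the fits yields the r-squared values and voxel counts discussed above.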

To validate and test these new procedures, I took a model of response selection and an fMRI data set and examined whether this new DF model-based approach outperformed standard GLM analyses with these data. In the next sections, I discuss the details and results of this study.

CHAPTER 3: CASE STUDY

The goal of Chapters 3 and 4 is to apply the tools developed in Chapter 2 to a case study examining the neural dynamics of response selection. The first step in this approach was to develop a DF model of response selection and inhibition and to apply that model to an existing behavioral dataset. Chapter 3 describes the resultant DF model and the experimental data that were simulated. Chapter 4 then applies the tools from Chapter 2 to the fMRI data obtained in the same experiment, comparing the model-based approach to standard fMRI analyses. Note that the work presented in Chapters 3 and 4 has been summarized in a paper submitted for publication (see Wijeakumar, Ambrose, Spencer, and Curtu, 2016).

3.1 Introduction to the Experiment

In the last decade, the behavioral and neural bases for response selection and inhibition have garnered considerable interest, in part because poor performance on response selection tasks has been linked to performance deficits in atypical populations (Kaladjian, Jeanningros, Azorin, Anton, & Mazzola-Pomietto, 2011; Monterosso, Aron, Cordova, Xu, & London, 2005; Pliszka, Liotti, & Woldorff, 2000). Although these links are compelling, response selection and inhibition has proved to be a complex topic with a myriad of different response selection and inhibition tasks. Aron and colleagues recently proposed a useful taxonomy that classifies response selection and inhibition

based on the type of control required and the nature of the stopping mechanism engaged (A. R. Aron, 2007). In particular, these researchers distinguish reactive tasks (stopping and switching movement based on changes in the environment) from proactive tasks (applying a brake when conflict is detected to slow down a response), as well as global tasks (complete stopping of the initiated movement) from selective tasks (one movement plan is stopped and another is selected). Aron and colleagues have further demonstrated that response inhibition activates a fronto-striatal network and that the right inferior frontal cortex is the neural locus of a response inhibition module (Adam R. Aron, Robbins, & Poldrack, 2014). By contrast, recent work from Hampshire and colleagues has shown that parts of the inferior frontal cortex respond equally to inhibitory and non-inhibitory components of a task, suggesting the absence of an inhibition-specific module (Erika-Florence, Leech, & Hampshire, 2014; Hampshire & Sharp, 2015). They argued that the right inferior frontal cortex houses sub-components of a distributed functional network that supports a broad range of attentional and working memory maintenance processes. Recently, Wijeakumar et al. examined this controversy in a study using two different response selection and inhibition tasks and showed that a robust neural response was observed on infrequent trials, regardless of the need for inhibition (Wijeakumar et al., 2015). Interestingly, these researchers also found evidence for the engagement of memory processes: the number of stimulus-response (SR) mappings modulated the neural signal across multiple brain areas, with a reduction in the BOLD

signal as the number of SR mappings increased. Wijeakumar et al. suggested that this might reflect a form of associative rather than working memory. In general, these results were in agreement with reports from Hampshire and colleagues. In the current study, I extend this work by investigating whether a model-based fMRI approach might usefully contribute to this debate by clarifying the roles of attention, working memory, and stimulus saliency in a reactive inhibition task such as the Go/Nogo task. To date, there are several models of response selection and inhibition. For instance, Wiecki and Frank's model of response inhibition unifies many findings from the inhibitory control literature and has simulated key aspects of neural data from both neurophysiology and event-related potentials (Wiecki & Frank, 2013). Although the model has many strengths, it may not be ideal for explaining our fMRI findings because the SR mappings are coded into the model "by hand"; thus, this might not be an ideal model to explore tasks where participants must quickly learn different numbers of SR mappings. Indeed, because people learn the SR mappings quite quickly, this is a tricky theoretical issue. Moreover, to date, these response selection models have not been linked to fMRI. This is unfortunate because these types of models have a rather direct tie to neural reality and might usefully contribute to current controversies. In this chapter, I focus on an alternative modeling approach using dynamic field theory (DFT). DFT has a relatively long history in the response selection literature. Erlhagen and Schöner (2002) used DFT to show how neural population dynamics differ as the number of SR mappings changes within response selection tasks and proposed a

memory trace mechanism for how such mappings could be quickly learned (Erlhagen & Schöner, 2002). More recently, Buss and colleagues have developed a model-based approach to fMRI using dynamic fields (Buss, Huppert, Schoner, Magnotta, & Spencer, n.d.; Buss, Wifall, Hazeltine, & Spencer, 2009) that has been formalized in the previous chapters. Here, I apply this approach to a case study that examines the neural bases of response selection. My question is whether a model-based approach to response selection and inhibition can shed light on the controversy regarding the attentional/WM network proposed by Hampshire, and whether the model-based approach formalized in Chapter 2 compares favorably with the standard fMRI analysis approach.

3.2 The DF Model

Here, I use a DF model of the Go/Nogo task (GnG task) and use it to investigate how parametric manipulations of the memory Load (i.e., the number of SR mappings) and the Proportion of Go and Nogo trials affect participants' responses. The model consists of several dynamic neural fields (DFs) that compute neural population dynamics u_j according to the following equation (Amari, 1977; Ermentrout, 1998):

$$\tau_j \dot{u}_j(x,t) = -u_j(x,t) + h_j + [c_j * g_j(u_j)](x,t) + \sum_k [c_{jk} * g_k(u_k)](x,t) + \eta_j(x,t) + s_j(x). \tag{3.1}$$

The activation u_j of each component is modeled at high temporal resolution (millisecond timescale) with time constant τ_j. It assumes a resting level h_j and depends on lateral (within the field) and longer-range (between different fields) excitatory and

inhibitory interactions, c_j * g_j(u_j) and c_jk * g_k(u_k) respectively. These are implemented by convolutions between field outputs g(u(x,t)) and connectivity kernels c(x), with the latter defined either as a Gaussian function or as the difference of two Gaussians ("Mexican hat" shape). The temporal dynamics of the neural activity are also influenced by external inputs s_j, and are non-deterministic due to the noise η_j. The activation u(x,t) is distributed continuously over an appropriate feature space x such as color or spatial position (Figure 9, blue curves). The field output g(u(x,t)) is then computed by the sigmoid (logistic) function g(u) = 1/(1 + Exp[-βu]) with threshold set to zero and steepness parameter β (Figure 9, red curves). Therefore, g(u) remains near zero for low activations; it rises as activation reaches a soft threshold; and it saturates at a value of one for high activations (see middle panels in Figures 10 and 11). Excitatory and inhibitory coupling, both within fields and among them, promote the formation of localized peaks of activation in response to external stimulation. In the model, any above-threshold activation peak is interpreted as an experimentally detectable (fMRI or LFP recordings) response of that particular neural field to a stimulus. The architecture of the proposed DF model includes seven sub-networks associated with: the visual field (vis); the spatial attention field (satn); the feature attention field (fatn); the contrast field (con); the working memory field (wm); and a decision sub-system of two nodes, go and nogo (Figure 9; for details on field equations and parameter values, see Section 3.3). This architecture is derived from two

recent models of visual working memory (VWM) (Johnson, Spencer, Luck, & Schöner, 2009; Johnson, Spencer, & Schöner, 2009; Schneegans, Spencer, Schöner, Hwang, & Hollingworth, 2014). This builds on the attention and WM proposals of Hampshire and colleagues (Erika-Florence et al., 2014). In particular, if response selection networks in the brain tap more general attention and working memory networks, this implies that a model of attention and working memory might do a good job explaining canonical findings from the response selection and inhibition literatures. Thus, the current project examined whether the Johnson et al. and Schneegans et al. models might capture effects from Go/NoGo.
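The field dynamics in Eq. (3.1) can be sketched numerically with a simple Euler scheme. The following is a minimal Python illustration with assumed parameter values (field size, time constant, kernel widths), not the COSIVINA implementation used in this project:

```python
import numpy as np

# Minimal sketch of one Amari-style field from Eq. (3.1): Euler integration
# with a Mexican-hat interaction kernel and a sigmoid output nonlinearity.
def sigmoid(u, beta=4.0):
    return 1.0 / (1.0 + np.exp(-beta * u))

def mexican_hat(size, a_e=1.0, s_e=3.0, a_i=0.5, s_i=6.0):
    z = np.arange(size) - size // 2
    return a_e * np.exp(-z**2 / (2 * s_e**2)) - a_i * np.exp(-z**2 / (2 * s_i**2))

size, tau, h, dt = 101, 20.0, -5.0, 1.0           # ms timescale; resting level -5
u = np.full(size, h)
kernel = mexican_hat(size)
x = np.arange(size)
stim = 6.0 * np.exp(-(x - 50) ** 2 / (2 * 3.0**2))  # localized external input

for _ in range(500):                               # 500 ms of simulated time
    interaction = np.convolve(sigmoid(u), kernel, mode='same')
    u += (dt / tau) * (-u + h + interaction + stim)

print(u.max() > 0, u[0] < 0)   # a peak forms at the stimulus; edges stay at rest
```

Local excitation plus surround inhibition is what stabilizes the localized peaks described above.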

Figure 9 - Architecture of the GnG DF model. Seven sub-networks are included: (i) the visual field, vis; (ii) the spatial attention field, satn; (iii) the feature attention field, fatn; (iv) the contrast field, con; (v) the working memory field, wm; (vi) the go and (vii) nogo nodes. The neural fields are coupled by uni- or bi-directional excitatory (green) or inhibitory (red) connections. Within each field, the time-evolution of the activation variable u(x,t) is plotted in blue, while the field output g(u(x,t)) is plotted in red.

The conceptual analogy is as follows. In typical VWM tasks, particular stimuli are tagged as targets of WM. This might be done, for instance, through instruction: "remember the green and red items." In DFT, this can be accomplished by building peaks in a cortical field tuned to color and leaving a memory trace at the green and red hue values (Erlhagen & Schöner, 2002). Such traces can help build peaks (localized decisions about color) at the green and red hue values on subsequent trials (Lipinski, Schneegans, Sandamirskaya, Spencer, & Schöner, 2012). Since these are targets for task-relevant responses, we might consider these the "go" set. This intuition

can be formalized by adding a go node to the model that sums activation across the WM layer. Thus, whenever a go stimulus value is detected, the model determines that this response should be passed along to, say, a motor system to generate a response (Buss et al., 2009). Conversely, other items might stand out as different from the go set. This type of difference detection is subserved by another layer in the Johnson et al. model: the contrast (con) layer. If the model is allowed to form memory traces of these other stimuli that stand out as different from the go set, then the model would form a memory of the "nogo" set as well. This intuition can be formalized by adding a nogo node that sums activation across the con field, detecting the presence of a nogo stimulus. Note that the go and nogo nodes assume all-to-all local connectivity, and they are represented by single variables. The final pieces of the architecture shown in Figure 9 are carried forward from a recent model (Schneegans et al., 2014). The visual field is metrically organized along two dimensions, the spatial and the color representation of the stimuli. This field encodes the perceptual properties of items in the visual field. The spatial attention field is a one-dimensional network organized along the spatial dimension; the feature attention field is topographically organized along the color dimension. These attention layers are winner-take-all layers that select items in the visual field as the focus of attention (Schneegans et al., 2014). Although it is not clear that visual and attentional processing play a dramatic role in Go/NoGo, they are included here in an effort to

integrate the current model with previous models. Note that if this model effectively captures data from Go/NoGo tasks, this constitutes a strong test of generalization in that the model architecture was developed to examine VWM, not response selection or inhibitory control. The dynamics of the DF model during a Go and a Nogo trial are summarized in Figures 10 and 11, respectively. Figure 10 shows the spatio-temporal trajectories in each neural field component of the DF model during the Go task. In particular, three time snapshots are illustrated: (A) When a Go color is presented (the time course of the stimulus is shown in purple; left panel), an activation peak is built in the visual field vis. This induces a peak in the working memory field wm and a weak peak in the feature attention field fatn (curves in blue). Then, the peak in wm leads to an increase in activation of the go node (see left panel; in green). In addition, due to inhibition from wm that dominates the excitation received from vis, the activity of the contrast field con is lowered at the location of the Go color. (B) At some time between 400 and 500 milliseconds after the stimulus was turned on, the activity of the go node crosses the threshold. Its output function is greater than 0.5 (see left panel; in green), driven by a strong peak formation in wm. In addition, the peak in fatn becomes stronger, and a sub-threshold hill forms in con as well. (C) In the interval of time between the response (reaction time RT ≈ 450 ms) and the end of the trial (1500 ms), the activity peaks in vis, fatn, con and wm stabilize. Importantly, the hill in con remains sub-threshold. Also note that the activity of the go node reaches saturation.
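The threshold-crossing readout described in panel (B) can be sketched as follows (a Python toy example with an assumed, synthetic go-node trace; the project reads reaction times from the full model simulation, not from this toy ramp):

```python
import numpy as np

# Sketch: reaction time is read out as the first time the go node's
# sigmoidal output g(u_go) exceeds 0.5, i.e., the first zero crossing of u_go.
def sigmoid(u, beta=4.0):
    return 1.0 / (1.0 + np.exp(-beta * u))

t = np.arange(0, 1500)                        # trial duration in ms
u_go = -5.0 + 5.5 * (1 - np.exp(-t / 200.0))  # toy ramp from rest toward threshold
g_go = sigmoid(u_go)

crossed = np.flatnonzero(g_go > 0.5)
rt_ms = int(crossed[0]) if crossed.size else None
print(rt_ms)
```

With this toy ramp the crossing lands in the 400-500 ms window mentioned above; in the real model the timing emerges from the field dynamics.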

Figure 10 - Spatio-temporal neural field dynamics in the GnG DF model during the Go task. For an explanation of the model's state at three distinct time instances (A) t=350 ms, (B) t=460 ms, (C) t=1300 ms, please refer to the main text.

Similarly, Figure 11 shows the spatio-temporal dynamics of the DF model during the Nogo task at three time instances: (A) The Nogo color induces activation of the visual field vis and, thereby, of the contrast field con at the corresponding color

coordinate in the feature space. A sub-threshold hill in fatn forms as well, and wm is locally inhibited. (B) The peak in con becomes stronger and stabilizes. Field fatn shows supra-threshold activity. At the Nogo color location in wm, the activity is inhibited. (C) The activity stabilizes in vis, fatn, con and wm, with the peak in wm remaining sub-threshold. Note that the nogo node stays on, while the go node remains inactive.

Figure 11 - Spatio-temporal neural field dynamics in the GnG DF model during the Nogo task. For an explanation of the model's state at three distinct time instances (A, B, C), please refer to the main text.

3.3. Model Specifications

The dynamic neural field (DF) model for the Go/Nogo paradigm consists of seven coupled neuronal sub-networks: (i) the visual field (vis) is a continuous, metrically organized field of neurons along two dimensions, the spatial and the feature/color representation of the stimuli; (ii) the spatial attention field (satn) is a one-dimensional network organized along the spatial dimension; (iii-v) the feature attention (fatn), contrast (con), and working memory (wm) fields are topographically organized one-dimensional fields of neurons along the color dimension; (vi-vii) the decision system consists of two sub-networks of all-to-all coupled, topologically indistinguishable neurons that are modeled by two single nodes, go and nogo. The DF model for the Go/Nogo paradigm is therefore defined by a system of five integro-differential equations (3.2)-(3.6) and two ordinary differential equations (3.7)-(3.8), as listed below. Each equation is described by a sum of several components. The first three terms correspond to the local field interactions, while local noise is modeled by the function η. All terms that depend on two distinct indices are associated with long-range, inter-field coupling. The applied stimulus, when appropriate, is given by the function s. Excitatory coupling takes positive values, while inhibitory coupling is negative. The functional topography assumes local excitation and lateral inhibition, and is modeled by a difference of two Gaussians resulting in a Mexican-hat connectivity. The dot over u represents the derivative of the neuronal activity u with respect to time t. Detailed

definitions of each coupling term are included in Section 3.3.8, and the set of parameters used in the simulation of this DF model are listed in Tables 1, 2 and 3.

3.3.1 Equation for the Visual Field

Besides local neuronal population interactions, the visual field receives excitatory connections from the spatial attention and the feature attention fields via the convolutions c_vis,satn * g_satn(u_satn) and c_vis,fatn * g_fatn(u_fatn). It is also subject to the external stimulus s_vis(x, y).

$$\tau_e \dot{u}_{vis}(x,y,t) = -u_{vis}(x,y,t) + h_{vis} + \iint c_{vis}(x-x', y-y')\, g_{vis}(u_{vis}(x',y',t))\, dx'\, dy' + \int c_{vis,satn}(x-x')\, g_{satn}(u_{satn}(x',t))\, dx' + \int c_{vis,fatn}(y-y')\, g_{fatn}(u_{fatn}(y',t))\, dy' + \eta_{vis}(x,y,t) + s_{vis}(x,y) \tag{3.2}$$

3.3.2 Equation for the Spatial Attention Field

The spatial attention field receives two excitatory inputs: projections c_satn,vis * g_vis(u_vis) from the visual field, and a sub-threshold bump of activity s_satn(x). The latter is centered at the position of stimulus presentation and simulates the response of the network during the fixation stage of the task.

$$\tau_e \dot{u}_{satn}(x,t) = -u_{satn}(x,t) + h_{satn} + \int c_{satn}(x-x')\, g_{satn}(u_{satn}(x',t))\, dx' + \iint c_{satn,vis}(x-x')\, g_{vis}(u_{vis}(x',y,t))\, dy\, dx' + \eta_{satn}(x,t) + s_{satn}(x) \tag{3.3}$$

3.3.3 Equation for the Feature Attention Field

The feature attention field receives excitatory inputs from the visual, contrast and working memory fields.

$$\tau_e \dot{u}_{fatn}(y,t) = -u_{fatn}(y,t) + h_{fatn} + \int c_{fatn}(y-y')\, g_{fatn}(u_{fatn}(y',t))\, dy' + \iint c_{fatn,vis}(y-y')\, g_{vis}(u_{vis}(x,y',t))\, dx\, dy' + \int c_{fatn,con}(y-y')\, g_{con}(u_{con}(y',t))\, dy' + \int c_{fatn,wm}(y-y')\, g_{wm}(u_{wm}(y',t))\, dy' + \eta_{fatn}(y,t) \tag{3.4}$$

3.3.4 Equation for the Contrast Field

The contrast field receives feedforward excitatory connections from the visual and feature attention fields; inhibitory connections from the working memory field; and excitatory feedback from the nogo node. To account for learning during the pre-task instruction step, a sub-threshold input s_con(y) with activity bumps localized at the Nogo colors is also included.

$$\tau_e \dot{u}_{con}(y,t) = -u_{con}(y,t) + h_{con} + \int c_{con}(y-y')\, g_{con}(u_{con}(y',t))\, dy' + \iint c_{con,vis}(y-y')\, g_{vis}(u_{vis}(x,y',t))\, dx\, dy' + \int c_{con,fatn}(y-y')\, g_{fatn}(u_{fatn}(y',t))\, dy' + \int c_{con,wm}(y-y')\, g_{wm}(u_{wm}(y',t))\, dy' + a_{con,nogo}\, g_{nogo}(u_{nogo}(t)) + \eta_{con}(y,t) + s_{con}(y) \tag{3.5}$$

3.3.5 Equation for the Working Memory Field

Similarly, the working memory field receives feedforward excitatory connections from the visual and feature attention fields; inhibitory connections from the contrast field; and excitatory feedback from the go node. In addition, there is a sub-threshold input s_wm(y) of activity bumps localized at the Go colors, which simulates learning during the pre-task instruction step.

$$\tau_e \dot{u}_{wm}(y,t) = -u_{wm}(y,t) + h_{wm} + \int c_{wm}(y-y')\, g_{wm}(u_{wm}(y',t))\, dy' + \iint c_{wm,vis}(y-y')\, g_{vis}(u_{vis}(x,y',t))\, dx\, dy' + \int c_{wm,fatn}(y-y')\, g_{fatn}(u_{fatn}(y',t))\, dy' + \int c_{wm,con}(y-y')\, g_{con}(u_{con}(y',t))\, dy' + a_{wm,go}\, g_{go}(u_{go}(t)) + \eta_{wm}(y,t) + s_{wm}(y) \tag{3.6}$$

3.3.6 Equations for the Go and Nogo Subnetworks

The go and nogo nodes are coupled by mutual inhibition. In addition, feedforward excitation is projected from the working memory field to the go node, and from the contrast field to the nogo node, respectively.

$$\tau_e \dot{u}_{go}(t) = -u_{go}(t) + h_{go} + a_{go}\, g_{go}(u_{go}(t)) + a_{go,nogo}\, g_{nogo}(u_{nogo}(t)) + a_{go,wm} \int g_{wm}(u_{wm}(y,t))\, dy + \eta_{go}(t) \tag{3.7}$$

and

$$\tau_e \dot{u}_{nogo}(t) = -u_{nogo}(t) + h_{nogo} + a_{nogo}\, g_{nogo}(u_{nogo}(t)) + a_{nogo,go}\, g_{go}(u_{go}(t)) + a_{nogo,con} \int g_{con}(u_{con}(y,t))\, dy + \eta_{nogo}(t) \tag{3.8}$$

3.3.7 Local Field Interactions

All parameters associated with local interactions in the DF model above are listed in Table 1. The Gaussian interaction kernel, which determines the spread of activation inside a given field to neighboring units (see the parameters σ_j,e and σ_j,i in Table 1) with strengths determined by the amplitude parameters a_j,e, a_j,i and a_j,global, is defined by

$$c_j(z-z') = a_{j,e}\, \mathrm{Exp}\!\left[-\frac{(z-z')^2}{2\sigma_{j,e}^2}\right] - a_{j,i}\, \mathrm{Exp}\!\left[-\frac{(z-z')^2}{2\sigma_{j,i}^2}\right] + a_{j,global} \tag{3.9}$$

Here the variable z = x or z = y spans either the spatial dimension (x ∈ S) or the feature (color) dimension (y ∈ F), while the index j ∈ {satn, fatn, con, wm} corresponds to the spatial attention, feature attention, contrast or working memory field, respectively. The gain output function g normalizes the field activation, and is assumed to be the sigmoid

$$g(u) = \frac{1}{1 + \mathrm{Exp}[-\beta u]} \tag{3.10}$$

with threshold set to zero and steepness parameter β. Consequently, activation levels lower than the threshold contribute relatively little to neural interactions, while positive activation levels (higher than the threshold 0) contribute strongly to neural interactions. Each neural network is subject to spatially correlated noise η_j(z, t), defined as the convolution between a Gaussian kernel and white noise ξ(z, t):

$$\eta_j(z,t) = a_{j,noise} \int \mathrm{Exp}\!\left[-\frac{(z-z')^2}{2\sigma_{j,noise}^2}\right] \xi(z',t)\, dz'. \tag{3.11}$$

Note that the variable ξ(z, t) takes random values from a normal distribution with zero mean and unit standard deviation, N(0,1), but has its strength scaled by 1/√dt. Similar definitions are given for the visual field (j = vis), which spans two coordinates, the spatial and color representations. In this case, the convolution c_vis * g_vis(u_vis) and the noise η_vis are two-dimensional functions, so the Gaussian interaction kernel and the spatially correlated noise are defined by

$$c_j(x-x', y-y') = a_{j,e}\, \mathrm{Exp}\!\left[-\frac{(x-x')^2}{2\sigma_{j,e}^2}\right] \mathrm{Exp}\!\left[-\frac{(y-y')^2}{2\sigma_{j,e}^2}\right] - a_{j,i}\, \mathrm{Exp}\!\left[-\frac{(x-x')^2}{2\sigma_{j,i}^2}\right] \mathrm{Exp}\!\left[-\frac{(y-y')^2}{2\sigma_{j,i}^2}\right] + a_{j,global} \tag{3.12}$$

and

$$\eta_j(x,y,t) = a_{j,noise} \iint \mathrm{Exp}\!\left[-\frac{(x-x')^2}{2\sigma_{j,noise}^2}\right] \mathrm{Exp}\!\left[-\frac{(y-y')^2}{2\sigma_{j,noise}^2}\right] \xi(x',y',t)\, dx'\, dy' \tag{3.13}$$

On the other hand, the go and nogo nodes, with index j ∈ {go, nogo}, are assumed to have global connectivity. Their local field interactions are then simply the product

$$a_j\, g_j(u_j(t)) \tag{3.14}$$

between the gain function and the constant a_j. The noise function is defined by

$$\eta_j(t) = a_{j,noise}\, \xi(t) \tag{3.15}$$
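The spatially correlated noise of Eq. (3.11) can be sketched as follows (a Python illustration with assumed parameter values; the discrete convolution stands in for the integral):

```python
import numpy as np

# Sketch of Eq. (3.11): spatially correlated noise built by convolving a
# Gaussian kernel with white noise whose strength is scaled by 1/sqrt(dt).
rng = np.random.default_rng(1)
size, dt = 101, 1.0
a_noise, sigma_noise = 0.2, 2.0

z = np.arange(size) - size // 2
kernel = a_noise * np.exp(-z**2 / (2 * sigma_noise**2))

xi = rng.standard_normal(size) / np.sqrt(dt)     # white noise, scaled by 1/sqrt(dt)
eta = np.convolve(xi, kernel, mode='same')       # spatially smoothed noise field

# Neighboring units are now correlated, unlike the raw white noise.
corr = np.corrcoef(eta[:-1], eta[1:])[0, 1]
print(corr > 0.5)
```

The 1/√dt scaling keeps the effective noise strength consistent as the Euler time step changes.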

3.3.8 Long Range (Inter-Network) Coupling

The coupling between two distinct fields of the neural network is defined by a Gaussian kernel as well. Thus, if field k receives input from field j, then the connectivity function is the convolution c_k,j(·) * g_j(u_j(·, t)) with kernel

$$c_{k,j}(z-z') = a_{k,j}\, \mathrm{Exp}\!\left[-\frac{(z-z')^2}{2\sigma_{k,j}^2}\right] \tag{3.16}$$

In particular, if the coupling is a projection of the visual field (j = vis) into either of the spatial attention, feature attention, contrast or working memory fields (k), then the convolution is a double integral over the two-dimensional set S × F. The Gaussian kernel depends, however, only on one variable (for example, x), so the integration over the other variable (y) ultimately reduces to a summation of the output gain along the secondary dimension y.

Table 1 - Local field interactions: parameter values used in the simulation of the DF model. See also Eqs. (3.2)-(3.6) and (3.9)-(3.15).

If the coupling is a projection of the working memory field (or contrast field) into the go node (or nogo node), then the kernel of the convolution function reduces to a constant,

$$c_{k,j} = a_{k,j} \tag{3.17}$$

In addition, if the coupling is between the go and nogo nodes, then the convolution is simply the product c_k,j g_j(u_j(t)) and, again, c_k,j = a_k,j. Table 2 summarizes all parameter values associated with long-range coupling in the DF model.

Table 2 - Long range (inter-network) coupling: parameter values used in the simulation of the DF model. For all existing connections j to k where it makes sense, the spread of activation takes the value σ_k,j = 5. See also Eqs. (3.7)-(3.8) and (3.16)-(3.17).

3.3.9 Stimulus Functions

All parameters associated with stimuli in the DF model appear in Table 3. Stimuli s_j to field j are modeled by normalized Gaussian inputs centered at a particular position z_j,s in the neural field, with spread parameter σ_j,s and amplitude a_j,s. In particular, stimuli applied to the spatial attention, contrast and working memory fields induce local sub-threshold bump(s) of activity in the absence of the external stimulus s_vis(x, y).

$$s_{vis}(x,y) = a_{vis,s}\, \frac{1}{2\pi \sigma_{vis,s}^2}\, \mathrm{Exp}\!\left[-\frac{(x-x_{vis,s})^2}{2\sigma_{vis,s}^2}\right] \mathrm{Exp}\!\left[-\frac{(y-y_{vis,s})^2}{2\sigma_{vis,s}^2}\right]$$

$$s_{satn}(x) = a_{satn,s}\, \frac{1}{\sqrt{2\pi}\, \sigma_{satn,s}}\, \mathrm{Exp}\!\left[-\frac{(x-x_{satn,s})^2}{2\sigma_{satn,s}^2}\right]$$

$$s_{con}(y) = a_{con,s} \sum_{l=1}^{load/2} \frac{1}{\sqrt{2\pi}\, \sigma_{con,s}}\, \mathrm{Exp}\!\left[-\frac{(y-y^l_{con,s})^2}{2\sigma_{con,s}^2}\right]$$

$$s_{wm}(y) = a_{wm,s} \sum_{l=1}^{load/2} \frac{1}{\sqrt{2\pi}\, \sigma_{wm,s}}\, \mathrm{Exp}\!\left[-\frac{(y-y^l_{wm,s})^2}{2\sigma_{wm,s}^2}\right] \tag{3.18}$$

The sub-threshold activity bump in the spatial attention field is assumed to form during the fixation stage, prior to the application of the Go/Nogo stimulus s_vis(x, y). Similarly, sub-threshold activity bumps in the contrast and working memory fields are assumed to form during the instruction stage, when the subject learns the Go and Nogo colors, again prior to the application of the external stimulus s_vis(x, y). For example, Load 4 requires learning two Go colors and two Nogo colors. Therefore, during the simulation, two sub-threshold activity bumps centered at the Go colors are placed in the working memory field, and two sub-threshold activity bumps centered at the Nogo colors are placed in the contrast field.
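This pre-task setup can be sketched as follows (a Python illustration of the Load 4 case of Eq. (3.18); the field size, hue values, and amplitudes are hypothetical placeholders, not the values in Table 3):

```python
import numpy as np

# Sketch: two sub-threshold Gaussian bumps at the Go colors in wm, and two
# at the Nogo colors in con, placed before the visual stimulus is applied.
def bumps(size, centers, amp, sigma):
    y = np.arange(size)
    norm = amp / (np.sqrt(2 * np.pi) * sigma)
    return sum(norm * np.exp(-(y - c) ** 2 / (2 * sigma**2)) for c in centers)

field_size = 180                                # color dimension (assumed)
go_colors, nogo_colors = [30, 150], [90, 170]   # hypothetical hue values
s_wm = bumps(field_size, go_colors, amp=10.0, sigma=5.0)
s_con = bumps(field_size, nogo_colors, amp=10.0, sigma=5.0)

print(int(np.argmax(s_wm)), int(np.argmax(s_con)))  # peaks at a Go / Nogo color
```

During a trial, the visual input then only needs to overlap one of these pre-shaped bumps to drive the corresponding decision node.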

Table 3 - Stimulus functions: parameter values used in the simulation of the DF model. See also Eqs. (3.2), (3.3), (3.5), (3.6) and (3.18).

3.4. Materials and Methods

The GnG DF model was used to simulate details from an fMRI study that investigated the effects of manipulating memory Load and the Proportion of Go and Nogo trials during a Go/Nogo task on response selection and inhibition. Details of the experimental protocol used for the fMRI study have been described elsewhere (Wijeakumar et al., 2015). For clarity, some details of the study and experimental design are included below.

3.4.1 Participants

Twenty right-handed native English-speaking subjects (age range 25±4 years) participated in the current study. All participants signed an informed consent form approved by the Ethics Committee at the University of Iowa.

3.4.2 Stimuli and Experimental Design

An HP computer (Windows 7) with E-Prime version 2.0 was used to run the experiment. No formal practice was provided to the participants, but they were familiarized with the general sequence of events with a couple of trials before they entered the MRI scanner. Participants performed a GnG task wherein they were asked to press a button ("Go") when they saw some stimuli and to withhold their response ("Nogo") when another set of stimuli was presented. Stimuli varied in color but not in shape. The shapes were chosen from another study (Drucker & Aguirre, 2009). The colors were equally distributed in CIELAB 1976 color space, a perceptually uniform color space and color-appearance model developed

by the Commission Internationale de l'Éclairage. Go colors were separated from Nogo colors by 60 degrees in color space, such that directly adjacent colors were associated with different response types.

Figure 12 - Experimental design for the GnG task.

Each trial started with a fixation cross presented at the center of the screen for 2500 ms, followed by the stimulus presentation at the center of the screen for 1500 ms (see Figure 12). Participants were advised to respond to the visual stimuli as fast as possible. If participants did not enter a response within 250 ms for a Go response, a message saying "No Response Detected" was presented on the screen. The inter-trial interval was jittered among 1000, 2500, or 3500 ms, presented on 50%, 25%, or 25% of the trials, respectively.
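The jittered inter-trial interval schedule can be sketched as follows (an illustrative Python helper; `make_iti_schedule` is a hypothetical name, not project code):

```python
import random

# Sketch of the jittered inter-trial interval schedule described above:
# 1000, 2500, or 3500 ms on 50%, 25%, and 25% of trials, respectively,
# for a 144-trial run.
def make_iti_schedule(n_trials=144, seed=0):
    itis = ([1000] * (n_trials // 2)
            + [2500] * (n_trials // 4)
            + [3500] * (n_trials // 4))
    rng = random.Random(seed)
    rng.shuffle(itis)
    return itis

schedule = make_iti_schedule()
print(len(schedule), schedule.count(1000), schedule.count(2500), schedule.count(3500))
# 144 trials: 72 at 1000 ms, 36 at 2500 ms, 36 at 3500 ms
```

Jitter of this kind decorrelates trial onsets from the TR, which helps the GLM separate overlapping BOLD responses.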

Two parametric manipulations were carried out: a Proportion manipulation and a Load manipulation. For the Proportion manipulation (at Load 4), the numbers of Go and Nogo trials were varied as follows. In the 25% condition, 25% of the trials were Go trials and 75% were Nogo trials. In the 50% condition, 50% of the trials were Go trials and the remaining 50% were Nogo trials. In the 75% condition, 75% of the trials were Go trials and 25% were Nogo trials. For the Load manipulation, 50% of the trials were Go trials and the remaining 50% were Nogo trials. In the Load 2 condition, one stimulus was associated with a Go response and another with a Nogo response. In the Load 4 condition, two stimuli were associated with a Go response and two other stimuli with a Nogo response. In the Load 6 condition, three stimuli were associated with a Go response and three stimuli with a Nogo response. Participants completed five runs in total: Load 2, Load 4 (also called Proportion 50), Load 6, Proportion 25 and Proportion 75. Each run had a total of 144 trials. The order of the runs was randomized.

3.4.3 fMRI Image Acquisition

A 3T Siemens TIM Trio magnetic resonance imaging system with a 12-channel head coil at the Magnetic Resonance Research Facility at the University of Iowa was used. An MP-RAGE sequence was used to collect anatomical T1-weighted volumes. Functional BOLD imaging was acquired using an axial 2D echo-planar gradient echo sequence with the following parameters: TE = 30 ms, TR = 2000 ms, flip

angle = 70, FOV = mm, matrix = 64 × 64, slice thickness/gap = 4.0/1.0 mm, and bandwidth = 1920 Hz/pixel.

The task was presented to the participant inside the scanner through a high-resolution projection system connected to a PC running E-Prime software. The timing of stimulus presentation was synchronized to the MRI scanner's trigger pulse. The stimuli subtended a visual angle of degrees. A manipulandum was strapped to the participant's hand to record responses. Foam padding was inserted between the participant's head and the head coil to prevent head movement.

Description of Behavioral Simulations

All numerical simulations of the GnG DF model were performed with the COSIVINA simulation package (available at ; see Figures ). Statistical calculations and interpretation of results were then performed in custom-written Matlab code.

The first objective was to find one set of parameters for the model that captured the reaction times reported experimentally at Load 2. We achieved this by adjusting the model parameters to obtain good qualitative dynamics in all DFs. Then, for better quantitative results, the connection strengths between the go node and the wm field and between the nogo node and the con field were tuned. The strengths of the sub-threshold stimuli in the wm and con fields, for Go and Nogo trials respectively, were tuned as well. Interestingly, the connection strength between the nogo node and the con field needed to be greater than the connection strength between the go node and the wm field. Once the model was able

to capture the reaction times for Go trials at Load 2, the next step was to capture the reaction times for Go trials at Load 4 and Load 6. Based on previous findings, the hypothesis was that increasing the number of items would distribute associative weights across the items held in memory, resulting in competition and slower reaction times. Hence, the strength of the sub-threshold stimuli was decreased in both the wm and con fields (to mimic the 50% Proportion condition) while maintaining the same connection strengths between fields. For the Proportion manipulation, Proportion 50% corresponded to Load 4, and so its parameters were used as an anchor to fit the (experimentally reported) reaction times from Proportion 25% and Proportion 75%. Once again, the rationale was that the strength of the local sub-threshold stimulus is the most natural parameter to vary as the numbers of Go and Nogo trials change. It is reasonable to assume that as the number of Go trials increased, the strength of the memory trace for Go trials would also increase; likewise, as the number of Go trials decreased, the strength of these memory traces would decrease. Therefore, the sub-threshold stimulus strength was varied in wm and con while the connection strengths and other parameters were kept the same across all Proportion conditions (see Table 3). Simulations were run for 144 trials per model, for a total of 20 identical models (to serve as the equivalent of 20 participants), for each of the Load and Proportion manipulations. Means and standard deviations of reaction times were calculated and compared between the behavioral and DF model data.
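The core mechanism behind these fits, that a weaker sub-threshold (memory-trace) input leaves a field further from threshold and therefore slows the threshold crossing that triggers a response, can be illustrated with a single-node reduction. This is a minimal Python sketch, not the COSIVINA/Matlab implementation that was actually used, and its parameters are illustrative rather than the fitted values in Table 3:

```python
def reaction_time(subthreshold, trial_input=7.0, h=-5.0, tau=80.0, dt=1.0):
    """Time (ms) for a single leaky node to cross the zero threshold.

    subthreshold: memory-trace input present before the trial; weaker
    values leave the node further from threshold, so crossing is slower.
    """
    u = h + subthreshold                   # pre-trial resting state
    t = 0.0
    while u < 0.0:
        u += (dt / tau) * (-u + h + subthreshold + trial_input)
        t += dt
        if t > 5000.0:                     # give up: input never reaches threshold
            return None
    return t

print(reaction_time(4.0) < reaction_time(2.0))   # → True: stronger trace, faster RT
```

The absolute times are arbitrary here; what matters is the ordering, which reproduces the slower reaction times observed when the sub-threshold stimulus strength is reduced.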

Note that an alternative way to capture changes in reaction times reflecting changes in memory Load and Proportion might be to vary the connection strengths between fields. However, exploring this idea was beyond the scope of the present work.

3.5 Results and Discussion

Only correct trials were used for the analyses (Wijeakumar et al., 2015). Both the behavioral and DF-simulated data showed an increase in mean reaction times from Load 2 to Load 6 [p < 0.001] (see Figure 13). In the Load 2 condition, the peaks in each field reach threshold more quickly, resulting in faster reaction times than in the Load 6 condition. Standard deviations across the 20 sets of simulations were most similar to those of the behavioral data at Load 4. At Load 6, the model showed higher SDs than the data; at Load 2, the model showed lower SDs than the data. Interestingly, the trend across these conditions was qualitatively correct (Figure 13C). This trend in behavior results from an associative-memory system and reflects greater competition between increased numbers of SR mappings across the Load manipulation. Consequently, decision-making mechanisms are slowed and more time is provided to select the appropriate response. Note that, although the strength of the memory trace was decreased proportionally in the current model, this effectively represents how memory traces (or associative weights) in the model would change with variations in the number of SR mappings (Erlhagen & Schöner, 2002).

Mean reaction times decreased from Proportion 25% to Proportion 75% for both the behavioral and model data [p < 0.001] (see Figure 13B). As expected, with a greater

number of Go trials (the 75% condition), response selection mechanisms are constantly in use and so reaction times decrease. In the 25% condition, Go trials are infrequent and successful performance requires longer reaction times. In the DF model, the sub-threshold stimulus strength in the con field in the 25% condition is increased, and its value is greater than the stimulus strength in the wm field. Conversely, in the 75% condition, stimulus strength was reduced for the con field and its value was higher in the wm field (see Table 3). Increases in sub-threshold stimulus strength reflect a heightened memory trace. As expected, the reaction times computed from the DF model captured the trends in the behavioral data. As observed at Load 4 in the Load manipulation, the standard deviation for the model closely matched the behavioral data (see Figure 13D). Go trials for Proportion 25% had the lowest sub-threshold stimulus strength and therefore showed the greatest reaction-time variance.

In summary, the DF model of response selection captures the behavioral data from the GnG experiment well. Moreover, the results demonstrate that a model originally developed to capture data from VWM experiments can be effectively generalized to capture data from a response selection task. The next step is to examine whether the same model that quantitatively captures behavioral data can also quantitatively predict patterns of brain activity measured with fMRI. This is discussed in the next chapter.
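The prediction step previewed here, turning simulated neural activity into expected BOLD signals, amounts to convolving a model LFP with a slow hemodynamic impulse response (the method of Chapter 2). A minimal Python sketch follows; the double-gamma impulse response used below is the common canonical choice and stands in for whichever impulse response function the analysis actually used:

```python
import numpy as np
from math import gamma

def hrf(t, a1=6.0, a2=16.0, b=1.0, c=1.0 / 6.0):
    """Canonical double-gamma hemodynamic impulse response; t in seconds."""
    g = lambda t, a: (t ** (a - 1) * np.exp(-t / b)) / (b ** a * gamma(a))
    return g(t, a1) - c * g(t, a2)

def lfp_to_bold(lfp, dt_s, hrf_len_s=30.0):
    """Convolve an LFP time course (step dt_s seconds) with the impulse response."""
    h = hrf(np.arange(0.0, hrf_len_s, dt_s))
    return np.convolve(lfp, h)[: len(lfp)] * dt_s

# Example: 1.5 s of field activity (one stimulus presentation) in a 20 s window
dt = 0.01                              # 10 ms steps, for speed of illustration
lfp = np.zeros(2000)
lfp[:150] = 1.0
bold = lfp_to_bold(lfp, dt)
peak_s = np.argmax(bold) * dt          # BOLD peaks several seconds after the LFP
```

The lag between neural activity and the BOLD peak is what makes the model-based regressors of the next chapter differ from simple boxcar predictors.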

Figure 13 - Mean reaction times computed for the behavioral and model data for (A) Load and (B) Proportion, and standard deviations of reaction times for (C) Load and (D) Proportion.
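Before moving on, the field dynamics underlying these simulations can be made concrete. The following Python sketch Euler-integrates a generic one-dimensional dynamic field of the kind defined by equation (3.1): a Gaussian sub-threshold input (the analogue of the memory trace) leaves the field below threshold, while a stronger trial input drives a self-stabilized peak through threshold. All parameters are illustrative, not the fitted values of Table 3:

```python
import numpy as np

def gauss_kernel(n, a, sigma):
    """Interaction kernel c(y - y') = a * exp(-(y - y')^2 / (2 * sigma^2))."""
    d = np.arange(n)[:, None] - np.arange(n)[None, :]
    return a * np.exp(-d ** 2 / (2.0 * sigma ** 2))

def sigmoid(u, beta=4.0):
    """Sigmoidal output nonlinearity g(u)."""
    return 1.0 / (1.0 + np.exp(-beta * u))

def relax_field(stim_amp, n=100, tau=80.0, h=-5.0, dt=1.0, steps=2000):
    """Euler-integrate a 1D field with local excitation and lateral inhibition."""
    c_local = gauss_kernel(n, 1.2, 4.0) - gauss_kernel(n, 0.9, 10.0)
    s = stim_amp * np.exp(-(np.arange(n) - n // 2) ** 2 / (2.0 * 3.0 ** 2))
    u = np.full(n, h)                     # field starts at the resting level
    for _ in range(steps):
        u += (dt / tau) * (-u + h + s + c_local @ sigmoid(u))
    return u

print(relax_field(4.0).max() < 0)    # → True: weak input stays sub-threshold
print(relax_field(10.0).max() > 0)   # → True: a trial stimulus pierces threshold
```

This threshold-piercing peak is the event that, in the full model, drives the go and nogo nodes and determines the simulated reaction time.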

CHAPTER 4: THE MODEL-BASED FMRI APPROACH

4.1. Overview

Since the GnG DF model successfully captured the behavioral data for both the Load and Proportion manipulations, I moved to the central question of this case study: could the model offer insights into the neural bases of response selection and inhibition? In particular, I asked whether the model might shed light on the functional roles of the neural systems identified in the previous fMRI study (Wijeakumar et al., 2015).

4.2. Materials and Methods

Hemodynamic simulations from the DF model

To simulate the hemodynamics for this study, I used the method described in Chapter 2. I illustrate the procedure below using the LFP for the contrast field neural population as an example (con field in Figures 9-11). The LFPs for all other neural fields in the GnG DF model (Figure 9) are obtained in a similar way. Consider the dynamic field equation (3.1) with the appropriate input neural fields and connections that contribute to the dynamics of the neural population in the con field. This equation is defined by (3.5) in the previous section or, more explicitly, by

\[
\tau\,\dot u_{con}(y,t) = -\,u_{con}(y,t) + h_{con} + s_{con}(y)
+ \int \left[\, c_{con,e}(y-y') - c_{con,i}(y-y') \,\right] g_{con}\!\left(u_{con}(y',t)\right) dy'
+ \iint c_{con,vis}(y-y')\, g_{vis}\!\left(u_{vis}(x,y',t)\right) dy'\,dx
+ \int c_{con,fatn}(y-y')\, g_{fatn}\!\left(u_{fatn}(y',t)\right) dy'
+ \int c_{con,wm}(y-y')\, g_{wm}\!\left(u_{wm}(y',t)\right) dy'
+ a_{con,nogo}\, g_{nogo}\!\left(u_{nogo}(t)\right)
+ \eta_{con}(y,t).
\]

Here $s_{con}(y)$ specifies the stationary sub-threshold stimulus to the con field, spatially tuned to Nogo colors. The spatially correlated noise $\eta_{con}$ is obtained by convolving the kernel $c_{con,noise}$ with a vector $\xi$ of white noise, $\eta_{con}(y,t) = \int c_{con,noise}(y-y')\,\xi(y',t)\,dy'$. Local connections include both excitatory and inhibitory components, $c_{con} = c_{con,e} - c_{con,i}$. All kernels are Gaussian functions of the form
\[
c(y-y') = a \exp\!\left[ -\frac{(y-y')^2}{2\sigma^2} \right],
\]
with positive parameters $a$, except $a_{con,wm} < 0$. To generate an LFP for the contrast field, the absolute value of every term contributing to the rate of change of activation within the field is summed, excluding the stability term $-u_{con}(y,t)$ and the neuronal resting level $h_{con}$. The resulting LFP equation for the con field is given by:

\[
u_{LFP}(t) = \frac{1}{n}\int \left| s_{con}(y) \right| dy
+ \frac{1}{n}\int \left| \int c_{con,noise}(y-y')\,\xi(y',t)\,dy' \right| dy
+ \frac{1}{n}\int \left| \int c_{con,e}(y-y')\, g_{con}(u_{con}(y',t))\, dy' \right| dy
+ \frac{1}{n}\int \left| \int c_{con,i}(y-y')\, g_{con}(u_{con}(y',t))\, dy' \right| dy
+ \frac{1}{n}\int \left| \int c_{con,fatn}(y-y')\, g_{fatn}(u_{fatn}(y',t))\, dy' \right| dy
+ \frac{1}{n}\int \left| \int c_{con,wm}(y-y')\, g_{wm}(u_{wm}(y',t))\, dy' \right| dy
+ \frac{1}{nm}\int \left| \iint c_{con,vis}(y-y')\, g_{vis}(u_{vis}(x,y',t))\, dy'\, dx \right| dy
+ \left| a_{con,nogo}\, g_{nogo}(u_{nogo}(t)) \right|.
\]

Note that field activities were aggregated into a single LFP. Consequently, I normalized the field contributions by dividing by the number of units in each field: $n$ in the case of one dimension, $nm$ in the case of two. A hemodynamic response is then calculated by convolving $u_{LFP}$ with an impulse response function that specifies the time course of the slow blood-flow response to neural activation (see Chapter 2).

The GnG DF model was used to simulate 72 Go and Nogo trials for each condition. Hemodynamics were generated from the average LFP over these repetitions for each component, condition, and trial type. I used a mapping of 1 model time step to 1 ms of experimental time to simulate the details of each trial. A long regressor for each model component was created as described in Chapter 2. These regressors were then used in a General Linear Model (GLM) as detailed below.

Data analyses for the Standard and DF models

Analysis of Functional Neuroimages (AFNI) software was used to analyze the data. Standard preprocessing, including slice-timing correction, outlier removal, motion

correction, and spatial smoothing (Gaussian FWHM = 8 mm), was conducted on the fMRI data. First-level analyses consisted of constructing three GLMs:

(Model A) Standard model. This model consisted of regressors for motion, drifts in baseline, and ten regressors of interest created from the ten conditions (Go and Nogo trials at Loads 2, 4, and 6, and at Proportions 25 and 75).

(Model B) DF model. This model consisted of regressors for motion, drifts in baseline, and four regressors of interest created from the most relevant components of the DF model (con, wm, go, and nogo). Regressors for the other fields were not included because they were highly correlated with one another.

(Model AB) Standard-DF model. This model consisted of regressors for motion, drifts in baseline, and fourteen regressors of interest (the Standard model components plus the DF model components).

To compare performance between the Standard and DF models, I used adjusted $R^2$ as a key metric, as it corrects for the different degrees of freedom across models (the DF model has 4 regressors whereas the Standard model has 10) and can be interpreted as a percentage. The proportion of unique variance captured by the DF model (Model B) above and beyond the Standard model (Model A) was calculated using the following formula:
\[
F = \frac{\left( R^2_{Y\cdot AB} - R^2_{Y\cdot A} \right) / k_B}{\left( 1 - R^2_{Y\cdot AB} \right) / \left( n - k_A - k_B - 1 \right)},
\]

where $R^2_{Y\cdot AB}$ is the variance captured by Model AB, $R^2_{Y\cdot A}$ is the variance captured by Model A, and $k_A$ and $k_B$ are the degrees of freedom for Model A and Model B, respectively. This F statistic was evaluated on $(k_B,\, n - k_A - k_B - 1)$ degrees of freedom with $\alpha = 0.05$. The same formula was applied to calculate the unique variance captured by the Standard model.

The beta maps from the Standard GLM were input into two two-factor ANOVAs, a Load ANOVA and a Proportion ANOVA (run using 3dMVM). The Load ANOVA consisted of Type and Load as factors, and the Proportion ANOVA consisted of Type and Proportion as factors. The Type factor contrasted Go vs. Nogo trials. The Load factor was divided by the number of stimuli presented: 2, 4, or 6. The Proportion factor was divided according to the distribution of trials in the given block: 25%, 50%, or 75% Go trials. The main effect and interaction maps from both sets of ANOVAs were thresholded and clustered based on family-wise corrections obtained from 3dClustSim. Unique main effects of Type ("Type Effect") from the Proportion and Load ANOVAs were pooled together. Unique "Other Effects" consisted of the Load main effect, the Proportion main effect, the Load x Type interaction, and the Proportion x Type interaction.

The beta maps from the DF GLM were input into an ANOVA (run using 3dMVM) with component as the only factor. The main effect of component was thresholded and clustered using the same family-wise correction methods described above. A one-sample t-test was conducted for each component within the spatial

constraints of this clustered main-effect image to ascertain the contribution of each component to the main effect. Then, unique and combined component effects were identified. This multi-component map was intersected with the Type Effect and Other Effects maps to establish whether the cortical regions where the DF components were present also overlapped with regions activated by differences in Type, Load, and/or Proportion of trials in the GnG task.

4.3. Results and Discussion

A schematic overview of the model-based approach is shown in Figure 14. Figure 14A shows HDRs and LFPs for Loads 2 and 6 in the con and wm fields. Maximum hemodynamic response (HDR) predictions across the four DF components and across the Load and Proportion manipulations are displayed in Figure 14B. Differences in HDR amplitude between Loads 2 and 6 are evident in the average HDR amplitude in the con and wm fields. Figure 14C shows regressors for each component, constructed by inserting the condition-specific HDR at the onset of each trial in the same order that trials were presented to each participant. An example predictor for one participant (a regressor in the GLM) is shown in the inset of Figure 14C. This time course was created by inserting the predicted hemodynamic time course from the WM component (e.g., Figure 14A) for each trial type at the appropriate start time in the time series and then summing these predictions. If there is a brain region involved in WM, the model predicts that this brain area should show the particular pattern of BOLD changes over

time shown in the inset. The GLM results can be used to statistically evaluate such predictions.

Overall fit of the model to the fMRI data was evaluated using adjusted $R^2$ measures and the unique variance accounted for by each of the models. Overall adjusted $R^2$ values for the Standard model [average = 0.127] were significantly different from the DF model [average = 0.102] (p < 0.001). When examining only those voxels where both models captured significant variance, the Standard model [adjusted $R^2$ = 0.130] did not perform significantly better than the DF model [adjusted $R^2$ = 0.121] (p = 0.251). Interestingly, however, the DF model [17045 voxels] accounted for significantly more unique variance in voxel count than the Standard model [9299 voxels] (p < ). It is critical to note that the Standard model had more than twice as many regressors as the DF model (10 vs. 4).

Figure 15 shows those components that produced statistically significant clusters within the regions showing a main effect of component. The con and wm fields recruited the largest numbers of regions. The cortical network recruited by the DF components consisted of the insula, putamen, thalamus, parts of the occipital and temporal cortices, the pre- and postcentral gyri, the medial frontal gyrus, and the inferior parietal lobule. As is evident from the figure, unique contributions from the con field, nogo node, and wm field, and combined contributions from the con field and go node and from the wm field and nogo node, recruited distinct regions, suggesting model-specific functionalities

unique to these regions. In general, these regions highlight the response selection network identified in previous work quite well.
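The nested-model comparison reported above follows mechanically from the $R^2$ values and regressor counts via the F formula in the Methods. A small Python sketch, with hypothetical numbers rather than values from this dataset:

```python
def unique_variance_f(r2_full, r2_reduced, n, k_a, k_b):
    """F test for the unique variance added by Model B's k_b regressors.

    F = ((R2_AB - R2_A) / k_b) / ((1 - R2_AB) / (n - k_a - k_b - 1)),
    evaluated on (k_b, n - k_a - k_b - 1) degrees of freedom.
    """
    df2 = n - k_a - k_b - 1
    return ((r2_full - r2_reduced) / k_b) / ((1.0 - r2_full) / df2)

# Hypothetical voxel: 300 time points, Standard (10) and DF (4) regressors
f = unique_variance_f(r2_full=0.20, r2_reduced=0.15, n=300, k_a=10, k_b=4)
print(round(f, 2))   # → 4.45
```

Because the DF model adds only 4 regressors, even a modest gain in $R^2$ over the 10-regressor Standard model can yield a large F, which is why the voxel-count comparison above favors the more parsimonious model.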

Figure 14 - Model-based fMRI approach. (A) Average HDR and LFP for Load 2 (blue) and Load 6 (red) for the contrast (Nogo trials) and working memory (Go trials) fields. (B) Predictions for the four components of the model across the three Load and Proportion

Figure 15 - Functional maps generated by the DF model. Colored regions represent cortical areas where a main effect of component was present (labeled regions: cerebellum, insula, MOG, FFG, lingual gyrus, caudate, cuneus, putamen, cingulate gyrus, medial FG, postcentral gyrus, precentral gyrus, IPL; legend: Con, Con and Go, Nogo, Wm, Nogo and Wm).

The next important question is whether these significant clusters overlap with specific effects in the Standard model. I observed common and unique effects when I intersected the effects from the Standard and DF models. Table 4 presents voxel counts

for common and unique effects between the Standard and DF models. Figure 16 shows the spatial distribution of these voxel counts for the unique and common effects.

Table 4 - Voxel count of overlapping effects between DF and standard model activation maps.

Effect                                    Voxel Count
Type Main Effect only                     1831
Other Effects only                        384
DNF Components only                       4430
DNF Components and Type Main Effects      1313
DNF Components and Other Effects

Figure 16 - Overlap between DF and standard group-level activation maps (labeled regions: cerebellum, STG, insula, putamen, IOG, lingual gyrus, thalamus, MTG, cuneus, cingulate gyrus, postcentral gyrus, MFG, medial FG, IPL; legend: Unique Type ME, Unique Other Effects, Unique DNF, DNF and Other Effects, DNF and Type ME).

The overlap between the regions recruited by the DF components and the main effect of Type, which activated a wide network of cortical areas, accounted for 1313 voxels (shown in yellow in Figure 16). The overlap between DF components and more


More information

CHAPTER - 6 STATISTICAL ANALYSIS. This chapter discusses inferential statistics, which use sample data to

CHAPTER - 6 STATISTICAL ANALYSIS. This chapter discusses inferential statistics, which use sample data to CHAPTER - 6 STATISTICAL ANALYSIS 6.1 Introduction This chapter discusses inferential statistics, which use sample data to make decisions or inferences about population. Populations are group of interest

More information

Experimental Design I

Experimental Design I Experimental Design I Topics What questions can we ask (intelligently) in fmri Basic assumptions in isolating cognitive processes and comparing conditions General design strategies A few really cool experiments

More information

Neural Correlates of Human Cognitive Function:

Neural Correlates of Human Cognitive Function: Neural Correlates of Human Cognitive Function: A Comparison of Electrophysiological and Other Neuroimaging Approaches Leun J. Otten Institute of Cognitive Neuroscience & Department of Psychology University

More information

A Computational Model of Prefrontal Cortex Function

A Computational Model of Prefrontal Cortex Function A Computational Model of Prefrontal Cortex Function Todd S. Braver Dept. of Psychology Carnegie Mellon Univ. Pittsburgh, PA 15213 Jonathan D. Cohen Dept. of Psychology Carnegie Mellon Univ. Pittsburgh,

More information

Beyond bumps: Spiking networks that store sets of functions

Beyond bumps: Spiking networks that store sets of functions Neurocomputing 38}40 (2001) 581}586 Beyond bumps: Spiking networks that store sets of functions Chris Eliasmith*, Charles H. Anderson Department of Philosophy, University of Waterloo, Waterloo, Ont, N2L

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 7: Network models Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis II 6 Single neuron

More information

Target-to-distractor similarity can help visual search performance

Target-to-distractor similarity can help visual search performance Target-to-distractor similarity can help visual search performance Vencislav Popov (vencislav.popov@gmail.com) Lynne Reder (reder@cmu.edu) Department of Psychology, Carnegie Mellon University, Pittsburgh,

More information

Neuron, Volume 63 Spatial attention decorrelates intrinsic activity fluctuations in Macaque area V4.

Neuron, Volume 63 Spatial attention decorrelates intrinsic activity fluctuations in Macaque area V4. Neuron, Volume 63 Spatial attention decorrelates intrinsic activity fluctuations in Macaque area V4. Jude F. Mitchell, Kristy A. Sundberg, and John H. Reynolds Systems Neurobiology Lab, The Salk Institute,

More information

Learning Classifier Systems (LCS/XCSF)

Learning Classifier Systems (LCS/XCSF) Context-Dependent Predictions and Cognitive Arm Control with XCSF Learning Classifier Systems (LCS/XCSF) Laurentius Florentin Gruber Seminar aus Künstlicher Intelligenz WS 2015/16 Professor Johannes Fürnkranz

More information

Spiking Inputs to a Winner-take-all Network

Spiking Inputs to a Winner-take-all Network Spiking Inputs to a Winner-take-all Network Matthias Oster and Shih-Chii Liu Institute of Neuroinformatics University of Zurich and ETH Zurich Winterthurerstrasse 9 CH-857 Zurich, Switzerland {mao,shih}@ini.phys.ethz.ch

More information

A Neural Model of Context Dependent Decision Making in the Prefrontal Cortex

A Neural Model of Context Dependent Decision Making in the Prefrontal Cortex A Neural Model of Context Dependent Decision Making in the Prefrontal Cortex Sugandha Sharma (s72sharm@uwaterloo.ca) Brent J. Komer (bjkomer@uwaterloo.ca) Terrence C. Stewart (tcstewar@uwaterloo.ca) Chris

More information

Oscillatory Neural Network for Image Segmentation with Biased Competition for Attention

Oscillatory Neural Network for Image Segmentation with Biased Competition for Attention Oscillatory Neural Network for Image Segmentation with Biased Competition for Attention Tapani Raiko and Harri Valpola School of Science and Technology Aalto University (formerly Helsinki University of

More information

Active Control of Spike-Timing Dependent Synaptic Plasticity in an Electrosensory System

Active Control of Spike-Timing Dependent Synaptic Plasticity in an Electrosensory System Active Control of Spike-Timing Dependent Synaptic Plasticity in an Electrosensory System Patrick D. Roberts and Curtis C. Bell Neurological Sciences Institute, OHSU 505 N.W. 185 th Avenue, Beaverton, OR

More information

http://www.diva-portal.org This is the published version of a paper presented at Future Active Safety Technology - Towards zero traffic accidents, FastZero2017, September 18-22, 2017, Nara, Japan. Citation

More information

Learning and Adaptive Behavior, Part II

Learning and Adaptive Behavior, Part II Learning and Adaptive Behavior, Part II April 12, 2007 The man who sets out to carry a cat by its tail learns something that will always be useful and which will never grow dim or doubtful. -- Mark Twain

More information

Contributions to Brain MRI Processing and Analysis

Contributions to Brain MRI Processing and Analysis Contributions to Brain MRI Processing and Analysis Dissertation presented to the Department of Computer Science and Artificial Intelligence By María Teresa García Sebastián PhD Advisor: Prof. Manuel Graña

More information

Classification and Statistical Analysis of Auditory FMRI Data Using Linear Discriminative Analysis and Quadratic Discriminative Analysis

Classification and Statistical Analysis of Auditory FMRI Data Using Linear Discriminative Analysis and Quadratic Discriminative Analysis International Journal of Innovative Research in Computer Science & Technology (IJIRCST) ISSN: 2347-5552, Volume-2, Issue-6, November-2014 Classification and Statistical Analysis of Auditory FMRI Data Using

More information

Spontaneous Cortical Activity Reveals Hallmarks of an Optimal Internal Model of the Environment. Berkes, Orban, Lengyel, Fiser.

Spontaneous Cortical Activity Reveals Hallmarks of an Optimal Internal Model of the Environment. Berkes, Orban, Lengyel, Fiser. Statistically optimal perception and learning: from behavior to neural representations. Fiser, Berkes, Orban & Lengyel Trends in Cognitive Sciences (2010) Spontaneous Cortical Activity Reveals Hallmarks

More information

Computing with Spikes in Recurrent Neural Networks

Computing with Spikes in Recurrent Neural Networks Computing with Spikes in Recurrent Neural Networks Dezhe Jin Department of Physics The Pennsylvania State University Presented at ICS Seminar Course, Penn State Jan 9, 2006 Outline Introduction Neurons,

More information

Why is our capacity of working memory so large?

Why is our capacity of working memory so large? LETTER Why is our capacity of working memory so large? Thomas P. Trappenberg Faculty of Computer Science, Dalhousie University 6050 University Avenue, Halifax, Nova Scotia B3H 1W5, Canada E-mail: tt@cs.dal.ca

More information

Ch.20 Dynamic Cue Combination in Distributional Population Code Networks. Ka Yeon Kim Biopsychology

Ch.20 Dynamic Cue Combination in Distributional Population Code Networks. Ka Yeon Kim Biopsychology Ch.20 Dynamic Cue Combination in Distributional Population Code Networks Ka Yeon Kim Biopsychology Applying the coding scheme to dynamic cue combination (Experiment, Kording&Wolpert,2004) Dynamic sensorymotor

More information

Learning in neural networks

Learning in neural networks http://ccnl.psy.unipd.it Learning in neural networks Marco Zorzi University of Padova M. Zorzi - European Diploma in Cognitive and Brain Sciences, Cognitive modeling", HWK 19-24/3/2006 1 Connectionist

More information

Visual Categorization: How the Monkey Brain Does It

Visual Categorization: How the Monkey Brain Does It Visual Categorization: How the Monkey Brain Does It Ulf Knoblich 1, Maximilian Riesenhuber 1, David J. Freedman 2, Earl K. Miller 2, and Tomaso Poggio 1 1 Center for Biological and Computational Learning,

More information

Computational Explorations in Cognitive Neuroscience Chapter 7: Large-Scale Brain Area Functional Organization

Computational Explorations in Cognitive Neuroscience Chapter 7: Large-Scale Brain Area Functional Organization Computational Explorations in Cognitive Neuroscience Chapter 7: Large-Scale Brain Area Functional Organization 1 7.1 Overview This chapter aims to provide a framework for modeling cognitive phenomena based

More information

A model of parallel time estimation

A model of parallel time estimation A model of parallel time estimation Hedderik van Rijn 1 and Niels Taatgen 1,2 1 Department of Artificial Intelligence, University of Groningen Grote Kruisstraat 2/1, 9712 TS Groningen 2 Department of Psychology,

More information

Early Learning vs Early Variability 1.5 r = p = Early Learning r = p = e 005. Early Learning 0.

Early Learning vs Early Variability 1.5 r = p = Early Learning r = p = e 005. Early Learning 0. The temporal structure of motor variability is dynamically regulated and predicts individual differences in motor learning ability Howard Wu *, Yohsuke Miyamoto *, Luis Nicolas Gonzales-Castro, Bence P.

More information

Prefrontal cortex. Executive functions. Models of prefrontal cortex function. Overview of Lecture. Executive Functions. Prefrontal cortex (PFC)

Prefrontal cortex. Executive functions. Models of prefrontal cortex function. Overview of Lecture. Executive Functions. Prefrontal cortex (PFC) Neural Computation Overview of Lecture Models of prefrontal cortex function Dr. Sam Gilbert Institute of Cognitive Neuroscience University College London E-mail: sam.gilbert@ucl.ac.uk Prefrontal cortex

More information

Lecturer: Rob van der Willigen 11/9/08

Lecturer: Rob van der Willigen 11/9/08 Auditory Perception - Detection versus Discrimination - Localization versus Discrimination - - Electrophysiological Measurements Psychophysical Measurements Three Approaches to Researching Audition physiology

More information

Changing expectations about speed alters perceived motion direction

Changing expectations about speed alters perceived motion direction Current Biology, in press Supplemental Information: Changing expectations about speed alters perceived motion direction Grigorios Sotiropoulos, Aaron R. Seitz, and Peggy Seriès Supplemental Data Detailed

More information

Supplementary Figure 1. Recording sites.

Supplementary Figure 1. Recording sites. Supplementary Figure 1 Recording sites. (a, b) Schematic of recording locations for mice used in the variable-reward task (a, n = 5) and the variable-expectation task (b, n = 5). RN, red nucleus. SNc,

More information

Supplementary Information Methods Subjects The study was comprised of 84 chronic pain patients with either chronic back pain (CBP) or osteoarthritis

Supplementary Information Methods Subjects The study was comprised of 84 chronic pain patients with either chronic back pain (CBP) or osteoarthritis Supplementary Information Methods Subjects The study was comprised of 84 chronic pain patients with either chronic back pain (CBP) or osteoarthritis (OA). All subjects provided informed consent to procedures

More information

REPORT DOCUMENTATION PAGE

REPORT DOCUMENTATION PAGE REPORT DOCUMENTATION PAGE Form Approved OMB NO. 0704-0188 The public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions,

More information

Lecturer: Rob van der Willigen 11/9/08

Lecturer: Rob van der Willigen 11/9/08 Auditory Perception - Detection versus Discrimination - Localization versus Discrimination - Electrophysiological Measurements - Psychophysical Measurements 1 Three Approaches to Researching Audition physiology

More information

Observational Learning Based on Models of Overlapping Pathways

Observational Learning Based on Models of Overlapping Pathways Observational Learning Based on Models of Overlapping Pathways Emmanouil Hourdakis and Panos Trahanias Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH) Science and Technology

More information

Phil 490: Consciousness and the Self Handout [16] Jesse Prinz: Mental Pointing Phenomenal Knowledge Without Concepts

Phil 490: Consciousness and the Self Handout [16] Jesse Prinz: Mental Pointing Phenomenal Knowledge Without Concepts Phil 490: Consciousness and the Self Handout [16] Jesse Prinz: Mental Pointing Phenomenal Knowledge Without Concepts Main Goals of this Paper: Professor JeeLoo Liu 1. To present an account of phenomenal

More information

Cell Responses in V4 Sparse Distributed Representation

Cell Responses in V4 Sparse Distributed Representation Part 4B: Real Neurons Functions of Layers Input layer 4 from sensation or other areas 3. Neocortical Dynamics Hidden layers 2 & 3 Output layers 5 & 6 to motor systems or other areas 1 2 Hierarchical Categorical

More information

2012 Course : The Statistician Brain: the Bayesian Revolution in Cognitive Science

2012 Course : The Statistician Brain: the Bayesian Revolution in Cognitive Science 2012 Course : The Statistician Brain: the Bayesian Revolution in Cognitive Science Stanislas Dehaene Chair in Experimental Cognitive Psychology Lecture No. 4 Constraints combination and selection of a

More information

Information-theoretic stimulus design for neurophysiology & psychophysics

Information-theoretic stimulus design for neurophysiology & psychophysics Information-theoretic stimulus design for neurophysiology & psychophysics Christopher DiMattina, PhD Assistant Professor of Psychology Florida Gulf Coast University 2 Optimal experimental design Part 1

More information

The storage and recall of memories in the hippocampo-cortical system. Supplementary material. Edmund T Rolls

The storage and recall of memories in the hippocampo-cortical system. Supplementary material. Edmund T Rolls The storage and recall of memories in the hippocampo-cortical system Supplementary material Edmund T Rolls Oxford Centre for Computational Neuroscience, Oxford, England and University of Warwick, Department

More information

Task Preparation and the Switch Cost: Characterizing Task Preparation through Stimulus Set Overlap, Transition Frequency and Task Strength

Task Preparation and the Switch Cost: Characterizing Task Preparation through Stimulus Set Overlap, Transition Frequency and Task Strength Task Preparation and the Switch Cost: Characterizing Task Preparation through Stimulus Set Overlap, Transition Frequency and Task Strength by Anita Dyan Barber BA, University of Louisville, 2000 MS, University

More information

Introduction to Functional MRI

Introduction to Functional MRI Introduction to Functional MRI Douglas C. Noll Department of Biomedical Engineering Functional MRI Laboratory University of Michigan Outline Brief overview of physiology and physics of BOLD fmri Background

More information

Expert System Profile

Expert System Profile Expert System Profile GENERAL Domain: Medical Main General Function: Diagnosis System Name: INTERNIST-I/ CADUCEUS (or INTERNIST-II) Dates: 1970 s 1980 s Researchers: Ph.D. Harry Pople, M.D. Jack D. Myers

More information

Model-Based fmri Analysis. Will Alexander Dept. of Experimental Psychology Ghent University

Model-Based fmri Analysis. Will Alexander Dept. of Experimental Psychology Ghent University Model-Based fmri Analysis Will Alexander Dept. of Experimental Psychology Ghent University Motivation Models (general) Why you ought to care Model-based fmri Models (specific) From model to analysis Extended

More information

Chapter 1. Introduction

Chapter 1. Introduction Chapter 1 Introduction 1.1 Motivation and Goals The increasing availability and decreasing cost of high-throughput (HT) technologies coupled with the availability of computational tools and data form a

More information

How has Computational Neuroscience been useful? Virginia R. de Sa Department of Cognitive Science UCSD

How has Computational Neuroscience been useful? Virginia R. de Sa Department of Cognitive Science UCSD How has Computational Neuroscience been useful? 1 Virginia R. de Sa Department of Cognitive Science UCSD What is considered Computational Neuroscience? 2 What is considered Computational Neuroscience?

More information

A model of the interaction between mood and memory

A model of the interaction between mood and memory INSTITUTE OF PHYSICS PUBLISHING NETWORK: COMPUTATION IN NEURAL SYSTEMS Network: Comput. Neural Syst. 12 (2001) 89 109 www.iop.org/journals/ne PII: S0954-898X(01)22487-7 A model of the interaction between

More information

Thalamocortical Feedback and Coupled Oscillators

Thalamocortical Feedback and Coupled Oscillators Thalamocortical Feedback and Coupled Oscillators Balaji Sriram March 23, 2009 Abstract Feedback systems are ubiquitous in neural systems and are a subject of intense theoretical and experimental analysis.

More information

A general error-based spike-timing dependent learning rule for the Neural Engineering Framework

A general error-based spike-timing dependent learning rule for the Neural Engineering Framework A general error-based spike-timing dependent learning rule for the Neural Engineering Framework Trevor Bekolay Monday, May 17, 2010 Abstract Previous attempts at integrating spike-timing dependent plasticity

More information

Theta sequences are essential for internally generated hippocampal firing fields.

Theta sequences are essential for internally generated hippocampal firing fields. Theta sequences are essential for internally generated hippocampal firing fields. Yingxue Wang, Sandro Romani, Brian Lustig, Anthony Leonardo, Eva Pastalkova Supplementary Materials Supplementary Modeling

More information

Analysis of in-vivo extracellular recordings. Ryan Morrill Bootcamp 9/10/2014

Analysis of in-vivo extracellular recordings. Ryan Morrill Bootcamp 9/10/2014 Analysis of in-vivo extracellular recordings Ryan Morrill Bootcamp 9/10/2014 Goals for the lecture Be able to: Conceptually understand some of the analysis and jargon encountered in a typical (sensory)

More information

Are Retrievals from Long-Term Memory Interruptible?

Are Retrievals from Long-Term Memory Interruptible? Are Retrievals from Long-Term Memory Interruptible? Michael D. Byrne byrne@acm.org Department of Psychology Rice University Houston, TX 77251 Abstract Many simple performance parameters about human memory

More information

Lateral Inhibition Explains Savings in Conditioning and Extinction

Lateral Inhibition Explains Savings in Conditioning and Extinction Lateral Inhibition Explains Savings in Conditioning and Extinction Ashish Gupta & David C. Noelle ({ashish.gupta, david.noelle}@vanderbilt.edu) Department of Electrical Engineering and Computer Science

More information

Shadowing and Blocking as Learning Interference Models

Shadowing and Blocking as Learning Interference Models Shadowing and Blocking as Learning Interference Models Espoir Kyubwa Dilip Sunder Raj Department of Bioengineering Department of Neuroscience University of California San Diego University of California

More information

Input-speci"c adaptation in complex cells through synaptic depression

Input-specic adaptation in complex cells through synaptic depression 0 0 0 0 Neurocomputing }0 (00) } Input-speci"c adaptation in complex cells through synaptic depression Frances S. Chance*, L.F. Abbott Volen Center for Complex Systems and Department of Biology, Brandeis

More information

Detecting Cognitive States Using Machine Learning

Detecting Cognitive States Using Machine Learning Detecting Cognitive States Using Machine Learning Xuerui Wang & Tom Mitchell Center for Automated Learning and Discovery School of Computer Science Carnegie Mellon University xuerui,tom.mitchell @cs.cmu.edu

More information

Inverse problems in functional brain imaging Identification of the hemodynamic response in fmri

Inverse problems in functional brain imaging Identification of the hemodynamic response in fmri Inverse problems in functional brain imaging Identification of the hemodynamic response in fmri Ph. Ciuciu1,2 philippe.ciuciu@cea.fr 1: CEA/NeuroSpin/LNAO May 7, 2010 www.lnao.fr 2: IFR49 GDR -ISIS Spring

More information

Chapter 1. Introduction

Chapter 1. Introduction Chapter 1 Introduction Artificial neural networks are mathematical inventions inspired by observations made in the study of biological systems, though loosely based on the actual biology. An artificial

More information