
Efficient Encoding of Natural Time Varying Images Produces Oriented Space-Time Receptive Fields

Rajesh P. N. Rao and Dana H. Ballard
Department of Computer Science, University of Rochester, Rochester, NY 14627

Technical Report 97, National Resource Laboratory for the Study of Brain and Behavior, Department of Computer Science, University of Rochester, August 1997. This research was supported by NIH/PHS research grant 1-P41-RR09283.

Abstract

The receptive fields of neurons in the mammalian primary visual cortex are oriented not only in the domain of space, but in most cases, also in the domain of space-time. While the orientation of a receptive field in space determines the selectivity of the neuron to image structures at a particular orientation, a receptive field's orientation in space-time characterizes important additional properties such as velocity and direction selectivity. Previous studies have focused on explaining the spatial receptive field properties of visual neurons by relating them to the statistical structure of static natural images. In this report, we examine the possibility that the distinctive spatiotemporal properties of visual cortical neurons can be understood in terms of a statistically efficient strategy for encoding natural time varying images. We describe an artificial neural network that attempts to accurately reconstruct its spatiotemporal input data while simultaneously reducing the statistical dependencies between its outputs. The network utilizes spatiotemporally summating neurons and learns efficient sparse distributed representations of its spatiotemporal input stream by using recurrent lateral inhibition and a simple threshold nonlinearity for rectification of neural responses. When exposed to natural time varying images, neurons in a simulated network developed localized receptive fields oriented in both space and space-time, similar to the receptive fields of neurons in the primary visual cortex.

1 Introduction

Since the seminal experiments of Hubel and Wiesel over 30 years ago [Hubel and Wiesel, 1962; 1968], it has been known that neurons in the mammalian primary visual cortex respond selectively to stimuli such as edges or bars at particular orientations. In many cases, the neurons are also directionally selective, i.e., they respond only to motion in a particular direction. An especially useful concept in characterizing the response properties of visual neurons has been the notion of a receptive field. The receptive field of a neuron is classically defined as the area of visual space within which stimuli such as bars or edges can elicit responses

from the neuron [Hartline, 1940]. Although they are a function of both space and time, early depictions of visual receptive fields were confined to spatial coordinates. In recent years, new mapping techniques have allowed the characterization of receptive fields in both space and time [Emerson et al., 1987; McLean and Palmer, 1989; Shapley et al., 1992; DeAngelis et al., 1993a] (see [DeAngelis et al., 1995] for a review). The new mapping results indicate that in most cases, the receptive field of a visual neuron changes over time. It has been noted that while the spatial structure of a receptive field indicates neuronal attributes such as preference for a particular orientation of bars or edges, the spatiotemporal structure of a neuron's receptive field governs important dynamical properties such as velocity and direction selectivity [Adelson and Bergen, 1985; Watson and Ahumada, 1985; Burr et al., 1986]. In particular, the orientation of a neuron's receptive field in space-time indicates the preferred direction of motion, while the slope of the oriented subregions gives an estimate of the preferred velocity [McLean and Palmer, 1989; Albrecht and Geisler, 1991; Reid et al., 1991; DeAngelis et al., 1993b; McLean et al., 1994].

An attractive approach to understanding the receptive field properties of visual neurons is to relate them to the statistical structure of natural images. Motivated by the property that natural images possess a characteristic 1/f^2 power spectrum [Field, 1987], Atick and Redlich [Atick, 1992; Atick and Redlich, 1992] provided an explanation of the center-surround structure of retinal ganglion receptive fields in terms of whitening or decorrelation of outputs in response to natural images. Several Hebbian learning algorithms for decorrelation have also been proposed [Bienenstock et al., 1982; Williams, 1985; Barrow, 1987; Linsker, 1988; Oja, 1992; Sanger, 1989; Foldiak, 1990; Atick and Redlich, 1993], many of which perform Principal Component Analysis (PCA). Although the PCA of natural images produces lower order components that resemble oriented filters [Baddeley and Hancock, 1991; Hancock et al., 1992], the higher order components are unlike any known neural receptive field profiles. In addition, the receptive fields obtained are global rather than localized feature detectors. Recently, Olshausen and Field showed that a neural network that includes the additional constraint of maximizing the sparseness of the distribution of output activities develops, when trained on static natural images, synaptic weights with localized, oriented spatial receptive fields [Olshausen and Field, 1996] (see also [Harpur and Prager, 1996; Rao and Ballard, 1997a] and related work on projection pursuit [Huber, 1985] based learning methods [Intrator, 1992; Law and Cooper, 1994; Shouval, 1995]). Similar results have also been obtained using an algorithm that extracts the independent components of a set of static natural images [Bell and Sejnowski, 1997]. These algorithms are all based directly or indirectly on Barlow's principle of redundancy reduction [Barlow, 1961; 1972; 1989; 1994], where the goal is to learn feature detectors whose outputs are as statistically independent as possible. The underlying motivation is that sensory inputs such as images are generally comprised of a set of independent objects or features whose components are highly correlated. By learning detectors for these independent features, the sensory system can develop accurate internal models of the sensory environment and can efficiently represent external events as sparse conjunctions of independent features.

In this paper, we explore the possibility that the distinctive spatiotemporal receptive field properties of visual cortical neurons can be understood in terms of a statistically efficient strategy for encoding natural time varying images [Eckert and Buchsbaum, 1993; Dong and Atick, 1995]. We describe an artificial neural network that attempts to accurately reconstruct its spatiotemporal input data while simultaneously reducing the statistical dependencies between its outputs, as advocated by the redundancy reduction principle. Our approach utilizes a spatiotemporal generative model that can be viewed as a simple extension of the spatial generative model used by Harpur and Prager [Harpur and Prager, 1996], Olshausen and Field [Olshausen and

Field, 1996], Rao and Ballard [Rao and Ballard, 1997a], and others. The spatiotemporal generative model allows neurons in the network to perform not just a spatial summation of the current input, but a spatiotemporal summation of both current and past inputs over a finite spatiotemporal extent. The network learns efficient sparse distributed representations of its spatiotemporal input stream by utilizing lateral inhibition [Foldiak, 1990] and a simple threshold nonlinearity for rectification of neural responses [Lee and Seung, 1997; Hinton and Ghahramani, 1997]. When exposed to natural time varying images, neurons in a simulated network developed localized receptive fields oriented in both space and space-time, similar to the receptive fields of neurons in the primary visual cortex.

2 Spatial Generative Models

The idea of spatial generative models has received considerable attention in recent studies pertaining to neural coding [Hinton and Sejnowski, 1986; Jordan and Rumelhart, 1992; Zemel, 1994; Dayan et al., 1995; Hinton and Ghahramani, 1997], although the roots of the approach can be traced back to early ideas in control theory such as Wiener filtering [Wiener, 1949] and Kalman filtering [Kalman, 1960]. In this section, we first consider a class of spatial generative models that have previously been used in the neural modeling literature for explaining spatial receptive field properties [Harpur and Prager, 1996; Olshausen and Field, 1996; Rao and Ballard, 1997a]. This will serve to motivate the spatiotemporal models we will be concerned with later.

Assume that an image, denoted by a vector I of pixels, can be represented as a linear combination of a set of basis vectors U_j:

    I = Σ_j r_j U_j    (1)

The coefficients r_j can be regarded as an internal representation of spatial characteristics of the image I, as interpreted using the internal model defined by the basis vectors. In terms of a neuronal network, the coefficients correspond to the activities or firing rates of neurons while the vectors in the basis matrix correspond to the synaptic weights of neurons. It is convenient to rewrite the above equation in matrix form as:

    I = U r    (2)

where U is the matrix whose columns consist of the basis vectors U_j and r is the vector consisting of the coefficients r_j. The goal is to estimate the coefficients r for a given image and, on a longer time scale, learn appropriate basis vectors in U. A standard approach is to define a least-squared error criterion of the form:

    E(U, r) = Σ_i (I_i - (U r)_i)^2    (3)
            = (I - U r)^T (I - U r)    (4)

where I_i denotes the i-th pixel of I and (U r)_i denotes the inner product of the i-th row of U with r. Note that E is simply the sum of squared pixel-wise errors between the input I and the image reconstruction U r. Estimates for U and r can be obtained by minimizing E [Williams, 1985; Daugman, 1988; Pece, 1992; Harpur and Prager, 1996; Olshausen and Field, 1996; Rao and Ballard, 1997a].
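
As a concrete illustration, the following minimal sketch evaluates this least-squares criterion for a toy model; the array sizes and random initialization are ours, not values from the report.

    import numpy as np

    rng = np.random.default_rng(0)

    n_pixels, n_neurons = 64, 100                    # illustrative sizes only
    U = rng.standard_normal((n_pixels, n_neurons))   # columns are basis vectors
    r = rng.standard_normal(n_neurons)               # neural activities (coefficients)

    I = U @ r                                        # an image generated by the model

    def reconstruction_error(I, U, r):
        """Sum of squared pixel-wise errors, E = (I - U r)^T (I - U r)."""
        residual = I - U @ r
        return float(residual @ residual)

    print(reconstruction_error(I, U, r))             # exactly 0 for a model-generated image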

One can obtain a probabilistic generative model of the image generation process by utilizing a Gaussian noise process n to model the differences between I and U r. The resulting stochastic generative model becomes:

    I = U r + n    (5)

If a zero mean Gaussian noise process n with unit covariance is assumed, one can show that E is the negative log likelihood of generating the input I (see, for example, [Bryson and Ho, 1975]). Thus, minimizing E is equivalent to maximizing the likelihood of the observed data.

Unfortunately, in many cases, minimization of a least-squares optimization function such as E without additional constraints generates solutions that are far from being adequate descriptors of the true input generation process. For example, a popular solution to the least-squares minimization criterion is principal component analysis (PCA), also sometimes referred to as eigenvector or singular value decomposition (SVD) of the input covariance matrix. PCA optimizes E by finding a set of mutually orthogonal basis vectors that are aligned with the directions of maximum variance in the input data. This is perfectly adequate in the case where the input data clouds are Gaussian and capturing pairwise statistics suffices. However, statistical studies have shown that natural image distributions are highly non-Gaussian and cannot be adequately described using orthogonal bases [Field, 1994]. Thus, additional constraints are required in order to guide the optimization process towards solutions that more accurately reflect the input generation process.

One way of adding constraints is to take into account the prior distributions of the parameters r and U. Thus, one can minimize an optimization criterion of the form:

    E1 = E + g(r) + h(U)    (6)

where g(r) and h(U) are terms related to the prior distributions of the parameters r and U. In particular, they denote the negative log of the prior probabilities of r and U respectively. When viewed in the context of information theory, these negative log probability terms in E1 can be interpreted as representing the cost of coding the parameters in bits (in base 2). Thus, the function E1 can be given an interpretation in terms of the minimum description length (MDL) principle [Rissanen, 1989; Zemel, 1994], namely, that solutions are required not only to be accurate but also to be cheap in terms of coding length. This formalizes the well-known Occam's Razor principle that advocates simplicity over complexity among solutions to a problem. One may also note that minimizing E1 is equivalent to maximizing the posterior probability of the parameters given the input data (maximum a posteriori (MAP) estimation).

Specific choices of g and h determine the nature of the internal representations that will be learned. For example, Olshausen and Field [Olshausen and Field, 1996] proposed functions of the form g(r) = Σ_i S(r_i), with S(x) chosen as log(1 + x^2), |x|, or -e^(-x^2), to encourage sparseness in r. Alternately, one can use a zero-mean multivariate Gaussian prior on r [Rao and Ballard, 1997a] to yield the negative log of the prior density:

    g(r) = α r^T L r    (7)

where α is a positive constant and L denotes a set of lateral weights. The matrix L represents the inverse covariance matrix of r. We show in the next section that this choice enforces lateral inhibition among the output neurons, thereby encouraging sparse distributed representations, and leads to an anti-Hebbian learning rule for the lateral weights equivalent to Foldiak's well-known adaptation rule [Foldiak, 1990].
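
A small sketch of these prior terms may help fix the notation; the choice of S and all constants below are illustrative assumptions, not the report's settings.

    import numpy as np

    def g_gaussian(r, L, alpha=0.1):
        """Negative log of a zero-mean Gaussian prior on r with
        inverse covariance proportional to L (Equation 7)."""
        return float(alpha * r @ L @ r)

    def g_sparse(r, S=np.abs):
        """Sparseness-encouraging penalty sum_i S(r_i); S may be chosen as
        |x|, log(1 + x^2), or -exp(-x^2), as in Olshausen and Field."""
        return float(np.sum(S(r)))

    # The MAP/MDL cost E1 = E + g(r) + h(U) of Equation 6, using the
    # Gaussian prior for r and a quadratic penalty for U (Equation 8, below).
    def E1(I, U, r, L, alpha=0.1, beta=0.01):
        residual = I - U @ r
        return float(residual @ residual) + g_gaussian(r, L, alpha) + beta * np.sum(U**2)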

Figure 1: Generative Models. (A) Linear spatial generative model used in [Harpur and Prager, 1996; Olshausen and Field, 1996; Rao and Ballard, 1997a]. A given input image I is assumed to be generated by multiplying a basis vector matrix U with a set of hidden causes represented by the spatial response vector r. (B) Spatiotemporal generative model used in this paper. An input image sequence I(1), ..., I(k) is assumed to be generated by multiplying a set of basis matrices U(1), ..., U(k) with a spatiotemporal response vector r.

In the case of U, for the sake of simplicity, we will assume a zero-mean Gaussian distribution with an isotropic covariance, yielding the following value for h:

    h(U) = β ||U||^2    (8)

where β is a positive constant and ||U||^2 denotes the sum of squares of the elements of the given matrix.

A final constraint is motivated both by biology and by information coding concerns. We will constrain the network outputs (the coefficients r_i) to be non-negative, acknowledging the fact that the firing rate of a neuron cannot be negative. The non-negativity constraint is especially attractive in information coding terms since it causes an infinite density at zero for the rectified r_i and a consequent low coding cost at zero [Hinton and Ghahramani, 1997], which encourages sparseness among the outputs of the network. We are overlooking the possibility that a single neuron can signal both positive and negative quantities by raising or lowering its firing rate with respect to a fixed background firing rate corresponding to zero.

3 Spatiotemporal Generative Models

In the previous section, the input data consisted of static images I, and we used a single basis matrix U to capture the statistical structure of the space of input images. Furthermore, the spatial structure given by the pixels of the image I was internally represented by a single spatial response vector r. We now draw an analogy between time and space to define a spatiotemporal generative model. Suppose our training set consists of different sequences of k images each, a given sequence being denoted by the vectors I(1), ..., I(k). We will use a set of k basis matrices U(1), ..., U(k). For each t, U(t) will be used to capture the statistical structure of the images occurring at the t-th time step in the training sequences. As will become clear in the next section, the underlying motivation here is that the neurons in the network perform not just a spatial summation, but a space-time summation of inputs over a finite spatiotemporal extent (see Figure 2). Thus, since inputs are weighted differentially depending on their spatiotemporal history, we need to learn a set of k synaptic weight matrices, one for each time instant. These spatiotemporal synaptic weights, as given by the U(t), in turn determine the spatiotemporal receptive fields of the neurons in the network. A single spatiotemporal response vector r will be used to characterize a given spatiotemporal image sequence I(1), ..., I(k), in much the same way as a single spatial response vector was previously used to characterize the spatial structure of the pixels of a given static image. We thus obtain the following space-time analog of Equation 2:

    I(t) = U(t) r,   t = 1, ..., k    (9)

Note that in the special case where k = 1, we obtain Equation 2. From a probabilistic perspective, one can rewrite Equation 9 in the form of the following stochastic generative model:

    I(t) = U(t) r + n(t)    (10)

where n(t) is a stochastic noise process accounting for the differences between I(t) and U(t) r. Once again, it is easy to see that Equation 5 is a special case of the above generative model, where k = 1. We can now define the following space-time analog of the optimization function E:

    E = Σ_t Σ_i (I_i(t) - (U(t) r)_i)^2    (11)
      = Σ_t (I(t) - U(t) r)^T (I(t) - U(t) r)    (12)

This is simply the sum of squared pixel-wise reconstruction errors across both space and time. As in the previous section, if n is assumed to be a zero mean Gaussian with unit covariance, one can show that E is the negative log likelihood of generating the inputs I(1), ..., I(k). Thus, minimizing E is equivalent to maximizing the likelihood of the spatiotemporal input sequence.
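
To make the model concrete, here is a minimal sketch that draws a sequence from the generative model of Equation 10 and evaluates the space-time error of Equations 11-12; all sizes and the noise level are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    k, n_pixels, n_neurons = 8, 64, 100   # illustrative sizes only

    Us = [rng.standard_normal((n_pixels, n_neurons)) for _ in range(k)]
    r = rng.standard_normal(n_neurons)

    # I(t) = U(t) r + n(t), Equation 10
    seq = [Us[t] @ r + 0.01 * rng.standard_normal(n_pixels) for t in range(k)]

    def spacetime_error(seq, Us, r):
        """E = sum_t (I(t) - U(t) r)^T (I(t) - U(t) r), Equations 11-12."""
        return sum(float((I - U @ r) @ (I - U @ r)) for I, U in zip(seq, Us))

    print(spacetime_error(seq, Us, r))    # small; driven only by the noise term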

As in the previous section, by assuming Gaussian priors on the parameters r and U(t), we obtain the following MDL-based cost function:

    E1 = E + α r^T L r + β Σ_t ||U(t)||^2    (13)

We will additionally constrain the elements of r to be non-negative as discussed in the previous section. Thus, we are now left with the task of finding optimal estimates of r, the U(t), and L. This can be achieved by minimizing E1 using gradient descent as discussed in the next section.

4 Network Dynamics and Learning Rules

In this section, we describe gradient-descent based estimation rules for obtaining optimal estimates of r, U(t), and L. In the case where the input consists of batch data, one may alternate between the optimization of r for fixed U(t) and L, and the optimization of U(t) and L for fixed r, thereby implementing a form of the expectation-maximization (EM) algorithm [Dempster et al., 1977]. In the case of on-line data, which is the case considered in the experiments below, the optimization of r occurs simultaneously with that of U(t) and L. However, the learning rates for U(t) and L are set to values much smaller than the adaptation rate of r.

4.1 Estimation of r

An optimal estimate of r can be obtained by performing stochastic gradient descent on E1 with respect to r:

    dr/dτ = -(η/2) dE1/dr = η ( Σ_t U(t)^T (I(t) - U(t) r) - α L r )    (14)

where η governs the rate of descent towards the minima. The above equation is relatively easy to interpret: in order to modify r towards the optimal estimate, we need to obtain the residual error (I(t) - U(t) r) between the input at time t and its reconstruction U(t) r. The residual errors for the various time steps are then weighted by their corresponding weights U(t)^T to obtain a spatiotemporal sum, which is modified by lateral inhibition due to the term -α L r. In a neural implementation, the individual rows of the matrices U(t)^T would comprise the synaptic weights of a single spatiotemporally summating neuron. Thus, the i-th row of U(t)^T would represent the effect of the synapses of the i-th neuron for time instant t. Figure 2A shows a network implementation of the above dynamics.

A possible problem with this implementation is the need for computing global residual errors at each iteration of the estimation process. This becomes especially problematic in the case where the data is being obtained on-line, since one would need to keep the past images in memory and, in addition, use separate sets of neurons representing the matrices U(t) for generating the signals U(t) r at each iteration. The dynamics can be implemented more locally by rewriting Equation 14 in the following form:

    dr/dτ = η ( Σ_t U(t)^T I(t) - W r - α L r )    (15)

where W = Σ_t U(t)^T U(t). Note that this form of the dynamics does not require that residual errors be computed at each iteration. Rather, we simply perform a spatiotemporal filtering of the inputs I(t) using the synaptic weight matrices U(t)^T for t = 1, ..., k and then subtract two lateral terms, one involving W and the other involving L.
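
The equivalence of the two forms can be checked directly; the following sketch implements both sides of Equations 14-15 under assumed step and inhibition parameters.

    import numpy as np

    def dr_residual_form(seq, Us, r, L, eta=0.1, alpha=0.1):
        """dr ~ eta * (sum_t U(t)^T (I(t) - U(t) r) - alpha L r), Equation 14."""
        drive = sum(U.T @ (I - U @ r) for I, U in zip(seq, Us))
        return eta * (drive - alpha * L @ r)

    def dr_local_form(seq, Us, r, L, eta=0.1, alpha=0.1):
        """dr ~ eta * (sum_t U(t)^T I(t) - W r - alpha L r), Equation 15,
        with W = sum_t U(t)^T U(t) precomputed from the weights alone."""
        W = sum(U.T @ U for U in Us)
        drive = sum(U.T @ I for I, U in zip(seq, Us))
        return eta * (drive - W @ r - alpha * L @ r)

    # Both forms return the same update (up to floating point error), but the
    # local form avoids reconstructing the image at every iteration.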

Figure 2: Alternate Network Implementations of the Response Vector Dynamics. (A) shows a globally recurrent architecture for implementing the dynamics of the response vector r (Equation 14). The architecture requires the calculation of residual error signals [Mumford, 1992; Barlow, 1994; Pece, 1992; Rao and Ballard, 1997a] between predicted images and actual images. These errors are filtered by a set of spatiotemporally summating neurons whose synapses represent the matrices U(t)^T. The result is used to correct r in conjunction with recurrent lateral inhibition and rectification. (B) shows a biologically more plausible locally recurrent architecture for implementing the dynamics of r. Rather than computing global residual errors, the inputs are directly filtered by a set of spatiotemporally summating neurons representing U(t)^T. The result is then further modified by recurrent excitation/inhibition due to the weights W (see text) as well as lateral inhibition due to the weights L.

While the term involving L is exclusively inhibitory, the components of the vector -W r may be excitatory or inhibitory. Once again, the rows of the matrices U(t)^T correspond to the synapses of spatiotemporally summating neurons, with the i-th row of U(t)^T representing the synaptic weights of the i-th spatiotemporal neuron for time instant t. Expressing the above dynamics in discrete form and adding a final constraint of non-negativity results in the following equation for updating r until convergence is achieved for the given input sequence:

    r(m+1) = Θ[ r(m) + η ( Σ_t U(t)^T I(t) - W r(m) - α L r(m) ) ]    (16)

where Θ is a threshold nonlinearity for rectification, Θ(x) = max(x, 0), applied to all components of the given vector. It is interesting to note the similarity between the dynamics as given by the stochastic gradient descent rule above and those proposed by Lee and Seung for their Conic network [Lee and Seung, 1997]. In particular, the above equation can be regarded as a spatiotemporal extension of the dynamics used in the Conic network. A network implementation of this equation is shown in Figure 2B.

4.2 On-Line Estimation

The dynamics described above can be extended to the more realistic case where the inputs are being encountered on-line. Let T represent the current time instant. The on-line form of Equation 15 is then given by:

    dr/dτ = η ( Σ_{t=1}^{k} U(t)^T I(T-t+1) - W r - α L r )    (17)

Expressing the above dynamics in discrete form and enforcing the constraint of non-negativity yields the following rule for updating r at each time instant:

    r(T+1) = Θ[ r(T) + η ( Σ_{t=1}^{k} U(t)^T I(T-t+1) - W r(T) - α L r(T) ) ]    (18)

where the operator Θ again denotes rectification. In summary, the current spatiotemporal response is determined by three factors: the previous spatiotemporal response r(T), the past k inputs I(T), ..., I(T-k+1), and lateral inhibition due to W and L. These three factors at time instant T are combined via summation, followed by rectification, to yield the responses at time instant T+1. The consideration of only the past k inputs, rather than the entire input history, is consistent with the observation that cortical neurons process stimuli within restricted temporal epochs (see, for example, [DeAngelis et al., 1995]).

4.3 Learning Rules

A learning rule for determining the optimal estimate for each U(t) can be obtained by performing gradient descent on E1 with respect to U(t), for each t = 1, ..., k:

    dU(t)/dτ = -(η_u/2) dE1/dU(t) = η_u ( (I(t) - U(t) r) r^T - β U(t) )    (19)

where η_u is the learning rate parameter.
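
The on-line response update of Equation 18 can be sketched as follows; the buffer ordering (newest frame paired with U(1)) and the rate constants are our assumptions, not the report's.

    import numpy as np

    def rectify(x):
        """Threshold nonlinearity Θ, applied componentwise."""
        return np.maximum(x, 0.0)

    def update_r(r, buffer, Us, W, L, eta=0.1, alpha=0.1):
        """One on-line step of Equation 18. `buffer` holds the k most recent
        inputs I(T), I(T-1), ..., I(T-k+1), newest first, so that U(1)
        weights the current frame (an assumed ordering)."""
        drive = sum(U.T @ I for U, I in zip(Us, buffer))
        return rectify(r + eta * (drive - W @ r - alpha * L @ r))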

Note that for the on-line case, I(t) is replaced by its on-line counterpart I(T-t+1) in Equation 19. Similarly, a learning rule for the lateral weights L can be obtained by performing gradient descent on E1 with respect to L:

    dL/dτ = η_l ( r r^T - λ L )    (20)

Note that this is a Hebbian learning rule with decay. In conjunction with the inhibition in the dynamics of r (Equation 16), the above learning rule can be seen to be equivalent to Foldiak's anti-Hebbian learning rule [Foldiak, 1990], if the diagonal terms of L, which implement self-inhibition, are set to zero.

5 Experimental Results

The algorithms derived in the previous section were tested on a set of five digitized natural images from the Ansel Adams Fiat Lux collection at the UCR/California Museum of Photography (Figure 3A, images reproduced with permission). The images had grayscale pixel values between 0 and 255. Each image was preprocessed by filtering with a circularly symmetric zero-phase whitening/low-pass filter with the spatial frequency profile [Olshausen and Field, 1996; Atick and Redlich, 1992]:

    K(f) = f e^{-(f/f_0)^4}    (21)

where the cut-off frequency f_0 = 200 cycles/image. As described in [Olshausen and Field, 1997], the whitening component of the filter, f, performs sphering for natural image data by attenuating the low frequencies and boosting the higher frequencies. The low-pass exponential component e^{-(f/f_0)^4} helps to reduce the effects of noise/aliasing at high frequencies and eliminates the artifacts of using a rectangular sampling lattice. Figure 3B shows the frequency profile of the whitening/low-pass filter in the 1-D case while Figure 3C shows the 2-D case. The corresponding spatial profile obtained via inverse Fourier transform is shown in Figure 3D. The spatial profile resembles the well-known center-surround receptive fields characteristic of retinal ganglion cells. Atick and Redlich [Atick and Redlich, 1992; Atick, 1992] have shown that the measured spatial frequency profiles of retinal ganglion cells are well approximated by filters resembling K(f). Figure 3E shows the results of filtering an image from the training set using K(f).

The training data comprised sequences of contiguous image patches extracted from the filtered natural images. The relative size of a patch is shown as a box labeled RF in Figure 3E. Starting from an image patch extracted at a randomly selected location in a given training image, the next training patch was extracted by moving in one of 8 directions as shown in Figure 3F. This was achieved by incrementing or decrementing the x and/or y coordinate of the current patch location by one according to the current direction of motion. After a fixed number of image patches, the current direction was set randomly to one of the 8 possible directions. The direction was also randomly changed in case an image boundary was encountered. A new training image was obtained after a fixed number of patches, thereby cycling through the five preprocessed training images.

In the first experiment, a network of model neurons was trained on sequences of natural image patches. The temporal extent of processing was set to k = 8, resulting in eight sets of synaptic weight vectors. These vectors, which form the rows of U(t)^T, t = 1, ..., 8, were initialized to uniformly random values and normalized to length one.
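
The whitening/low-pass preprocessing of Equation 21 above can be sketched in the Fourier domain as below; the 512 x 512 image size is an assumption carried over from the Olshausen and Field preprocessing that the text cites, and f0 = 200 cycles/image follows the text.

    import numpy as np

    def whiten_lowpass(image, f0=200.0):
        """Filter a square grayscale image with K(f) = f * exp(-(f/f0)^4),
        Equation 21, applied as a zero-phase filter in the Fourier domain."""
        n = image.shape[0]
        fx = np.fft.fftfreq(n) * n                    # frequencies in cycles/image
        fy = np.fft.fftfreq(n) * n
        f = np.sqrt(fx[None, :]**2 + fy[:, None]**2)  # radial spatial frequency
        K = f * np.exp(-(f / f0)**4)                  # whitening ramp times low-pass
        return np.real(np.fft.ifft2(np.fft.fft2(image) * K))

    # Example usage on a random stand-in for a 512 x 512 natural image:
    filtered = whiten_lowpass(np.random.rand(512, 512))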

Figure 3: Training Paradigm. (A) The five natural images used in the experiments. The original negatives are from the Ansel Adams Fiat Lux collection at the UCR/California Museum of Photography (images reproduced with permission). (B) 1-D spatial frequency response profile of the zero-phase whitening/low-pass filter used for preprocessing the natural images. (C) The full 2-D response profile of the same filter. (D) The profile in the space domain obtained by inverse Fourier transform (IFT), showing the familiar center-surround receptive field characteristic of retinal ganglion cells (see text). (E) A natural image after being filtered with the center-surround filter in (D). The relative size of a receptive field (the same size as the input image patches) is shown as a box labeled RF in the bottom right corner. (F) depicts the process of extracting image patches from the filtered natural images for training the neural network. Patches are extracted from a square window as it moves in one of 8 directions within a given training image (see text for more details).
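
The patch-extraction walk of Figure 3F can be sketched as a generator; the patch size, step budget, and direction re-draw interval below are illustrative assumptions.

    import numpy as np

    # The 8 unit steps (diagonals included) available to the moving window.
    DIRS = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]

    def patch_walk(image, patch=8, n_steps=1000, redraw_every=10, rng=None):
        """Yield training patches by random-walking a square window one pixel
        at a time, re-drawing the direction periodically and at boundaries."""
        rng = rng or np.random.default_rng()
        h, w = image.shape
        y = int(rng.integers(0, h - patch))
        x = int(rng.integers(0, w - patch))
        dy, dx = DIRS[rng.integers(len(DIRS))]
        for step in range(n_steps):
            yield image[y:y + patch, x:x + patch]
            if (step + 1) % redraw_every == 0:
                dy, dx = DIRS[rng.integers(len(DIRS))]
            # bounce off image boundaries by re-drawing the direction
            while not (0 <= y + dy <= h - patch and 0 <= x + dx <= w - patch):
                dy, dx = DIRS[rng.integers(len(DIRS))]
            y, x = y + dy, x + dx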

Figure 4: Example of Receptive Field Dynamics after Learning. (A) shows the initial set of synaptic weights (or receptive fields) for a single model neuron at time steps t = 1 through t = 8. The weights shown comprise the first row of U(t)^T for t = 1, ..., 8. The components of these vectors were initialized to random values according to the uniform distribution and the resulting vectors were normalized to length one. (B) shows the same set of synaptic weights after training the network on natural images (see text for details). The synaptic profiles at each time step resemble oriented Gabor wavelets [Daugman, 1980; Marcelja, 1980]. (C) depicts these synaptic weights using a classical 2-D receptive field representation, where the dark regions are inhibitory and the bright regions are excitatory. The entire sequence of synaptic weights suggests that this neuron is tuned towards dark bars moving diagonally from the bottom left corner of the receptive field to the top right corner.

After the presentation of each training image patch, the response vector r was updated according to the on-line estimation rule given by Equation 18. The weights U(t) were adapted at each time step using Equation 19, and the lateral weights L were similarly adapted using Equation 20. The diagonal of L, representing the self-inhibitory decay terms or leakiness parameters in the leaky-integrator equation 18, was held fixed, although qualitatively similar results were obtained if the diagonal was also adapted along with the lateral weights. The learning rate parameters η_u and η_l were gradually decreased by periodic division after a fixed number of image presentations, and a stable solution was arrived at after an extended period of training.

Figure 4 shows a set of synaptic weight vectors (the first row of U(t)^T for t = 1, ..., 8) for a model neuron before and after learning (A and B respectively). Since these synaptic vectors form the feedforward weighting function of neurons in the network (see Figure 2B), they can be roughly interpreted as receptive fields or spatial impulse response functions for the eight time steps. The receptive fields after learning resemble localized Gabor wavelets, which have previously been shown to approximate well the receptive field weighting profiles of simple cells in the mammalian primary visual cortex [Daugman, 1980; Marcelja, 1980; Olshausen and Field, 1996]. In addition, the model neuron can be seen to be tuned towards dark bars moving diagonally from the bottom left corner of the receptive field to the top right corner.
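
Putting the pieces together, one full on-line training step of the procedure described above might look as follows; every rate constant here (eta, eta_u, eta_l, alpha, beta, lam) is an assumed placeholder, since the report's actual values did not survive.

    import numpy as np

    def train_step(buffer, r, Us, L, eta=0.1, eta_u=0.01, eta_l=0.01,
                   alpha=0.1, beta=0.01, lam=0.01):
        """One on-line step: response update (Eq. 18), then weight updates
        (Eqs. 19-20). `buffer` holds the k most recent inputs, newest first."""
        W = sum(U.T @ U for U in Us)
        drive = sum(U.T @ I for U, I in zip(Us, buffer))
        r = np.maximum(r + eta * (drive - W @ r - alpha * L @ r), 0.0)   # Eq. 18
        for t, U in enumerate(Us):                                       # Eq. 19
            Us[t] = U + eta_u * (np.outer(buffer[t] - U @ r, r) - beta * U)
        diag = np.diag(L).copy()
        L = L + eta_l * (np.outer(r, r) - lam * L)                       # Eq. 20
        np.fill_diagonal(L, diag)     # keep self-inhibition terms fixed, as in the text
        return r, Us, L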

Figure 5: Further Examples of Receptive Fields. (a) through (p) show examples of synaptic weights for sixteen other model neurons from the network described in Figure 4, at time steps t = 1 through t = 8. Several varieties of orientation selectivity can be discerned, with each neuron being tuned towards a particular direction of motion. Other neurons, such as those depicted in (o) and (p), appear to be selective for oriented image structures that are approximately stationary or contain motion along the preferred orientation.

A number of other examples of receptive fields developed by the network are shown in Figure 5. The set of spatiotemporal synaptic weights together form an overcomplete set of basis functions for representing spatiotemporal input sequences.

In a second set of experiments, we investigated the importance of rectification and lateral inhibition in learning visual cortex-like receptive fields. Three networks using the same set of parameters, initialization conditions, and training patches as the one used in Figures 4 and 5 were subjected to three different conditions. In the first network, rectification was removed but the lateral weights were learned and used as before, while in the second network, the lateral weights (including the diagonal terms) were disabled but rectification was retained. The third network was trained without either lateral inhibition or rectification of responses. Figure 6 compares the results obtained after an equal number of image presentations for the same model neuron as in Figure 4. This specific example, as well as the results obtained for other model neurons in the networks, suggests that both lateral inhibition and rectification are necessary for obtaining synaptic profiles resembling visual cortical receptive fields.

In a third set of experiments, the results obtained above were verified for receptive fields with a larger spatial and temporal extent. The training paradigm involving the natural image patches was identical to the first case. A network of model neurons was trained on sequences of larger natural image patches, with the temporal extent of processing set to k = 12. The remaining parameters were set in the same manner as in the first experiment.

Figure 6: The Need for Lateral Inhibition and Rectification during Learning. The four rows show the synaptic profiles (t = 1 through t = 8) obtained after training for the same neuron (from Figure 4) with identical initial conditions and identical training regimes but with lateral inhibition and/or rectification removed: with rectification and lateral inhibition, with lateral inhibition only, with rectification only, and without rectification or lateral inhibition. As can be seen from this specific example, which is typical of the result obtained for other neurons in the networks, both lateral inhibition and rectification appear to be necessary for obtaining synaptic profiles resembling visual cortical receptive fields. Neither rectification nor lateral inhibition alone sufficed to produce the desired neural receptive fields.

The diagonal of L was again held fixed, and the learning rate parameters were gradually decreased until a stable solution was arrived at. Figure 7 shows a set of synaptic weight vectors (or receptive fields) for a model neuron before and after learning. The model neuron can be seen to be tuned towards dark horizontal bars moving downwards. Several other examples of receptive fields developed by the network are shown in Figure 8. A majority of these neurons appear to be tuned towards oriented image structures moving in a particular direction. Some neurons, such as the one shown in (k), exhibit more complex dynamics, involving two or more inhibitory subregions coalescing into one, while other neurons, such as the one shown in (l), appear to be tuned towards oriented image structures that are either approximately stationary or contain some form of motion along the preferred orientation.

The space-time receptive fields for the model neurons in Figures 7 and 8 are shown in Figure 9. These x-t plots were obtained by integrating the 3-D spatiotemporal receptive field data along the neuron's preferred orientation, as illustrated in the top two rows of the figure. The inverse of the slope of the oriented subregions in the space-time receptive field provides an estimate of the neuron's preferred velocity (see, for example, [Adelson and Bergen, 1985]). Thus, in the case of the top two rows in the figure, the slope is approximately one, indicating a preferred speed of approximately one pixel/time step for these two neurons in their respective directions. The space-time receptive fields in Figure 9 (b) through (l) are those for the model neurons in Figure 8 (b) through (l). Note that even though the training image window moved at one pixel/time step, in some cases, such as (d) and (h), the preferred speed is less than one pixel/time step due to the well-known aperture effect [Adelson and Movshon, 1982]. In the extreme case of an approximately stationary receptive field as in (l), the space-time receptive field indicates a preferred speed of zero.
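
The construction of the x-t plots just described can be sketched as a one-line collapse of the weight data; the (k, height, width) layout and the axis convention are assumptions on our part.

    import numpy as np

    def spacetime_rf(weights_tyx, orientation="horizontal"):
        """Collapse a (k, height, width) stack of spatial receptive fields
        into a space-time plot by summing along the axis parallel to the
        neuron's preferred orientation (as in Figure 9)."""
        axis = 2 if orientation == "horizontal" else 1
        return weights_tyx.sum(axis=axis)   # rows are time steps, columns are space

    # The slope of the oriented subregions in the returned plot varies
    # inversely with the neuron's preferred velocity (Adelson and Bergen, 1985).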

Figure 7: Example of Receptive Field Dynamics after Learning. (A) shows the initial set of random and normalized synaptic weights for a single model neuron in the network, at time steps t = 1 through t = 12. (B) shows the same set of synaptic weights after training the network on the filtered natural images (see text for details). (C) depicts these synaptic weights using a classical 2-D receptive field representation (dark regions are inhibitory, bright regions are excitatory). The sequence of synaptic weights suggests that this neuron is tuned towards dark horizontal bars moving downwards.

Figure 8: Further Examples of Receptive Fields. (a) through (l) show examples of synaptic weights for twelve other model neurons from the network used in Figure 7. A majority of these neurons are tuned towards oriented image structures moving in a particular direction. Some neurons, such as the one shown in (k), exhibit more complex dynamics, for example, involving two or more inhibitory subregions coalescing into one. Other neurons, such as the one shown in (l), appear to be tuned towards oriented image structures that are either approximately stationary or contain some motion along the preferred orientation.

Figure 9: Space-Time Receptive Fields. The top two rows depict the process of constructing space-time receptive field profiles from temporal sequences of 2-D spatial receptive fields. The top row is from Figure 7 while the second row is from Figure 8(a). The x-t plots are obtained by integrating the 3-D spatiotemporal data along the neuron's preferred orientation (in these two cases, along the horizontal direction). Note that orientation in x-t space indicates the neuron's preferred direction of motion (in this case, upwards or downwards). In addition, the slope of approximately one in both cases indicates a preferred speed of approximately one pixel/time step for these neurons (since the slope varies inversely with the preferred velocity [Adelson and Bergen, 1985]). (b) through (l) show the space-time receptive fields for the model neurons in Figure 8 (b) through (l). In some cases, such as (d) and (h), the preferred speed is less than one pixel/time step due to the aperture effect [Adelson and Movshon, 1982]. In the case of (l), the preferred speed is zero.

In order to evaluate the nature of the internal representations used by the network, we exposed the trained network and a companion network without the lateral inhibitory weights to a sequence of images depicting a bright vertical bar moving to the right on a dark background (Figure 10). In both cases, the responses of the model neurons in each network were plotted as histograms at each time step, with the first vertical bar corresponding to the response of the first neuron, the second to that of the second neuron, and so on. As seen in Figure 10, the trained network generates sparse distributed representations of the spatiotemporal input sequence (only a few neurons are active at each time step). On the other hand, disabling the lateral inhibitory connections results in a much larger number of neurons being active at each time step, suggesting that the lateral connections play a crucial role in generating sparse distributed representations of input stimuli.

In the final set of experiments, we analyzed the direction selectivity of model neurons in the trained network. Figure 11 illustrates the experimental methodology used. Each neuron in the network was exposed to oriented dark/bright bars on a bright/dark background moving in a direction perpendicular to the bar's orientation (bright or dark bars were chosen depending on which case elicited the largest response from the neuron). The direction of motion was varied in equal steps over the full 360 degrees. Figure 11B shows the two cases where the bar moves downwards or upwards. Figure 11C shows the response of the model neuron in these two cases as a function of the time from stimulus onset. As expected from the structure of its space-time receptive field, the neuron exhibits a significant response for a bar moving downwards (preferred direction) while a bar moving upwards (null direction) elicits little or no response. A direction selectivity index (DSI) was defined for each model neuron as:

    DSI = (1 - Peak Response in Null Direction / Peak Response in Preferred Direction) x 100%    (22)

The preferred direction was taken to be the direction of motion eliciting the largest response from the model neuron, and the null direction was set to the opposite direction. Thus, a neuron with a peak response of zero in the null direction and a nonzero response in the preferred direction has a DSI of 100%. A neuron with equal responses in both directions has a DSI of zero. Figure 12A shows polar response plots of four model neurons with increasing DSI from left to right. The rightmost plot is for the neuron in Figure 11, and its DSI of over 90% confirms its relatively high direction selectivity. Figure 12B shows the population distribution of direction selectivity in the trained network. A relatively large proportion of the neurons had a high direction selectivity index.
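
A sketch of the DSI computation of Equation 22 is given below, assuming each neuron's peak response has been measured for evenly spaced motion directions that include 180-degree pairs.

    import numpy as np

    def direction_selectivity_index(peak_by_direction):
        """DSI = (1 - null/preferred) * 100%, Equation 22. `peak_by_direction`
        maps motion directions (degrees) to peak responses."""
        directions = np.asarray(sorted(peak_by_direction))
        peaks = np.array([peak_by_direction[d] for d in directions])
        pref = directions[np.argmax(peaks)]
        null = (pref + 180.0) % 360.0            # opposite direction
        null_peak = peak_by_direction[null]      # assumes 180-degree pairs were sampled
        return 100.0 * (1.0 - null_peak / peaks.max())

    # Example: a neuron responding strongly at 270 degrees and weakly at 90:
    responses = {0.0: 0.1, 90.0: 0.05, 180.0: 0.1, 270.0: 0.8}
    print(direction_selectivity_index(responses))   # 93.75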
6 Discussion

The results suggest that many of the important spatiotemporal properties of visual cortical neurons, such as velocity and direction selectivity, can be explained in terms of learning efficient spatiotemporal generative models for natural time varying images. Efficiency is defined not only in terms of reducing image reconstruction errors but also in terms of reducing statistical dependencies between network outputs. This redundancy reduction is implemented via recurrent lateral inhibition as derived from the MDL principle. The network also utilizes other lateral connections which are derived from the generative model. These lateral excitatory/inhibitory connections play a role similar to those used in Bayesian belief networks for facilitating the phenomenon of explaining away [Pearl, 1988]. The network additionally includes rectification of outputs to encourage sparseness among the network activities. Although rectification helps in the development of sparse distributed representations, rectification alone was found to be insufficient for producing satisfactory space-time receptive fields. Likewise, lateral inhibition alone was also found to be insufficient.

Figure 10: Lateral Inhibition and Sparse Distributed Representations of Input Stimuli. To test the role of the lateral inhibitory weights (see Figure 2) in generating sparse distributed representations, these weights were removed (after learning) and the network was exposed to a bright vertical bar moving to the right. The responses of the neurons in the network at the various time steps are shown as histograms (the first vertical bar is the response of the first neuron, the second is that of the second neuron, and so on). The histograms have been normalized such that the maximum response in each graph is one. The responses with lateral inhibition intact are shown to the right at each time step. As is evident from the relatively sparse number of neurons active at each time step in the "with lateral inhibition" case, the existence of lateral inhibitory connections in the network appears to be crucial for generating sparse distributed representations of the spatiotemporal input stimuli.

Figure 11: Example of Direction Selectivity. The model neuron from Figure 7 was tested for direction selectivity. (A) shows the space-time receptive field of this neuron. (B) depicts the two input stimuli used for testing: a dark horizontal bar on a white background, moving either downwards or upwards. (C) shows the response of the neuron as a function of the time from stimulus onset. As expected from the structure of its space-time receptive field, the neuron exhibits a significant response for a bar moving downwards (preferred direction), reaching a peak response at 11 time steps from stimulus onset. On the other hand, a bar moving upwards (null direction) elicits little or no response.

Figure 12: Analysis of Direction Selectivity. (A) shows polar response plots of four model neurons when exposed to bars oriented at a particular angle and moving in a direction perpendicular to the orientation. The radius of the plot indicates the strength of the response while the angle indicates the direction of motion of the bar. The sequence of plots is arranged from left to right in order of increasing direction selectivity, as given by the direction selectivity index DSI (see text for definition). The polar plot on the extreme right is for the neuron in Figure 11, confirming its relatively high direction selectivity. (B) shows the distribution of direction selectivity in the population of neurons in the trained network. A relatively large proportion of the neurons had a high direction selectivity index.

Several previous models of direction selectivity have utilized recurrent lateral interactions and rectification [Suarez et al., 1995; Maex and Orban, 1996; Mineiro and Zipser, 1997], although without the statistical perspective pursued herein. In addition, the receptive fields in many of these approaches were hard-wired by hand rather than learned. An interesting issue is how the spatiotemporally varying synaptic weights of neurons in the present model can be implemented biologically by intrinsic synaptic mechanisms and axonal-dendritic interactions over time. In this regard, it is possible that space-time receptive fields similar to those obtained by the present method may also arise in other neural circuits that do not explicitly use spatiotemporal synaptic weights.

Most of the synaptic weights in the trained networks were found to be space-time inseparable. The lack of larger numbers of separable receptive fields in the trained networks suggests an alternative mechanism for the generation of such receptive fields. Most cortical receptive fields that are space-time separable have a temporal weighting profile that approximates a derivative in time. We have previously shown [Rao and Ballard, 1997b] that a generative model based on a first-order Taylor series approximation of an image produces localized oriented filters that compute spatial derivatives for estimating translations in the image plane. A possibility that we are currently investigating is to use a Taylor series expansion of an image in both space and time, and to ascertain whether such a strategy produces separable filters that compute derivatives in both space and time. Other issues being pursued include explaining contrast normalization effects [Albrecht and Geisler, 1991; Heeger, 1991] and recasting the hierarchical framework proposed in [Rao and Ballard, 1997a] to accommodate the spatiotemporal generative model proposed herein.

References

[Adelson and Bergen, 1985] E.H. Adelson and J. Bergen. Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. A, 2(2):284-299, 1985.

[Adelson and Movshon, 1982] E.H. Adelson and J.A. Movshon. Phenomenal coherence of moving visual patterns. Nature, 300:523-525, 1982.

[Albrecht and Geisler, 1991] D.G. Albrecht and W.S. Geisler. Motion selectivity and the contrast-response function of simple cells in the visual cortex. Visual Neurosci., 7:531-546, 1991.

[Atick and Redlich, 1992] J.J. Atick and A.N. Redlich. What does the retina know about natural scenes? Neural Computation, 4(2):196-210, 1992.

[Atick and Redlich, 1993] J.J. Atick and A.N. Redlich. Convergent algorithm for sensory receptive field development. Neural Computation, 5:45-60, 1993.

[Atick, 1992] J.J. Atick. Could information theory provide an ecological theory of sensory processing? Network, 3:213-251, 1992.

[Baddeley and Hancock, 1991] R.J. Baddeley and P.J.B. Hancock. A statistical analysis of natural images matches psychophysically derived orientation tuning curves. Proc. R. Soc. Lond. Ser. B, 246:219-223, 1991.

[Barlow, 1961] H.B. Barlow. Possible principles underlying the transformation of sensory messages. In W.A. Rosenblith, editor, Sensory Communication. Cambridge, MA: MIT Press, 1961.

[Barlow, 1972] H.B. Barlow. Single units and sensation: A neuron doctrine for perceptual psychology? Perception, 1:371-394, 1972.

[Barlow, 1989] H.B. Barlow. Unsupervised learning. Neural Computation, 1:295-311, 1989.

[Barlow, 1994] H.B. Barlow. What is the computational goal of the neocortex? In C. Koch and J.L. Davis, editors, Large-Scale Neuronal Theories of the Brain, pages 1-22. Cambridge, MA: MIT Press, 1994.

[Barrow, 1987] H.G. Barrow. Learning receptive fields. In Proceedings of the IEEE Int. Conf. on Neural Networks, pages IV:115-121, 1987.

[Bell and Sejnowski, 1997] A.J. Bell and T.J. Sejnowski. The independent components of natural scenes are edge filters. Vision Research (in press), 1997.

[Bienenstock et al., 1982] E.L. Bienenstock, L.N. Cooper, and P.W. Munro. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci., 2:32-48, 1982.

[Bryson and Ho, 1975] A.E. Bryson and Y.-C. Ho. Applied Optimal Control. New York: John Wiley and Sons, 1975.

[Burr et al., 1986] D.C. Burr, J. Ross, and M.C. Morrone. Seeing objects in motion. Proc. R. Soc. Lond. Ser. B, 227:249-265, 1986.

[Daugman, 1980] J.G. Daugman. Two-dimensional spectral analysis of cortical receptive field profiles. Vision Research, 20:847-856, 1980.

[Daugman, 1988] J.G. Daugman. Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. IEEE Trans. Acoustics, Speech, and Signal Proc., 36(7):1169-1179, 1988.

[Dayan et al., 1995] P. Dayan, G.E. Hinton, R.M. Neal, and R.S. Zemel. The Helmholtz machine. Neural Computation, 7:889-904, 1995.

[DeAngelis et al., 1993a] G.C. DeAngelis, I. Ohzawa, and R.D. Freeman. Spatiotemporal organization of simple-cell receptive fields in the cat's striate cortex. I. General characteristics and postnatal development. J. Neurophysiol., 69(4):1091-1117, 1993.

[DeAngelis et al., 1993b] G.C. DeAngelis, I. Ohzawa, and R.D. Freeman. Spatiotemporal organization of simple-cell receptive fields in the cat's striate cortex. II. Linearity of temporal and spatial summation. J. Neurophysiol., 69(4):1118-1135, 1993.

[DeAngelis et al., 1995] G.C. DeAngelis, I. Ohzawa, and R.D. Freeman. Receptive-field dynamics in the central visual pathways. Trends in Neuroscience, 18:451-458, 1995.

[Dempster et al., 1977] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statistical Society Series B, 39:1-38, 1977.


More information

Dendritic compartmentalization could underlie competition and attentional biasing of simultaneous visual stimuli

Dendritic compartmentalization could underlie competition and attentional biasing of simultaneous visual stimuli Dendritic compartmentalization could underlie competition and attentional biasing of simultaneous visual stimuli Kevin A. Archie Neuroscience Program University of Southern California Los Angeles, CA 90089-2520

More information

Visual Nonclassical Receptive Field Effects Emerge from Sparse Coding in a Dynamical System

Visual Nonclassical Receptive Field Effects Emerge from Sparse Coding in a Dynamical System Visual Nonclassical Receptive Field Effects Emerge from Sparse Coding in a Dynamical System Mengchen Zhu 1, Christopher J. Rozell 2 * 1 Wallace H. Coulter Department of Biomedical Engineering, Georgia

More information

arxiv: v1 [q-bio.nc] 13 Jan 2008

arxiv: v1 [q-bio.nc] 13 Jan 2008 Recurrent infomax generates cell assemblies, avalanches, and simple cell-like selectivity Takuma Tanaka 1, Takeshi Kaneko 1,2 & Toshio Aoyagi 2,3 1 Department of Morphological Brain Science, Graduate School

More information

Retinal DOG filters: high-pass or high-frequency enhancing filters?

Retinal DOG filters: high-pass or high-frequency enhancing filters? Retinal DOG filters: high-pass or high-frequency enhancing filters? Adrián Arias 1, Eduardo Sánchez 1, and Luis Martínez 2 1 Grupo de Sistemas Inteligentes (GSI) Centro Singular de Investigación en Tecnologías

More information

Reading Assignments: Lecture 5: Introduction to Vision. None. Brain Theory and Artificial Intelligence

Reading Assignments: Lecture 5: Introduction to Vision. None. Brain Theory and Artificial Intelligence Brain Theory and Artificial Intelligence Lecture 5:. Reading Assignments: None 1 Projection 2 Projection 3 Convention: Visual Angle Rather than reporting two numbers (size of object and distance to observer),

More information

Models of Attention. Models of Attention

Models of Attention. Models of Attention Models of Models of predictive: can we predict eye movements (bottom up attention)? [L. Itti and coll] pop out and saliency? [Z. Li] Readings: Maunsell & Cook, the role of attention in visual processing,

More information

Significance of Natural Scene Statistics in Understanding the Anisotropies of Perceptual Filling-in at the Blind Spot

Significance of Natural Scene Statistics in Understanding the Anisotropies of Perceptual Filling-in at the Blind Spot Significance of Natural Scene Statistics in Understanding the Anisotropies of Perceptual Filling-in at the Blind Spot Rajani Raman* and Sandip Sarkar Saha Institute of Nuclear Physics, HBNI, 1/AF, Bidhannagar,

More information

Space-Time Maps and Two-Bar Interactions of Different Classes of Direction-Selective Cells in Macaque V-1

Space-Time Maps and Two-Bar Interactions of Different Classes of Direction-Selective Cells in Macaque V-1 J Neurophysiol 89: 2726 2742, 2003; 10.1152/jn.00550.2002. Space-Time Maps and Two-Bar Interactions of Different Classes of Direction-Selective Cells in Macaque V-1 Bevil R. Conway and Margaret S. Livingstone

More information

Information and neural computations

Information and neural computations Information and neural computations Why quantify information? We may want to know which feature of a spike train is most informative about a particular stimulus feature. We may want to know which feature

More information

Rolls,E.T. (2016) Cerebral Cortex: Principles of Operation. Oxford University Press.

Rolls,E.T. (2016) Cerebral Cortex: Principles of Operation. Oxford University Press. Digital Signal Processing and the Brain Is the brain a digital signal processor? Digital vs continuous signals Digital signals involve streams of binary encoded numbers The brain uses digital, all or none,

More information

Spatiotemporal clustering of synchronized bursting events in neuronal networks

Spatiotemporal clustering of synchronized bursting events in neuronal networks Spatiotemporal clustering of synchronized bursting events in neuronal networks Uri Barkan a David Horn a,1 a School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel Abstract in vitro

More information

Changing expectations about speed alters perceived motion direction

Changing expectations about speed alters perceived motion direction Current Biology, in press Supplemental Information: Changing expectations about speed alters perceived motion direction Grigorios Sotiropoulos, Aaron R. Seitz, and Peggy Seriès Supplemental Data Detailed

More information

This article was originally published in a journal published by Elsevier, and the attached copy is provided by Elsevier for the author s benefit and for the benefit of the author s institution, for non-commercial

More information

A Single Mechanism Can Explain the Speed Tuning Properties of MT and V1 Complex Neurons

A Single Mechanism Can Explain the Speed Tuning Properties of MT and V1 Complex Neurons The Journal of Neuroscience, November 15, 2006 26(46):11987 11991 11987 Brief Communications A Single Mechanism Can Explain the Speed Tuning Properties of MT and V1 Complex Neurons John A. Perrone Department

More information

Analysis of spectro-temporal receptive fields in an auditory neural network

Analysis of spectro-temporal receptive fields in an auditory neural network Analysis of spectro-temporal receptive fields in an auditory neural network Madhav Nandipati Abstract Neural networks have been utilized for a vast range of applications, including computational biology.

More information

Sparse Coding in Sparse Winner Networks

Sparse Coding in Sparse Winner Networks Sparse Coding in Sparse Winner Networks Janusz A. Starzyk 1, Yinyin Liu 1, David Vogel 2 1 School of Electrical Engineering & Computer Science Ohio University, Athens, OH 45701 {starzyk, yliu}@bobcat.ent.ohiou.edu

More information

Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway

Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway Gal Chechik Amir Globerson Naftali Tishby Institute of Computer Science and Engineering and The Interdisciplinary Center for

More information

A Neurally-Inspired Model for Detecting and Localizing Simple Motion Patterns in Image Sequences

A Neurally-Inspired Model for Detecting and Localizing Simple Motion Patterns in Image Sequences A Neurally-Inspired Model for Detecting and Localizing Simple Motion Patterns in Image Sequences Marc Pomplun 1, Yueju Liu 2, Julio Martinez-Trujillo 2, Evgueni Simine 2, and John K. Tsotsos 2 1 Department

More information

Continuous transformation learning of translation invariant representations

Continuous transformation learning of translation invariant representations Exp Brain Res (21) 24:255 27 DOI 1.17/s221-1-239- RESEARCH ARTICLE Continuous transformation learning of translation invariant representations G. Perry E. T. Rolls S. M. Stringer Received: 4 February 29

More information

Statistical Models of Natural Images and Cortical Visual Representation

Statistical Models of Natural Images and Cortical Visual Representation Topics in Cognitive Science 2 (2010) 251 264 Copyright Ó 2009 Cognitive Science Society, Inc. All rights reserved. ISSN: 1756-8757 print / 1756-8765 online DOI: 10.1111/j.1756-8765.2009.01057.x Statistical

More information

Sum of Neurally Distinct Stimulus- and Task-Related Components.

Sum of Neurally Distinct Stimulus- and Task-Related Components. SUPPLEMENTARY MATERIAL for Cardoso et al. 22 The Neuroimaging Signal is a Linear Sum of Neurally Distinct Stimulus- and Task-Related Components. : Appendix: Homogeneous Linear ( Null ) and Modified Linear

More information

A general error-based spike-timing dependent learning rule for the Neural Engineering Framework

A general error-based spike-timing dependent learning rule for the Neural Engineering Framework A general error-based spike-timing dependent learning rule for the Neural Engineering Framework Trevor Bekolay Monday, May 17, 2010 Abstract Previous attempts at integrating spike-timing dependent plasticity

More information

Neuronal Selectivity without Intermediate Cells

Neuronal Selectivity without Intermediate Cells Neuronal Selectivity without Intermediate Cells Lund University, Cognitive Science, Kungshuset, Lundagård, S 223 50 Lund, Sweden Robert Pallbo E-mail: robert.pallbo@fil.lu.se Dept. of Computer Science,

More information

Input-speci"c adaptation in complex cells through synaptic depression

Input-specic adaptation in complex cells through synaptic depression 0 0 0 0 Neurocomputing }0 (00) } Input-speci"c adaptation in complex cells through synaptic depression Frances S. Chance*, L.F. Abbott Volen Center for Complex Systems and Department of Biology, Brandeis

More information

NATURAL IMAGE STATISTICS AND NEURAL REPRESENTATION

NATURAL IMAGE STATISTICS AND NEURAL REPRESENTATION Annu. Rev. Neurosci. 2001. 24:1193 216 Copyright c 2001 by Annual Reviews. All rights reserved NATURAL IMAGE STATISTICS AND NEURAL REPRESENTATION Eero P Simoncelli Howard Hughes Medical Institute, Center

More information

Lateral Geniculate Nucleus (LGN)

Lateral Geniculate Nucleus (LGN) Lateral Geniculate Nucleus (LGN) What happens beyond the retina? What happens in Lateral Geniculate Nucleus (LGN)- 90% flow Visual cortex Information Flow Superior colliculus 10% flow Slide 2 Information

More information

Computational Cognitive Neuroscience

Computational Cognitive Neuroscience Computational Cognitive Neuroscience Computational Cognitive Neuroscience Computational Cognitive Neuroscience *Computer vision, *Pattern recognition, *Classification, *Picking the relevant information

More information

Cognitive Neuroscience History of Neural Networks in Artificial Intelligence The concept of neural network in artificial intelligence

Cognitive Neuroscience History of Neural Networks in Artificial Intelligence The concept of neural network in artificial intelligence Cognitive Neuroscience History of Neural Networks in Artificial Intelligence The concept of neural network in artificial intelligence To understand the network paradigm also requires examining the history

More information

2/3/17. Visual System I. I. Eye, color space, adaptation II. Receptive fields and lateral inhibition III. Thalamus and primary visual cortex

2/3/17. Visual System I. I. Eye, color space, adaptation II. Receptive fields and lateral inhibition III. Thalamus and primary visual cortex 1 Visual System I I. Eye, color space, adaptation II. Receptive fields and lateral inhibition III. Thalamus and primary visual cortex 2 1 2/3/17 Window of the Soul 3 Information Flow: From Photoreceptors

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 5: Data analysis II Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis II 6 Single

More information

Learning the Meaning of Neural Spikes Through Sensory-Invariance Driven Action

Learning the Meaning of Neural Spikes Through Sensory-Invariance Driven Action Learning the Meaning of Neural Spikes Through Sensory-Invariance Driven Action Yoonsuck Choe and S. Kumar Bhamidipati Department of Computer Science Texas A&M University College Station, TX 77843-32 {choe,bskumar}@tamu.edu

More information

Pre-integration lateral inhibition enhances unsupervised learning

Pre-integration lateral inhibition enhances unsupervised learning Neural Computation, 14(9):2157 79, 2002 Pre-integration lateral inhibition enhances unsupervised learning M. W. Spratling and M. H. Johnson Centre for Brain and Cognitive Development, Birkbeck College,

More information

NMF-Density: NMF-Based Breast Density Classifier

NMF-Density: NMF-Based Breast Density Classifier NMF-Density: NMF-Based Breast Density Classifier Lahouari Ghouti and Abdullah H. Owaidh King Fahd University of Petroleum and Minerals - Department of Information and Computer Science. KFUPM Box 1128.

More information

Early Learning vs Early Variability 1.5 r = p = Early Learning r = p = e 005. Early Learning 0.

Early Learning vs Early Variability 1.5 r = p = Early Learning r = p = e 005. Early Learning 0. The temporal structure of motor variability is dynamically regulated and predicts individual differences in motor learning ability Howard Wu *, Yohsuke Miyamoto *, Luis Nicolas Gonzales-Castro, Bence P.

More information

International Journal of Advanced Computer Technology (IJACT)

International Journal of Advanced Computer Technology (IJACT) Abstract An Introduction to Third Generation of Neural Networks for Edge Detection Being inspired by the structure and behavior of the human visual system the spiking neural networks for edge detection

More information

Cerebral Cortex. Edmund T. Rolls. Principles of Operation. Presubiculum. Subiculum F S D. Neocortex. PHG & Perirhinal. CA1 Fornix CA3 S D

Cerebral Cortex. Edmund T. Rolls. Principles of Operation. Presubiculum. Subiculum F S D. Neocortex. PHG & Perirhinal. CA1 Fornix CA3 S D Cerebral Cortex Principles of Operation Edmund T. Rolls F S D Neocortex S D PHG & Perirhinal 2 3 5 pp Ento rhinal DG Subiculum Presubiculum mf CA3 CA1 Fornix Appendix 4 Simulation software for neuronal

More information

SUPPLEMENTARY INFORMATION. Table 1 Patient characteristics Preoperative. language testing

SUPPLEMENTARY INFORMATION. Table 1 Patient characteristics Preoperative. language testing Categorical Speech Representation in the Human Superior Temporal Gyrus Edward F. Chang, Jochem W. Rieger, Keith D. Johnson, Mitchel S. Berger, Nicholas M. Barbaro, Robert T. Knight SUPPLEMENTARY INFORMATION

More information

The probabilistic approach

The probabilistic approach Introduction Each waking moment, our body s sensory receptors convey a vast amount of information about the surrounding environment to the brain. Visual information, for example, is measured by about 10

More information

The Nature of V1 Neural Responses to 2D Moving Patterns Depends on Receptive-Field Structure in the Marmoset Monkey

The Nature of V1 Neural Responses to 2D Moving Patterns Depends on Receptive-Field Structure in the Marmoset Monkey J Neurophysiol 90: 930 937, 2003. First published April 23, 2003; 10.1152/jn.00708.2002. The Nature of V1 Neural Responses to 2D Moving Patterns Depends on Receptive-Field Structure in the Marmoset Monkey

More information

2012 Course : The Statistician Brain: the Bayesian Revolution in Cognitive Science

2012 Course : The Statistician Brain: the Bayesian Revolution in Cognitive Science 2012 Course : The Statistician Brain: the Bayesian Revolution in Cognitive Science Stanislas Dehaene Chair in Experimental Cognitive Psychology Lecture No. 4 Constraints combination and selection of a

More information

Substructure of Direction-Selective Receptive Fields in Macaque V1

Substructure of Direction-Selective Receptive Fields in Macaque V1 J Neurophysiol 89: 2743 2759, 2003; 10.1152/jn.00822.2002. Substructure of Direction-Selective Receptive Fields in Macaque V1 Margaret S. Livingstone and Bevil R. Conway Department of Neurobiology, Harvard

More information

Temporal Feature of S-cone Pathway Described by Impulse Response Function

Temporal Feature of S-cone Pathway Described by Impulse Response Function VISION Vol. 20, No. 2, 67 71, 2008 Temporal Feature of S-cone Pathway Described by Impulse Response Function Keizo SHINOMORI Department of Information Systems Engineering, Kochi University of Technology

More information

Visual Motion Detection

Visual Motion Detection MS number 1975 Visual Motion Detection B K Dellen, University of Goettingen, Goettingen, Germany R Wessel, Washington University, St Louis, MO, USA Contact information: Babette Dellen, Bernstein Center

More information

Spontaneous Cortical Activity Reveals Hallmarks of an Optimal Internal Model of the Environment. Berkes, Orban, Lengyel, Fiser.

Spontaneous Cortical Activity Reveals Hallmarks of an Optimal Internal Model of the Environment. Berkes, Orban, Lengyel, Fiser. Statistically optimal perception and learning: from behavior to neural representations. Fiser, Berkes, Orban & Lengyel Trends in Cognitive Sciences (2010) Spontaneous Cortical Activity Reveals Hallmarks

More information

Intelligent Edge Detector Based on Multiple Edge Maps. M. Qasim, W.L. Woon, Z. Aung. Technical Report DNA # May 2012

Intelligent Edge Detector Based on Multiple Edge Maps. M. Qasim, W.L. Woon, Z. Aung. Technical Report DNA # May 2012 Intelligent Edge Detector Based on Multiple Edge Maps M. Qasim, W.L. Woon, Z. Aung Technical Report DNA #2012-10 May 2012 Data & Network Analytics Research Group (DNA) Computing and Information Science

More information

Single cell tuning curves vs population response. Encoding: Summary. Overview of the visual cortex. Overview of the visual cortex

Single cell tuning curves vs population response. Encoding: Summary. Overview of the visual cortex. Overview of the visual cortex Encoding: Summary Spikes are the important signals in the brain. What is still debated is the code: number of spikes, exact spike timing, temporal relationship between neurons activities? Single cell tuning

More information

Goals. Visual behavior jobs of vision. The visual system: overview of a large-scale neural architecture. kersten.org

Goals. Visual behavior jobs of vision. The visual system: overview of a large-scale neural architecture. kersten.org The visual system: overview of a large-scale neural architecture Daniel Kersten Computational Vision Lab Psychology Department, U. Minnesota Goals Provide an overview of a major brain subsystem to help

More information

Realization of Visual Representation Task on a Humanoid Robot

Realization of Visual Representation Task on a Humanoid Robot Istanbul Technical University, Robot Intelligence Course Realization of Visual Representation Task on a Humanoid Robot Emeç Erçelik May 31, 2016 1 Introduction It is thought that human brain uses a distributed

More information

An Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns

An Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns An Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns 1. Introduction Vasily Morzhakov, Alexey Redozubov morzhakovva@gmail.com, galdrd@gmail.com Abstract Cortical

More information

Modeling of Hippocampal Behavior

Modeling of Hippocampal Behavior Modeling of Hippocampal Behavior Diana Ponce-Morado, Venmathi Gunasekaran and Varsha Vijayan Abstract The hippocampus is identified as an important structure in the cerebral cortex of mammals for forming

More information

Emanuel Todorov, Athanassios Siapas and David Somers. Dept. of Brain and Cognitive Sciences. E25-526, MIT, Cambridge, MA 02139

Emanuel Todorov, Athanassios Siapas and David Somers. Dept. of Brain and Cognitive Sciences. E25-526, MIT, Cambridge, MA 02139 A Model of Recurrent Interactions in Primary Visual Cortex Emanuel Todorov, Athanassios Siapas and David Somers Dept. of Brain and Cognitive Sciences E25-526, MIT, Cambridge, MA 2139 Email: femo,thanos,somersg@ai.mit.edu

More information

Pre-integration lateral inhibition enhances unsupervised learning

Pre-integration lateral inhibition enhances unsupervised learning Neural Computation, 14(9):2157 79, 2002. Pre-integration lateral inhibition enhances unsupervised learning M. W. Spratling and M. H. Johnson Centre for Brain and Cognitive Development, Birkbeck College,

More information

PCA Enhanced Kalman Filter for ECG Denoising

PCA Enhanced Kalman Filter for ECG Denoising IOSR Journal of Electronics & Communication Engineering (IOSR-JECE) ISSN(e) : 2278-1684 ISSN(p) : 2320-334X, PP 06-13 www.iosrjournals.org PCA Enhanced Kalman Filter for ECG Denoising Febina Ikbal 1, Prof.M.Mathurakani

More information

Learning to See Rotation and Dilation with a Hebb Rule

Learning to See Rotation and Dilation with a Hebb Rule In: R.P. Lippmann, J. Moody, and D.S. Touretzky, eds. (1991), Advances in Neural Information Processing Systems 3. San Mateo, CA: Morgan Kaufmann Publishers, pp. 320-326. Learning to See Rotation and Dilation

More information

Noise Cancellation using Adaptive Filters Algorithms

Noise Cancellation using Adaptive Filters Algorithms Noise Cancellation using Adaptive Filters Algorithms Suman, Poonam Beniwal Department of ECE, OITM, Hisar, bhariasuman13@gmail.com Abstract Active Noise Control (ANC) involves an electro acoustic or electromechanical

More information

Functional Elements and Networks in fmri

Functional Elements and Networks in fmri Functional Elements and Networks in fmri Jarkko Ylipaavalniemi 1, Eerika Savia 1,2, Ricardo Vigário 1 and Samuel Kaski 1,2 1- Helsinki University of Technology - Adaptive Informatics Research Centre 2-

More information

Analysis of in-vivo extracellular recordings. Ryan Morrill Bootcamp 9/10/2014

Analysis of in-vivo extracellular recordings. Ryan Morrill Bootcamp 9/10/2014 Analysis of in-vivo extracellular recordings Ryan Morrill Bootcamp 9/10/2014 Goals for the lecture Be able to: Conceptually understand some of the analysis and jargon encountered in a typical (sensory)

More information

The Visual System. Cortical Architecture Casagrande February 23, 2004

The Visual System. Cortical Architecture Casagrande February 23, 2004 The Visual System Cortical Architecture Casagrande February 23, 2004 Phone: 343-4538 Email: vivien.casagrande@mcmail.vanderbilt.edu Office: T2302 MCN Required Reading Adler s Physiology of the Eye Chapters

More information

A Probabilistic Network Model of Population Responses

A Probabilistic Network Model of Population Responses A Probabilistic Network Model of Population Responses Richard S. Zemel Jonathan Pillow Department of Computer Science Center for Neural Science University of Toronto New York University Toronto, ON MS

More information

Adaptation to contingencies in macaque primary visual cortex

Adaptation to contingencies in macaque primary visual cortex Adaptation to contingencies in macaque primary visual cortex MATTO CARANDINI 1, HORAC B. BARLOW 2, LAWRNC P. O'KF 1, ALLN B. POIRSON 1, AND. ANTHONY MOVSHON 1 1 Howard Hughes Medical Institute and Center

More information

Lateral Inhibition Explains Savings in Conditioning and Extinction

Lateral Inhibition Explains Savings in Conditioning and Extinction Lateral Inhibition Explains Savings in Conditioning and Extinction Ashish Gupta & David C. Noelle ({ashish.gupta, david.noelle}@vanderbilt.edu) Department of Electrical Engineering and Computer Science

More information

Memory, Attention, and Decision-Making

Memory, Attention, and Decision-Making Memory, Attention, and Decision-Making A Unifying Computational Neuroscience Approach Edmund T. Rolls University of Oxford Department of Experimental Psychology Oxford England OXFORD UNIVERSITY PRESS Contents

More information

Two Visual Contrast Processes: One New, One Old

Two Visual Contrast Processes: One New, One Old 1 Two Visual Contrast Processes: One New, One Old Norma Graham and S. Sabina Wolfson In everyday life, we occasionally look at blank, untextured regions of the world around us a blue unclouded sky, for

More information

Photoreceptors Rods. Cones

Photoreceptors Rods. Cones Photoreceptors Rods Cones 120 000 000 Dim light Prefer wavelength of 505 nm Monochromatic Mainly in periphery of the eye 6 000 000 More light Different spectral sensitivities!long-wave receptors (558 nm)

More information

Symbolic Pointillism: Computer Art motivated by Human Perception

Symbolic Pointillism: Computer Art motivated by Human Perception Accepted for the Symposium Artificial Intelligence and Creativity in Arts and Science Symposium at the AISB 2003 Convention: Cognition in Machines and Animals. Symbolic Pointillism: Computer Art motivated

More information

Bayesian inference and attentional modulation in the visual cortex

Bayesian inference and attentional modulation in the visual cortex COGNITIVE NEUROSCIENCE AND NEUROPSYCHOLOGY Bayesian inference and attentional modulation in the visual cortex Rajesh P.N. Rao Department of Computer Science and Engineering, University of Washington, Seattle,

More information

Analysis of sensory coding with complex stimuli Jonathan Touryan* and Yang Dan

Analysis of sensory coding with complex stimuli Jonathan Touryan* and Yang Dan 443 Analysis of sensory coding with complex stimuli Jonathan Touryan* and Yang Dan Most high-level sensory neurons have complex, nonlinear response properties; a comprehensive characterization of these

More information

Edge detection. Gradient-based edge operators

Edge detection. Gradient-based edge operators Edge detection Gradient-based edge operators Prewitt Sobel Roberts Laplacian zero-crossings Canny edge detector Hough transform for detection of straight lines Circle Hough Transform Digital Image Processing:

More information

Optimal speed estimation in natural image movies predicts human performance

Optimal speed estimation in natural image movies predicts human performance ARTICLE Received 8 Apr 215 Accepted 24 Jun 215 Published 4 Aug 215 Optimal speed estimation in natural image movies predicts human performance Johannes Burge 1 & Wilson S. Geisler 2 DOI: 1.138/ncomms89

More information

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES Varinthira Duangudom and David V Anderson School of Electrical and Computer Engineering, Georgia Institute of Technology Atlanta, GA 30332

More information

Supplementary materials for: Executive control processes underlying multi- item working memory

Supplementary materials for: Executive control processes underlying multi- item working memory Supplementary materials for: Executive control processes underlying multi- item working memory Antonio H. Lara & Jonathan D. Wallis Supplementary Figure 1 Supplementary Figure 1. Behavioral measures of

More information

Understanding Convolutional Neural

Understanding Convolutional Neural Understanding Convolutional Neural Networks Understanding Convolutional Neural Networks David Stutz July 24th, 2014 David Stutz July 24th, 2014 0/53 1/53 Table of Contents - Table of Contents 1 Motivation

More information