Multistability and Perceptual Inference


ARTICLE Communicated by Nando de Freitas

Multistability and Perceptual Inference

Samuel J. Gershman
Department of Psychology and Neuroscience Institute, Princeton University, Princeton, NJ, U.S.A.

Edward Vul
Department of Psychology, University of California, San Diego, CA 92093, U.S.A.

Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, U.S.A.

Ambiguous images present a challenge to the visual system: How can uncertainty about the causes of visual inputs be represented when there are multiple equally plausible causes? A Bayesian ideal observer should represent uncertainty in the form of a posterior probability distribution over causes. However, in many real-world situations, computing this distribution is intractable and requires some form of approximation. We argue that the visual system approximates the posterior over underlying causes with a set of samples and that this approximation strategy produces perceptual multistability: stochastic alternation between percepts in consciousness. Under our analysis, multistability arises from a dynamic sample-generating process that explores the posterior through stochastic diffusion, implementing a rational form of approximate Bayesian inference known as Markov chain Monte Carlo (MCMC). We examine in detail the most extensively studied form of multistability, binocular rivalry, showing how a variety of experimental phenomena (gamma-like stochastic switching, patchy percepts, fusion, and traveling waves) can be understood in terms of MCMC sampling over simple graphical models of the underlying perceptual tasks. We conjecture that the stochastic nature of spiking neurons may lend itself to implementing sample-based posterior approximations in the brain.

1 Introduction

We perceive the world through our senses, but any stream of sense data is a fundamentally impoverished source of information about the external world.
Neural Computation 24, 1-24 (2012). © 2011 Massachusetts Institute of Technology

To take the most familiar example, retinal images are

two-dimensional, while the external world is three-dimensional. Our brains must go beyond what is directly available in the sensory data, through a process of interpretation, to achieve the rich percepts of conscious awareness. Bayesian statistics has provided a powerful framework for understanding this inferential process (Knill & Richards, 1996); its central postulate is that percepts represent hypotheses about the external world whose degree of belief (the posterior probability) is determined by a combination of sensory evidence (the likelihood) and background assumptions about a priori plausible world structures (the prior). Numerous experiments have confirmed the explanatory reach of Bayesian models in perception as well as cognition, motor control, and other areas of brain function (Brainard & Freeman, 1997; Weiss, Simoncelli, & Adelson, 2002; Kording & Wolpert, 2004; Griffiths & Tenenbaum, 2006). Computer vision and artificial intelligence researchers have shown how Bayesian inference operating over networks of random variables, known as probabilistic graphical models, can solve many real-world perceptual inference and higher-level reasoning tasks (Geman & Geman, 1984; Koller & Friedman, 2009). Computational neuroscientists have formulated hypotheses about how populations of neurons can encode, transform, and decode posterior distributions (Knill & Pouget, 2004).

Despite these successes arguing for a unifying Bayesian view of the brain, a major computational obstacle remains: Bayesian inference in the large-scale graphical models needed to capture real-world perceptual tasks is computationally intractable. Posterior degrees of belief in these models cannot, in general, be calculated exactly by any resource-limited computational device.
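The central postulate above can be made concrete with a toy numeric example. The hypotheses and probability values below are our own illustration, not drawn from the article:

```python
# Toy Bayesian inference over two scene hypotheses: the posterior combines
# a prior over hypotheses with the likelihood of the sensory data.
# All numbers here are made up for illustration.
prior = {"convex": 0.7, "concave": 0.3}          # assumed a priori plausibility
likelihood = {"convex": 0.2, "concave": 0.6}     # P(sensory data | hypothesis)

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
Z = sum(unnormalized.values())                   # normalizing constant
posterior = {h: unnormalized[h] / Z for h in unnormalized}
print(posterior)  # convex ≈ 0.44, concave ≈ 0.56
```

Even this two-hypothesis case shows the general pattern: the prior favors one interpretation, but sufficiently strong evidence overturns it. The intractability discussed above arises when the hypothesis space is a joint configuration of many coupled variables, so that the normalizing constant cannot be computed by enumeration.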
Bayesian statisticians and researchers in computer vision and artificial intelligence (AI) routinely use a range of approaches and algorithms for approximating ideal Bayesian inference in graphical models (Koller & Friedman, 2009). Thus, Bayesian theories of perceptual inference remain incomplete until they can explain how the brain approximates posterior inference effectively and quickly.

Early Bayesian models of human perception proposed that instead of calculating the full posterior distribution, the brain finds only the point estimate with highest posterior probability (the maximum a posteriori, or MAP, estimate). Although MAP estimation provides a remarkably good account of many experimental phenomena (Schrater & Kersten, 2000; Weiss et al., 2002), it is not sufficient to account for human behavior. Not only is MAP estimation suboptimal because it fails to represent uncertainty about perceptual interpretations, but more important, it cannot account for recent evidence that humans and other animals represent their uncertainty in perceptual tasks (Grinband, Hirsch, & Ferrera, 2006; Kiani & Shadlen, 2009). Thus, we must consider alternative approximate inference strategies the brain might adopt.

Here we argue that perceptual multistability, alternation between conscious percepts in response to ambiguous sensory inputs (Blake &

Logothetis, 2002), provides both qualitative and quantitative evidence for a different and more powerful approach to approximate Bayesian inference in the brain, which can be formalized in terms of algorithms from Bayesian statistics, computer vision, and AI. Specifically, we show how perceptual multistability can be naturally explained as a consequence of Monte Carlo inference, which approximates Bayesian posterior distributions with a set of samples (Robert & Casella, 2004). An emerging literature suggests that Monte Carlo approximations might be employed in human (Sanborn, Griffiths, & Navarro, 2006; Levy, Reali, & Griffiths, 2009; Vul, Goodman, Griffiths, & Tenenbaum, 2009) and animal (Daw & Courville, 2007) cognition, and several authors have proposed that the spiking activity of neurons might be interpreted as a set of samples (Hoyer & Hyvärinen, 2003; Lee & Mumford, 2003; Shi & Griffiths, 2009; Fiser, Berkes, Orbán, & Lengyel, 2010).

We suggest that Markov chain Monte Carlo (MCMC) methods are particularly well suited to modeling perceptual multistability. These methods provide state-of-the-art engineering performance on the large-scale graphical models needed in computer vision. Moreover, when MCMC algorithms are applied to graphical models, they produce spatiotemporal dynamics in inference that correspond to experimentally observed phenomena of multistability, as we explore below. Prior work has argued that stochastic sampling produces perceptual multistability (Schrater & Sundareswara, 2007; Sundareswara & Schrater, 2008) in the case of the Necker cube. We go beyond this work by showing that a rational MCMC algorithm for approximate Bayesian inference in graphical models produces a broad range of perceptual multistability phenomena while approximating a rational solution to a real vision problem: inferring the correct image interpretation from noisy binocular inputs.
We focus on the most extensively studied form of perceptual multistability, binocular rivalry, in which incompatible images (e.g., orthogonal gratings) are presented dichoptically to the eyes. Under most (but not all) conditions, this results in a percept that switches stochastically between the two images (see Blake, 2001, for a review). We first formulate a simple graphical model for retinal images, such that inference in this model using Bayes's rule recovers the latent scene underlying sensory inputs. We then show how approximate Bayesian inference in this model using MCMC can reproduce a variety of psychophysical phenomena. Finally, we discuss the connections between MCMC computations and possible neural implementations.

Our model is at present a caricature of the visual processing pathway that gives rise to multistable perception, and binocular rivalry in particular. We have endeavored to simulate only a handful of the numerous phenomena for which quite sophisticated and neurally realistic models have been developed (Laing & Chow, 2002; Freeman, 2005; Moreno-Bote, Rinzel, & Rubin, 2007; Noest, van Ee, Nijs, & Van Wezel, 2007; Wilson, 2007). Our goal in this article is not to supplant the existing theoretical work but rather

to supplement it by providing a normative statistical motivation for the dynamical processes posited by these models. We believe these ideas are best illustrated in a simplified setting, in which their mechanistic principles are relatively transparent. The idea of sampling over a graphical model structure is very general and can be applied to more sophisticated versions of the model we present.

Figure 1: Generative model. Schematic illustrating the probabilistic process by which retinal images are generated (retinal image, outlier process, latent image). Shaded nodes denote observed variables; unshaded nodes denote unobserved (latent) variables. Arrows denote causal dependencies.

2 A Probabilistic Model of Binocular Rivalry

Our starting point is a probabilistic generative model of retinal images that represents the brain's assumptions about how its sensory inputs were generated (see Figure 1). Formally, this model is a variant of a Markov random field (MRF), one of the most standard forms of probabilistic graphical model used in computer vision (Geman & Geman, 1984). In the appendix, we describe this model mathematically, but here we focus on the basic intuitions behind it.

The model posits two sets of latent variables: a latent scene and an eye-specific outlier process that governs whether a given patch of retina observes the latent scene. The goal of the visual system is to infer the underlying image given the observed retinal input. We represent the latent scene and the two retinal images as arrays of luminance values (e.g., a grayscale pixel image), while the outlier processes are represented as binary arrays of the same size. The luminance values for the pixels in each eye are the noise-perturbed luminance of the

corresponding pixels of the scene. However, if a pixel from one eye is deemed to be an outlier, then the luminance at that pixel is meaningless and contains no information with respect to the scene. As we discuss below, this scene representation is admittedly highly simplified, but it is adequate for our goal of studying the dynamics of inference in multistable perception rather than scene perception per se.

The outlier process allows for the possibility that sometimes the retinal signal does not correspond to the scene at all: for instance, if an occluder blocks a portion of one eye's field of view (Shimojo & Nakayama, 1990) or if a portion of the cornea or retina of one eye is damaged. Thus, the outlier process captures the notion that if, for one of these reasons, a portion of the retinal image does not correspond to the scene, it should simply be ignored (suppressed). This aspect of our model allows the two retinal images to compete in rivalry for representation in the inferred scene: if the two eyes see conflicting images, at least one of them must not represent the actual scene present in the world.

The psychological and neural plausibility of the outlier process is motivated by evidence that suppression during binocular rivalry is accompanied by a general loss in visual sensitivity (Blake, 2001). For example, target detection performance is lower (Wales & Fox, 1970; Fukuda, 1981) and reaction times are longer (Fox & Check, 1968; O'Shea, 1987) for probe stimuli presented to the suppressed eye compared to probe stimuli presented to the dominant eye. Moreover, single-unit recordings (Leopold & Logothetis, 1996) and functional magnetic resonance imaging (fMRI) studies (Polonsky, Blake, Braun, & Heeger, 2000) have demonstrated inhibition of responses in primary visual cortex during suppression periods.
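The generative process in Figure 1 can be sketched in a few lines. The helper names, noise level, and smoothing scheme below are our own stand-ins, not the paper's appendix specification:

```python
# Hypothetical sketch of the Figure 1 generative model: a latent scene,
# a spatially smooth per-eye outlier mask, and a noisy retinal image.
# Parameter values and the smoothing trick are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def smooth_binary_mask(shape, rng, p=0.3, smoothing=2):
    """Draw a spatially smooth binary outlier mask by thresholding a
    neighborhood-averaged random field (a crude stand-in for the MRF prior)."""
    field = rng.random(shape)
    for _ in range(smoothing):  # local averaging induces spatial smoothness
        field = (field
                 + np.roll(field, 1, 0) + np.roll(field, -1, 0)
                 + np.roll(field, 1, 1) + np.roll(field, -1, 1)) / 5.0
    return (field > np.quantile(field, 1 - p)).astype(int)

def generate_retinal_image(scene, outliers, rng, noise_sd=0.05):
    """Each retinal pixel is the scene luminance plus Gaussian noise,
    except where the outlier process makes the signal uninformative."""
    image = scene + rng.normal(0.0, noise_sd, scene.shape)
    junk = rng.random(scene.shape)   # luminance unrelated to the scene
    return np.where(outliers == 1, junk, image)

scene = rng.random((8, 8))           # latent scene: grayscale luminances
pi_L = smooth_binary_mask(scene.shape, rng)
x_L = generate_retinal_image(scene, pi_L, rng)
```

Inference then runs this process in reverse: given the two retinal arrays, recover both the scene and the outlier masks.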
The findings reviewed above indicate that the visual system is engaged not only in inferring the latent causes of its sensory inputs but also in inferring the reliability of its measurement devices. Crucially, this outlier process is spatially smooth, so that two adjacent pixels are more likely to have the same outlier status. This corresponds to the idea that if a given portion of the retina is not receiving a signal corresponding to the scene, its neighbors are also not likely to represent the scene, since occlusion, macular degeneration, retinitis pigmentosa, cataracts, and the like disrupt the field of view in a spatially smooth manner.

Thus far, we have described a probabilistic graphical model designed to infer the luminance value of each pixel of an underlying two-dimensional latent scene independently. However, the visual system has a much more ambitious goal: inferring a complete, semantically coherent, three-dimensional latent scene. A full generative model of the scenes that the visual system considers would involve many complex constraints and incompletely understood assumptions; developing such a model would be hard to do without a complete account of human vision and visual cognition. Instead, as a proxy for such a rich model, we assume an image-like scene representation with luminance values that correspond to some combination of the luminance values of the real images that comprise the stimulus and that the

degree to which pixels correspond to one stimulus or another is spatially smooth. This smooth latent image prior is a simple way to encode the idea of spatially organized semantic coherence in the inferred latent scene. Specifically, we approximate a rich hypothesis space of latent images, without having to specify a complete distribution over natural images, by assuming that each latent scene pixel $s_n$ is a linear combination of the corresponding pixels of the real images ($x_n^L$ and $x_n^R$) in the stimulus: $s_n = w_n x_n^L + (1 - w_n) x_n^R$, where $w_n$ can theoretically range over the entire real line and is assumed to be spatially smooth. This can be understood as an empirical approximation to the distribution over natural images, using only linear combinations of the observed images. In practice, this heuristic will work nearly as well, since the posterior distribution will be strongly concentrated around the two retinal images.

3 Approximate Inference by Sampling

Given an observed pair of retinal images, Bayes's rule can be used to invert the generative model and recover the latent scene:

$$P(s \mid x) = \frac{1}{Z} P(s) \sum_{\pi} P(x \mid s, \pi) P(\pi), \qquad (3.1)$$

where $Z$ is a normalizing constant and $\pi$ denotes the outlier process. Because it is generally intractable to calculate the normalizing constant, approximations must be considered. Monte Carlo methods (Robert & Casella, 2004) approximate the posterior with a set of $M$ samples:

$$P(s \mid x) \approx \frac{1}{M} \sum_{m=1}^{M} \delta\!\left[s^{(m)}, s\right], \qquad (3.2)$$

where $\delta[\cdot,\cdot]$ is the Kronecker delta function, whose value is 1 when its arguments are equal and 0 otherwise. The challenge is to draw samples from the posterior; in most cases, the posterior cannot be sampled from directly. The key idea behind MCMC methods is that a sequence of distributions (comprising a Markov chain) can be constructed that eventually converges to the posterior. This means that after a burn-in period, samples from this Markov chain will be distributed according to the posterior.
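Equation 3.2 can be illustrated with a minimal simulation. The toy two-state posterior below is our own stand-in for the full model's distribution over scenes:

```python
# Minimal illustration of equation 3.2: approximating a distribution over
# discrete states by the empirical frequencies of M samples.
import numpy as np

rng = np.random.default_rng(1)

states = np.array([0, 1])            # e.g., "left-eye" vs. "right-eye" percept
posterior = np.array([0.7, 0.3])     # assumed true posterior P(s | x)

M = 10_000
samples = rng.choice(states, size=M, p=posterior)

# Equation 3.2: P(s | x) ≈ (1/M) Σ_m δ[s^(m), s]
estimate = np.array([(samples == s).mean() for s in states])
print(estimate)  # close to [0.7, 0.3]
```

Here the samples are drawn directly from the target distribution; the point of MCMC, described next, is to obtain such samples when direct draws are impossible.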
Gibbs sampling (Geman & Geman, 1984), perhaps the best-known MCMC method, was originally developed for MRFs similar to our model. It proceeds by performing sweeps over the latent variables (scene and outlier nodes), sampling each variable from its conditional distribution while holding all the other variables fixed (see the appendix for details). In our implementation, the order in which nodes are updated in each sweep is randomized. Due to the topology of the MRF, each conditional distribution depends on only a

subset of the latent variables (see the appendix), and hence Gibbs sampling can be done using only information that is local to each latent variable.

4 Results

We now illustrate the behavior of our sampling model using several key paradigms from the binocular rivalry literature. Our goal is to highlight the conceptual principle underlying our explanation of multistable perception: stochastic perceptual dynamics arise from approximate inference in a graphical model of visual inputs. Details about parameter selection and robustness can be found in the appendix.

4.1 Dynamics of Perceptual Switches.

In a typical binocular rivalry experiment, subjects are asked to say which of two images corresponds to their global percept. To make the same query of the current state of our simulated Markov chain, we defined a perceptual switch to occur when at least 9/10 of the scene nodes take on the same value. Figure 2A shows a sample time course. It may seem surprising that the model spends relatively little time near the extremes and that switches are fairly gradual. This phenomenology is consistent with several experiments showing that substantial time is spent in transition periods (Mueller & Blake, 1989; Brascamp, van Ee, Noest, Jacobs, & van den Berg, 2006; Kang, 2009).

One of the most robust findings in the literature on binocular rivalry is that switching times between different stable percepts tend to follow a gamma-like distribution (Blake, 2001). In other words, periods of stability in one mode tend to be neither overwhelmingly short nor long. Figure 2B shows the histogram of dominance durations and the maximum-likelihood fit of a gamma distribution to our simulations and real data, demonstrating that the durations produced by MCMC are well described by a gamma distribution.
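The switching dynamics can be sketched with a stripped-down stand-in for the full model: a Gibbs sampler over binary percept nodes on a ring, with purely spatial coupling and symmetric (ambiguous) evidence. The ring size, coupling strength, sweep count, and switch threshold below are our own choices, not the paper's parameters:

```python
# Toy Gibbs sampler: binary percept nodes (+1/-1) on a ring with spatial
# coupling J and no evidence bias, mimicking rivalry between two percepts.
# A "percept" is declared when at least 9/10 of the nodes agree.
import numpy as np

rng = np.random.default_rng(2)

N, J, sweeps = 10, 0.6, 15_000
spins = np.ones(N, dtype=int)        # which image each node currently favors

def gibbs_sweep(spins, rng):
    for i in rng.permutation(N):     # randomized update order, as in the text
        neighbor_sum = spins[(i - 1) % N] + spins[(i + 1) % N]
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * J * neighbor_sum))
        spins[i] = 1 if rng.random() < p_plus else -1

def percept(spins):
    """+1 or -1 when at least 9/10 of nodes agree, else 0 (mixed)."""
    m = spins.mean()
    return int(np.sign(m)) if abs(m) >= 0.8 else 0   # 9/10 agree -> |m| = 0.8

durations, current, onset = [], 0, 0
for t in range(sweeps):
    gibbs_sweep(spins, rng)
    p = percept(spins)
    if p != 0 and p != current:
        if current != 0:
            durations.append(t - onset)   # a dominance period just ended
        current, onset = p, t

print(len(durations), np.mean(durations))
```

With spatial coupling alone, the chain lingers near each mode and occasionally diffuses to the other, yielding a spread of dominance durations of the kind fitted in Figure 2B.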
To account for this gamma-like switching behavior, many papers have described neural circuits that could produce switching oscillations with the right stochastic dynamics by suggesting a competition between mutual inhibition and adaptation (Moreno-Bote et al., 2007; Wilson, 2007). Similarly, existing rational process models of multistability (Dayan, 1998; Schrater & Sundareswara, 2007; Sundareswara & Schrater, 2008) add specific adaptation- or memory-like constraints to produce this effect. In our model, these additional memory constraints are unnecessary because the spatial coupling of adjacent nodes in the latent scene creates a form of memory in the inference. The gamma distribution arises in MCMC on an MRF because each hidden node takes an approximately exponentially distributed amount of time to switch, but these switches must occur in sequence. The total amount of time until enough nodes have switched to one mode is therefore a sum of exponential random variables, which follows a gamma distribution. Thus, gamma-distributed

dominance durations fall naturally out of MCMC operating on a spatially smooth MRF.

Figure 2: Perceptual switching dynamics. (A) Simulated time course of bistability. Plotted on the y-axis is the number of nodes with value greater than 0.5. The horizontal lines show the thresholds for a perceptual switch. (B) Distribution of simulated dominance durations (mean-normalized) for an MRF with lattice topology. Curves show gamma distributions fitted to simulated (with parameter values shown on the right) and empirical data, replotted from Mamassian and Goutcher (2005).

4.2 Piecemeal Rivalry and Traveling Waves.

Another empirical observation about spatiotemporal dynamics in rivalry is that stability is often incomplete across the visual field, producing piecemeal rivalry, in which one portion of the visual field looks like the image in one eye, while another portion looks like the image in the other eye (Mueller & Blake, 1989). One intriguing feature of these piecemeal percepts is the phenomenon known as traveling waves: subjects tend to perceive a perceptual switch as a wave propagating over the visual field (Wilson, Blake, & Lee, 2001; Lee, Blake, & Heeger, 2005): the suppressed stimulus becomes dominant in an isolated location of the visual field and then gradually spreads. These traveling waves reveal interesting local dynamics during an individual switch itself. Demonstrating the dynamics of traveling waves within patches of the percept requires a different method of probing perception instead of asking

subjects to evaluate the global percept.

Figure 3: Traveling waves. (A) Annular stimuli used by Lee et al. (2005) (left and center panels) and the percept reported by observers (right panel), in which the low-contrast stimulus was seen to spread around the annulus, starting at the top. Figure reprinted with permission from Lee et al. (2005). (B) Propagation time as a function of distance around the annulus, replotted from Wilson et al. (2001). Filled circles represent radial gratings, and open circles represent concentric gratings. A transient increase in contrast of the suppressed stimulus induces a perceptual switch at the location of contrast change. The propagation time for a switch at a probe location increases with distance (around the annulus) from the switch origin. (C) Simulated propagation time (measured by the time to switch percept following a switch at varying distances around the annulus). (D) Average simulated propagation time between nodes separated by a gap compared to nodes without a gap.

Wilson et al. (2001) used annular stimuli (see Figure 3A) and probed a particular patch along the annulus. They showed that the time at which the suppressed stimulus in the test patch becomes dominant is a function of the distance (around the circumference of the annulus) between the test patch and the patch where a dominance switch was induced by transiently increasing the contrast of the suppressed stimulus (see Figure 3B). This dependence of switch time on distance suggested to Wilson et al. that stimulus dominance was propagating around the annulus. Wilson et al. made two additional observations about the nature

of traveling waves: (1) propagation is faster along a collinear (concentric) contour compared to an orthogonal (radial) contour, and (2) introducing a gap in the annulus substantially slows or halts propagation. Subsequently, using fMRI, Lee et al. (2005) showed that the propagation of this traveling wave can be observed in primary visual cortex.

To simulate such traveling waves within the percept of a stimulus, we constructed a 7 × 7 MRF with annular topology and measured the propagation time at different nodes along the annulus. We computed pairwise correlations between nodes and converted them to propagation times by exponentially transforming the negative correlation and then rescaling to match the empirical propagation times. Figure 3C shows the simulated propagation time between image nodes increasing as a function of distance around the annulus. Since the spatial coupling induces local dependencies in the MRF, nodes will be more likely to switch once their neighbors have switched, thus producing a domino effect around the ring. We can account for the faster propagation of traveling waves in the concentric-rings condition by increasing the strength of spatial coupling for such collinear stimuli (following the observed greater coupling of dominance in collinear patches; see Alais & Blake, 1999). Moreover, since a missing node disrupts the domino effect of spatial coupling, we can reproduce the disruptive effect of a gap on traveling-wave propagation (Wilson et al., 2001) by removing one of the nodes in the annulus and comparing propagation with and without the gap (see Figure 3D).

Figure 4: Piecemeal rivalry. (A) Exclusive visibility as a function of stimulus size for a single subject. Replotted from O'Shea et al. (1997). (B) Simulated fraction of exclusivity as a function of stimulus size.

Another factor influencing piecemeal rivalry is the size of the rivaling stimuli.
Several experiments have reported that exclusivity (the amount of time spent in a stable percept) decreases as a function of stimulus size (see Figure 4A; Blake, O'Shea, & Mueller, 1992; O'Shea, Sims, & Govan, 1997). We simulated this finding by manipulating the proportion of the retinal images

occupied by the stimuli. Figure 4B shows the results of this simulation, consistent with the experimental findings. The intuitive explanation for this phenomenon is that larger stimuli require more individual node switches to achieve stable percepts; thus, there will be longer periods in which the scene configuration occupies local minima of the energy landscape corresponding to piecemeal percepts.

4.3 Binocular Fusion.

In addition to the mode-hopping behavior that characterizes binocular rivalry, bistable percepts often produce other states. In some conditions, the two percepts are known to fuse rather than rival: the percept then becomes a composite or superposition of the two stimuli (Brascamp et al., 2006). This fused perceptual state can be induced most reliably by decreasing the distance in feature space between the two stimuli (Knapen, Kanai, Brascamp, van Boxtel, & van Ee, 2007) or decreasing the contrast of both stimuli (Liu, Tyler, & Schor, 1992; Burke, Alais, & Wenderoth, 1999). Fusion is documented in experiments where subjects are given three options to report their percept: one of the two global percepts or something in between.

We define such a fused percept as a perceptual state lying between the two bistable modes, that is, an interpretation between the two rivalrous, high-probability interpretations. On our account, manipulation of distance in feature space amounts to varying the distance between the two modes, and reduction of contrast corresponds to an increase in the variance around the modes. By moving the modes closer together or increasing the variance parameter ($\sigma_i^2$) in the model, greater probability mass is assigned to the intermediate interpretation: a fused percept. Thus, these manipulations shift the energy landscape so as to systematically increase the odds of fused percepts, matching the phenomenology of these stimuli (see Figure 5).

4.4 Contrast Dependence of Dominance Durations.
A classic observation in the psychophysics of binocular rivalry is that when the contrast in one eye (the variable-contrast eye) is increased, the dominance durations in the other (fixed-contrast) eye decrease, while the dominance durations in the variable-contrast eye stay relatively unchanged. This phenomenon is often referred to as Levelt's second proposition (Levelt, 1965) and has been a key target for many models of binocular rivalry (Dayan, 1998; Lankheet, 2006; Moreno-Bote et al., 2007; Wilson, 2007). A standard explanation of this phenomenon appeals to the interaction between two dynamic processes: short-timescale changes in mutual inhibition between monocular neural populations and long-timescale changes in adaptation. The neurons representing the currently dominant stimulus inhibit the neurons representing the currently suppressed stimulus, but adaptation of the inhibiting neurons causes this suppression to wane over time, allowing the previously

inhibited neurons to accumulate enough activation to cause a perceptual switch.

Figure 5: Fusion of binocular images. Simulated (A) and empirical (B) fraction of exclusive percepts as a function of feature-space distance between the binocular stimuli. Data replotted from Knapen et al. (2007). Simulated (C) and empirical (D) fraction of exclusive percepts as a function of stimulus contrast. Model contrast is measured by the inverse of the variance parameter, $\sigma_i^2$. Data replotted from Burke et al. (1999).

Brascamp et al. (2006) have observed that Levelt's second proposition is valid only when the stimulus presented to the fixed-contrast eye is relatively high contrast. Under low-contrast conditions, the proposition actually reverses (see Figure 6). Brascamp et al. (2006) suggested amending Levelt's second proposition to: "Changes to one of the two contrasts mainly affect dominance durations in the higher contrast eye."

Our model captures the contrast dependence of dominance durations under both high and low fixed-contrast conditions (see Figure 6). On our account, this effect arises from transitions between several inference regimes:

1. When contrast to both eyes is (equally) low, the data in both eyes are uncertain and easy to discount; therefore, the modes corresponding to one or the other eye being dominant are easy to escape, fused percepts are common (see section 4.3), and dominance durations in both eyes will be brief.

2. When contrast is low in one eye but high in the other, the low-contrast mode is easy to escape; moreover, the high-contrast eye is hard to discount as an outlier in favor of the low-contrast eye. Altogether, the higher-contrast eye enjoys long dominance durations.

3. When the contrast in both eyes is high, fused or mixed percepts are unlikely and do not last long (Hollins, 1980). Moreover, neither eye is easier to discount; therefore, the advantage that the relatively high-contrast eye had in regime 2 no longer exists, and dominance durations in both eyes will again be brief.

Figure 6: Effect of stimulus contrast on dominance durations. Simulated (A) and experimental (B) dominance durations as a function of stimulus contrast in the variable-contrast eye (x-axis), when the stimulus presented to the fixed-contrast eye is high contrast (the maximum of the contrast range presented to the variable-contrast eye). Simulated (C) and experimental (D) dominance durations as a function of stimulus contrast in the variable-contrast eye when the stimulus presented to the fixed-contrast eye is low contrast (the minimum of the contrast range presented to the variable-contrast eye). Data replotted from Brascamp et al. (2006).

Thus, when the fixed-contrast eye has high contrast and the contrast of the variable-contrast eye is increased to match it, the system transitions from regime 2 to regime 3 and consequently will increase the posterior probability that the inputs to the fixed-contrast eye are outliers. This effectively suppresses the fixed-contrast eye and reduces its dominance durations. When the fixed-contrast eye sees a low-contrast image and the contrast of the variable-contrast eye is increased to be higher, the system transitions

from regime 1 to regime 2, thereby increasing the dominance duration of the variable-contrast eye.

5 Discussion

Perceptual multistability is perhaps the most compelling evidence available that the brain attempts to approximately represent a posterior distribution, rather than just a point estimate, over the latent causes underlying sensory inputs. The theory developed in this article, though incomplete in many ways, is a proof of concept that sampling-based posterior approximations on simple but realistic graphical models can reproduce a variety of psychophysical phenomena, including the rich spatiotemporal dynamics observed in binocular rivalry experiments. Importantly, these psychophysical phenomena arise in our model from an approximately rational solution to the fundamental problem of inferring the correct image interpretation from noisy sensory inputs.

5.1 Related Work.

The idea that perceptual multistability can be construed in terms of sampling in a Bayesian model was first proposed by Sundareswara and Schrater (Schrater & Sundareswara, 2007; Sundareswara & Schrater, 2008), and our work follows theirs closely in several respects. However, we believe our work offers several additional theoretical insights. First, Sundareswara and Schrater's model requires sampling from the known full posterior; thus, it is not clear whether their model reflects an effective approximate inference algorithm when the posterior is not known. In contrast, we show that this behavior need not require sampling from the full posterior but emerges from a simple, common, and effective sampling method for approximate inference in graphical models, demonstrating that multistable perception can be understood as a consequence of the engineering principles responsible for recent progress in computer vision and artificial intelligence (Geman & Geman, 1984; Koller & Friedman, 2009).
Second, our model offers an explanation for the memory decay on the latest best sample postulated by Sundareswara and Schrater: this apparent persistence and decay of sampled states is a natural consequence of the principled coupling of adjacent eye and image nodes. These principles highlight a more general point of divergence between our model and other Bayesian models of multistable perception (Dayan, 1998; Hohwy, Roepstorff, & Friston, 2008). Other models postulate intrinsically stochastic neurophysiological processes such as noisy sensory detection, adaptation, inhibition, and so on, to account for the stochastic dynamics of multistability. In contrast, we show that these stochastic dynamics fall out naturally from a rational approach to approximate Bayesian inference in graphical models. To be clear, we do not mean to suggest that the details of stochastic neurophysiological processes are unimportant (see section 5.2), but our goal here is to show how the stochastic dynamics
of perceptual multistability could arise from rational algorithms for approximate probabilistic inference, independent of, and at a more abstract level of analysis than, any specific mechanism for implementing inference algorithms in the brain. Relating our proposals at the algorithmic level to the details of neural processing is an important avenue for future work.

Successful dynamic neural models of binocular rivalry have argued that dominance arises from inhibitory connections between monocular neurons that suppress one eye or the other, while switching arises from adaptation of those inhibitory neurons, reducing suppression of one eye and allowing it to suppress the formerly dominant eye in turn (Wilson, 2003; Noest, van Ee, Nijs, & Van Wezel, 2007). Such models are supported by the observation that suppression and dominance are complementary (Alais, Cass, O'Shea, & Blake, 2010): gradual loss of visual sensitivity in one eye is accompanied by gradual gain in the other eye. Our model suggests a rationale for such behavior: when the inferred latent image is inconsistent with the sensory signals in one eye, the visual system must explain away those sensory signals by ascribing them to a noise process. Thus, while an eye is suppressed, signals to that eye are interpreted as arising from noise, resulting in a loss of sensitivity and suppression of input to that eye. Moreover, this outlier process is coupled between eyes because inferring an outlier in both eyes is less parsimonious than simply explaining away the sensory input in one eye; thus, it is sensible for the two eyes to alternate and for the dominance of one to co-occur with the suppression of the other. Which outlier process is preferred depends on local propagation dynamics, and shifts from one state to another are enabled by stochastic changes in the surrounding nodes as well as in the latent image nodes.
Thus, adaptation-like dynamics can arise from simple network dynamics. Of course, these two explanations are not mutually exclusive, and adaptation might be a neural heuristic for approximating the appropriate propagation dynamics.

Another important explanatory principle employed by a number of recent models is a hierarchy of processing levels at which perceptual competition can occur (Wilson, 2003; Freeman, 2005). This principle is motivated by findings from psychophysical experiments showing that competition occurs both between eyes and between objects. For example, rapid swapping of flickering orthogonal monocular gratings between the eyes results in perceptual dominance durations that far exceed the swap period (Logothetis, Leopold, & Sheinberg, 1996), suggesting that rivalry operates at the level of perceptual objects rather than the eye of origin. However, in experiments where monocular stimuli are swapped after the subject reports exclusive dominance in one eye, the suppressed perceptual object immediately becomes dominant and the dominant pattern becomes suppressed (Blake, Westendorf, & Overton, 1980), suggesting that rivalry operates at the level of eyes rather than objects. These conflicting findings are somewhat resolved by brain imaging studies providing evidence for competition in both V1 and in extrastriate areas (see Tong, Meng, & Blake, 2006, for
a review): rivalry in the brain appears to occur at the level of both the monocular sensory input and the binocular perceptual object. Our model also captures this notion of hierarchical competition: rivalry at the ocular level arises from the spatial coupling of the eye-specific noise process MRFs, thus encouraging one or another eye to be dominant. In turn, rivalry at the image level arises from the coupling of the latent image MRFs, which captures binocular extrastriate representations and encourages single coherent images to be dominant. Because the sampling dynamics operate over MRFs at both levels, the multistable stochastic alternations are driven by changes at both the eye and the pattern level and reflect both ocular and pattern rivalry.

5.2 Neural Implementation

Many neurally plausible models have been proposed that emulate binocular rivalry (Laing & Chow, 2002; Freeman, 2005; Moreno-Bote et al., 2007; Noest et al., 2007; Wilson, 2007). We have set out to address a slightly different question that does not challenge the validity of these models: What does such a neural architecture accomplish? Our work in this article can be seen as a rational statistical analysis that gives insight into the neural dynamics of these models: why they work as they do. In this section, we bring the focus back to neural architecture, highlighting the similarity between our model and previously proposed architectures. Explicit links can be established between the rational dynamics of inference in an MRF and the dynamics in classic neural network models, including models of rivalry.

There is an intimate connection between the MRF model presented here and classical neural networks in computational neuroscience, such as the Hopfield net (Hopfield, 1982) and the Boltzmann machine (Ackley, Hinton, & Sejnowski, 1985). In fact, both the Hopfield net and the Boltzmann machine are examples of MRFs.
The latent scene nodes in our model can be thought of as neuron-like units whose firing signals the presence of a particular image feature and whose synaptic input is the negative energy potential contributed by local neurons and sensory inputs (the retinal image nodes). This local connectivity structure is characteristic of primary visual cortex (Das & Gilbert, 1999). Another interesting aspect of this model is that the negative energy potential integrated by each neuron is a logarithmic function of probability; such logarithmic coding has been widely implicated in cortical systems (Gold & Shadlen, 2002). Furthermore, the stochastic nature of neural firing makes it well suited for implementing sample-based approximations (Hoyer & Hyvärinen, 2003). Gibbs sampling operating over the MRF gives rise to what is known as Glauber dynamics (Glauber, 1963), a well-studied phenomenon in the neural networks literature (Amit, 1989). The hallmark of Glauber dynamics (as applied here) is spatiotemporal autocorrelation in neural responses (e.g., traveling waves) as the network diffuses stochastically toward low-energy stable states. Importantly, once a stable state is reached, the stochastic nature of the dynamics ensures that eventually the network will leave that state
and temporarily settle in a new one, thereby anticipating the dynamics of perceptual multistability.

To make these connections more explicit and show the link to neurally based models of rivalry, consider the simplified setting where s_n \in \{x^L_n, x^R_n\}, which is equivalent to assuming that w_n \in \{0, 1\} and \pi_n = 1 for all n. That is, w_n = 1 if the left eye is dominant at pixel n, and 0 otherwise. The Gibbs sampling updates presented in the appendix can be computed by a stochastic network of neurons. Let y_n denote a binary neuron whose firing (y_n = 1) signals that the left eye is dominant in the latent image at pixel n. This neuron receives net synaptic input a_n and emits a spike with probability

P(y_n = 1) = \frac{1}{1 + e^{\tau a_n}},    (5.1)

where

a_n = (b_n - x^L_n)^2 - (b_n - x^R_n)^2 - \beta \sum_{j \in C_n} (2w_j - 1).    (5.2)

The synaptic inputs consist of lateral connections from neighboring binocular neurons (w_j) and monocular retinal inputs (x^L_n and x^R_n). To a first approximation, the net synaptic input reflects competition between eyes as well as between stimulus patterns (as in Wilson, 2003). In particular, the contrastive terms in the activation function can be interpreted as mutually inhibitory connections. Another property embodied by these equations is the stochastic nature of spike generation, which enables transitions between attractor states. The role of noise-induced transitions in multistability has been emphasized by several recent models (Moreno-Bote et al., 2007; van Ee, 2009). Our rational analysis elucidates the role that such transitions might play in allowing the visual system to sample statistically optimal percepts in a computationally tractable fashion. It is straightforward to verify that sampling from equation 5.1 is equivalent to sampling from equation A.5 in the appendix.
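The stochastic network defined by equations 5.1 and 5.2 is straightforward to simulate. The sketch below is a minimal illustration, assuming a one-dimensional pixel lattice with two-neighbor coupling (a stand-in for the 2D neighborhood C_n) and illustrative input values; it is not the simulation code used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(w, xL, xR, b, beta=10.0, tau=1/250):
    """One asynchronous Gibbs sweep over the dominance field w.

    w[n] = 1 means the left eye is dominant at pixel n. Each update
    computes the net synaptic input a_n (eq. 5.2) and spikes with
    probability 1 / (1 + exp(tau * a_n)) (eq. 5.1).
    """
    N = len(w)
    for n in rng.permutation(N):
        # lateral input from the two lattice neighbours of pixel n
        neighbours = [j for j in (n - 1, n + 1) if 0 <= j < N]
        lateral = sum(2 * w[j] - 1 for j in neighbours)
        a = (b[n] - xL[n])**2 - (b[n] - xR[n])**2 - beta * lateral
        w[n] = int(rng.random() < 1.0 / (1.0 + np.exp(tau * a)))
    return w
```

When the left-eye input matches the bias b much better than the right-eye input does, repeated sweeps drive w toward all ones (left-eye dominance); with matched inputs, the update is nearly unbiased and the field drifts stochastically, which is what permits alternation.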
Thus, perceptual inference in our model can be mapped approximately onto a linear-nonlinear Poisson cascade (Schwartz, Pillow, Rust, & Simoncelli, 2006): (1) a linear integration of synaptic inputs, (2) a nonlinear transformation of the net input into a scalar firing rate, and (3) stochastic spike generation according to a Poisson process, approximated in discrete time by a Bernoulli process. The linear-nonlinear Poisson cascade is a canonical building block of neural models, particularly in early vision (Carandini et al., 2005); our model provides a functional interpretation of the cascade in terms of rational perceptual inference.
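A minimal sketch of the three-stage cascade follows; the sigmoid rate function, time step, and maximum rate are illustrative assumptions, not quantities taken from the model.

```python
import numpy as np

rng = np.random.default_rng(1)

def lnp_neuron(stimulus, weights, dt=0.001, max_rate=100.0):
    """Linear-nonlinear-Poisson cascade in three stages."""
    drive = stimulus @ weights                 # (1) linear integration of inputs
    rate = max_rate / (1.0 + np.exp(-drive))   # (2) nonlinear map to a firing rate (Hz)
    p_spike = np.clip(rate * dt, 0.0, 1.0)     # per-bin spike probability
    return rng.random(len(p_spike)) < p_spike  # (3) Bernoulli approximation to Poisson spiking
```

For a strongly driven neuron (rate near max_rate), roughly a fraction max_rate * dt of the time bins contain a spike, recovering Poisson statistics in the small-dt limit.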

5.3 Limitations and Extensions

MRFs are just one of many kinds of probabilistic graphical models that have been successfully applied in recent computer vision and AI research. They are well suited to lower-level perceptual tasks but not to more causally structured, higher-level problems of scene understanding; there, directed models such as Bayesian networks are more appropriate (Yuille & Kersten, 2006). Likewise, multistability occurs in other perceptual contexts besides binocular rivalry, such as depth reversals in the perception of three-dimensional objects (Blake & Logothetis, 2002). It would be interesting to see whether phenomena of multistability occurring in higher-level perceptual contexts can also be explained in terms of MCMC inference on appropriately structured graphical models.

While our account emphasizes the algorithmic dynamics of sampling, an alternative perspective, explored by Bialek and DeWeese (1995), is that multistability arises from the dynamics of the sensory inputs themselves. This perspective holds that multistability is a normative adaptation to a nonstationary world. Although intuitively appealing given that natural sensory inputs really are nonstationary, a limitation of this perspective is that it does not apply as congenially to situations where ambiguity is due to static causes, for example, occlusion of one eye.

A more fundamental weakness of our model pertains to its representation of uncertainty. MCMC methods represent uncertainty through the ensemble of samples generated over time, yet we have portrayed the perceptual system as essentially keeping track of only the most recent sample. One way to rehabilitate the normativity of the model's output would be to posit a downstream area (e.g., in extrastriate or inferior temporal cortex) that compiles the samples or estimates statistics in an online fashion. We leave development of such a theory to future work.
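One simple form such an online readout could take is an exponentially weighted average of the incoming samples, which tracks a posterior marginal while gradually forgetting stale samples; the decay constant here is an illustrative assumption.

```python
def running_marginal(samples, decay=0.99):
    """Online estimate of a posterior marginal from a stream of binary
    MCMC samples, weighting recent samples more heavily."""
    estimate = 0.5  # uninformative starting point
    for s in samples:
        estimate = decay * estimate + (1 - decay) * s
    return estimate
```

A sample stream that dwells mostly in one mode pushes the estimate toward that mode, while alternating dwell periods produce an estimate that lags the current percept; crucially, this is a statistic a downstream area could maintain without storing the full sample history.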
Although we have focused on perceptual multistability, other cognitive processes may also be amenable to a similar analysis. For example, it has been suggested that the acquisition of logical concepts could arise from MCMC-based diffusion through a space of grammars (Ullman, Goodman, & Tenenbaum, 2010). Anchoring and adjustment effects, in which people seem to incrementally adjust their numerical estimates away from an anchor (Epley & Gilovich, 2006), could arise from a similar process. Generally, MCMC methods make inductive inference practical in many different kinds of hypothesis spaces, and their characteristic dynamics lead to testable predictions.

In conclusion, we have argued that MCMC algorithms provide a psychologically and biologically plausible explanation for perceptual multistability where traditional point estimation schemes fail. It is worth emphasizing that purely computational-level analyses will not suffice to explain this phenomenon; the explanatory power of our model derives from its algorithmic-level analysis. Although incomplete in many ways, our model points toward the fruitfulness of bringing algorithmic ideas into rational statistical theories of cognition.

Appendix: Model Details

A.1 Image Model

We formulate the posterior distribution over image and outlier variables using a Gibbs distribution:

P(s, \pi \mid x) \propto \exp\left\{ -\tau \sum_{n=1}^{N} \left[ E^s_n + \sum_i \left( E^{x_i}_n + E^{\pi_i}_n \right) \right] \right\},    (A.1)

where i \in \{L, R\} indexes the eyes, \tau is a global inverse temperature parameter (which we set to 1/250), and the energy functions are defined as follows:

E^{x_i}_n = \pi^i_n (x^i_n - s_n)^2 / (2\sigma_i^2),    (A.2)

E^s_n = (b_n - s_n)^2 + \beta \sum_{j \in C_n} (w_n - w_j)^2,    (A.3)

E^{\pi_i}_n = \gamma \sum_{j \in C_n} (\pi^i_n - \pi^i_j)^2 + \alpha \pi^i_n,    (A.4)

where s_n = w_n x^L_n + (1 - w_n) x^R_n and C_n denotes the subset of pixel locations neighboring n. The parameters \beta and \gamma control the smoothness of the image and outlier MRFs, respectively; we set both of these to 10 in our simulations. The parameter \alpha is a sparsity parameter that penalizes outliers; we set this parameter to 500. The parameter b_n is a bias term for the nth pixel, encoding prior knowledge about the latent surface at each location, which is set to 0 except where noted otherwise. The variance parameter \sigma_i^2 was set to a fixed value except where noted otherwise.

A.2 Conditional Distributions

In the Gibbs sampler, nodes are updated asynchronously by drawing their values from the following conditional distributions:

P(w_n \mid x, \pi, w_{/n}) \propto \exp\left\{ -\tau E^s_n - \tau \sum_i E^{x_i}_n \right\},    (A.5)

P(\pi^i_n \mid x, \pi^i_{/n}, w_n) \propto \exp\left\{ -\tau E^{x_i}_n - \tau E^{\pi_i}_n \right\},    (A.6)

where w_{/n} denotes all the w nodes except the nth (and likewise for \pi). Note that because s_n is a deterministic function of w_n and x_n, sampling from P(w_n \mid x, \pi, w_{/n}) is equivalent to sampling from P(s_n \mid x, \pi, s_{/n}).
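To make the structure of the Gibbs distribution concrete, the unnormalized log of equation A.1 can be written out term by term. The sketch below assumes a one-dimensional lattice (so each pixel has at most two neighbors, rather than the full 2D neighborhood C_n) and an illustrative value for the variance \sigma^2.

```python
import numpy as np

def log_posterior(w, pi_L, pi_R, xL, xR, b, tau=1/250, beta=10.0,
                  gamma=10.0, alpha=500.0, sigma2=100.0):
    """Unnormalised log of the Gibbs distribution (eq. A.1) on a 1D lattice."""
    s = w * xL + (1 - w) * xR                        # latent image: s_n = w_n x^L_n + (1 - w_n) x^R_n
    E = np.sum((b - s)**2)                           # E^s: bias toward prior knowledge b_n
    E += beta * np.sum((w[:-1] - w[1:])**2)          # E^s: smoothness of the dominance field
    for pi, x in ((pi_L, xL), (pi_R, xR)):           # sum over eyes i in {L, R}
        E += np.sum(pi * (x - s)**2 / (2 * sigma2))  # E^x: outlier-weighted likelihood
        E += gamma * np.sum((pi[:-1] - pi[1:])**2)   # E^pi: smoothness of the outlier field
        E += alpha * np.sum(pi)                      # E^pi: sparsity penalty
    return -tau * E
```

Under the smoothness terms, a spatially coherent dominance field scores higher than a patchy one, which is the pressure that produces coherent percepts and traveling waves under Gibbs sampling.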

Figure 7: Sensitivity to sparsity level and outlier smoothness. Simulated traveling waves (cf. Figure 3) at different settings of \alpha (top) and \gamma (bottom).

A.3 Parameter Selection and Robustness

The parameters of our model were chosen heuristically so as to approximately match the statistics of the empirically observed dominance duration distribution (see Figure 2). These parameters were then used in all other simulations, except where explicitly manipulated. In principle, one could use stochastic approximation techniques (Swersky, Chen, Marlin, & de Freitas, 2010) to optimize the parameters, but the computation time required to obtain a sufficiently large set of samples makes that approach prohibitively expensive. To illustrate robustness to variations in parameter settings, we recalculated the predicted traveling waves under different values of \alpha (the sparsity parameter) and \gamma (the outlier smoothness parameter).[1] Figure 7 shows the results of these simulations. While the quantitative speed of propagation changes slightly as a function of \alpha and \gamma, the qualitative pattern shown in Figure 3 remains unchanged; the only exception is for \gamma = 100, where the difference between concentric and radial propagation times reverses (this is a somewhat degenerate case where the coupling of the outlier process is an order of magnitude greater than the cost of inducing an outlier, \alpha). Thus, model predictions appear to be relatively insensitive to variations in the sparsity level and outlier smoothness.

[1] Note that we have already explicitly manipulated \beta in explaining the difference between radial and concentric propagation times.

Acknowledgments

We thank Noah Goodman for helpful discussions, as well as Peter Dayan and Christopher Summerfield for comments on the manuscript. S.J.G. was supported by a Quantitative Computational Neuroscience fellowship from the National Institutes of Health. E.V. was supported by ONR MURI: Complex Learning and Skill Transfer with Video Games N (PI: Daphne Bavelier), an NDSEG fellowship, and an NSF DRMS Dissertation grant.

References

Ackley, D., Hinton, G., & Sejnowski, T. (1985). A learning algorithm for Boltzmann machines. Cognitive Science, 9(1).
Alais, D., & Blake, R. (1999). Grouping visual features during binocular rivalry. Vision Research, 39(26).
Alais, D., Cass, J., O'Shea, R., & Blake, R. (2010). Visual sensitivity underlying changes in visual consciousness. Current Biology, 20.
Amit, D. (1989). Modeling brain function: The world of attractor neural networks. Cambridge: Cambridge University Press.
Bialek, W., & DeWeese, M. (1995). Random switching and optimal processing in the perception of ambiguous signals. Physical Review Letters, 74(15).
Blake, R. (2001). A primer on binocular rivalry, including current controversies. Brain and Mind, 2(1).
Blake, R., & Logothetis, N. (2002). Visual competition. Nature Reviews Neuroscience, 3(1).
Blake, R., O'Shea, R. P., & Mueller, T. J. (1992). Spatial zones of binocular rivalry in central and peripheral vision. Visual Neuroscience, 8.
Blake, R., Westendorf, D., & Overton, R. (1980). What is suppressed during binocular rivalry? Perception, 9(2).
Brainard, D., & Freeman, W. (1997). Bayesian color constancy. Journal of the Optical Society of America A, 14(7).
Brascamp, J., van Ee, R., Noest, A., Jacobs, R., & van den Berg, A. (2006). The time course of binocular rivalry reveals a fundamental role of noise. Journal of Vision, 6(11), 8.
Burke, D., Alais, D., & Wenderoth, P. (1999). Determinants of fusion of dichoptically presented orthogonal gratings. Perception, 28.
Carandini, M., Demb, J., Mante, V., Tolhurst, D., Dan, Y., Olshausen, B., et al. (2005). Do we know what the early visual system does? Journal of Neuroscience, 25(46).
Das, A., & Gilbert, C. (1999). Topography of contextual modulations mediated by short-range interactions in primary visual cortex. Nature, 399(6737).
Daw, N., & Courville, A. (2007). The pigeon as particle filter. In J. C. Platt, D. Koller, Y. Singer, & S. Roweis (Eds.), Advances in neural information processing systems, 20. Cambridge, MA: MIT Press.


Alternation rate in perceptual bistability is maximal at and symmetric around equi<dominance Journal of Vision (2010) 10(11):1, 1 18 http://www.journalofvision.org/content/10/11/1 1 Alternation rate in perceptual bistability is maximal at and symmetric around equi

More information

Utility Maximization and Bounds on Human Information Processing

Utility Maximization and Bounds on Human Information Processing Topics in Cognitive Science (2014) 1 6 Copyright 2014 Cognitive Science Society, Inc. All rights reserved. ISSN:1756-8757 print / 1756-8765 online DOI: 10.1111/tops.12089 Utility Maximization and Bounds

More information

Signal Detection Theory and Bayesian Modeling

Signal Detection Theory and Bayesian Modeling Signal Detection Theory and Bayesian Modeling COGS 202: Computational Modeling of Cognition Omar Shanta, Shuai Tang, Gautam Reddy, Reina Mizrahi, Mehul Shah Detection Theory and Psychophysics: A Review

More information

Vision Research 51 (2011) Contents lists available at ScienceDirect. Vision Research. journal homepage:

Vision Research 51 (2011) Contents lists available at ScienceDirect. Vision Research. journal homepage: Vision Research 51 (2011) 521 528 Contents lists available at ScienceDirect Vision Research journal homepage: www.elsevier.com/locate/visres Dynamical systems modeling of Continuous Flash Suppression Daisuke

More information

Rolls,E.T. (2016) Cerebral Cortex: Principles of Operation. Oxford University Press.

Rolls,E.T. (2016) Cerebral Cortex: Principles of Operation. Oxford University Press. Digital Signal Processing and the Brain Is the brain a digital signal processor? Digital vs continuous signals Digital signals involve streams of binary encoded numbers The brain uses digital, all or none,

More information

Plasticity of Cerebral Cortex in Development

Plasticity of Cerebral Cortex in Development Plasticity of Cerebral Cortex in Development Jessica R. Newton and Mriganka Sur Department of Brain & Cognitive Sciences Picower Center for Learning & Memory Massachusetts Institute of Technology Cambridge,

More information

Burn-in, bias, and the rationality of anchoring

Burn-in, bias, and the rationality of anchoring Burn-in, bias, and the rationality of anchoring Falk Lieder Translational Neuromodeling Unit ETH and University of Zurich lieder@biomed.ee.ethz.ch Thomas L. Griffiths Psychology Department University of

More information

Ch.20 Dynamic Cue Combination in Distributional Population Code Networks. Ka Yeon Kim Biopsychology

Ch.20 Dynamic Cue Combination in Distributional Population Code Networks. Ka Yeon Kim Biopsychology Ch.20 Dynamic Cue Combination in Distributional Population Code Networks Ka Yeon Kim Biopsychology Applying the coding scheme to dynamic cue combination (Experiment, Kording&Wolpert,2004) Dynamic sensorymotor

More information

Neural Variability and Sampling-Based Probabilistic Representations in the Visual Cortex

Neural Variability and Sampling-Based Probabilistic Representations in the Visual Cortex Article Neural Variability and Sampling-Based Probabilistic Representations in the Visual Cortex Highlights d Stochastic sampling links perceptual uncertainty to neural response variability Authors Gerg}o

More information

Descending Marr s levels: Standard observers are no panacea. to appear in Behavioral & Brain Sciences

Descending Marr s levels: Standard observers are no panacea. to appear in Behavioral & Brain Sciences Descending Marr s levels: Standard observers are no panacea Commentary on D. Rahnev & R.N. Denison, Suboptimality in Perceptual Decision Making, to appear in Behavioral & Brain Sciences Carlos Zednik carlos.zednik@ovgu.de

More information

Hebbian Plasticity for Improving Perceptual Decisions

Hebbian Plasticity for Improving Perceptual Decisions Hebbian Plasticity for Improving Perceptual Decisions Tsung-Ren Huang Department of Psychology, National Taiwan University trhuang@ntu.edu.tw Abstract Shibata et al. reported that humans could learn to

More information

Lecturer: Rob van der Willigen 11/9/08

Lecturer: Rob van der Willigen 11/9/08 Auditory Perception - Detection versus Discrimination - Localization versus Discrimination - - Electrophysiological Measurements Psychophysical Measurements Three Approaches to Researching Audition physiology

More information

How does Bayesian reverse-engineering work?

How does Bayesian reverse-engineering work? How does Bayesian reverse-engineering work? Carlos Zednik (czednik@uos.de) and Frank Jäkel (fjaekel@uos.de) Institute of Cognitive Science, University of Osnabrück 49069 Osnabrück, Germany Abstract Bayesian

More information

Bayesian models of inductive generalization

Bayesian models of inductive generalization Bayesian models of inductive generalization Neville E. Sanjana & Joshua B. Tenenbaum Department of Brain and Cognitive Sciences Massachusetts Institute of Technology Cambridge, MA 239 nsanjana, jbt @mit.edu

More information

using deep learning models to understand visual cortex

using deep learning models to understand visual cortex using deep learning models to understand visual cortex 11-785 Introduction to Deep Learning Fall 2017 Michael Tarr Department of Psychology Center for the Neural Basis of Cognition this lecture A bit out

More information

Lecturer: Rob van der Willigen 11/9/08

Lecturer: Rob van der Willigen 11/9/08 Auditory Perception - Detection versus Discrimination - Localization versus Discrimination - Electrophysiological Measurements - Psychophysical Measurements 1 Three Approaches to Researching Audition physiology

More information

Sources of uncertainty in intuitive physics

Sources of uncertainty in intuitive physics Sources of uncertainty in intuitive physics Kevin A Smith (k2smith@ucsd.edu) and Edward Vul (evul@ucsd.edu) University of California, San Diego Department of Psychology, 9500 Gilman Dr. La Jolla, CA 92093

More information

Model Uncertainty in Classical Conditioning

Model Uncertainty in Classical Conditioning Model Uncertainty in Classical Conditioning A. C. Courville* 1,3, N. D. Daw 2,3, G. J. Gordon 4, and D. S. Touretzky 2,3 1 Robotics Institute, 2 Computer Science Department, 3 Center for the Neural Basis

More information

A HMM-based Pre-training Approach for Sequential Data

A HMM-based Pre-training Approach for Sequential Data A HMM-based Pre-training Approach for Sequential Data Luca Pasa 1, Alberto Testolin 2, Alessandro Sperduti 1 1- Department of Mathematics 2- Department of Developmental Psychology and Socialisation University

More information

An Hierarchical Model of Visual Rivalry

An Hierarchical Model of Visual Rivalry An Hierarchical Model of Visual Rivalry Peter Dayan Department of Brain and Cognitive Sciences E25-21O Massachusetts Institute of Technology Cambridge, MA 02139 dayan@psyche.mit.edu1 Abstract Binocular

More information

A Brief Introduction to Bayesian Statistics

A Brief Introduction to Bayesian Statistics A Brief Introduction to Statistics David Kaplan Department of Educational Psychology Methods for Social Policy Research and, Washington, DC 2017 1 / 37 The Reverend Thomas Bayes, 1701 1761 2 / 37 Pierre-Simon

More information

How has Computational Neuroscience been useful? Virginia R. de Sa Department of Cognitive Science UCSD

How has Computational Neuroscience been useful? Virginia R. de Sa Department of Cognitive Science UCSD How has Computational Neuroscience been useful? 1 Virginia R. de Sa Department of Cognitive Science UCSD What is considered Computational Neuroscience? 2 What is considered Computational Neuroscience?

More information

Learning in neural networks

Learning in neural networks http://ccnl.psy.unipd.it Learning in neural networks Marco Zorzi University of Padova M. Zorzi - European Diploma in Cognitive and Brain Sciences, Cognitive modeling", HWK 19-24/3/2006 1 Connectionist

More information

lateral organization: maps

lateral organization: maps lateral organization Lateral organization & computation cont d Why the organization? The level of abstraction? Keep similar features together for feedforward integration. Lateral computations to group

More information

Commentary on Behavioral and Brain Sciences target article: Jones & Love, Bayesian Fundamentalism or Enlightenment?

Commentary on Behavioral and Brain Sciences target article: Jones & Love, Bayesian Fundamentalism or Enlightenment? Commentary on Behavioral and Brain Sciences target article: Jones & Love, Bayesian Fundamentalism or Enlightenment? Relating Bayes to cognitive mechanisms Mitchell Herschbach & William Bechtel Department

More information

Perception is an unconscious inference (1). Sensory stimuli are

Perception is an unconscious inference (1). Sensory stimuli are Theory of cortical function David J. Heeger a,b, INAUGURAL ARTICLE a Department of Psychology, New York University, New York, NY 3; and b Center for Neural Science, New York University, New York, NY 3

More information

An Ideal Observer Model of Visual Short-Term Memory Predicts Human Capacity Precision Tradeoffs

An Ideal Observer Model of Visual Short-Term Memory Predicts Human Capacity Precision Tradeoffs An Ideal Observer Model of Visual Short-Term Memory Predicts Human Capacity Precision Tradeoffs Chris R. Sims Robert A. Jacobs David C. Knill (csims@cvs.rochester.edu) (robbie@bcs.rochester.edu) (knill@cvs.rochester.edu)

More information

Just One View: Invariances in Inferotemporal Cell Tuning

Just One View: Invariances in Inferotemporal Cell Tuning Just One View: Invariances in Inferotemporal Cell Tuning Maximilian Riesenhuber Tomaso Poggio Center for Biological and Computational Learning and Department of Brain and Cognitive Sciences Massachusetts

More information

Computational Cognitive Neuroscience

Computational Cognitive Neuroscience Computational Cognitive Neuroscience Computational Cognitive Neuroscience Computational Cognitive Neuroscience *Computer vision, *Pattern recognition, *Classification, *Picking the relevant information

More information

A Model of Dopamine and Uncertainty Using Temporal Difference

A Model of Dopamine and Uncertainty Using Temporal Difference A Model of Dopamine and Uncertainty Using Temporal Difference Angela J. Thurnham* (a.j.thurnham@herts.ac.uk), D. John Done** (d.j.done@herts.ac.uk), Neil Davey* (n.davey@herts.ac.uk), ay J. Frank* (r.j.frank@herts.ac.uk)

More information

Individual Differences in Attention During Category Learning

Individual Differences in Attention During Category Learning Individual Differences in Attention During Category Learning Michael D. Lee (mdlee@uci.edu) Department of Cognitive Sciences, 35 Social Sciences Plaza A University of California, Irvine, CA 92697-5 USA

More information

Disruption of implicit perceptual memory by intervening neutral stimuli

Disruption of implicit perceptual memory by intervening neutral stimuli Vision Research 47 (2007) 2675 2683 www.elsevier.com/locate/visres Disruption of implicit perceptual memory by intervening neutral stimuli Ryota Kanai a,b, *, Tomas H.J. Knapen c, Raymond van Ee c, Frans

More information

Morton-Style Factorial Coding of Color in Primary Visual Cortex

Morton-Style Factorial Coding of Color in Primary Visual Cortex Morton-Style Factorial Coding of Color in Primary Visual Cortex Javier R. Movellan Institute for Neural Computation University of California San Diego La Jolla, CA 92093-0515 movellan@inc.ucsd.edu Thomas

More information

A Vision-based Affective Computing System. Jieyu Zhao Ningbo University, China

A Vision-based Affective Computing System. Jieyu Zhao Ningbo University, China A Vision-based Affective Computing System Jieyu Zhao Ningbo University, China Outline Affective Computing A Dynamic 3D Morphable Model Facial Expression Recognition Probabilistic Graphical Models Some

More information

PSY380: VISION SCIENCE

PSY380: VISION SCIENCE PSY380: VISION SCIENCE 1) Questions: - Who are you and why are you here? (Why vision?) - What is visual perception? - What is the function of visual perception? 2) The syllabus & instructor 3) Lecture

More information

Top-Down Control of Visual Attention: A Rational Account

Top-Down Control of Visual Attention: A Rational Account Top-Down Control of Visual Attention: A Rational Account Michael C. Mozer Michael Shettel Shaun Vecera Dept. of Comp. Science & Dept. of Comp. Science & Dept. of Psychology Institute of Cog. Science Institute

More information

Asymmetries in ecological and sensorimotor laws: towards a theory of subjective experience. James J. Clark

Asymmetries in ecological and sensorimotor laws: towards a theory of subjective experience. James J. Clark Asymmetries in ecological and sensorimotor laws: towards a theory of subjective experience James J. Clark Centre for Intelligent Machines McGill University This talk will motivate an ecological approach

More information

Cognitive Neuroscience History of Neural Networks in Artificial Intelligence The concept of neural network in artificial intelligence

Cognitive Neuroscience History of Neural Networks in Artificial Intelligence The concept of neural network in artificial intelligence Cognitive Neuroscience History of Neural Networks in Artificial Intelligence The concept of neural network in artificial intelligence To understand the network paradigm also requires examining the history

More information

Modeling Individual and Group Behavior in Complex Environments. Modeling Individual and Group Behavior in Complex Environments

Modeling Individual and Group Behavior in Complex Environments. Modeling Individual and Group Behavior in Complex Environments Modeling Individual and Group Behavior in Complex Environments Dr. R. Andrew Goodwin Environmental Laboratory Professor James J. Anderson Abran Steele-Feldman University of Washington Status: AT-14 Continuing

More information

Nonlinear processing in LGN neurons

Nonlinear processing in LGN neurons Nonlinear processing in LGN neurons Vincent Bonin *, Valerio Mante and Matteo Carandini Smith-Kettlewell Eye Research Institute 2318 Fillmore Street San Francisco, CA 94115, USA Institute of Neuroinformatics

More information

Object recognition and hierarchical computation

Object recognition and hierarchical computation Object recognition and hierarchical computation Challenges in object recognition. Fukushima s Neocognitron View-based representations of objects Poggio s HMAX Forward and Feedback in visual hierarchy Hierarchical

More information

Dynamics of Color Category Formation and Boundaries

Dynamics of Color Category Formation and Boundaries Dynamics of Color Category Formation and Boundaries Stephanie Huette* Department of Psychology, University of Memphis, Memphis, TN Definition Dynamics of color boundaries is broadly the area that characterizes

More information

Learning and Adaptive Behavior, Part II

Learning and Adaptive Behavior, Part II Learning and Adaptive Behavior, Part II April 12, 2007 The man who sets out to carry a cat by its tail learns something that will always be useful and which will never grow dim or doubtful. -- Mark Twain

More information

OPTO 5320 VISION SCIENCE I

OPTO 5320 VISION SCIENCE I OPTO 5320 VISION SCIENCE I Monocular Sensory Processes of Vision: Color Vision Mechanisms of Color Processing . Neural Mechanisms of Color Processing A. Parallel processing - M- & P- pathways B. Second

More information

A Race Model of Perceptual Forced Choice Reaction Time

A Race Model of Perceptual Forced Choice Reaction Time A Race Model of Perceptual Forced Choice Reaction Time David E. Huber (dhuber@psyc.umd.edu) Department of Psychology, 1147 Biology/Psychology Building College Park, MD 2742 USA Denis Cousineau (Denis.Cousineau@UMontreal.CA)

More information

Bayesian Models for Combining Data Across Subjects and Studies in Predictive fmri Data Analysis

Bayesian Models for Combining Data Across Subjects and Studies in Predictive fmri Data Analysis Bayesian Models for Combining Data Across Subjects and Studies in Predictive fmri Data Analysis Thesis Proposal Indrayana Rustandi April 3, 2007 Outline Motivation and Thesis Preliminary results: Hierarchical

More information

DeSTIN: A Scalable Deep Learning Architecture with Application to High-Dimensional Robust Pattern Recognition

DeSTIN: A Scalable Deep Learning Architecture with Application to High-Dimensional Robust Pattern Recognition DeSTIN: A Scalable Deep Learning Architecture with Application to High-Dimensional Robust Pattern Recognition Itamar Arel and Derek Rose and Robert Coop Department of Electrical Engineering and Computer

More information

CISC453 Winter Probabilistic Reasoning Part B: AIMA3e Ch

CISC453 Winter Probabilistic Reasoning Part B: AIMA3e Ch CISC453 Winter 2010 Probabilistic Reasoning Part B: AIMA3e Ch 14.5-14.8 Overview 2 a roundup of approaches from AIMA3e 14.5-14.8 14.5 a survey of approximate methods alternatives to the direct computing

More information

Memory, Attention, and Decision-Making

Memory, Attention, and Decision-Making Memory, Attention, and Decision-Making A Unifying Computational Neuroscience Approach Edmund T. Rolls University of Oxford Department of Experimental Psychology Oxford England OXFORD UNIVERSITY PRESS Contents

More information

What is mid level vision? Mid Level Vision. What is mid level vision? Lightness perception as revealed by lightness illusions

What is mid level vision? Mid Level Vision. What is mid level vision? Lightness perception as revealed by lightness illusions What is mid level vision? Mid Level Vision March 18, 2004 Josh McDermott Perception involves inferring the structure of the world from measurements of energy generated by the world (in vision, this is

More information

The Spatial Origin of a Perceptual Transition in Binocular Rivalry

The Spatial Origin of a Perceptual Transition in Binocular Rivalry The Spatial Origin of a Perceptual Transition in Binocular Rivalry Chris L. E. Paffen*, Marnix Naber, Frans A. J. Verstraten Helmholtz Institute and Experimental Psychology, Utrecht University, Utrecht,

More information

Distinct Contributions of the Magnocellular and Parvocellular Visual Streams to Perceptual Selection

Distinct Contributions of the Magnocellular and Parvocellular Visual Streams to Perceptual Selection Distinct Contributions of the Magnocellular and Parvocellular Visual Streams to Perceptual Selection Rachel N. Denison and Michael A. Silver Abstract During binocular rivalry, conflicting images presented

More information

Supplementary materials for: Executive control processes underlying multi- item working memory

Supplementary materials for: Executive control processes underlying multi- item working memory Supplementary materials for: Executive control processes underlying multi- item working memory Antonio H. Lara & Jonathan D. Wallis Supplementary Figure 1 Supplementary Figure 1. Behavioral measures of

More information

Natural Scene Statistics and Perception. W.S. Geisler

Natural Scene Statistics and Perception. W.S. Geisler Natural Scene Statistics and Perception W.S. Geisler Some Important Visual Tasks Identification of objects and materials Navigation through the environment Estimation of motion trajectories and speeds

More information

Adventures into terra incognita

Adventures into terra incognita BEWARE: These are preliminary notes. In the future, they will become part of a textbook on Visual Object Recognition. Chapter VI. Adventures into terra incognita In primary visual cortex there are neurons

More information

5/20/2014. Leaving Andy Clark's safe shores: Scaling Predictive Processing to higher cognition. ANC sysmposium 19 May 2014

5/20/2014. Leaving Andy Clark's safe shores: Scaling Predictive Processing to higher cognition. ANC sysmposium 19 May 2014 ANC sysmposium 19 May 2014 Lorentz workshop on HPI Leaving Andy Clark's safe shores: Scaling Predictive Processing to higher cognition Johan Kwisthout, Maria Otworowska, Harold Bekkering, Iris van Rooij

More information

Stochastic variations in sensory awareness are driven by noisy neuronal adaptation: evidence from serial correlations in perceptual bistability

Stochastic variations in sensory awareness are driven by noisy neuronal adaptation: evidence from serial correlations in perceptual bistability 2612 J. Opt. Soc. Am. A/ Vol. 26, No. 12/ December 2009 Raymond van Ee Stochastic variations in sensory awareness are driven by noisy neuronal adaptation: evidence from serial correlations in perceptual

More information

Supplemental Information: Adaptation can explain evidence for encoding of probabilistic. information in macaque inferior temporal cortex

Supplemental Information: Adaptation can explain evidence for encoding of probabilistic. information in macaque inferior temporal cortex Supplemental Information: Adaptation can explain evidence for encoding of probabilistic information in macaque inferior temporal cortex Kasper Vinken and Rufin Vogels Supplemental Experimental Procedures

More information

Physical predictions over time

Physical predictions over time Physical predictions over time Kevin A Smith (k2smith@ucsd.edu), 1 Eyal Dechter (edechter@mit.edu), 2 Joshua B Tenenbaum (jbt@mit.edu), 2 Edward Vul (evul@ucsd.edu) 1 1. University of California, San Diego,

More information