Computational Cognitive Neuroscience Approach for Saliency Map detection using Graph-Based Visual Saliency (GBVS) tool in Machine Vision


Volume 118 No. , ISSN: (printed version); ISSN: (on-line version); url: ijpam.eu

Computational Cognitive Neuroscience Approach for Saliency Map Detection using Graph-Based Visual Saliency (GBVS) Tool in Machine Vision

Manjunath R Kounte 1 and B K Sujatha 2
1 School of Electronics and Communication Engineering, REVA University, Bangalore, Karnataka, India, manjunath.kounte@gmail.com
2 Telecommunication Engineering, M S Ramaiah Institute of Technology, Bangalore, Karnataka, bksujatha@msrit.edu

January 1, 2018

Abstract

Objective: In this paper, we propose a computational cognitive neuroscience based visual saliency model called Graph-Based Visual Saliency (GBVS). The objectives include designing and implementing a bottom-up model of visual attention that tries to predict human eye fixation points, taking as input real-time digital images chosen randomly from everyday life. Methods / Statistical Analysis: The methodology supports biological modelling and parallelizes naturally; the process involves identifying feature channels, forming activation maps, and finally normalizing all three in a way that highlights conspicuity and admits combination with the other maps. The basic evaluation technique

used here is the calculation of the ROC, comparing graph based visual saliency with other techniques. Findings: In the implementation, after taking a digital image as input, feature maps are evaluated, followed by calculation of the saliency map. In the results, foreground and background separation is achieved by fixing threshold values on the saliency map points. The main parameter used for evaluating the prediction of human eye fixations was the ROC. The ROC measurement is better than the ROC value evaluated for the traditional saliency map computations: the ROC value for the GBVS technique is 0.74, whereas the ROC value for the traditional saliency map is lower. Application / Improvement: Applications in visual attention, reinforcement learning, adaptation and sensory processing involve the basic concept of surprise at their core. We present a mathematical analysis of surprise theory using Bayesian modelling. The surprise generated by a stimulus or an event needs to be characterized quantitatively through mathematical modelling, for all data coming from multifaceted natural environments.

Key Words: Visual Attention, Graph Based Visual Saliency (GBVS), Saliency Map, Computational Cognitive Neuroscience (CCNS).

1 Introduction

Machine vision provides a new area of research, and one general problem in the vision community is predicting where humans fixate in a particular scene. Human beings use this ability, when viewing a scene, to identify in detail the most important and relevant areas, while the time and energy spent on processing this information is very small. William James said: "Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others" 1. One of the most severe problems of perception is information

overload. Peripheral sensors generate afferent signals more or less continuously, and it would be computationally costly to process all this incoming information all the time 2 3. Thus, it is important for the nervous system to decide which part of the available information is to be selected for further, more detailed processing, and which parts are to be discarded. Furthermore, the selected stimuli need to be prioritized, with the most relevant processed first and the less important ones later, leading to a sequential treatment of different parts of the visual scene. This selection and ordering process is called selective attention. Among many other functions, attention to a stimulus has been considered necessary for it to be perceived consciously 4 5. What determines which stimuli are selected by the attentional process and which are discarded? Many interacting factors contribute to this decision, and they can be classified into top-down and bottom-up factors. Bottom-up control factors depend mainly on the instantaneous sensory input, without taking into account the internal state of the organism 6. Top-down control, on the other hand, does take into account the internal state, such as the goals the organism has at the time, personal history and experiences, etc. 7. A dramatic example of a stimulus that attracts attention through bottom-up mechanisms is a firecracker going off suddenly, while an example of top-down attention is a hungry animal focusing on difficult-to-find food items, ignoring more salient stimuli 8 9. One of the most important mechanisms attention provides is the suppression of interference from parallel events while the relevant features of a scene are selected for subsequent processing. A common misinterpretation is that attention and ocular fixation are one and the same phenomenon.
Attention focuses processing on a selected region of the visual field that need not coincide with the center of fixation. Visual attention is a mechanism of the human visual system for highlighting certain portions of a scene first, before attention is directed to the other parts. Areas that capture primary attention are called visual attention regions (VARs). For various multimedia applications, the VARs are precisely the regions of interest, and an automatic procedure for extracting them becomes necessary. Identification of VARs has been shown to be useful for object recognition and region-based image retrieval. Similarly, images can be adapted for different users with different device capabilities based on the VARs extracted from the image, thus improving the viewing experience. Examples of such adaptation include automatic browsing of large images, image resolution adaptation and automatic thumbnail generation 14. Feature selection has received increasing attention in machine learning research. Even though ever-improving memory and CPU configurations have made computers more and more powerful, the amount of information, and sometimes the large dimensionality of datasets, still challenges the efficiency and effectiveness of conventional algorithms. Section II gives information on extracting early visual features and visual processing from the retina. Section III describes saliency map detection, a hypothesis for extraction of the saliency map and identification of the visual attention region. Section IV discusses the Graph Based Visual Saliency design and implementation, followed by the Results and the Conclusion.

2 Extracting Early Visual Features

2.1 Visual Processing

Figure 1: Visual Processing from the retina through lateral geniculate nucleus of the thalamus to primary visual cortex.

Figure 1 shows a graphical representation of the basic optics and transmission pathways of input visual signals, which enter through

the retina, proceed to the lateral geniculate nucleus of the thalamus (LGN), and continue on to primary visual cortex (V1) 15. The primary organizing principles at work here, and in other perceptual modalities and areas more generally, are: i) transduction of different information in the retina; photoreceptors are sensitive to different wavelengths of light (red = long wavelengths, green = medium wavelengths, and blue = short wavelengths), giving us color vision, but the retinal signals also differ in their spatial frequency (how coarse or fine a feature they detect; photoreceptors in the central fovea can have high spatial frequency = fine resolution, while those in the periphery have lower resolution) and in their temporal response (fast vs. slow responding, including differential sensitivity to motion).

3 Hypothesis for Saliency Map Detection

3.1 Taxonomy of Saliency Detection Methods

Saliency estimation methods can broadly be classified into three categories, as shown in Fig. 2: general computational modelling, hybrid modelling and computational cognitive neuroscience modelling.

Figure 2: Taxonomy of Saliency Detection Methods.

The goal of any method, in general, is to detect properties of contrast, rarity, or unpredictability of a central region with respect to its surroundings, either locally or globally, using one

or more low-level features such as color, intensity, and orientation. General computational methods rely mainly on principles of spectral-domain processing, information theory or signal processing for saliency detection. In hybrid methods, individual feature maps are created separately and then combined to obtain the final saliency map, while in some others a combined-feature saliency map is computed directly. Computational cognitive neuroscience based methods attempt to mimic known models of the visual system for detecting saliency. Some of these algorithms detect saliency over multiple scales, while others work on a single scale.

3.2 Itti and Koch Model

The model, shown in Figure 3, is inspired by the feature integration theory, which explains human visual search strategies. First, the visual input is decomposed into a set of topographic feature maps. These feature maps are then combined, in a bottom-up manner, into a saliency map. The model consequently represents a complete interpretation of bottom-up saliency and does not require any top-down mechanism to shift attention. This framework provides an immensely parallel method for the fast selection of a small number of attention-grabbing image locations to be explored by object-recognition processes 20.

Figure 3: Koch and Itti Model
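The heart of such models is a center-surround comparison on an intensity map. The following is a minimal single-scale NumPy sketch, not the authors' code; the kernel radii, sigma values and the toy image are illustrative assumptions:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur implemented with plain NumPy convolution."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    # Convolve rows, then columns; reflect padding keeps the image size fixed.
    pad = np.pad(img, ((0, 0), (radius, radius)), mode="reflect")
    img = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, pad)
    pad = np.pad(img, ((radius, radius), (0, 0)), mode="reflect")
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, pad)

def center_surround(intensity, center_sigma=1.0, surround_sigma=4.0):
    """Center-surround response: difference between a fine and a coarse blur."""
    return np.abs(gaussian_blur(intensity, center_sigma)
                  - gaussian_blur(intensity, surround_sigma))

# Toy scene: a bright patch on a dark background.
rgb = np.zeros((32, 32, 3))
rgb[12:20, 12:20] = 1.0
intensity = rgb.mean(axis=2)          # I = (R + G + B) / 3
cs = center_surround(intensity)       # strong response around the patch
```

In the full model this difference is taken across pyramid scales for every feature channel; the sketch collapses that to one pair of blurs on one channel.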

3.3 Bayesian Theory of Surprise

Surprise, in general, can be explained through the following two principles. First, surprise arises only because of uncertainty, whether that uncertainty comes from intrinsic randomness, unavailable information, or limited computing resources. For example, if the real world were mathematically predictable and deterministic, there would obviously be no surprise in it. Second, surprise can never be absolute; it can only be defined relative to the expectations of an observer, whether a visual observer or a machine. The same visual stimulus can produce different surprise in different individuals or machines, and the same input for the same individual at different times may also produce different surprise 21. The Bayesian theory of surprise is therefore defined in relative rather than absolute terms, and the natural way to model it mathematically is through probability and decision theory. Its definition involves (1) defining surprise and randomness in probabilistic terms, and (2) relating the earlier (prior) and later (posterior) distributions in terms of decision theory 22. Fig. 4 shows the computation of surprise in early sensory neurons: Fig. 4(a) shows the earlier (prior) data observations, whereas Fig. 4(b) shows the standard Bayesian fitting model 23. As is common in the Bayesian approach, the contextual information of an observer is captured by a prior probability distribution P(H), H ∈ S, where H ranges over hypotheses in the space S. Given a new data observation D, the observer has to convert this prior P(H) into the posterior P(H|D) via Bayes theorem.
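For a discrete hypothesis space, the prior-to-posterior update and the KL-based surprise it induces can be sketched as follows; the four-hypothesis example is an illustrative assumption:

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Posterior P(H|D) proportional to P(D|H) P(H) over discrete hypotheses."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

def surprise(prior, posterior):
    """Surprise as KL(P(H|D), P(H)), in nats."""
    return float(np.sum(posterior * np.log(posterior / prior)))

prior = np.full(4, 0.25)                       # uniform prior over 4 hypotheses
boring = np.full(4, 0.25)                      # data that fits all hypotheses equally
striking = np.array([0.01, 0.01, 0.01, 0.97])  # data that singles one hypothesis out

s_boring = surprise(prior, bayes_update(prior, boring))      # posterior == prior
s_striking = surprise(prior, bayes_update(prior, striking))  # posterior reshaped
```

Data that leaves the posterior equal to the prior carries zero surprise, while data that reshapes the posterior carries a large KL value, matching the two principles above.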
For H ∈ S,

P(H|D) = P(D|H) P(H) / P(D) (1)

In this context, the new data observation D is said to contain no surprise if the prior and posterior are similar, and to contain surprise if the posterior differs significantly from the prior. The relation between the prior and the posterior therefore defines surprise. Using the Kullback-Leibler (KL) divergence 24, surprise is defined

as follows:

SUR(D, S) = KL(P(H|D), P(H)) = ∫S P(H|D) log [ P(H|D) / P(H) ] dH (2)

Figure 4: Computing surprise in early sensory neurons.

4 Graph Based Visual Saliency

4.1 Computing Feature Maps

As shown in Fig. 5, visual features are derived from the digital input image and separated into three different channels that run in parallel. After this extraction and individual processing, a feature map is computed for each channel. The first feature map, termed the intensity channel, is obtained from the red, green and blue channels of the input image. If Ir, Ig and Ib are the red channel,

green and blue channels respectively, then the total intensity It is given by the average of the three channels:

It = (Ir + Ig + Ib) / 3 (3)

The total intensity It is then used to compute the Gaussian pyramid It(α), where α indexes the down-sampling (low-pass filtering) levels. Perceived luminance varies with the encoding of the channels: roughly 30% is due to red, 59% to green and 11% to blue. The second feature map, the color channel, is constructed from color pyramids, as shown in Fig. 5. The red, green and blue values are normalized by It in order to decouple hue from intensity. Four color channels are formed as follows:

CR = Ir - (Ig + Ib)/2 (4)

for red,

CG = Ig - (Ir + Ib)/2 (5)

for green,

CB = Ib - (Ir + Ig)/2 (6)

for blue, and

CY = (Ir + Ig)/2 - (|Ir - Ig|/2 + Ib) (7)

for yellow. Gaussian pyramids are then created for all four colors, represented as GR(α), GG(α), GB(α) and GY(α). Fig. 5 shows the implementation of the four color channels and the behaviour of their chromatic activity. The third feature map, local orientation, is computed from the total intensity It using Gabor pyramids. The orientation channels GPO(α, ω) are calculated by convolving the levels of the intensity pyramid with Gabor filters GF(α, ω), where α ∈ [0..8] denotes the scale and ω ∈ {0°, 45°, 90°, 135°} is

the preferred orientation. The orientation feature maps, denoted OF(c, s, ω), encode, as a group, how local orientation differs between the center (focused) scale c and the surrounding scale s. Center-surround receptive fields are emulated by cross-scale subtraction between a map at the center level and one at the surround level of these pyramids, yielding yet another set of feature maps. The saliency map is later extracted from these feature maps.

Figure 5: Feature Map Fusion

4.2 Forming an Activation Map

Three feature maps are generated: intensity, color and orientation. Let us assume that each

feature map is represented as F : [n]² → R. The next objective is to compute an activation map A : [n]² → R such that locations (x, y) ∈ [n]² at which the feature map M(x, y) is unusual with respect to its neighborhood receive high values of A. A Markovian approach offers a natural way to represent such dissimilarity. The dissimilarity of M(x, y) with respect to M(r, s) is defined as

d((x, y)||(r, s)) = | log ( M(x, y) / M(r, s) ) | (8)

This dissimilarity index is the magnitude of the ratio of the two values on a logarithmic scale. It can also be expressed as the simple difference between M(x, y) and M(r, s); both the logarithmic-ratio form and the difference form work well in practice. In graph based visual saliency, consider the fully connected, directed graph GVS in which every node is connected to every other node. Each node, indexed by (x, y) ∈ [n]², is connected to all other nodes, and the edge from node (x, y) to node (r, s) is assigned the weight

w1((x, y), (r, s)) = d((x, y)||(r, s)) · F(x - r, y - s) (9)

where

F(m, n) = exp( -(m² + n²) / (2σ²) ) (10)

and σ is a free parameter of the Graph Based Visual Saliency (GBVS) algorithm. The weight of the edge from node (x, y) to node (r, s) is thus proportional both to their dissimilarity and to their spatial closeness. In the Markovian approach, the graph GVS is normalized so that the outbound edge weights of each node sum to 1, defining a Markov chain whose equilibrium distribution serves as the activation map.
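Under the simplifying assumption of a single small feature map (the grid size, σ value and lazy power iteration below are illustrative choices, not the paper's settings), the activation chain of Eqs. (8)-(10) can be sketched as:

```python
import numpy as np

def gbvs_activation(feature_map, sigma=3.0, n_iter=200):
    """Activation map as the equilibrium distribution of the Markov chain
    built from the edge weights of Eqs. (8)-(10)."""
    h, w = feature_map.shape
    n = h * w
    m = feature_map.ravel().astype(float) + 1e-9      # guard against log(0)
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    # d((x,y)||(r,s)) = |log(M(x,y) / M(r,s))|        -- Eq. (8)
    d = np.abs(np.log(m[:, None]) - np.log(m[None, :]))
    # F(a,b) = exp(-(a^2 + b^2) / (2 sigma^2))        -- Eq. (10)
    dist2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(axis=2)
    w1 = d * np.exp(-dist2 / (2.0 * sigma ** 2)) + 1e-12  # floor keeps rows stochastic
    # Normalize each node's outbound weights to 1 -> transition matrix.
    P = w1 / w1.sum(axis=1, keepdims=True)
    v = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        v = 0.5 * v + 0.5 * (v @ P)   # lazy power iteration to the equilibrium
    return v.reshape(h, w)

# A flat map with one dissimilar pixel: activation should peak there.
fmap = np.ones((8, 8))
fmap[2, 5] = 10.0
act = gbvs_activation(fmap)
```

The dissimilar pixel has large edge weights to every neighbour, so the equilibrium distribution concentrates mass on it; the lazy update (averaging with the current state) is a standard numerical safeguard against oscillation, not part of the original formulation.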
We call the result cognition-inspired because the nodes (in this case, neurons) are connected in an organized manner, and the directed graph over all nodes plays the role of the entire network (i.e., the visual cortex). Fig. 6 shows the information measure and entropy framework.

Figure 6: Framework for Information Measure

4.3 Normalizing an Activation Map

There are numerous ways to mix or combine the feature map details extracted through the center-surround mechanism discussed in the previous subsection. One of the many available techniques, such as non-linear amplification or localized iteration, is selected, and the operator is denoted N(·). The values in all feature maps are first normalized to a fixed range [0..GM] so that modality-dependent amplitude differences are eliminated. The location of each feature map's global maximum GM is found, the average ḡm of all its other local maxima is computed, and the map is globally multiplied by (GM - ḡm)². In the graph based visual saliency (GBVS) algorithm, the normalization step has a more definite and standard formulation than the preceding activation step. Recent trends suggest that activation and normalization remain critical areas of interest in many respects. A map that results from normalization prior to the additive combination may end up containing little or no information; the basic objective of this step is therefore the concentration of mass on the activation maps 27. Given the mass on the activation maps, the Markovian analysis of the normalization step is as follows. The activation map A : [n]² → R is to be normalized. Build a graph GN with n² nodes, indexed over [n]². For every node (x, y) and every node (r, s) (including (x, y) itself) to which it is linked, introduce an edge from (x, y) to (r, s) with weight

w2((x, y), (r, s)) = A(r, s) · F(x - r, y - s) (11)

Finally, iteration is used in the normalization process. The Markovian approach again has the advantage of yielding a stationary distribution over all nodes, with the outbound edge weights of each node normalized to 1. Where activation is large, mass flows in automatically, concentrating activation on salient regions.

Results and Discussion

The basic evaluation technique used here is the calculation of the ROC, comparing graph based visual saliency with other techniques. The objective is to predict human eye fixation points, taking real-time digital images chosen randomly from everyday life as input. As discussed in the previous sections, the process is simple: first, take the digital input image and perform feature map selection using known techniques; second, form the activation maps; and last, normalize them, with each node's outbound weights summing to one, using the iteration technique. Fig. 7 shows saliency map detection based on graphs. The iteration technique is used for the normalization process, as described earlier.
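The mass-concentrating normalization of Eq. (11) can be sketched as another Markov chain whose equilibrium is found by iteration; the grid size, σ value and lazy update below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gbvs_normalize(activation, sigma=3.0, n_iter=200):
    """Normalization as the equilibrium of the chain whose edge into node
    (r,s) is weighted by A(r,s) * F(x-r, y-s) (Eq. 11)."""
    h, w = activation.shape
    n = h * w
    a = activation.ravel().astype(float) + 1e-12     # keep weights positive
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    dist2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(axis=2)
    # w2((x,y),(r,s)) = A(r,s) * F(x-r, y-s): columns scaled by the activation.
    w2 = a[None, :] * np.exp(-dist2 / (2.0 * sigma ** 2))
    P = w2 / w2.sum(axis=1, keepdims=True)           # outbound weights sum to 1
    v = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        v = 0.5 * v + 0.5 * (v @ P)                  # lazy iteration, as before
    return v.reshape(h, w)

# Mass should concentrate on the high-activation spot.
A = np.ones((8, 8))
A[3, 3] = 5.0
normed = gbvs_normalize(A)
```

Because every node routes its mass preferentially toward nearby high-activation nodes, the equilibrium distribution piles mass onto the activation peaks, which is exactly the concentration effect the normalization step is after.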
For experimental validation, the saliency toolbox is used. To analyze performance, a digital image is first taken as input, feature maps are evaluated, then the saliency map is calculated, and then the target locations are given, i.e., the points in a real-time natural image on which a human eye normally focuses, called human eye fixations. A threshold value is then fixed as an evaluation parameter: if a saliency map point rises above the threshold, it is classified as foreground (target achieved); if it falls below, it is classified as background. A further performance metric considered for analyzing how well the saliency map predicts human eye fixations is the ROC. Fig. 8(a) shows the image used for this performance analysis. Each image is cropped to a resolution of 600x400 pixels. To keep the comparison fair, the first step uses the same algorithm to compute the feature maps; the next step finds the orientation maps, again using the same angles ω = {0°, 45°, 90°, 135°}. The "x" markings indicate the human eye fixation points. Fig. 8(b) shows the output when using graph based visual saliency; its ROC measurement is better than the ROC value evaluated for the traditional saliency map computation. The ROC value for the GBVS technique is 0.74, whereas the ROC value for the traditional saliency map (Fig. 8(c)) is lower. The center-surround maps are calculated by taking the difference between the feature map of the given digital image and the feature map evaluated after the activation step.

Figure 7: Saliency Detection based on graphs
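The ROC evaluation described above, with fixated pixels as positives and all other pixels as negatives, can be sketched via the Mann-Whitney formulation of the area under the curve; the toy saliency map and fixation mask are illustrative assumptions:

```python
import numpy as np

def saliency_roc_auc(saliency, fixation_mask):
    """Area under the ROC curve obtained by sweeping a threshold over the
    saliency map: fixated pixels are positives, all others negatives."""
    s = saliency.ravel().astype(float)
    y = fixation_mask.ravel().astype(bool)
    pos, neg = s[y], s[~y]
    # AUC = P(a random positive scores higher than a random negative),
    # with ties counted half: the Mann-Whitney U formulation.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

sal = np.zeros((10, 10))
sal[4:7, 4:7] = 1.0                       # saliency peak
fix = np.zeros((10, 10), dtype=bool)
fix[5, 5] = True                          # one fixation inside the peak
auc = saliency_roc_auc(sal, fix)          # high, since the fixation is salient
```

An AUC of 0.5 corresponds to chance-level prediction and 1.0 to a map that ranks every fixated pixel above every non-fixated one, which is the scale on which the 0.74 GBVS figure is reported.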

Figure 8: a) Sample Picture with Fixation b) Graph-Based Saliency Map c) Traditional Saliency Map.

5 Conclusion

As researchers we all welcome a novel, easily approachable solution to an existing research problem; however, we must also seek to answer the scientific question of how it is possible that, given access to the same feature information, GBVS predicts human fixations more reliably than the standard algorithms. The first observation is that, because nodes along the image periphery lie farther, on average, from the rest of the graph, it is an emergent property of GBVS that higher saliency values are promoted in the center of the image plane. In this paper, we have designed and implemented a visual saliency model using a computational cognitive neuroscience approach, called Graph-Based Visual Saliency (GBVS), which provides a new perspective on the modelling of visual saliency. The implementation is quite simple in its approach, supports biological modelling and parallelizes naturally; the process involves identifying feature channels, forming activation maps, and finally normalizing all three in a way that highlights conspicuity and admits combination with the other maps. We have also presented how the computational cognitive neuroscience approach is well suited to implementing the saliency map technique for identification of visual attention regions using the Graph-Based Visual Saliency (GBVS) tool in machine vision.

References

[1] W. James, The principles of psychology, New York: Holt, 1890.

[2] Lee DK, Itti L, Koch C, Braun J. Attention activates winner-take-all competition among visual filters, Nature Neuroscience, Apr 1;2(4).

[3] Einhäuser W, Kruse W, Hoffmann KP, König P. Differences of monkey and human overt attention under natural conditions, Vision Research, Apr 30;46(8).

[4] Itti L, Baldi PF. Bayesian surprise attracts human attention. In Advances in Neural Information Processing Systems, 2005.

[5] Bruce N, Tsotsos J. Saliency based on information maximization. In Advances in Neural Information Processing Systems, 2005.

[6] Parkhurst D, Law K, Niebur E. Modeling the role of salience in the allocation of overt visual attention. Vision Research, Jan 31;42(1).

[7] Ahmed MR, Sujatha BK. A review of reinforcement learning in neuromorphic VLSI chips using computational cognitive neuroscience, International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), Aug;2(8).

[8] Ahmed MR, Sujatha BK. Learning and Memory issues in Neuromorphic Engineering: A Review, International Journal of Applied Engineering Research (IJAER), Oct;10(12).

[9] Ahmed MR, Sujatha BK. Levels of Abstraction and Implementation Issues in Neuromorphic Engineering, 2015 Sep;7(4).

[10] Banu T, Kounte M. Performance Analysis of Irreversible and Reversible Counter, HCTL Open Science and Technology Letters (STL), Volume 1, Jun 30:69.

[11] Soumyalatha N, Ambhati RK, Kounte MR. Performance evaluation of IP wireless networks using two-way active measurement protocol, In Advances in Computing, Communications and Informatics (ICACCI), 2013 International Conference on, 2013 Aug 22.

[12] Kounte MR, Sujatha BK. Top-Down Approach for Modelling Visual Attention using Scene Context Features in Machine Vision, International Journal of Applied Engineering Research, 2015 Oct;10(12).

[13] Kounte MR, Sujatha BK. Identification of Visual Attention Regions in Machine Vision Using Saliency Map, International Conference on Communications and Signal Processing (ICCSP), Apr 2.

[14] Kounte MR, Sujatha BK. Bottom up Approach for Modelling Visual Attention Using Saliency Map in Machine Vision: A Computational Cognitive Neuroscience Approach, International Journal of Applied Engineering Research, 2015;10(11).

[15] O'Reilly RC, Munakata Y, Frank MJ, Hazy TE, and Contributors. Computational Cognitive Neuroscience, Wiki Book, 1st Edition.

[16] B. Mohd Jabarullah, C. Nelson Kennedy Babu. Segmentation Using Saliency Color Mapping Technique. Indian Journal of Science and Technology, 2015, 8(15), 1-5.

[17] Shailesh Kishore, Ramneek Singh Grover, N. Sandeep. Object Separation Using Saliency Algorithm. Indian Journal of Science and Technology, 2015, 8(S2).

[18] Itti L, Koch C. Computational modelling of visual attention. Nature Reviews Neuroscience, 2001 Mar 1;2(3).

[19] Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, Nov 1;20(11).

[20] Borji A, Itti L. State-of-the-art in visual attention modeling, IEEE Transactions on Pattern Analysis and Machine Intelligence, Jan;35(1).

[21] Itti L, Baldi PF. Bayesian surprise attracts human attention, In Advances in Neural Information Processing Systems.

[22] Borji A, Itti L. Bayesian optimization explains human active search, In Advances in Neural Information Processing Systems, 2013.

[23] Zhang L, Tong MH, Marks TK, Shan H, Cottrell GW. SUN: A Bayesian framework for saliency using natural statistics, Journal of Vision, May 3;8(7).

[24] Kullback S. Information Theory and Statistics, 1959.

[25] Harel J, Koch C, Perona P. Graph-based visual saliency. In Advances in Neural Information Processing Systems.

[26] Rutishauser U, Walther D, Koch C, Perona P. Is bottom-up attention useful for object recognition? In Computer Vision and Pattern Recognition (CVPR), Proceedings of the 2004 IEEE Computer Society Conference on, 2004 Jul 4 (Vol. 2, pp. II-37).

[27] Walther D, Rutishauser U, Koch C, Perona P. On the usefulness of attention for object recognition, In Workshop on Attention and Performance in Computational Vision at ECCV, 2004 May.

[28] Walther D, Rutishauser U, Koch C, Perona P. Selective visual attention enables learning and recognition of multiple objects in cluttered scenes, Computer Vision and Image Understanding, Nov 30;100(1).



More information

Real-time computational attention model for dynamic scenes analysis

Real-time computational attention model for dynamic scenes analysis Computer Science Image and Interaction Laboratory Real-time computational attention model for dynamic scenes analysis Matthieu Perreira Da Silva Vincent Courboulay 19/04/2012 Photonics Europe 2012 Symposium,

More information

SUN: A Bayesian Framework for Saliency Using Natural Statistics

SUN: A Bayesian Framework for Saliency Using Natural Statistics SUN: A Bayesian Framework for Saliency Using Natural Statistics Lingyun Zhang Tim K. Marks Matthew H. Tong Honghao Shan Garrison W. Cottrell Dept. of Computer Science and Engineering University of California,

More information

Advertisement Evaluation Based On Visual Attention Mechanism

Advertisement Evaluation Based On Visual Attention Mechanism 2nd International Conference on Economics, Management Engineering and Education Technology (ICEMEET 2016) Advertisement Evaluation Based On Visual Attention Mechanism Yu Xiao1, 2, Peng Gan1, 2, Yuling

More information

An Evaluation of Motion in Artificial Selective Attention

An Evaluation of Motion in Artificial Selective Attention An Evaluation of Motion in Artificial Selective Attention Trent J. Williams Bruce A. Draper Colorado State University Computer Science Department Fort Collins, CO, U.S.A, 80523 E-mail: {trent, draper}@cs.colostate.edu

More information

2/3/17. Visual System I. I. Eye, color space, adaptation II. Receptive fields and lateral inhibition III. Thalamus and primary visual cortex

2/3/17. Visual System I. I. Eye, color space, adaptation II. Receptive fields and lateral inhibition III. Thalamus and primary visual cortex 1 Visual System I I. Eye, color space, adaptation II. Receptive fields and lateral inhibition III. Thalamus and primary visual cortex 2 1 2/3/17 Window of the Soul 3 Information Flow: From Photoreceptors

More information

Lateral Geniculate Nucleus (LGN)

Lateral Geniculate Nucleus (LGN) Lateral Geniculate Nucleus (LGN) What happens beyond the retina? What happens in Lateral Geniculate Nucleus (LGN)- 90% flow Visual cortex Information Flow Superior colliculus 10% flow Slide 2 Information

More information

Control of Selective Visual Attention: Modeling the "Where" Pathway

Control of Selective Visual Attention: Modeling the Where Pathway Control of Selective Visual Attention: Modeling the "Where" Pathway Ernst Niebur Computation and Neural Systems 139-74 California Institute of Technology Christof Koch Computation and Neural Systems 139-74

More information

Asymmetries in ecological and sensorimotor laws: towards a theory of subjective experience. James J. Clark

Asymmetries in ecological and sensorimotor laws: towards a theory of subjective experience. James J. Clark Asymmetries in ecological and sensorimotor laws: towards a theory of subjective experience James J. Clark Centre for Intelligent Machines McGill University This talk will motivate an ecological approach

More information

On the role of context in probabilistic models of visual saliency

On the role of context in probabilistic models of visual saliency 1 On the role of context in probabilistic models of visual saliency Date Neil Bruce, Pierre Kornprobst NeuroMathComp Project Team, INRIA Sophia Antipolis, ENS Paris, UNSA, LJAD 2 Overview What is saliency?

More information

Putting Context into. Vision. September 15, Derek Hoiem

Putting Context into. Vision. September 15, Derek Hoiem Putting Context into Vision Derek Hoiem September 15, 2004 Questions to Answer What is context? How is context used in human vision? How is context currently used in computer vision? Conclusions Context

More information

VISUAL PERCEPTION & COGNITIVE PROCESSES

VISUAL PERCEPTION & COGNITIVE PROCESSES VISUAL PERCEPTION & COGNITIVE PROCESSES Prof. Rahul C. Basole CS4460 > March 31, 2016 How Are Graphics Used? Larkin & Simon (1987) investigated usefulness of graphical displays Graphical visualization

More information

SUN: A Model of Visual Salience Using Natural Statistics. Gary Cottrell Lingyun Zhang Matthew Tong Tim Marks Honghao Shan Nick Butko Javier Movellan

SUN: A Model of Visual Salience Using Natural Statistics. Gary Cottrell Lingyun Zhang Matthew Tong Tim Marks Honghao Shan Nick Butko Javier Movellan SUN: A Model of Visual Salience Using Natural Statistics Gary Cottrell Lingyun Zhang Matthew Tong Tim Marks Honghao Shan Nick Butko Javier Movellan 1 Collaborators QuickTime and a TIFF (LZW) decompressor

More information

The Integration of Features in Visual Awareness : The Binding Problem. By Andrew Laguna, S.J.

The Integration of Features in Visual Awareness : The Binding Problem. By Andrew Laguna, S.J. The Integration of Features in Visual Awareness : The Binding Problem By Andrew Laguna, S.J. Outline I. Introduction II. The Visual System III. What is the Binding Problem? IV. Possible Theoretical Solutions

More information

Finding Saliency in Noisy Images

Finding Saliency in Noisy Images Finding Saliency in Noisy Images Chelhwon Kim and Peyman Milanfar Electrical Engineering Department, University of California, Santa Cruz, CA, USA ABSTRACT Recently, many computational saliency models

More information

Methods for comparing scanpaths and saliency maps: strengths and weaknesses

Methods for comparing scanpaths and saliency maps: strengths and weaknesses Methods for comparing scanpaths and saliency maps: strengths and weaknesses O. Le Meur olemeur@irisa.fr T. Baccino thierry.baccino@univ-paris8.fr Univ. of Rennes 1 http://www.irisa.fr/temics/staff/lemeur/

More information

Top-down Attention Signals in Saliency 邓凝旖

Top-down Attention Signals in Saliency 邓凝旖 Top-down Attention Signals in Saliency WORKS BY VIDHYA NAVALPAKKAM 邓凝旖 2014.11.10 Introduction of Vidhya Navalpakkam EDUCATION * Ph.D, Computer Science, Fall 2006, University of Southern California (USC),

More information

Biologically Motivated Local Contextual Modulation Improves Low-Level Visual Feature Representations

Biologically Motivated Local Contextual Modulation Improves Low-Level Visual Feature Representations Biologically Motivated Local Contextual Modulation Improves Low-Level Visual Feature Representations Xun Shi,NeilD.B.Bruce, and John K. Tsotsos Department of Computer Science & Engineering, and Centre

More information

Motion Saliency Outweighs Other Low-level Features While Watching Videos

Motion Saliency Outweighs Other Low-level Features While Watching Videos Motion Saliency Outweighs Other Low-level Features While Watching Videos Dwarikanath Mahapatra, Stefan Winkler and Shih-Cheng Yen Department of Electrical and Computer Engineering National University of

More information

Spiking Inputs to a Winner-take-all Network

Spiking Inputs to a Winner-take-all Network Spiking Inputs to a Winner-take-all Network Matthias Oster and Shih-Chii Liu Institute of Neuroinformatics University of Zurich and ETH Zurich Winterthurerstrasse 9 CH-857 Zurich, Switzerland {mao,shih}@ini.phys.ethz.ch

More information

OPTO 5320 VISION SCIENCE I

OPTO 5320 VISION SCIENCE I OPTO 5320 VISION SCIENCE I Monocular Sensory Processes of Vision: Color Vision Mechanisms of Color Processing . Neural Mechanisms of Color Processing A. Parallel processing - M- & P- pathways B. Second

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 5: Data analysis II Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis II 6 Single

More information

Construction of the Visual Image

Construction of the Visual Image Construction of the Visual Image Anne L. van de Ven 8 Sept 2003 BioE 492/592 Sensory Neuroengineering Lecture 3 Visual Perception Light Photoreceptors Interneurons Visual Processing Ganglion Neurons Optic

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 11: Attention & Decision making Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis

More information

I. INTRODUCTION VISUAL saliency, which is a term for the pop-out

I. INTRODUCTION VISUAL saliency, which is a term for the pop-out IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 27, NO. 6, JUNE 2016 1177 Spatiochromatic Context Modeling for Color Saliency Analysis Jun Zhang, Meng Wang, Member, IEEE, Shengping Zhang,

More information

Implementation of Automatic Retina Exudates Segmentation Algorithm for Early Detection with Low Computational Time

Implementation of Automatic Retina Exudates Segmentation Algorithm for Early Detection with Low Computational Time www.ijecs.in International Journal Of Engineering And Computer Science ISSN: 2319-7242 Volume 5 Issue 10 Oct. 2016, Page No. 18584-18588 Implementation of Automatic Retina Exudates Segmentation Algorithm

More information

(In)Attention and Visual Awareness IAT814

(In)Attention and Visual Awareness IAT814 (In)Attention and Visual Awareness IAT814 Week 5 Lecture B 8.10.2009 Lyn Bartram lyn@sfu.ca SCHOOL OF INTERACTIVE ARTS + TECHNOLOGY [SIAT] WWW.SIAT.SFU.CA This is a useful topic Understand why you can

More information

The Eye. Cognitive Neuroscience of Language. Today s goals. 5 From eye to brain. Today s reading

The Eye. Cognitive Neuroscience of Language. Today s goals. 5 From eye to brain. Today s reading Cognitive Neuroscience of Language 5 From eye to brain Today s goals Look at the pathways that conduct the visual information from the eye to the visual cortex Marielle Lange http://homepages.inf.ed.ac.uk/mlange/teaching/cnl/

More information

Carlson (7e) PowerPoint Lecture Outline Chapter 6: Vision

Carlson (7e) PowerPoint Lecture Outline Chapter 6: Vision Carlson (7e) PowerPoint Lecture Outline Chapter 6: Vision This multimedia product and its contents are protected under copyright law. The following are prohibited by law: any public performance or display,

More information

MEMORABILITY OF NATURAL SCENES: THE ROLE OF ATTENTION

MEMORABILITY OF NATURAL SCENES: THE ROLE OF ATTENTION MEMORABILITY OF NATURAL SCENES: THE ROLE OF ATTENTION Matei Mancas University of Mons - UMONS, Belgium NumediArt Institute, 31, Bd. Dolez, Mons matei.mancas@umons.ac.be Olivier Le Meur University of Rennes

More information

CS/NEUR125 Brains, Minds, and Machines. Due: Friday, April 14

CS/NEUR125 Brains, Minds, and Machines. Due: Friday, April 14 CS/NEUR125 Brains, Minds, and Machines Assignment 5: Neural mechanisms of object-based attention Due: Friday, April 14 This Assignment is a guided reading of the 2014 paper, Neural Mechanisms of Object-Based

More information

Competing Frameworks in Perception

Competing Frameworks in Perception Competing Frameworks in Perception Lesson II: Perception module 08 Perception.08. 1 Views on perception Perception as a cascade of information processing stages From sensation to percept Template vs. feature

More information

Competing Frameworks in Perception

Competing Frameworks in Perception Competing Frameworks in Perception Lesson II: Perception module 08 Perception.08. 1 Views on perception Perception as a cascade of information processing stages From sensation to percept Template vs. feature

More information

Prof. Greg Francis 7/31/15

Prof. Greg Francis 7/31/15 s PSY 200 Greg Francis Lecture 06 How do you recognize your grandmother? Action potential With enough excitatory input, a cell produces an action potential that sends a signal down its axon to other cells

More information

Local Image Structures and Optic Flow Estimation

Local Image Structures and Optic Flow Estimation Local Image Structures and Optic Flow Estimation Sinan KALKAN 1, Dirk Calow 2, Florentin Wörgötter 1, Markus Lappe 2 and Norbert Krüger 3 1 Computational Neuroscience, Uni. of Stirling, Scotland; {sinan,worgott}@cn.stir.ac.uk

More information

Adding Shape to Saliency: A Computational Model of Shape Contrast

Adding Shape to Saliency: A Computational Model of Shape Contrast Adding Shape to Saliency: A Computational Model of Shape Contrast Yupei Chen 1, Chen-Ping Yu 2, Gregory Zelinsky 1,2 Department of Psychology 1, Department of Computer Science 2 Stony Brook University

More information

Natural Scene Statistics and Perception. W.S. Geisler

Natural Scene Statistics and Perception. W.S. Geisler Natural Scene Statistics and Perception W.S. Geisler Some Important Visual Tasks Identification of objects and materials Navigation through the environment Estimation of motion trajectories and speeds

More information

UC Merced Proceedings of the Annual Meeting of the Cognitive Science Society

UC Merced Proceedings of the Annual Meeting of the Cognitive Science Society UC Merced Proceedings of the Annual Meeting of the Cognitive Science Society Title SUNDAy: Saliency Using Natural Statistics for Dynamic Analysis of Scenes Permalink https://escholarship.org/uc/item/6z26h76d

More information

Recurrent Refinement for Visual Saliency Estimation in Surveillance Scenarios

Recurrent Refinement for Visual Saliency Estimation in Surveillance Scenarios 2012 Ninth Conference on Computer and Robot Vision Recurrent Refinement for Visual Saliency Estimation in Surveillance Scenarios Neil D. B. Bruce*, Xun Shi*, and John K. Tsotsos Department of Computer

More information

A Vision-based Affective Computing System. Jieyu Zhao Ningbo University, China

A Vision-based Affective Computing System. Jieyu Zhao Ningbo University, China A Vision-based Affective Computing System Jieyu Zhao Ningbo University, China Outline Affective Computing A Dynamic 3D Morphable Model Facial Expression Recognition Probabilistic Graphical Models Some

More information

Extraction of Blood Vessels and Recognition of Bifurcation Points in Retinal Fundus Image

Extraction of Blood Vessels and Recognition of Bifurcation Points in Retinal Fundus Image International Journal of Research Studies in Science, Engineering and Technology Volume 1, Issue 5, August 2014, PP 1-7 ISSN 2349-4751 (Print) & ISSN 2349-476X (Online) Extraction of Blood Vessels and

More information

A Model of Saliency-Based Visual Attention for Rapid Scene Analysis

A Model of Saliency-Based Visual Attention for Rapid Scene Analysis A Model of Saliency-Based Visual Attention for Rapid Scene Analysis Itti, L., Koch, C., Niebur, E. Presented by Russell Reinhart CS 674, Fall 2018 Presentation Overview Saliency concept and motivation

More information

Compound Effects of Top-down and Bottom-up Influences on Visual Attention During Action Recognition

Compound Effects of Top-down and Bottom-up Influences on Visual Attention During Action Recognition Compound Effects of Top-down and Bottom-up Influences on Visual Attention During Action Recognition Bassam Khadhouri and Yiannis Demiris Department of Electrical and Electronic Engineering Imperial College

More information

Chapter 6. Attention. Attention

Chapter 6. Attention. Attention Chapter 6 Attention Attention William James, in 1890, wrote Everyone knows what attention is. Attention is the taking possession of the mind, in clear and vivid form, of one out of what seem several simultaneously

More information

IAT 355 Perception 1. Or What You See is Maybe Not What You Were Supposed to Get

IAT 355 Perception 1. Or What You See is Maybe Not What You Were Supposed to Get IAT 355 Perception 1 Or What You See is Maybe Not What You Were Supposed to Get Why we need to understand perception The ability of viewers to interpret visual (graphical) encodings of information and

More information

Relevance of Computational model for Detection of Optic Disc in Retinal images

Relevance of Computational model for Detection of Optic Disc in Retinal images Relevance of Computational model for Detection of Optic Disc in Retinal images Nilima Kulkarni Department of Computer Science and Engineering Amrita school of Engineering, Bangalore, Amrita Vishwa Vidyapeetham

More information

Experiences on Attention Direction through Manipulation of Salient Features

Experiences on Attention Direction through Manipulation of Salient Features Experiences on Attention Direction through Manipulation of Salient Features Erick Mendez Graz University of Technology Dieter Schmalstieg Graz University of Technology Steven Feiner Columbia University

More information

Skin color detection for face localization in humanmachine

Skin color detection for face localization in humanmachine Research Online ECU Publications Pre. 2011 2001 Skin color detection for face localization in humanmachine communications Douglas Chai Son Lam Phung Abdesselam Bouzerdoum 10.1109/ISSPA.2001.949848 This

More information

Detection of Glaucoma and Diabetic Retinopathy from Fundus Images by Bloodvessel Segmentation

Detection of Glaucoma and Diabetic Retinopathy from Fundus Images by Bloodvessel Segmentation International Journal of Engineering and Advanced Technology (IJEAT) ISSN: 2249 8958, Volume-5, Issue-5, June 2016 Detection of Glaucoma and Diabetic Retinopathy from Fundus Images by Bloodvessel Segmentation

More information

CS294-6 (Fall 2004) Recognizing People, Objects and Actions Lecture: January 27, 2004 Human Visual System

CS294-6 (Fall 2004) Recognizing People, Objects and Actions Lecture: January 27, 2004 Human Visual System CS294-6 (Fall 2004) Recognizing People, Objects and Actions Lecture: January 27, 2004 Human Visual System Lecturer: Jitendra Malik Scribe: Ryan White (Slide: layout of the brain) Facts about the brain:

More information

HUMAN VISUAL PERCEPTION CONCEPTS AS MECHANISMS FOR SALIENCY DETECTION

HUMAN VISUAL PERCEPTION CONCEPTS AS MECHANISMS FOR SALIENCY DETECTION HUMAN VISUAL PERCEPTION CONCEPTS AS MECHANISMS FOR SALIENCY DETECTION Oana Loredana BUZATU Gheorghe Asachi Technical University of Iasi, 11, Carol I Boulevard, 700506 Iasi, Romania lbuzatu@etti.tuiasi.ro

More information

Reading Assignments: Lecture 18: Visual Pre-Processing. Chapters TMB Brain Theory and Artificial Intelligence

Reading Assignments: Lecture 18: Visual Pre-Processing. Chapters TMB Brain Theory and Artificial Intelligence Brain Theory and Artificial Intelligence Lecture 18: Visual Pre-Processing. Reading Assignments: Chapters TMB2 3.3. 1 Low-Level Processing Remember: Vision as a change in representation. At the low-level,

More information

Webpage Saliency. National University of Singapore

Webpage Saliency. National University of Singapore Webpage Saliency Chengyao Shen 1,2 and Qi Zhao 2 1 Graduate School for Integrated Science and Engineering, 2 Department of Electrical and Computer Engineering, National University of Singapore Abstract.

More information

A Dynamical Systems Approach to Visual Attention Based on Saliency Maps

A Dynamical Systems Approach to Visual Attention Based on Saliency Maps A Dynamical Systems Approach to Visual Attention Based on Saliency Maps Timothy Randall Rost E H U N I V E R S I T Y T O H F R G E D I N B U Master of Science School of Informatics University of Edinburgh

More information

PHGY 210,2,4 - Physiology SENSORY PHYSIOLOGY. Martin Paré

PHGY 210,2,4 - Physiology SENSORY PHYSIOLOGY. Martin Paré PHGY 210,2,4 - Physiology SENSORY PHYSIOLOGY Martin Paré Associate Professor of Physiology & Psychology pare@biomed.queensu.ca http://brain.phgy.queensu.ca/pare PHGY 210,2,4 - Physiology SENSORY PHYSIOLOGY

More information

International Journal of Advanced Computer Technology (IJACT)

International Journal of Advanced Computer Technology (IJACT) Abstract An Introduction to Third Generation of Neural Networks for Edge Detection Being inspired by the structure and behavior of the human visual system the spiking neural networks for edge detection

More information

Self-Organization and Segmentation with Laterally Connected Spiking Neurons

Self-Organization and Segmentation with Laterally Connected Spiking Neurons Self-Organization and Segmentation with Laterally Connected Spiking Neurons Yoonsuck Choe Department of Computer Sciences The University of Texas at Austin Austin, TX 78712 USA Risto Miikkulainen Department

More information

Object detection in natural scenes by feedback

Object detection in natural scenes by feedback In: H.H. Būlthoff et al. (eds.), Biologically Motivated Computer Vision. Lecture Notes in Computer Science. Berlin, Heidelberg, New York: Springer Verlag, 398-407, 2002. c Springer-Verlag Object detection

More information

Morton-Style Factorial Coding of Color in Primary Visual Cortex

Morton-Style Factorial Coding of Color in Primary Visual Cortex Morton-Style Factorial Coding of Color in Primary Visual Cortex Javier R. Movellan Institute for Neural Computation University of California San Diego La Jolla, CA 92093-0515 movellan@inc.ucsd.edu Thomas

More information

Today s Agenda. Human abilities Cognition Review for Exam1

Today s Agenda. Human abilities Cognition Review for Exam1 Today s Agenda Human abilities Cognition Review for Exam1 Announcement Exam 1 is scheduled Monday, Oct. 1 st, in class Cover materials until Sep. 24 Most of materials from class lecture notes You are allowed

More information

Reading Assignments: Lecture 5: Introduction to Vision. None. Brain Theory and Artificial Intelligence

Reading Assignments: Lecture 5: Introduction to Vision. None. Brain Theory and Artificial Intelligence Brain Theory and Artificial Intelligence Lecture 5:. Reading Assignments: None 1 Projection 2 Projection 3 Convention: Visual Angle Rather than reporting two numbers (size of object and distance to observer),

More information

Plasticity of Cerebral Cortex in Development

Plasticity of Cerebral Cortex in Development Plasticity of Cerebral Cortex in Development Jessica R. Newton and Mriganka Sur Department of Brain & Cognitive Sciences Picower Center for Learning & Memory Massachusetts Institute of Technology Cambridge,

More information

SENSES: VISION. Chapter 5: Sensation AP Psychology Fall 2014

SENSES: VISION. Chapter 5: Sensation AP Psychology Fall 2014 SENSES: VISION Chapter 5: Sensation AP Psychology Fall 2014 Sensation versus Perception Top-Down Processing (Perception) Cerebral cortex/ Association Areas Expectations Experiences Memories Schemas Anticipation

More information

Outline 2/19/2013. Please see me after class: Sarah Pagliero Ryan Paul Demetrius Prowell-Reed Ashley Rehm Giovanni Reynel Patricia Rochin

Outline 2/19/2013. Please see me after class: Sarah Pagliero Ryan Paul Demetrius Prowell-Reed Ashley Rehm Giovanni Reynel Patricia Rochin Outline 2/19/2013 PSYC 120 General Psychology Spring 2013 Lecture 8: Sensation and Perception 1 Dr. Bart Moore bamoore@napavalley.edu Office hours Tuesdays 11:00-1:00 How we sense and perceive the world

More information

ELL 788 Computational Perception & Cognition July November 2015

ELL 788 Computational Perception & Cognition July November 2015 ELL 788 Computational Perception & Cognition July November 2015 Module 8 Audio and Multimodal Attention Audio Scene Analysis Two-stage process Segmentation: decomposition to time-frequency segments Grouping

More information

(Visual) Attention. October 3, PSY Visual Attention 1

(Visual) Attention. October 3, PSY Visual Attention 1 (Visual) Attention Perception and awareness of a visual object seems to involve attending to the object. Do we have to attend to an object to perceive it? Some tasks seem to proceed with little or no attention

More information

Information Processing During Transient Responses in the Crayfish Visual System

Information Processing During Transient Responses in the Crayfish Visual System Information Processing During Transient Responses in the Crayfish Visual System Christopher J. Rozell, Don. H. Johnson and Raymon M. Glantz Department of Electrical & Computer Engineering Department of

More information

Neuro-Inspired Statistical. Rensselaer Polytechnic Institute National Science Foundation

Neuro-Inspired Statistical. Rensselaer Polytechnic Institute National Science Foundation Neuro-Inspired Statistical Pi Prior Model lfor Robust Visual Inference Qiang Ji Rensselaer Polytechnic Institute National Science Foundation 1 Status of Computer Vision CV has been an active area for over

More information

Basics of Computational Neuroscience

Basics of Computational Neuroscience Basics of Computational Neuroscience 1 1) Introduction Lecture: Computational Neuroscience, The Basics A reminder: Contents 1) Brain, Maps,, Networks,, and The tough stuff: 2,3) Membrane Models 3,4) Spiking

More information

Visual Selection and Attention

Visual Selection and Attention Visual Selection and Attention Retrieve Information Select what to observe No time to focus on every object Overt Selections Performed by eye movements Covert Selections Performed by visual attention 2

More information

The discriminant center-surround hypothesis for bottom-up saliency

The discriminant center-surround hypothesis for bottom-up saliency Appears in the Neural Information Processing Systems (NIPS) Conference, 27. The discriminant center-surround hypothesis for bottom-up saliency Dashan Gao Vijay Mahadevan Nuno Vasconcelos Department of

More information

Subjective randomness and natural scene statistics

Subjective randomness and natural scene statistics Psychonomic Bulletin & Review 2010, 17 (5), 624-629 doi:10.3758/pbr.17.5.624 Brief Reports Subjective randomness and natural scene statistics Anne S. Hsu University College London, London, England Thomas

More information

Neuroinformatics. Ilmari Kurki, Urs Köster, Jukka Perkiö, (Shohei Shimizu) Interdisciplinary and interdepartmental

Neuroinformatics. Ilmari Kurki, Urs Köster, Jukka Perkiö, (Shohei Shimizu) Interdisciplinary and interdepartmental Neuroinformatics Aapo Hyvärinen, still Academy Research Fellow for a while Post-docs: Patrik Hoyer and Jarmo Hurri + possibly international post-docs PhD students Ilmari Kurki, Urs Köster, Jukka Perkiö,

More information

Fundamentals of Cognitive Psychology, 3e by Ronald T. Kellogg Chapter 2. Multiple Choice

Fundamentals of Cognitive Psychology, 3e by Ronald T. Kellogg Chapter 2. Multiple Choice Multiple Choice 1. Which structure is not part of the visual pathway in the brain? a. occipital lobe b. optic chiasm c. lateral geniculate nucleus *d. frontal lobe Answer location: Visual Pathways 2. Which

More information

What we see is most likely to be what matters: Visual attention and applications

What we see is most likely to be what matters: Visual attention and applications What we see is most likely to be what matters: Visual attention and applications O. Le Meur P. Le Callet olemeur@irisa.fr patrick.lecallet@univ-nantes.fr http://www.irisa.fr/temics/staff/lemeur/ November

More information

A Survey on Localizing Optic Disk

A Survey on Localizing Optic Disk International Journal of Information & Computation Technology. ISSN 0974-2239 Volume 4, Number 14 (2014), pp. 1355-1359 International Research Publications House http://www. irphouse.com A Survey on Localizing

More information

Object-based Saliency as a Predictor of Attention in Visual Tasks

Object-based Saliency as a Predictor of Attention in Visual Tasks Object-based Saliency as a Predictor of Attention in Visual Tasks Michal Dziemianko (m.dziemianko@sms.ed.ac.uk) Alasdair Clarke (a.clarke@ed.ac.uk) Frank Keller (keller@inf.ed.ac.uk) Institute for Language,

More information

A Model of Visually Guided Plasticity of the Auditory Spatial Map in the Barn Owl

A Model of Visually Guided Plasticity of the Auditory Spatial Map in the Barn Owl A Model of Visually Guided Plasticity of the Auditory Spatial Map in the Barn Owl Andrea Haessly andrea@cs.utexas.edu Joseph Sirosh sirosh@cs.utexas.edu Risto Miikkulainen risto@cs.utexas.edu Abstract

More information

Vision Seeing is in the mind

Vision Seeing is in the mind 1 Vision Seeing is in the mind Stimulus: Light 2 Light Characteristics 1. Wavelength (hue) 2. Intensity (brightness) 3. Saturation (purity) 3 4 Hue (color): dimension of color determined by wavelength

More information

Ch 5. Perception and Encoding

Ch 5. Perception and Encoding Ch 5. Perception and Encoding Cognitive Neuroscience: The Biology of the Mind, 2 nd Ed., M. S. Gazzaniga, R. B. Ivry, and G. R. Mangun, Norton, 2002. Summarized by Y.-J. Park, M.-H. Kim, and B.-T. Zhang

More information

The Role of Color and Attention in Fast Natural Scene Recognition

The Role of Color and Attention in Fast Natural Scene Recognition Color and Fast Scene Recognition 1 The Role of Color and Attention in Fast Natural Scene Recognition Angela Chapman Department of Cognitive and Neural Systems Boston University 677 Beacon St. Boston, MA

More information

M Cells. Why parallel pathways? P Cells. Where from the retina? Cortical visual processing. Announcements. Main visual pathway from retina to V1

M Cells. Why parallel pathways? P Cells. Where from the retina? Cortical visual processing. Announcements. Main visual pathway from retina to V1 Announcements exam 1 this Thursday! review session: Wednesday, 5:00-6:30pm, Meliora 203 Bryce s office hours: Wednesday, 3:30-5:30pm, Gleason https://www.youtube.com/watch?v=zdw7pvgz0um M Cells M cells

More information

Neuromorphic convolutional recurrent neural network for road safety or safety near the road

Neuromorphic convolutional recurrent neural network for road safety or safety near the road Neuromorphic convolutional recurrent neural network for road safety or safety near the road WOO-SUP HAN 1, IL SONG HAN 2 1 ODIGA, London, U.K. 2 Korea Advanced Institute of Science and Technology, Daejeon,

More information

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES Varinthira Duangudom and David V Anderson School of Electrical and Computer Engineering, Georgia Institute of Technology Atlanta, GA 30332

More information

Lighta part of the spectrum of Electromagnetic Energy. (the part that s visible to us!)

Lighta part of the spectrum of Electromagnetic Energy. (the part that s visible to us!) Introduction to Physiological Psychology Vision ksweeney@cogsci.ucsd.edu cogsci.ucsd.edu/~ /~ksweeney/psy260.html Lighta part of the spectrum of Electromagnetic Energy (the part that s visible to us!)

More information