What we see is most likely to be what matters: Visual attention and applications
1 What we see is most likely to be what matters: Visual attention and applications. O. Le Meur, P. Le Callet. November 9,
2 Introduction. Yarbus [Yarbus, 1967] demonstrated how eye movements changed depending on the question asked of the subject:
1. No question asked
2. Judge economic status
3. What were they doing before the visitor arrived?
4. What clothes are they wearing?
5. Where are they?
6. How long is it since the visitor has seen the family?
7. Estimate how long the unexpected visitor had been away from the family.
Each recording lasted 3 minutes.
3 Outline:
1 Introduction
3 Hierarchical models, Statistical models, Some examples
4 Building a ground truth..., Metrics, Limitations
5 Retargeting, Compression, Others
4 1 Introduction
5 For the computational modelling, two 'schools' can be considered. The first is based on the assumption that there is a unique saliency map [Koch et al., 1985][Li, 2002]:
Definition (saliency map): a topographic representation that combines the information from the individual feature maps into one global measure of conspicuity. This map can be modulated by higher-level feedback.
A comfortable view for the computational modelling... [Diagram: our different senses feed a saliency map, modulated by memory, driving eye movements.]
6 The second 'school' holds that there exist multiple saliency maps, distributed throughout the visual areas [Tsotsos et al., 1995]. Many candidate locations for a saliency map: primary visual cortex [Li, 2002], Lateral IntraParietal area (LIP) [Kusunoki et al., 2000], Medial Temporal cortex [Treue et al., 2006]. 'At each level, saliency can thus be used as a gain control mechanism to spatially gate relevant information for the next processing level.' From [Van Rullen, 2003]. See also the priority map [Fecteau et al., 2006].
7 Hierarchical models, Statistical models, Some examples
8 Hierarchical models. Itti's model [Itti et al., 1998], probably the best known... Based on Koch and Ullman's scheme:
- hierarchical (Gaussian) decomposition
- early visual feature extraction in a massively parallel manner
- center-surround operations
- pooling of the feature maps to form the saliency map
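The across-scale center-surround step can be sketched in a few lines of NumPy. This is a minimal stand-in, not Itti's actual implementation: the 'surround' is approximated by a block-averaged, upsampled copy of the feature map rather than a Gaussian pyramid level.

```python
import numpy as np

def center_surround(feature, block=4):
    """Center-surround response: absolute difference between the feature
    map (the fine-scale 'center') and a coarse 'surround' obtained by
    block-averaging and upsampling -- a crude stand-in for the
    across-scale differences in a hierarchical model."""
    h, w = feature.shape
    coarse = feature.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    surround = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
    return np.abs(feature - surround)

# A lone bright pixel on a uniform background pops out.
img = np.zeros((16, 16))
img[8, 8] = 1.0
cs = center_surround(img)
print(cs.argmax() == img.argmax())  # True
```

In the full architecture this operation is applied per feature channel (intensity, color, orientation) and the resulting conspicuity maps are pooled into the saliency map.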
9 Le Meur's model [Le Meur et al., 2006], an extension of Itti's model... Based on Koch and Ullman's scheme:
- light adaptation and Contrast Sensitivity Function
- hierarchical and oriented decomposition (Fourier spectrum)
- early visual feature extraction in a massively parallel manner
- center-surround operations on each oriented subband
- enhanced pooling [Le Meur et al., 2007] of the feature maps to form the saliency map
Other models in the same vein: [Marat et al., 2009], [Bur et al., 2007]...
10 Statistical models. Statistical models are based on a probabilistic framework originating in information theory.
Definition (self-information): self-information is a measure of the amount of information provided by an event. For a discrete random variable X defined on A = {x_1, ..., x_N} with pdf p, the amount of information carried by the event X = x_i is given by:
I(X = x_i) = -log2 p(X = x_i), in bits/symbol.
Properties:
- if p(X = x_i) < p(X = x_j), then I(X = x_i) > I(X = x_j)
- as p(X = x_i) tends to 0, I(X = x_i) tends to +infinity
The saliency of visual content can be deduced from the self-information measure. Self-information captures rareness, surprise, contrast...
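As a quick numeric illustration of the definition and its monotonicity property (the probabilities below are hypothetical, chosen only for the example):

```python
import math

def self_information(p):
    """I(X = x_i) = -log2 p(X = x_i), in bits."""
    return -math.log2(p)

# The rarer the event, the larger its self-information.
print(self_information(0.5))   # 1.0 bit
print(self_information(0.01))  # ~6.64 bits: rare events are 'surprising'
```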
11 The first model resting on this approach was proposed in 2003 [Oliva et al., 2003]:
S(x) = 1 / p(v_L(x)),
where v_L is a feature vector (48 dimensions) and the pdf is computed over the whole image. Bruce, in 2004 [Bruce, 2004] and 2009 [Bruce et al., 2009], modified this approach by computing the self-information locally on independent coefficients (projection onto a given basis). From [Bruce et al., 2009]. Other models in the same vein: [Mancas et al., 2006], [Zhang, 2008].
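A toy member of this family of models can be sketched directly. This is only an illustration of the rarity principle: the feature here is the raw pixel intensity (a crude 1-D stand-in for the 48-dimensional feature vector of the slide), with the pdf estimated from the global image histogram.

```python
import numpy as np

def rarity_saliency(image):
    """Per-pixel self-information of the pixel intensity, with the pdf
    estimated from the global image histogram (rarity-based saliency)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()              # empirical pdf over intensities
    return -np.log2(p[image] + 1e-12)  # rare intensities -> high saliency

# A single bright pixel in a dark image is the rarest, hence most salient.
img = np.zeros((8, 8), dtype=np.uint8)
img[4, 4] = 255
sal = rarity_saliency(img)
print(sal[4, 4] > sal[0, 0])  # True
```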
12 Some examples. [Figure: (a) original image and the saliency maps predicted by the models of (b) Itti, (c) Le Meur, (d) Bruce and (e) Zhang.]
13 Building a ground truth..., Metrics, Limitations
14 Ground truth. [Figure: (a) SM, FD; (b) SM, FN; (c) data.] A good review of parsing algorithms is given in [Salvucci et al., 2000]. On the web (for natural images): Bruce's database, Le Meur's database (28 color pictures), Rajashekar's database.
15 Metrics. Different methods are used to assess the degree of similarity between the ground truth and the prediction.
Saliency-map-based methods: [Figure: (a) original, (b) experimental SM, (c) predicted SM.]
- ROC (Receiver Operating Characteristic): each pixel is labeled as fixated or not, and several thresholds are applied, summarized by the AUC (Area Under the Curve). The higher the AUC, the better the prediction, with 0.50 indicating random performance and 1.00 denoting perfect performance.
- KL-divergence, linear correlation coefficient...
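The AUC can be sketched via the equivalent Mann-Whitney formulation: it equals the probability that a randomly chosen fixated pixel receives a higher predicted saliency than a randomly chosen non-fixated one. This is a sketch, not the exact protocol of any particular paper (which often samples or shuffles the non-fixated set):

```python
import numpy as np

def auc_saliency(saliency, fixation_mask):
    """AUC as the probability that a fixated pixel gets a higher
    predicted saliency than a non-fixated one (Mann-Whitney
    formulation of the area under the ROC curve; ties count 0.5)."""
    mask = fixation_mask.astype(bool)
    pos = saliency[mask].ravel()
    neg = saliency[~mask].ravel()
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Perfect prediction: the two fixated pixels get the two highest values.
sal = np.array([[0.9, 0.1],
                [0.2, 0.8]])
fix = np.array([[1, 0],
                [0, 1]])
print(auc_saliency(sal, fix))  # 1.0
```

A constant saliency map scores 0.5 under this formulation, matching the 'random performance' baseline stated above.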
16 Fixation-point-based methods: NSS (Normalized Scanpath Saliency) gives the degree of correspondence between human fixation locations and predicted saliency maps [Parkhurst et al., 2002], [Peters et al., 2005]:
1. Each saliency map is normalized to have zero mean and unit standard deviation.
2. The predicted saliency is extracted at each human fixation point.
3. These values are averaged.
From [Peters et al., 2005]. NSS = 0: random performance; NSS >> 0: correspondence between human fixation locations and the predicted salient points; NSS << 0: anti-correspondence.
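The three steps above can be sketched directly (fixations given as (row, col) pixel coordinates on an illustrative toy map, not the data from [Peters et al., 2005]):

```python
import numpy as np

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: (1) z-score the predicted map,
    (2) sample it at each human fixation (row, col), (3) average."""
    s = (saliency - saliency.mean()) / saliency.std()
    return float(np.mean([s[r, c] for r, c in fixations]))

sal = np.array([[0.0, 0.0],
                [0.0, 1.0]])
print(nss(sal, [(1, 1)]) > 0)  # fixation on the predicted peak
print(nss(sal, [(0, 0)]) < 0)  # fixation away from the peak
```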
17 Limitations. Mostly inherent to the building of the ground truth and to the experimental setting...
1. Several parameters can have a significant impact on the results:
- the task subjects performed (what question should we ask?)
- the nature of the stimuli viewed
- the apparatus used to record the eye movements
- a significant central bias
- cognitive constraints (higher-level goals, prior knowledge, expectations...)
2. Does every fixation have the same meaning?
18 Limitations and perspectives. The assessment of computational models mainly rests on the analysis of individual fixations. But...
- Is there a focal-ambient dichotomy? [Unema et al., 2005][Follet et al., 2009]
- Does every fixation convey the same processing?
- Would it be possible to categorize fixations as attentional fixations, semantic fixations...?
A promising solution to disentangle the different processes is EFRP (Eye-Fixation-Related Potentials) [Baccino et al., 2005]: measuring ERP (Event-Related Potentials) and eye movements conjointly to track the cognitive processes. Courtesy of T. Baccino.
19 Retargeting, Compression, Others
20 Retargeting (or reframing): principle in [Fan et al., 2003][Chamaret et al., 2008]. [Figure: examples from [Le Meur et al., 2006].]
21 Video compression: allocate more bit rate to the salient areas than to the others (adaptive quantization). Below, the macroblock cost distribution for a standard H.264 coding and for a saliency-based H.264 coding. Alternatively, a spatial blur is applied to the input frames such that the non-salient regions are strongly blurred [Itti, 2004]. From [Itti, 2004].
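The saliency-gated blur idea can be sketched as follows. This is an assumed, much simplified blending scheme for illustration only, far cruder than the multi-scale foveation filter of [Itti, 2004]:

```python
import numpy as np

def box_blur3(img):
    """3x3 box blur via padded slicing (a weak stand-in for the strong
    foveation filter used in practice)."""
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def foveate(frame, saliency):
    """Blend the frame with its blurred copy, weighted by normalized
    saliency: salient regions stay sharp, the rest is blurred."""
    rng = saliency.max() - saliency.min()
    w = (saliency - saliency.min()) / (rng if rng > 0 else 1.0)
    return w * frame + (1.0 - w) * box_blur3(frame)

frame = np.zeros((9, 9))
frame[4, 4] = 9.0
sharp = foveate(frame, saliency=frame)                   # peak is salient
blurred = foveate(frame, saliency=np.zeros_like(frame))  # nothing salient
print(sharp[4, 4], blurred[4, 4])  # 9.0 1.0
```

Blurring non-salient regions lowers their high-frequency content, so a standard encoder spends fewer bits there without an explicit change to the quantizer.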
22 Quality assessment: weight the distortion of an area by its level of interest to obtain a better prediction of the quality score [Ninassi et al., 2007]. [Figure: (a) one impaired patch, (b) three impaired patches.]
Structured document evaluation: [Figure: (a) original images from web sites, (b) mouse-tracking map (from [Mancas, 2009]).]
Others: robot navigation, super-resolution, advertising...
23 1 Introduction
24 How to model a network of saliency/priority maps? How do they cooperate? How can a computational model be evaluated accurately? Do all visual fixations convey the same information? Eye movements are the result of multiple sources of guidance [Henderson, 2003].
25
[Baccino et al., 2005] T. Baccino, Y. Manunta. Eye-fixation-related potentials: insight into parafoveal processing. Journal of Psychophysiology, 19(3), 2005.
[Bruce, 2004] N.D.B. Bruce. Image analysis through local information measures. International Conference on Pattern Recognition, 2004.
[Bruce et al., 2009] N.D.B. Bruce, J.K. Tsotsos. Saliency, attention and visual search: an information theoretic approach. Journal of Vision, 9(3), pp. 1-24, 2009.
[Bur et al., 2007] A. Bur, H. Hügli. Dynamic visual attention: competitive versus motion priority scheme. Proc. ICVS Workshop on Computational Attention &, 2007.
[Chamaret et al., 2008] C. Chamaret, O. Le Meur. Attention-based video reframing: validation using eye-tracking. ICPR, 2008.
[Henderson, 2003] J.M. Henderson. Human gaze control during real-world scene perception. Trends in Cognitive Sciences, 7(11), 2003.
[Itti et al., 1998] L. Itti, C. Koch, E. Niebur. A model for saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Analysis and Machine Intelligence, 20, 1998.
[Itti, 2004] L. Itti. Automatic foveation for video compression using a neurobiological model of visual attention. IEEE Transactions on Image Processing, 13(10), 2004.
[Kusunoki et al., 2000] M. Kusunoki, J. Gottlieb, M.E. Goldberg. The lateral intraparietal area as a salience map: the representation of abrupt onset, stimulus motion, and task relevance. Vision Research, 40(10), 2000.
26
[Koch et al., 1985] C. Koch, S. Ullman. Shifts in selective visual attention: towards the underlying neural circuitry. Human Neurobiology, 4, 1985.
[Fan et al., 2003] X. Fan, X. Xie, W. Ma, H. Zhang, H. Zhou. Visual attention based image browsing on mobile devices. ICME, 2003.
[Fecteau et al., 2006] J.H. Fecteau, D.P. Munoz. Salience, relevance, and firing: a priority map for target selection. Trends in Cognitive Sciences, 10, 2006.
[Follet et al., 2009] B. Follet, O. Le Meur, T. Baccino. Relationship between coarse-to-fine process and ambient-to-focal visual fixations. ECEM, 2009.
[Le Meur et al., 2007] O. Le Meur, P. Le Callet, D. Barba. Predicting visual fixations on video based on low-level visual features. Vision Research, 47(19), 2007.
[Le Meur et al., 2006] O. Le Meur, P. Le Callet, D. Barba, D. Thoreau. A coherent computational approach to model the bottom-up visual attention. IEEE Trans. on Pattern Analysis and Machine Intelligence, 28(5), 2006.
[Li, 2002] Z. Li. A saliency map in primary visual cortex. Trends in Cognitive Sciences, 6(1), pp. 9-16, 2002.
[Mancas et al., 2006] M. Mancas, C. Mancas-Thillou, B. Gosselin, B. Macq. A rarity-based visual attention map - application to texture description. ICIP, 2006.
[Mancas, 2009] M. Mancas. Relative influence of bottom-up and top-down attention. Attention in Cognitive Systems, Lecture Notes in Computer Science, 2009.
27
[Marat et al., 2009] S. Marat, T. Ho Phuoc, L. Granjon, N. Guyader, D. Pellerin, A. Guerin-Dugue. Modelling spatio-temporal saliency to predict gaze direction for short videos. International Journal of Computer Vision, 82(3), 2009.
[Ninassi et al., 2007] A. Ninassi, O. Le Meur, P. Le Callet, D. Barba. Does where you gaze on an image affect your perception of quality? Applying visual attention to image quality metric. ICIP, 2007.
[Oliva et al., 2003] A. Oliva, A. Torralba, M.S. Castelhano, J.M. Henderson. Top-down control of visual attention in object detection. ICIP, vol. 1, 2003.
[Parkhurst et al., 2002] D. Parkhurst, K. Law, E. Niebur. Modelling the role of salience in the allocation of overt visual attention. Vision Research, 42(1), 2002.
[Peters et al., 2005] R. Peters, A. Iyer, L. Itti, C. Koch. Components of bottom-up gaze allocation in natural images. Vision Research, 2005.
[Salvucci et al., 2000] D.D. Salvucci, J.H. Goldberg. Identifying fixations and saccades in eye-tracking protocols. ETRA, 2000.
[Treue et al., 2006] S. Treue, J.C. Martinez-Trujillo. Visual search and single-cell electrophysiology of attention: Area MT, from sensation to perception. Visual Cognition, 14(4), 2006.
[Tsotsos et al., 1995] J.K. Tsotsos, S. Culhane, W. Wai, Y. Lai, N. Davis, F. Nuflo. Modeling visual attention via selective tuning. Artificial Intelligence, 78(1-2), 1995.
28
[Unema et al., 2005] P. Unema, S. Pannasch, M. Joos, B. Velichkovsky. Time course of information processing during scene perception: the relationship between saccade amplitude and fixation duration. Visual Cognition, 12(3), 2005.
[Van Rullen, 2003] R. Van Rullen. Saliency and spike timing in the ventral visual pathway. Journal of Physiology, 2003.
[Yarbus, 1967] A.L. Yarbus. Eye Movements and Vision. New York: Plenum, 1967.
[Zhang, 2008] L. Zhang, M.H. Tong, T.K. Marks, H. Shan, G.W. Cottrell. SUN: a Bayesian framework for saliency using natural statistics. Journal of Vision, 8(7), pp. 1-20, 2008.
Introduction to Computational Neuroscience Lecture 11: Attention & Decision making Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis
More informationLocal Image Structures and Optic Flow Estimation
Local Image Structures and Optic Flow Estimation Sinan KALKAN 1, Dirk Calow 2, Florentin Wörgötter 1, Markus Lappe 2 and Norbert Krüger 3 1 Computational Neuroscience, Uni. of Stirling, Scotland; {sinan,worgott}@cn.stir.ac.uk
More informationThe Role of Top-down and Bottom-up Processes in Guiding Eye Movements during Visual Search
The Role of Top-down and Bottom-up Processes in Guiding Eye Movements during Visual Search Gregory J. Zelinsky, Wei Zhang, Bing Yu, Xin Chen, Dimitris Samaras Dept. of Psychology, Dept. of Computer Science
More informationIncreasing Spatial Competition Enhances Visual Prediction Learning
(2011). In A. Cangelosi, J. Triesch, I. Fasel, K. Rohlfing, F. Nori, P.-Y. Oudeyer, M. Schlesinger, Y. and Nagai (Eds.), Proceedings of the First Joint IEEE Conference on Development and Learning and on
More informationObject detection in natural scenes by feedback
In: H.H. Būlthoff et al. (eds.), Biologically Motivated Computer Vision. Lecture Notes in Computer Science. Berlin, Heidelberg, New York: Springer Verlag, 398-407, 2002. c Springer-Verlag Object detection
More informationState-of-the-Art in Visual Attention Modeling
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 35, NO. 1, JANUARY 2013 185 State-of-the-Art in Visual Attention Modeling Ali Borji, Member, IEEE, and Laurent Itti, Member, IEEE Abstract
More informationChapter 14 Mining Videos for Features that Drive Attention
Chapter 4 Mining Videos for Features that Drive Attention Farhan Baluch and Laurent Itti Abstract Certain features of a video capture human attention and this can be measured by recording eye movements
More informationSaliency Prediction with Active Semantic Segmentation
JIANG et al.: SALIENCY PREDICTION WITH ACTIVE SEMANTIC SEGMENTATION 1 Saliency Prediction with Active Semantic Segmentation Ming Jiang 1 mjiang@u.nus.edu Xavier Boix 1,3 elexbb@nus.edu.sg Juan Xu 1 jxu@nus.edu.sg
More informationAdding Shape to Saliency: A Computational Model of Shape Contrast
Adding Shape to Saliency: A Computational Model of Shape Contrast Yupei Chen 1, Chen-Ping Yu 2, Gregory Zelinsky 1,2 Department of Psychology 1, Department of Computer Science 2 Stony Brook University
More informationPRECISE processing of only important regions of visual
Unsupervised Neural Architecture for Saliency Detection: Extended Version Natalia Efremova and Sergey Tarasenko arxiv:1412.3717v2 [cs.cv] 10 Apr 2015 Abstract We propose a novel neural network architecture
More informationRelevance of Computational model for Detection of Optic Disc in Retinal images
Relevance of Computational model for Detection of Optic Disc in Retinal images Nilima Kulkarni Department of Computer Science and Engineering Amrita school of Engineering, Bangalore, Amrita Vishwa Vidyapeetham
More informationA Model of Saliency-Based Visual Attention for Rapid Scene Analysis
A Model of Saliency-Based Visual Attention for Rapid Scene Analysis Itti, L., Koch, C., Niebur, E. Presented by Russell Reinhart CS 674, Fall 2018 Presentation Overview Saliency concept and motivation
More informationPre-Attentive Visual Selection
Pre-Attentive Visual Selection Li Zhaoping a, Peter Dayan b a University College London, Dept. of Psychology, UK b University College London, Gatsby Computational Neuroscience Unit, UK Correspondence to
More informationLearning Spatiotemporal Gaps between Where We Look and What We Focus on
Express Paper Learning Spatiotemporal Gaps between Where We Look and What We Focus on Ryo Yonetani 1,a) Hiroaki Kawashima 1,b) Takashi Matsuyama 1,c) Received: March 11, 2013, Accepted: April 24, 2013,
More informationintensities saliency map
Neuromorphic algorithms for computer vision and attention Florence Miau 1, Constantine Papageorgiou 2 and Laurent Itti 1 1 Department of Computer Science, University of Southern California, Los Angeles,
More informationReading Assignments: Lecture 18: Visual Pre-Processing. Chapters TMB Brain Theory and Artificial Intelligence
Brain Theory and Artificial Intelligence Lecture 18: Visual Pre-Processing. Reading Assignments: Chapters TMB2 3.3. 1 Low-Level Processing Remember: Vision as a change in representation. At the low-level,
More informationPredicting human gaze using low-level saliency combined with face detection
Predicting human gaze using low-level saliency combined with face detection Moran Cerf Computation and Neural Systems California Institute of Technology Pasadena, CA 925 moran@klab.caltech.edu Wolfgang
More informationHuman Learning of Contextual Priors for Object Search: Where does the time go?
Human Learning of Contextual Priors for Object Search: Where does the time go? Barbara Hidalgo-Sotelo 1 Aude Oliva 1 Antonio Torralba 2 1 Department of Brain and Cognitive Sciences, 2 Computer Science
More informationarxiv: v1 [cs.cv] 1 Sep 2016
arxiv:1609.00072v1 [cs.cv] 1 Sep 2016 Attentional Push: Augmenting Salience with Shared Attention Modeling Siavash Gorji James J. Clark Centre for Intelligent Machines, Department of Electrical and Computer
More informationI. INTRODUCTION VISUAL saliency, which is a term for the pop-out
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 27, NO. 6, JUNE 2016 1177 Spatiochromatic Context Modeling for Color Saliency Analysis Jun Zhang, Meng Wang, Member, IEEE, Shengping Zhang,
More informationControl of Selective Visual Attention: Modeling the "Where" Pathway
Control of Selective Visual Attention: Modeling the "Where" Pathway Ernst Niebur Computation and Neural Systems 139-74 California Institute of Technology Christof Koch Computation and Neural Systems 139-74
More informationSingle cell tuning curves vs population response. Encoding: Summary. Overview of the visual cortex. Overview of the visual cortex
Encoding: Summary Spikes are the important signals in the brain. What is still debated is the code: number of spikes, exact spike timing, temporal relationship between neurons activities? Single cell tuning
More informationAn Audiovisual Saliency Model For Conferencing and Conversation
An Audiovisual Saliency Model For Conferencing and Conversation Videos Naty Ould Sidaty, Mohamed-Chaker Larabi, Abdelhakim Saadane 2 XLIM Lab., University of Poitiers, France 2 XLIM Lab., Polytech, University
More informationAttention Estimation by Simultaneous Observation of Viewer and View
Attention Estimation by Simultaneous Observation of Viewer and View Anup Doshi and Mohan M. Trivedi Computer Vision and Robotics Research Lab University of California, San Diego La Jolla, CA 92093-0434
More informationObject and Gist Perception in a Dual Task Paradigm: Is Attention Important?
Object and Gist Perception in a Dual Task Paradigm: Is Attention Important? Maria Koushiou1 (maria_koushiou@yahoo.com) Elena Constantinou1 (elena_constantinou@yahoo.gr) 1 Department of Psychology, University
More informationThe influence of clutter on real-world scene search: Evidence from search efficiency and eye movements
The influence of clutter on real-world scene search: Evidence from search efficiency and eye movements John Henderson, Myriam Chanceaux, Tim Smith To cite this version: John Henderson, Myriam Chanceaux,
More informationSelective Attention. Modes of Control. Domains of Selection
The New Yorker (2/7/5) Selective Attention Perception and awareness are necessarily selective (cell phone while driving): attention gates access to awareness Selective attention is deployed via two modes
More informationA Bayesian Hierarchical Framework for Multimodal Active Perception
A Bayesian Hierarchical Framework for Multimodal Active Perception João Filipe Ferreira and Jorge Dias Institute of Systems and Robotics, FCT-University of Coimbra Coimbra, Portugal {jfilipe,jorge}@isr.uc.pt
More information