Real-time computational attention model for dynamic scenes analysis

Computer Science Image and Interaction Laboratory
Real-time computational attention model for dynamic scenes analysis
Matthieu Perreira Da Silva, Vincent Courboulay
19/04/2012, Photonics Europe 2012 Symposium, Brussels, 16-19 April

OVERVIEW
- Introduction
- Reference systems
- Our contribution
- Experiments
- Dynamic scenes
- Conclusion and outlook

INTRODUCTION

WHAT ARE WE TALKING ABOUT?
What is visual attention? A tool for:
- selectively concentrating on one aspect of the visual environment while ignoring the others
- allocating processing resources
Links with saliency maps:
- a saliency map describes how important each part of the visual signal is
- some theories claim that such map(s) exist in our brain
2 types of visual attention:
- overt: eye movements
- covert: mental focus
- saccades vs. fixations (cf. previous presentation)
2 types of attention driving:
- bottom-up, stimulus based (involuntary): rarity / surprise / novelty
- top-down, goal directed (voluntary)

WHAT FOR?
A building block for:
- computer vision / robotics architectures: smarter scene exploration, resource allocation
- artificial intelligence: detect novel / important data and learn from it
- understanding human visual attention
- replacing eye-tracking
- multimedia applications (smart TV, ...)

WHY ANOTHER MODEL?
Many visual attention models already exist: [Itti1998], [Ouerhani2003], [Tsotsos2005], [LeMeur2005], [Hamker2005], [Frintrop2006], [Mancas2007], [Bruce2009] and others (cf. the presentation of Mr Stentiford).
Usually, 1 model = 1 set of constraints / hypotheses.
In our case:
- real time
- image and video
- focus points (no saliency map)
- dynamical results

REFERENCE SYSTEMS

L. Itti's original architecture
One more time, the famous Itti architecture: a well-known attention model (1998), biologically inspired, with an open source implementation, and quite fast.
But: normalization and fusion are difficult to adjust, and there are no dynamics in the simulation.

S. Frintrop's improvements (VOCUS)
Almost the same architecture, with a better normalization operator, better center-surround filtering, and faster execution.

WHAT CAN WE DO NEXT?
Better conspicuity map fusion:
- normalization + linear combination are difficult to adjust in the absence of prior knowledge
- map fusion is a competition between different kinds of information to gain attention, so why not use existing preys / predators models?
Dynamical scene analysis

OUR CONTRIBUTION

MODIFIED ARCHITECTURE
4 integral images, 10 feature maps, 3 conspicuity maps.
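As a rough illustration of what the integral images buy here, the sketch below computes a center-surround contrast map from a single feature channel using a summed-area table, so box means of any size cost a constant number of lookups per pixel. This is only a minimal Python sketch under that assumption; the function names and radii are illustrative, not the authors' implementation.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_mean(ii, y, x, r):
    """Mean of the (2r+1)x(2r+1) box centred on (y, x), clipped to the image."""
    h, w = ii.shape[0] - 1, ii.shape[1] - 1
    y0, y1 = max(y - r, 0), min(y + r + 1, h)
    x0, x1 = max(x - r, 0), min(x + r + 1, w)
    s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return s / ((y1 - y0) * (x1 - x0))

def center_surround(feature, r_center=3, r_surround=15):
    """Absolute difference between a small 'center' box mean and a large
    'surround' box mean -- a cheap stand-in for DoG-style filtering."""
    ii = integral_image(feature)
    out = np.empty(feature.shape, dtype=float)
    for y in range(feature.shape[0]):
        for x in range(feature.shape[1]):
            out[y, x] = abs(box_mean(ii, y, x, r_center) - box_mean(ii, y, x, r_surround))
    return out
```

Applying such a filter to the intensity, color-opponent and orientation channels at a few center/surround scales is how one would obtain a set of feature maps that are then collapsed into the three conspicuity maps.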

WHY A PREYS / PREDATORS SYSTEM FOR CONSPICUITY MAP FUSION?
Dynamical system:
- time evolution is intrinsically handled
- the visual attention focus (maximum of the predator population) can evolve dynamically
Competition as a default fusion strategy:
- different types of information have to be mixed, and it is hard to find a good default fusion strategy without top-down information or pregnancy
- natural equilibrium
Chaotic behavior:
- comes from discrete dynamical systems
- usually not a wanted property, but it allows original exploration paths to emerge even in non-salient areas: curiosity!

HOW DO PREYS / PREDATORS SYSTEMS WORK?
Equations proposed independently by V. Volterra and A. J. Lotka in the 1920s: first-order, non-linear differential equations that describe the dynamics of biological systems in which two species interact. Originally used to model fish catches in the Adriatic.
In its simplest form:
dx/dt = αx − βxy
dy/dt = δxy − γy
where:
- x is the number of preys
- y is the number of predators
- α is the preys' birth rate (exponential growth)
- β is the predation rate
- γ is the predators' natural death rate
- δ is the predators' growth rate (linked to predation)
In theory, solutions to these equations are periodic.
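For concreteness, here is a minimal numerical sketch of this basic Lotka-Volterra system, integrated with forward Euler; the initial populations and rate constants are arbitrary illustrative values, not anything taken from the talk.

```python
import numpy as np

def lotka_volterra(x0=10.0, y0=5.0, alpha=1.0, beta=0.1,
                   gamma=1.5, delta=0.075, dt=0.001, steps=20000):
    """Forward-Euler integration of dx/dt = ax - bxy, dy/dt = dxy - gy."""
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(steps):
        dx = alpha * x - beta * x * y    # prey growth minus predation
        dy = delta * x * y - gamma * y   # predator growth minus natural death
        x, y = x + dt * dx, y + dt * dy
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

prey, predators = lotka_volterra()
# Both populations oscillate with a phase lag, as the periodic solutions predict.
```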

TRANSPOSING PREYS / PREDATORS SYSTEMS TO VISUAL ATTENTION
General features:
- 2D preys / predators system (maps)
- preys and predators can move (diffusion)
Metaphor:
- the system is comprised of 3 types of resources, 3 types of preys and 1 type of predators
- preys represent the spatial distribution of the curiosity generated by the 3 types of resources (the conspicuity maps: intensity, color and orientation)
- predators represent the interest generated by the consumption of curiosity (preys)
- the global maximum of the predators map (interest) is the focus of attention at time t

Our preys / predators system equations
Notation:
- C: preys (curiosity)
- I: predators (interest)
- S: image conspicuity
- G: Gaussian map
- R: random map
- e: entropy of the conspicuity map
- h: preys' birth rate
- b: preys' growth factor (0.005)
- m_c: preys' death factor
- g: central bias factor (0.1)
- a: randomness factor (0.3)
- f: diffusion factor (0.2)
- w: quadratic term (0.001)
- s: predation / predators' growth factor
- m_i: predators' death factor
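The published equations themselves are not legible in this transcription, so the sketch below only illustrates the general scheme described on the previous slide: conspicuity feeds curiosity, interest grows by consuming curiosity, both decay and diffuse, and the focus is the global maximum of the interest map. It is an assumption-laden toy, not the authors' model; it collapses the three curiosity maps into one for brevity, and the parameter names only loosely follow the legend above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def step(curiosity, interest, conspicuity,
         h=0.05, s=0.01, m_c=0.05, m_i=0.1, f=0.2):
    """One illustrative update of a 2D preys (curiosity) / predators (interest)
    system.  NOT the authors' exact equations, only the general scheme:
    conspicuity feeds curiosity, interest grows by consuming curiosity,
    both populations decay and diffuse spatially."""
    predation = s * curiosity * interest
    d_curiosity = h * conspicuity - predation - m_c * curiosity
    d_interest = predation - m_i * interest
    curiosity = curiosity + d_curiosity
    interest = interest + d_interest
    # crude diffusion: blend each map with its local mean
    curiosity = (1 - f) * curiosity + f * uniform_filter(curiosity, size=3)
    interest = (1 - f) * interest + f * uniform_filter(interest, size=3)
    return np.clip(curiosity, 0, None), np.clip(interest, 0, None)

# The focus of attention at time t is the global maximum of the interest map:
# focus = np.unravel_index(np.argmax(interest), interest.shape)
```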

EXPERIMENTS

EXPERIMENTS
Validation through subjective and objective evaluations:
- Evaluation of preys / predators systems for visual attention simulation, in VISAPP 2010 - International Conference on Computer Vision Theory and Applications, 275-282, INSTICC, Angers (2010).
- Objective validation of a dynamical and plausible computational model of visual attention, in IEEE European Workshop on Visual Information Processing, France (2011).
- Image complexity measure based on visual attention, in IEEE ICIP, 3342-3345 (2011).

DYNAMIC SCENES

WHAT ABOUT DYNAMICS?
Top-down feedback & adaptation mechanisms:
- global weighting of the feature maps: allows biasing the attentional system in favor of the distinctive features of a target object
- local weighting of the feature maps: allows specifying prior knowledge about the target localization
Figure: a) heatmap generated with the default parameters, b) heatmap generated with a lower color weight, c) heatmap generated with a high color weight.
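A hedged sketch of how these two weighting mechanisms could look in code: a per-channel global weight and an optional 2D location prior multiplied into the fused map. The function name, dictionary keys and example weights are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def top_down_fusion(conspicuity_maps, global_weights=None, location_prior=None):
    """Illustrative top-down modulation.  'conspicuity_maps' is a dict such as
    {'intensity': ..., 'color': ..., 'orientation': ...} of equally sized 2D
    arrays; 'global_weights' biases each channel's contribution;
    'location_prior' is a 2D map (e.g. a Gaussian around the expected target
    position) used as a local weight."""
    if global_weights is None:
        global_weights = {name: 1.0 for name in conspicuity_maps}
    fused = sum(global_weights[name] * cmap for name, cmap in conspicuity_maps.items())
    if location_prior is not None:
        fused = fused * location_prior          # local weighting
    return fused / (fused.max() + 1e-12)        # normalize to [0, 1]

# Example: de-emphasize color (panel b) or emphasize it (panel c)
# fused_low  = top_down_fusion(maps, {'intensity': 1, 'color': 0.2, 'orientation': 1})
# fused_high = top_down_fusion(maps, {'intensity': 1, 'color': 3.0, 'orientation': 1})
```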

WHAT ABOUT DYNAMICS?
Scene exploration: different scenarios:
- scene exploration maximization: the attentional system favors unvisited areas
- focalization stability: the attentional system favors already visited areas
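One possible way to realize these two scenarios, shown purely as an assumption-labeled sketch: damp or boost the curiosity around the last focus point depending on whether exploration or focalization stability is wanted. The function and parameters are hypothetical, not the authors' mechanism.

```python
import numpy as np

def visit_feedback(curiosity, focus, mode="explore", radius=15, gain=0.5):
    """Illustrative scene-exploration feedback around the last focus point.
    mode='explore'   -> damp curiosity near visited areas (favor unvisited ones)
    mode='stabilize' -> boost curiosity near visited areas (favor refixation)."""
    yy, xx = np.mgrid[0:curiosity.shape[0], 0:curiosity.shape[1]]
    bump = np.exp(-((yy - focus[0]) ** 2 + (xx - focus[1]) ** 2) / (2 * radius ** 2))
    factor = (1 - gain * bump) if mode == "explore" else (1 + gain * bump)
    return curiosity * factor
```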

DEMONSTRATION
Let's try a little demo. Please start a little prey for me :o)

CONCLUSION AND OUTLOOK

CONCLUSION
Better conspicuity map fusion:
- normalization + linear combination are difficult to adjust in the absence of prior knowledge
- map fusion is a competition between different kinds of information to gain attention, so why not use existing preys / predators models?
Dynamical scene analysis

CONCLUSION
- fast but efficient conspicuity map generation
- a dynamical-systems-based conspicuity map fusion architecture: generates a real-time attentional focus and seems efficient
- a validated method
- dynamics may be included in several ways

OUTLOOK
Ongoing projects:
- integrate complex feature and conspicuity maps
- integrate top-down maps
- video streams and depth flow
- working but not evaluated yet
Outlook:
- attention as a decision in information space [J.Gottlieb2010]
