INCORPORATING VISUAL ATTENTION MODELS INTO IMAGE QUALITY METRICS
Welington Y.L. Akamine and Mylène C.Q. Farias, Member, IEEE
Department of Computer Science, University of Brasília (UnB), Brasília, DF, Brazil

ABSTRACT

A recent development in the area of image quality consists of incorporating aspects of visual attention into metric design, generally by combining distortion maps and saliency maps. Although a good number of saliency-inspired image quality metrics have been proposed, results are not yet conclusive: some researchers have reported that the incorporation of visual attention increases the performance of quality metrics, while others have reported little or no improvement. In this work, we investigate the benefit of incorporating saliency maps obtained with computational visual attention models into three image quality metrics (SSIM, PSNR, and MSE). More specifically, we compare the performance of quality metrics using saliency maps obtained with computational visual attention models against quality metrics using subjective saliency maps. Results show that performance is linked not only to the precision of the saliency map, but also to the type of distortion.

I. INTRODUCTION AND MOTIVATION

Objective visual quality metrics can be classified as data metrics, which measure the fidelity of the signal without considering its content, or picture metrics, which estimate quality considering the visual information contained in the data. Customarily, quality measurements in the area of image processing have been largely limited to a few data metrics, such as mean absolute error (MAE), mean square error (MSE), and peak signal-to-noise ratio (PSNR), supplemented by limited subjective evaluation.
Although over the years data metrics have been widely criticized for not correlating well with perceived quality, it has been shown that they can predict subjective ratings with reasonable accuracy as long as the comparisons are made with the same content, the same technique, or the same type of distortion. One of the major reasons why these simple metrics do not generally perform as desired is that they do not incorporate any human visual system (HVS) features in their computation. It has been discovered that, in the primary visual cortex of mammals, an image is not represented in the pixel domain, but in a rather different manner. Unfortunately, the measurements produced by metrics like MSE or PSNR are based on a simple pixel-by-pixel comparison of the data, without considering the content or the relationships among the pixels of an image (or frame). In the past few years, a great deal of effort in the scientific community has been devoted to the development of better image and video quality metrics that incorporate HVS features (i.e., picture metrics) and, therefore, correlate better with the human perception of quality [1], [2]. A recent development in the area of image quality consists of incorporating aspects of visual attention into the design of visual quality metrics, mostly under the assumption that visual distortions appearing in less salient areas might be less visible and, therefore, less annoying [3], [4]. This research area is still in its infancy and the results obtained by different groups are not yet conclusive, as pointed out by Engelke et al. [5].

(This work was supported in part by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) Brazil, in part by Fundação de Empreendimentos Científicos e Tecnológicos (Finatec), and in part by University of Brasília (UnB) ProIC/DPP.)
Some researchers have reported that the incorporation of saliency maps increases the performance of quality metrics, while others have reported little or no improvement. Among the works that report some improvement, most use subjective saliency maps, i.e., saliency maps generated from eye-tracking data obtained experimentally [6], [7]. This raises the question of how metric performance is affected by the precision of the saliency map and by the integration model. Another open question is how the distortion type affects the saliency map and, consequently, the metric performance. In this work, we investigate the benefit of incorporating saliency maps obtained with computational visual attention models into three image quality metrics (SSIM, PSNR, and MSE). More specifically, we compare the performance of quality metrics using saliency maps obtained with computational visual attention models against quality metrics using subjective saliency maps [6]. We also discuss the influence of different types of distortion on the saliency maps (objective and subjective) and on the performance of the tested metrics.
Fig. 1. Saliency maps corresponding to the images Caps and Rapids: (a) original image Caps; (b) Itti's saliency map of Caps; (c) Achanta's saliency map of Caps; (d) subjective saliency map of Caps; (e) original image Rapids; (f) Itti's saliency map of Rapids; (g) Achanta's saliency map of Rapids; (h) subjective saliency map of Rapids. The maps were obtained using the computational attention models by Itti [8] and Achanta et al. [9], and the subjective saliency maps come from the TUD LIVE Eye Tracking database [10].

II. VISUAL ATTENTION AND IMAGE QUALITY

When observing a scene, the human eye typically filters the large amount of visual information available and attends to selected areas [8]. Oculo-motor mechanisms allow the gaze to either hold on a particular location (fixation) or shift to another location once sufficient information has been collected (saccades). The selection of fixations is based on the visual properties of the scene: priority is given to areas with a high concentration of information, minimizing the amount of data to be processed by the brain while maximizing the quality of the collected information. Visual attention is, therefore, a feature of the human visual system whose goal is to reduce the complexity of scene analysis. It can be divided into two mechanisms that, combined, define which areas of the scene are considered relevant and, therefore, should be attended. These two mechanisms are known as bottom-up and top-down attention selection. The bottom-up mechanism is an automated selection controlled mostly by the signal, independent of the task being performed. It is fast and short-lasting, occurring as a response to low-level features that are perceived as visually salient, standing out from the background of the scene. The top-down mechanism is controlled by higher cognitive factors and external influences, such as semantic information, viewing task, personal preferences, and context.
It is slower than bottom-up attention, requiring a voluntary effort. In this work, we use two bottom-up computational visual attention models and a database of subjective saliency maps. The first computational model is the visual attention model proposed by Itti [8] and implemented by Walther and Koch [11]. Itti's model analyzes the image by looking for its most relevant visual properties: color, intensity, and orientation. The algorithm creates maps corresponding to these three properties and combines them to predict the probabilities of fixation in specific areas of the image. Using a winner-take-all approach, predicted fixations are selected as the most salient points of the image. In order to combine the attention information with the objective quality metrics, a saliency map is needed instead of the individual (ordered) fixation points. To obtain such a map, we filter the set of predicted fixation points with a Gaussian filter. The second computational visual attention model used in this work was proposed by Achanta et al. [9]. To obtain a saliency map, this model uses a contrast determination filter operating at various scales. First, the algorithm calculates the local contrast of a sub-region with respect to its neighboring regions at various scales (varying sizes), as the distance between the average feature vector of the sub-region and the average feature vectors of its neighbors. From these distances, feature maps at each scale are calculated and then combined to produce the final saliency map. For comparison purposes, we also used the subjective saliency maps of the TUD LIVE Eye Tracking database [10], collected in a subjective experiment performed at the Technical University of Delft. The experiment used eye-tracking equipment and the twenty-nine source images of the LIVE image quality assessment database [12]; the saliency maps used in this work were derived from the recorded eye-tracking data.

In Fig. 1, the images Caps and Rapids are depicted along with their saliency maps. Columns 2 and 3 of Fig. 1 correspond to the saliency maps obtained using Itti's and Achanta's computational models, respectively, while column 4 corresponds to the subjective saliency maps from the TUD LIVE Eye Tracking database. In the saliency maps, higher (lighter) luminance values correspond to higher saliency, while lower (darker) values correspond to points where low attention was predicted. Notice that all the saliency maps are able to capture (at least) the most salient areas of the images Caps and Rapids, which correspond to the caps and the boat, respectively. Nevertheless, the three maps show many differences. For example, there seem to be fewer salient areas in the subjective maps, and the maps obtained with Achanta's model have less uniform areas. Without further processing, Itti's model seems to provide saliency maps closer to the subjective ones.

III. INCORPORATING SALIENCY INTO IMAGE QUALITY METRICS

In the last few years, many works have incorporated saliency maps into image quality metrics using simple combination rules. In this work, our goal is to investigate the benefit of incorporating saliency maps obtained using computational models. We incorporate the information from the saliency map into three different image quality (fidelity) metrics: Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM) [13]. The MSE is calculated using the following equation:

MSE = \frac{1}{MN} \sum_{x=1}^{M} \sum_{y=1}^{N} \left[ I_o(x,y) - I_t(x,y) \right]^2,   (1)

where I_o(x,y) is a reference image pixel, I_t(x,y) is the corresponding test image pixel, and M \times N is the size of the images. PSNR is calculated using the following equation:

PSNR = 20 \log_{10} \frac{MAX_i}{\sqrt{MSE}},   (2)

where MAX_i is the highest intensity level of the pixels. For color images, the total MSE and PSNR are calculated by averaging the MSE and PSNR over the three color planes. The third metric, SSIM, is a more complex and robust full-reference (FR) image quality metric.
The general equation for SSIM is:

SSIM(I_o, I_t) = \frac{(2\mu_o\mu_t + C_1)(2\sigma_{ot} + C_2)}{(\mu_o^2 + \mu_t^2 + C_1)(\sigma_o^2 + \sigma_t^2 + C_2)},   (3)

where \mu is the average intensity, \sigma is the standard deviation, and \sigma_{ot} is the covariance between the original image (I_o) and the test image (I_t). The constants C_1 and C_2 are used to avoid numerical problems when the denominators reach values close to zero. In order to combine the saliency and quality models, we used an approach similar to that of Liu and Heynderickx [7]. Since all the metrics considered generate distortion maps of the same size as the saliency maps, the points of the saliency map are used as weights for the distortion map generated by the quality metric. In the case of MSE, the integration model multiplies each pixel of the MSE distortion map by the corresponding pixel of the saliency map. In other words, the modified saliency-based MSE (SalMSE) is given by:

SalMSE = \frac{\sum_{x=1}^{M} \sum_{y=1}^{N} MSE(x,y)\, Sal(x,y)}{\sum_{x=1}^{M} \sum_{y=1}^{N} Sal(x,y)},   (4)

where MSE(x,y) is the MSE distortion map, Sal(x,y) is the saliency map, and x and y are the horizontal and vertical coordinates. In the same way, the modified saliency-based PSNR (SalPSNR) is given by:

SalPSNR = 20 \log_{10} \frac{MAX_i}{\sqrt{SalMSE}},   (5)

and the modified saliency-based SSIM (SalSSIM) is given by:

SalSSIM = \frac{\sum_{x=1}^{M} \sum_{y=1}^{N} SSIM_{map}(x,y)\, Sal(x,y)}{\sum_{x=1}^{M} \sum_{y=1}^{N} Sal(x,y)},   (6)

where SSIM_{map}(x,y) is the SSIM distortion map.

IV. SIMULATION RESULTS

To study the effect of integrating visual attention information into image quality metrics, we tested the quality metrics with and without incorporating saliency maps. We used the images of the LIVE image quality assessment database [12], which contains the following degradations: JPEG, JPEG2000, Gaussian Blur, Fast Fading, and White Noise. In Fig. 2, we show images with examples of very strong degradations with the purpose of exemplifying each type of distortion.
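The pipeline described by Eqs. (4) and (5), preceded by the Gaussian filtering of predicted fixation points mentioned in Section II, can be sketched in NumPy as follows. This is a minimal illustration with names of our own choosing; the Gaussian width `sigma` and `max_i = 255` (8-bit images) are assumptions, not parameters taken from the paper.

```python
import numpy as np

def fixations_to_saliency_map(fixations, shape, sigma=20.0):
    """Approximate the Gaussian filtering of predicted fixation points:
    place a Gaussian blob at each (row, col) fixation and normalize.
    `sigma` (in pixels) is an assumed parameter, not from the paper."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    sal = np.zeros(shape, dtype=float)
    for r, c in fixations:
        sal += np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2.0 * sigma ** 2))
    m = sal.max()
    return sal / m if m > 0 else sal

def sal_weighted(dist_map, sal):
    """Eqs. (4)/(6): saliency-weighted average of a per-pixel map,
    renormalized by the total saliency."""
    return float((dist_map * sal).sum() / sal.sum())

def sal_mse(ref, test, sal):
    """Eq. (4): SalMSE over the per-pixel squared-error map."""
    err_map = (ref.astype(float) - test.astype(float)) ** 2
    return sal_weighted(err_map, sal)

def sal_psnr(ref, test, sal, max_i=255.0):
    """Eq. (5): SalPSNR, i.e. PSNR computed from SalMSE."""
    return 20.0 * np.log10(max_i / np.sqrt(sal_mse(ref, test, sal)))
```

With a uniform saliency map, SalMSE reduces to the plain MSE of Eq. (1), which is a convenient sanity check for the weighting; SalSSIM is obtained by applying `sal_weighted` to the SSIM map in the same way.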
In Tables I, II, III, IV, and V, we present the Pearson and Spearman correlation coefficients obtained for the quality metrics described in the previous sections, tested on images with JPEG, JPEG2000, Gaussian Blur, Fast Fading, and White Noise degradations, respectively. The tables show the correlation values for the metrics with no saliency map ("none": MSE, PSNR, and SSIM) and with the incorporation of the three saliency maps considered in this work (Itti's and Achanta's computational attention models and the subjective saliency maps from TUD). Columns 4, 6, and 8 of each table show the differences in correlation obtained by incorporating the saliency map into the quality metric. A positive difference means that an improvement in performance was obtained, while a negative value means a deterioration in performance. Very small differences are probably not significant. Notice that the Spearman correlation values are always higher than the Pearson correlation values. The Pearson correlation is more sensitive to a linear relationship between the two variables, while the Spearman correlation emphasizes the monotonicity of the relationship between the variables.
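The distinction between the two coefficients can be made concrete with a small sketch (our own illustration, not from the paper): for a monotonic but nonlinear relation between objective and subjective scores, the Spearman correlation is exactly 1 while the Pearson correlation stays below 1.

```python
import numpy as np

def pearson_cc(x, y):
    """Pearson CC: sensitive to how linear the relationship is."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

def spearman_cc(x, y):
    """Spearman CC: Pearson computed on the ranks, so it measures
    monotonicity. (Simple double-argsort ranking; assumes no ties.)"""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v))).astype(float)
    return pearson_cc(rank(x), rank(y))

# Monotonic but nonlinear: "subjective" scores grow as the cube of the
# objective scores, so ranks agree perfectly but the fit is not linear.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = x ** 3
```

In practice, quality-metric studies often apply a nonlinear fitting step before computing the Pearson coefficient precisely to compensate for this effect.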
Fig. 2. Sample images illustrating the degradations of the LIVE image quality assessment database [12]: (a) original; (b) JPEG; (c) JPEG2000; (d) Gaussian blur; (e) fast fading; (f) white noise.

Therefore, we may conclude that the relationship between the objective and subjective scores is monotonic, but not necessarily linear. The results in Tables I-V show that the performance of the metrics with incorporated saliency maps depends on the type of saliency map used and also on the type of distortion. For example, in the case of JPEG2000 distortions, the SSIM performance only improved (slightly) when subjective saliency maps were used. For Gaussian Blur and Fast Fading distortions, the addition of saliency maps improved the SSIM performance, with Achanta's model providing the highest gain, followed by the subjective saliency maps. For JPEG distortions, the SSIM performance did not improve at all. With the addition of the saliency maps, the correlation coefficients corresponding to MSE and PSNR improved for all tested distortions except White Noise. For JPEG and JPEG2000 distortions, the addition of the subjective saliency maps yielded the highest difference in performance. For Gaussian Blur and Fast Fading, in most cases the subjective saliency maps yielded the highest correlation values, followed by Achanta's model. We can also notice that the positive correlation differences for MSE and PSNR are always higher than those for SSIM. This is expected, since MSE and PSNR are extremely simple metrics with a lot of room for improvement. It is worth pointing out that for the White Noise degradation, none of the metrics showed a significant improvement in performance with the addition of saliency maps, either subjective or objective. This is in agreement with other studies showing that the importance of saliency maps depends on the type of distortion [5]. As can be seen in Fig. 2(f), noise completely changes the salient areas of an image.

V. CONCLUSIONS AND FUTURE WORK

In this work, we have investigated the benefit of incorporating saliency maps obtained with computational visual attention models into image quality metrics. More specifically, we have compared the performance of quality metrics using saliency maps obtained with computational visual attention models against quality metrics using subjective saliency maps. We have also discussed the influence of different types of distortion on the saliency maps (objective and subjective) and on the performance of the tested metrics. Our results show that saliency information improved the results of the two simple fidelity metrics tested. For the more complex metric, SSIM, the addition of saliency did not always represent an improvement in performance. The results have also shown that subjective saliency maps yield a slightly higher improvement in performance than saliency maps generated by computational models. Nevertheless, the results depend on the distortion type, with White Noise and JPEG distortions presenting the lowest levels of improvement. Further studies are needed to understand how visual attention affects quality and interacts with the different types of distortion. More importantly, better ways of incorporating visual attention into the design of quality metrics are urgently needed.

VI. REFERENCES

[1] Zhou Wang and Alan C. Bovik. Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Processing Magazine, 26(1):98-117, January 2009.
[2] S. Chikkerur, V. Sundaram, M. Reisslein, and L.J. Karam. Objective video quality assessment methods: A classification, review, and performance comparison. IEEE Transactions on Broadcasting, 57(2), June 2011.
Table I. Pearson and Spearman correlation coefficient values between objective (SSIM, PSNR, and MSE) and subjective measures, for images with JPEG distortions. [Numerical entries lost in extraction; each table lists Pearson and Spearman CC for the maps none, Achanta, Itti, and Subjective.]

Table II. Pearson and Spearman correlation coefficient values between objective (SSIM, PSNR, and MSE) and subjective measures, for images with JPEG2000 distortions. [Numerical entries lost in extraction.]

Table III. Pearson and Spearman correlation coefficient values between objective (SSIM, PSNR, and MSE) and subjective measures, for images with Gaussian Blur distortions. [Numerical entries lost in extraction.]

Table IV. Pearson and Spearman correlation coefficient values between objective (SSIM, PSNR, and MSE) and subjective measures, for images with Fast Fading distortions. [Numerical entries lost in extraction.]
Table V. Pearson and Spearman correlation coefficient values between objective (SSIM, PSNR, and MSE) and subjective measures, for images with White Noise distortions. [Numerical entries lost in extraction.]

[3] C. Oprea, I. Pirnog, C. Paleologu, and M. Udrea. Perceptual video quality assessment based on salient region detection. In Fifth Advanced International Conference on Telecommunications (AICT '09), May 2009.
[4] J. Redi, Hantao Liu, P. Gastaldo, R. Zunino, and I. Heynderickx. How to apply spatial saliency into objective metrics for JPEG compressed images? In IEEE International Conference on Image Processing (ICIP), November 2009.
[5] U. Engelke, H. Kaprykowsky, H.-J. Zepernick, and P. Ndjiki-Nya. Visual attention in quality assessment. IEEE Signal Processing Magazine, 28(6):50-59, November 2011.
[6] Hantao Liu and Ingrid Heynderickx. Studying the added value of visual attention in objective image quality metrics based on eye movement data.
[7] Hantao Liu and I. Heynderickx. Studying the added value of visual attention in objective image quality metrics based on eye movement data. In IEEE International Conference on Image Processing (ICIP), November 2009.
[8] Laurent Itti and Christof Koch. Computational modelling of visual attention. Nature Reviews Neuroscience, 2(3):194-203, 2001.
[9] Radhakrishna Achanta, Francisco Estrada, Patricia Wils, and Sabine Süsstrunk. Salient region detection and segmentation. In Antonios Gasteratos, Markus Vincze, and John Tsotsos, editors, Computer Vision Systems, volume 5008 of Lecture Notes in Computer Science. Springer Berlin/Heidelberg, 2008.
[10] H. Liu and I. Heynderickx. TUD image quality database: Eye-tracking release 1.
[11] Dirk Walther and Christof Koch. Modeling attention to salient proto-objects. Neural Networks, 19:1395-1407, 2006.
[12] H.R. Sheikh, Z. Wang, L. Cormack, and A.C. Bovik. LIVE image quality assessment database release 2.
[13] Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004.
Neurally Inspired Mechanisms for the Dynamic Visual Attention Map Generation Task Maria T. Lopez 1, Miguel A. Fernandez 1, Antonio Fernandez-Caballero 1, and Ana E. Delgado 2 Departamento de Informatica
More informationFEATURE EXTRACTION USING GAZE OF PARTICIPANTS FOR CLASSIFYING GENDER OF PEDESTRIANS IN IMAGES
FEATURE EXTRACTION USING GAZE OF PARTICIPANTS FOR CLASSIFYING GENDER OF PEDESTRIANS IN IMAGES Riku Matsumoto, Hiroki Yoshimura, Masashi Nishiyama, and Yoshio Iwai Department of Information and Electronics,
More informationIncreasing Spatial Competition Enhances Visual Prediction Learning
(2011). In A. Cangelosi, J. Triesch, I. Fasel, K. Rohlfing, F. Nori, P.-Y. Oudeyer, M. Schlesinger, Y. and Nagai (Eds.), Proceedings of the First Joint IEEE Conference on Development and Learning and on
More informationLearning to Predict Saliency on Face Images
Learning to Predict Saliency on Face Images Mai Xu, Yun Ren, Zulin Wang School of Electronic and Information Engineering, Beihang University, Beijing, 9, China MaiXu@buaa.edu.cn Abstract This paper proposes
More informationEXTRACT THE BREAST CANCER IN MAMMOGRAM IMAGES
International Journal of Civil Engineering and Technology (IJCIET) Volume 10, Issue 02, February 2019, pp. 96-105, Article ID: IJCIET_10_02_012 Available online at http://www.iaeme.com/ijciet/issues.asp?jtype=ijciet&vtype=10&itype=02
More informationCompound Effects of Top-down and Bottom-up Influences on Visual Attention During Action Recognition
Compound Effects of Top-down and Bottom-up Influences on Visual Attention During Action Recognition Bassam Khadhouri and Yiannis Demiris Department of Electrical and Electronic Engineering Imperial College
More informationMeaning-based guidance of attention in scenes as revealed by meaning maps
SUPPLEMENTARY INFORMATION Letters DOI: 1.138/s41562-17-28- In the format provided by the authors and unedited. -based guidance of attention in scenes as revealed by meaning maps John M. Henderson 1,2 *
More informationSubjective randomness and natural scene statistics
Psychonomic Bulletin & Review 2010, 17 (5), 624-629 doi:10.3758/pbr.17.5.624 Brief Reports Subjective randomness and natural scene statistics Anne S. Hsu University College London, London, England Thomas
More informationA Computational Model of Saliency Depletion/Recovery Phenomena for the Salient Region Extraction of Videos
A Computational Model of Saliency Depletion/Recovery Phenomena for the Salient Region Extraction of Videos July 03, 2007 Media Information Laboratory NTT Communication Science Laboratories Nippon Telegraph
More information(Visual) Attention. October 3, PSY Visual Attention 1
(Visual) Attention Perception and awareness of a visual object seems to involve attending to the object. Do we have to attend to an object to perceive it? Some tasks seem to proceed with little or no attention
More informationA contrast paradox in stereopsis, motion detection and vernier acuity
A contrast paradox in stereopsis, motion detection and vernier acuity S. B. Stevenson *, L. K. Cormack Vision Research 40, 2881-2884. (2000) * University of Houston College of Optometry, Houston TX 77204
More informationInternational Journal of Engineering Trends and Applications (IJETA) Volume 4 Issue 2, Mar-Apr 2017
RESEARCH ARTICLE OPEN ACCESS Knowledge Based Brain Tumor Segmentation using Local Maxima and Local Minima T. Kalaiselvi [1], P. Sriramakrishnan [2] Department of Computer Science and Applications The Gandhigram
More informationAutomatic Detection of Diabetic Retinopathy Level Using SVM Technique
International Journal of Innovation and Scientific Research ISSN 2351-8014 Vol. 11 No. 1 Oct. 2014, pp. 171-180 2014 Innovative Space of Scientific Research Journals http://www.ijisr.issr-journals.org/
More informationlateral organization: maps
lateral organization Lateral organization & computation cont d Why the organization? The level of abstraction? Keep similar features together for feedforward integration. Lateral computations to group
More informationRecurrent Refinement for Visual Saliency Estimation in Surveillance Scenarios
2012 Ninth Conference on Computer and Robot Vision Recurrent Refinement for Visual Saliency Estimation in Surveillance Scenarios Neil D. B. Bruce*, Xun Shi*, and John K. Tsotsos Department of Computer
More informationThe Attraction of Visual Attention to Texts in Real-World Scenes
The Attraction of Visual Attention to Texts in Real-World Scenes Hsueh-Cheng Wang (hchengwang@gmail.com) Marc Pomplun (marc@cs.umb.edu) Department of Computer Science, University of Massachusetts at Boston,
More informationThe Role of Top-down and Bottom-up Processes in Guiding Eye Movements during Visual Search
The Role of Top-down and Bottom-up Processes in Guiding Eye Movements during Visual Search Gregory J. Zelinsky, Wei Zhang, Bing Yu, Xin Chen, Dimitris Samaras Dept. of Psychology, Dept. of Computer Science
More informationANALYSIS AND DETECTION OF BRAIN TUMOUR USING IMAGE PROCESSING TECHNIQUES
ANALYSIS AND DETECTION OF BRAIN TUMOUR USING IMAGE PROCESSING TECHNIQUES P.V.Rohini 1, Dr.M.Pushparani 2 1 M.Phil Scholar, Department of Computer Science, Mother Teresa women s university, (India) 2 Professor
More informationVision Research. Clutter perception is invariant to image size. Gregory J. Zelinsky a,b,, Chen-Ping Yu b. abstract
Vision Research 116 (2015) 142 151 Contents lists available at ScienceDirect Vision Research journal homepage: www.elsevier.com/locate/visres Clutter perception is invariant to image size Gregory J. Zelinsky
More informationInformation Maximization in Face Processing
Information Maximization in Face Processing Marian Stewart Bartlett Institute for Neural Computation University of California, San Diego marni@salk.edu Abstract This perspective paper explores principles
More informationBiologically Motivated Local Contextual Modulation Improves Low-Level Visual Feature Representations
Biologically Motivated Local Contextual Modulation Improves Low-Level Visual Feature Representations Xun Shi,NeilD.B.Bruce, and John K. Tsotsos Department of Computer Science & Engineering, and Centre
More informationObject-Level Saliency Detection Combining the Contrast and Spatial Compactness Hypothesis
Object-Level Saliency Detection Combining the Contrast and Spatial Compactness Hypothesis Chi Zhang 1, Weiqiang Wang 1, 2, and Xiaoqian Liu 1 1 School of Computer and Control Engineering, University of
More informationPCA Enhanced Kalman Filter for ECG Denoising
IOSR Journal of Electronics & Communication Engineering (IOSR-JECE) ISSN(e) : 2278-1684 ISSN(p) : 2320-334X, PP 06-13 www.iosrjournals.org PCA Enhanced Kalman Filter for ECG Denoising Febina Ikbal 1, Prof.M.Mathurakani
More informationVisual Saliency in Image Quality Assessment
Visual Saliency in Image Quality Assessment Wei Zhang School of Computer Science and Informatics Cardiff University A thesis submitted in partial fulfilment of the requirement for the degree of Doctor
More informationADAPTING COPYCAT TO CONTEXT-DEPENDENT VISUAL OBJECT RECOGNITION
ADAPTING COPYCAT TO CONTEXT-DEPENDENT VISUAL OBJECT RECOGNITION SCOTT BOLLAND Department of Computer Science and Electrical Engineering The University of Queensland Brisbane, Queensland 4072 Australia
More informationSaliency aggregation: Does unity make strength?
Saliency aggregation: Does unity make strength? Olivier Le Meur a and Zhi Liu a,b a IRISA, University of Rennes 1, FRANCE b School of Communication and Information Engineering, Shanghai University, CHINA
More informationPerceptual-Based Objective Picture Quality Measurements
Perceptual-Based Objective Picture Quality Measurements Introduction In video systems, a wide range of video processing devices can affect overall picture quality. Encoders and decoders compress and decompress
More informationImproving Search Task Performance Using Subtle Gaze Direction
Improving Search Task Performance Using Subtle Gaze Direction Ann McNamara Saint Louis University Reynold Bailey Rochester Institute of Technology Cindy Grimm Washington University in Saint Louis Figure
More informationInternational Journal of Computational Science, Mathematics and Engineering Volume2, Issue6, June 2015 ISSN(online): Copyright-IJCSME
Various Edge Detection Methods In Image Processing Using Matlab K. Narayana Reddy 1, G. Nagalakshmi 2 12 Department of Computer Science and Engineering 1 M.Tech Student, SISTK, Puttur 2 HOD of CSE Department,
More information196 IEEE TRANSACTIONS ON AUTONOMOUS MENTAL DEVELOPMENT, VOL. 2, NO. 3, SEPTEMBER 2010
196 IEEE TRANSACTIONS ON AUTONOMOUS MENTAL DEVELOPMENT, VOL. 2, NO. 3, SEPTEMBER 2010 Top Down Gaze Movement Control in Target Search Using Population Cell Coding of Visual Context Jun Miao, Member, IEEE,
More informationEfficient Salient Region Detection with Soft Image Abstraction
Efficient Salient Region Detection with Soft Image Abstraction Ming-Ming Cheng Jonathan Warrell Wen-Yan Lin Shuai Zheng Vibhav Vineet Nigel Crook Vision Group, Oxford Brookes University Abstract Detecting
More informationHow does image noise affect actual and predicted human gaze allocation in assessing image quality?
How does image noise affect actual and predicted human gaze allocation in assessing image quality? Florian Röhrbein 1, Peter Goddard 2, Michael Schneider 1,Georgina James 2, Kun Guo 2 1 Institut für Informatik
More informationOutline. Teager Energy and Modulation Features for Speech Applications. Dept. of ECE Technical Univ. of Crete
Teager Energy and Modulation Features for Speech Applications Alexandros Summariza(on Potamianos and Emo(on Tracking in Movies Dept. of ECE Technical Univ. of Crete Alexandros Potamianos, NatIONAL Tech.
More informationA context-dependent attention system for a social robot
A context-dependent attention system for a social robot Cynthia Breazeal and Brian Scassellati MIT Artificial Intelligence Lab 545 Technology Square Cambridge, MA 02139 U. S. A. Abstract This paper presents
More informationEVIDENCE OF CHANGE BLINDNESS IN SUBJECTIVE IMAGE FIDELITY ASSESSMENT
EVIDENCE OF CHANGE BLINDNESS IN SUBJECTIVE IMAGE FIDELITY ASSESSMENT Steven Le Moan Massey University Palmerston North, New Zealand Marius Pedersen Norwegian University of Science and Technology Gjøvik,
More informationGESTALT SALIENCY: SALIENT REGION DETECTION BASED ON GESTALT PRINCIPLES
GESTALT SALIENCY: SALIENT REGION DETECTION BASED ON GESTALT PRINCIPLES Jie Wu and Liqing Zhang MOE-Microsoft Laboratory for Intelligent Computing and Intelligent Systems Dept. of CSE, Shanghai Jiao Tong
More informationA Neurally-Inspired Model for Detecting and Localizing Simple Motion Patterns in Image Sequences
A Neurally-Inspired Model for Detecting and Localizing Simple Motion Patterns in Image Sequences Marc Pomplun 1, Yueju Liu 2, Julio Martinez-Trujillo 2, Evgueni Simine 2, and John K. Tsotsos 2 1 Department
More informationNAILFOLD CAPILLAROSCOPY USING USB DIGITAL MICROSCOPE IN THE ASSESSMENT OF MICROCIRCULATION IN DIABETES MELLITUS
NAILFOLD CAPILLAROSCOPY USING USB DIGITAL MICROSCOPE IN THE ASSESSMENT OF MICROCIRCULATION IN DIABETES MELLITUS PROJECT REFERENCE NO. : 37S0841 COLLEGE BRANCH GUIDE : DR.AMBEDKAR INSTITUTE OF TECHNOLOGY,
More informationVENUS: A System for Novelty Detection in Video Streams with Learning
VENUS: A System for Novelty Detection in Video Streams with Learning Roger S. Gaborski, Vishal S. Vaingankar, Vineet S. Chaoji, Ankur M. Teredesai Laboratory for Applied Computing, Rochester Institute
More informationA Model of Saliency-Based Visual Attention for Rapid Scene Analysis
A Model of Saliency-Based Visual Attention for Rapid Scene Analysis Itti, L., Koch, C., Niebur, E. Presented by Russell Reinhart CS 674, Fall 2018 Presentation Overview Saliency concept and motivation
More informationAn Information Theoretic Model of Saliency and Visual Search
An Information Theoretic Model of Saliency and Visual Search Neil D.B. Bruce and John K. Tsotsos Department of Computer Science and Engineering and Centre for Vision Research York University, Toronto,
More information10-1 MMSE Estimation S. Lall, Stanford
0 - MMSE Estimation S. Lall, Stanford 20.02.02.0 0 - MMSE Estimation Estimation given a pdf Minimizing the mean square error The minimum mean square error (MMSE) estimator The MMSE and the mean-variance
More informationComparative Evaluation of Color Differences between Color Palettes
2018, Society for Imaging Science and Technology Comparative Evaluation of Color Differences between Color Palettes Qianqian Pan a, Stephen Westland a a School of Design, University of Leeds, Leeds, West
More informationMethods for comparing scanpaths and saliency maps: strengths and weaknesses
Methods for comparing scanpaths and saliency maps: strengths and weaknesses O. Le Meur olemeur@irisa.fr T. Baccino thierry.baccino@univ-paris8.fr Univ. of Rennes 1 http://www.irisa.fr/temics/staff/lemeur/
More informationA new path to understanding vision
A new path to understanding vision from the perspective of the primary visual cortex Frontal brain areas Visual cortices Primary visual cortex (V1) Li Zhaoping Retina A new path to understanding vision
More informationFunctional Fixedness: The Functional Significance of Delayed Disengagement Based on Attention Set
In press, Journal of Experimental Psychology: Human Perception and Performance Functional Fixedness: The Functional Significance of Delayed Disengagement Based on Attention Set Timothy J. Wright 1, Walter
More informationNeuromorphic convolutional recurrent neural network for road safety or safety near the road
Neuromorphic convolutional recurrent neural network for road safety or safety near the road WOO-SUP HAN 1, IL SONG HAN 2 1 ODIGA, London, U.K. 2 Korea Advanced Institute of Science and Technology, Daejeon,
More informationIMAGE COMPLEXITY MEASURE BASED ON VISUAL ATTENTION
IMAGE COMPLEXITY MEASURE BASED ON VISUAL ATTENTION Matthieu Perreira da Silva, Vincent Courboulay, Pascal Estraillier To cite this version: Matthieu Perreira da Silva, Vincent Courboulay, Pascal Estraillier.
More information(In)Attention and Visual Awareness IAT814
(In)Attention and Visual Awareness IAT814 Week 5 Lecture B 8.10.2009 Lyn Bartram lyn@sfu.ca SCHOOL OF INTERACTIVE ARTS + TECHNOLOGY [SIAT] WWW.SIAT.SFU.CA This is a useful topic Understand why you can
More informationAttention Estimation by Simultaneous Observation of Viewer and View
Attention Estimation by Simultaneous Observation of Viewer and View Anup Doshi and Mohan M. Trivedi Computer Vision and Robotics Research Lab University of California, San Diego La Jolla, CA 92093-0434
More informationELL 788 Computational Perception & Cognition July November 2015
ELL 788 Computational Perception & Cognition July November 2015 Module 8 Audio and Multimodal Attention Audio Scene Analysis Two-stage process Segmentation: decomposition to time-frequency segments Grouping
More informationDevelopment of goal-directed gaze shift based on predictive learning
4th International Conference on Development and Learning and on Epigenetic Robotics October 13-16, 2014. Palazzo Ducale, Genoa, Italy WePP.1 Development of goal-directed gaze shift based on predictive
More informationVisual Task Inference Using Hidden Markov Models
Visual Task Inference Using Hidden Markov Models Abstract It has been known for a long time that visual task, such as reading, counting and searching, greatly influences eye movement patterns. Perhaps
More informationSensory Cue Integration
Sensory Cue Integration Summary by Byoung-Hee Kim Computer Science and Engineering (CSE) http://bi.snu.ac.kr/ Presentation Guideline Quiz on the gist of the chapter (5 min) Presenters: prepare one main
More informationSegmentation of Tumor Region from Brain Mri Images Using Fuzzy C-Means Clustering And Seeded Region Growing
IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661,p-ISSN: 2278-8727, Volume 18, Issue 5, Ver. I (Sept - Oct. 2016), PP 20-24 www.iosrjournals.org Segmentation of Tumor Region from Brain
More informationEdge Detection Techniques Using Fuzzy Logic
Edge Detection Techniques Using Fuzzy Logic Essa Anas Digital Signal & Image Processing University Of Central Lancashire UCLAN Lancashire, UK eanas@uclan.a.uk Abstract This article reviews and discusses
More information