Object-Level Saliency Detection Combining the Contrast and Spatial Compactness Hypothesis
Chi Zhang 1, Weiqiang Wang 1,2, and Xiaoqian Liu 1
1 School of Computer and Control Engineering, University of Chinese Academy of Sciences, Beijing, China
2 Key Lab of Intelligent Information Processing, Institute of Computing Technology, CAS, Beijing, China

Abstract - Object-level saliency detection is an important branch of visual saliency. Most previous methods are based on the contrast hypothesis, which regards regions presenting high contrast in a certain context as salient. Although the contrast hypothesis is valid in many cases, it cannot handle some difficult cases, especially when the salient object is large. To make up for its deficiencies, we incorporate a novel spatial compactness hypothesis that effectively handles those tough cases. In addition, we propose a unified framework that integrates multiple saliency maps generated on different feature maps under different hypotheses. Our algorithm automatically selects saliency maps of high quality according to the quality evaluation score defined in this paper. The experimental results demonstrate that each key component of our method contributes to the final performance, and the full version of our method outperforms all state-of-the-art methods on the most popular dataset.

Keywords: Saliency Detection, Salient Object Detection

1 Introduction

The human visual system can effectively preselect data of potential interest from a complex scene for further processing. Computer vision systems would be much more efficient if we endowed them with such an ability. To this end, much effort has been devoted to bottom-up saliency detection.
In early years, saliency detection methods [1][2][3][4] were usually biologically inspired; they aim to predict fixations consistently with the human visual system, and the evaluation datasets are eye-movement datasets recording fixation information. We call this category fixation-level saliency detection. In recent years, a new trend that aims to uniformly highlight the whole salient object has become popular because of its various applications in object-based tasks, such as object recognition [5], image cropping [6], and content-aware image resizing [7]. This kind of work is evaluated on datasets with manually labeled salient objects, so we call it object-level saliency detection. In this paper, our research focuses on object-level saliency detection. Due to the lack of high-level knowledge, every bottom-up detection method is established on some effective hypotheses inspired by the characteristics of salient objects or the background. Among all hypotheses, the contrast hypothesis is the most widely used. It takes regions presenting high contrast in a local or global context as salient. The contrast hypothesis is effective in most cases, because the salient object is usually distinct from its surroundings, so most existing object-level saliency detection methods make use of this hypothesis either explicitly or implicitly. They usually implement it by measuring the difference between the current region and its surroundings at the pixel level [9], patch level [10], super-pixel level [12], sliding-window level [18], or a partial combination of them [11]. However, the contrast hypothesis is not omnipotent; there are some tough cases it cannot handle well.
As shown in Fig. 2, due to the intrinsic flaw of the contrast hypothesis, methods based purely on it (FT-CVPR09 [9], CA-CVPR10 [10] and RC-CVPR11 [12]) usually fail to highlight the interior of a large salient object and salient objects sharing a similar appearance with the background; meanwhile, they cannot suppress highly textured background regions, which also present high contrast. To compensate for the shortcomings of the contrast hypothesis, we incorporate an effective spatial compactness hypothesis, which holds that salient regions should be spatially more compact than background regions. In other words, a region is salient if its appearance-similar regions are nearby, and it is less salient if there are many appearance-similar regions far away from it. In this paper, we implement the spatial compactness hypothesis intuitively by computing the reciprocal of the sum of appearance-weighted distances from the current region to all the other regions. Fortunately, spatial compactness can usually handle the tough cases described above. However, just like the contrast hypothesis, the spatial compactness hypothesis is not always valid, because sometimes the background may be composed of isolated regions that also show great spatial compactness, or there may be multiple similar salient objects far away from each other. In such cases, the contrast hypothesis is needed to make up for the deficiencies of spatial compactness. These two hypotheses thus nicely complement each other, and by combining them our method can effectively handle most cases. The previous work most relevant to ours is [14]. To the best of our knowledge, [14] is the first method that employs both the contrast hypothesis and the spatial
compactness hypothesis, though it treats the spatial compactness hypothesis as one type of contrast hypothesis. It first over-segments the image into super-pixels, each represented by the mean Lab color of its pixels. Then the color uniqueness (contrast) value and color distribution (spatial compactness) value of each super-pixel are computed. Finally, these values are combined, and a saliency value is assigned to each super-pixel after a smoothing procedure. This method improved over the contrast-based state-of-the-art method [12]. However, in experiments we find that the salient object sometimes appears salient only in individual color channels rather than in the whole color space, so it is unwise to simply represent each super-pixel by its mean color or to compute the contrast and spatial compactness values only in the whole color space. In this paper, we compute the contrast and spatial compactness values not only in the whole color space but also in individual color channels. To fuse the multiple saliency maps generated on different feature maps under different hypotheses into the final saliency map, we propose a unified framework that automatically selects saliency maps of high quality according to the quality evaluation score defined in this paper. The evaluation score combines three novel heuristic rules proposed according to the characteristics of salient objects. Moreover, it is convenient to add new features and hypotheses to this unified framework; e.g., we can easily generalize our method to video by incorporating video-specific cues such as motion information.

2 The proposed method

2.1 Overview

The whole procedure of our method is as follows: first, the input image is converted into the CIE Lab color space and split into uniform patches.
The compact representations of the patches in each color channel are obtained by PCA; then, the contrast saliency values and spatial compactness saliency values are computed for each color channel and for the whole color space; next, the multiple saliency maps are refined by incorporating an image segmentation component. Finally, the quality score of each saliency map is computed, and some saliency maps are selected and fused into the final saliency map. In the following, we elaborate each step of our method.

2.2 Patch representation

For an input color image I, we first convert its color space into CIE Lab and then partition it into non-overlapping square patches of size $k \times k$ (we set $k$ to 8 in our implementation). Each patch $p_i$ is represented by three vectors $x_i^L, x_i^a, x_i^b$, where $x_i^c$ is formed by concatenating the pixel values of patch $p_i$ in color channel $c$, so the length of each vector is $k^2$. Correspondingly, image I is represented by three patch matrices $X^L, X^a, X^b$. To eliminate noisy dimensions and make the subsequent computation more efficient, we employ principal component analysis (PCA) to reduce the dimension of the patch representation and obtain the compact patch matrices $\hat{X}^L, \hat{X}^a, \hat{X}^b$, where $\hat{x}_i^c$ is the compact vector in the low-dimensional space corresponding to $x_i^c$. In addition, the spatial position of patch $p_i$ is denoted by the vector $\pi_i$ in the following formulations.

2.3 Saliency maps generation

In this subsection we detail how saliency maps are generated under the contrast hypothesis and the spatial compactness hypothesis. Interestingly, both the contrast value and the spatial compactness value can be formulated in a unified weighted linear combination form,

$V_i = \sum_{j \neq i} w(i,j) \, f(i,j)$, (1)

where $f(i,j)$ is the main factor and $w(i,j)$ is the weight. The position dissimilarity and the appearance dissimilarity between patches alternately act as the main factor and the weight.
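As a concrete illustration of the patch representation and PCA compression of Section 2.2, the following NumPy sketch (all function names are our own, not from the paper's code) splits one color channel into $k \times k$ patch vectors and projects them onto their top principal components:

```python
import numpy as np

def extract_patches(channel, k=8):
    """Split one color channel (H, W) into non-overlapping k x k patches.
    Returns patch vectors of shape (N, k*k) and patch grid positions (N, 2)."""
    H, W = channel.shape
    H, W = H - H % k, W - W % k                    # crop to an exact patch grid
    rows, cols = H // k, W // k
    vecs = (channel[:H, :W]
            .reshape(rows, k, cols, k)
            .transpose(0, 2, 1, 3)                 # -> (rows, cols, k, k)
            .reshape(rows * cols, k * k))
    ys, xs = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    positions = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    return vecs, positions

def pca_compress(vecs, n_components=8):
    """Plain SVD-based PCA: project centered patch vectors onto the
    top n_components principal directions."""
    centered = vecs - vecs.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T          # compact representation
```

Running this on the L, a and b channels yields the three compact patch matrices used in the rest of the pipeline.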
Concretely, for the contrast hypothesis the appearance dissimilarity acts as the main factor, while the position dissimilarity is used to compute the weight; the roles are swapped for the spatial compactness hypothesis. The two kinds of dissimilarity are computed differently. The appearance dissimilarity is the Euclidean distance defined in Eqn. (2), while the position dissimilarity is the infinity-norm distance defined in Eqn. (3),

$d_a^c(i,j) = \| \hat{x}_i^c - \hat{x}_j^c \|_2$, (2)

$d_p(i,j) = \| \pi_i - \pi_j \|_\infty$. (3)

With the appearance dissimilarities in the individual color channels, the appearance dissimilarity in the whole color space is obtained by simply adding them together, namely $d_a^{Lab}(i,j) = \sum_{c \in \{L,a,b\}} d_a^c(i,j)$. We add the whole color space Lab to the color channel set, so in the following $c \in \{L, a, b, Lab\}$. When the position dissimilarity or the appearance dissimilarity acts as a weight, it is fed into a Gaussian function, defined respectively in Eqn. (4) and (5),

$w_p(i,j) = \frac{1}{Z_p} \exp\left( -\frac{d_p(i,j)^2}{2\sigma_p^2} \right)$, (4)

$w_a^c(i,j) = \frac{1}{Z_a^c} \exp\left( -\frac{d_a^c(i,j)^2}{2\sigma_a^2} \right)$, (5)
where $Z_p$ and $Z_a^c$ are normalization factors ensuring that $\sum_j w_p(i,j)$ and $\sum_j w_a^c(i,j)$ equal 1, and $\sigma$ is an important factor controlling the influence of the weights: the bigger $\sigma$ is, the smaller the influence of the weights. It is therefore important to choose an appropriate $\sigma$. Because $d_p(i,j)$ and $d_a^c(i,j)$ vary greatly across images, it is difficult to select a fixed $\sigma$ suiting all images, so we employ adaptive $\sigma_p$ and $\sigma_a$ defined as

$\sigma_p = \lambda_p \max_{i,j} d_p(i,j)$, (6)

$\sigma_a = \lambda_a \max_{i,j} d_a^c(i,j)$, (7)

in which $\lambda_p$ and $\lambda_a$ are constant factors; we set both of them to 0.35 in this paper.

Contrast value computation. Similar to [14][19], we obtain the contrast value by computing the sum of spatially weighted dissimilarities between the current patch and all other patches. Concretely, the contrast value of patch $p_i$ in color channel $c$ is defined as

$Ctr_i^c = \sum_{j \neq i} w_p(i,j) \, d_a^c(i,j)$, (8)

where $d_a^c(i,j)$ denotes the appearance dissimilarity between patches $p_i$ and $p_j$ in color channel $c$ defined in Eqn. (2), and $w_p(i,j)$ is the position weight defined in Eqn. (4).

Spatial compactness value computation. In this paper, the spatial compactness is defined as the reciprocal of the sum of appearance-weighted position dissimilarities between the current patch and all the other patches. So the spatial compactness value of patch $p_i$ in color channel $c$ is defined as

$Com_i^c = 1 \Big/ \sum_{j \neq i} w_a^c(i,j) \, d_p(i,j)$, (9)

where $d_p(i,j)$ denotes the position dissimilarity between patches $p_i$ and $p_j$ defined in Eqn. (3), and $w_a^c(i,j)$ is the appearance weight defined in Eqn. (5).

Saliency value assignment. To reduce the impact of noise and improve the homogeneity of the saliency map, the saliency value of a patch is a weighted sum of the contrast values or spatial compactness values of the patches in a square neighborhood of the current patch,

$S_i^{c,h} = \sum_{j \in N_r(i)} w_p(i,j) \, V_j^{c,h}$, (10)

where $c$ denotes the color channel, $h$ denotes the hypothesis we employ, and $r$ controls the size of the square neighborhood $N_r(i)$; in our method $r = 7$. Finally, we normalize the obtained saliency maps to [0, 1] via the linear stretch

$\tilde{S}(x) = \frac{S(x) - \min_y S(y)}{\max_y S(y) - \min_y S(y)}$, (11)
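Eqns (2)-(9) and (11) can be sketched end-to-end as follows; this is our own hedged NumPy rendering of the formulas (the neighborhood smoothing of Eqn (10) is omitted for brevity), not the authors' code:

```python
import numpy as np

def pairwise_dissimilarities(compact, positions):
    """Eqn (2): Euclidean appearance distance; Eqn (3): L-infinity position distance."""
    diff = compact[:, None, :] - compact[None, :, :]
    d_app = np.sqrt((diff ** 2).sum(axis=-1))
    d_pos = np.abs(positions[:, None, :] - positions[None, :, :]).max(axis=-1)
    return d_app, d_pos

def gaussian_weights(d, lam=0.35):
    """Eqns (4)-(7): row-normalized Gaussian weights with the adaptive
    sigma = lam * max(d); the diagonal (a patch vs. itself) is excluded."""
    sigma = lam * d.max() + 1e-12
    w = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    np.fill_diagonal(w, 0.0)
    return w / (w.sum(axis=1, keepdims=True) + 1e-12)

def contrast_values(d_app, d_pos, lam=0.35):
    """Eqn (8): position-weighted sum of appearance dissimilarities."""
    return (gaussian_weights(d_pos, lam) * d_app).sum(axis=1)

def compactness_values(d_app, d_pos, lam=0.35):
    """Eqn (9): reciprocal of the appearance-weighted sum of position dissimilarities."""
    return 1.0 / ((gaussian_weights(d_app, lam) * d_pos).sum(axis=1) + 1e-12)

def linear_stretch(s):
    """Eqn (11): normalize saliency values to [0, 1]."""
    span = s.max() - s.min()
    return (s - s.min()) / span if span > 0 else np.zeros_like(s)
```

Note the symmetry between the two hypotheses: the same Gaussian weighting is applied, but position and appearance swap between the weight and the main factor.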
By now, we have obtained eight patch-level saliency maps generated on different color channels under different hypotheses. Based on these, the pixel-level saliency maps are easily obtained by assigning the saliency value of each patch to the pixels within it. We thus get eight pixel-level saliency maps: $S^{L,ctr}, S^{a,ctr}, S^{b,ctr}, S^{Lab,ctr}, S^{L,com}, S^{a,com}, S^{b,com}, S^{Lab,com}$.

2.4 Saliency maps refinement

The obtained saliency maps are generated at the patch level, so they may suffer from blurred salient-object boundaries, because object boundaries can fall inside patches when an image is partitioned into patches of the same size and shape without considering visual content. These saliency maps may therefore be inaccurate near object boundaries, so we incorporate an image segmentation module to refine them. The original image is over-segmented into a group of regions via the mean shift method implemented in the EDISON system [8]; we set the minimum area of each region to 1000 in our implementation. Then the pixel-level saliency value in each region is replaced by the mean of the pixel-level saliency values in it, i.e.,

$S'(x, y) = \frac{1}{|R(x,y)|} \sum_{(u,v) \in R(x,y)} S(u, v)$, (12)

where $R(x,y)$ denotes the region to which the pixel at location $(x,y)$ belongs, and $|R(x,y)|$ is the total number of pixels in that region. The linear stretch normalization of Eqn. (11) is then applied to each refined saliency map.

2.5 Selection and fusion of saliency maps

The last but not least step is to fuse the multiple saliency maps obtained in the preceding steps into the final saliency map. As shown in Fig. 1, not all of the obtained saliency maps highlight the salient object uniformly and accurately; some even mistakenly highlight background regions. So it is unwise to fuse all of them into the final result: the saliency maps of high quality should be picked out, and those of poor quality discarded.
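The region-level refinement of Eqn. (12) amounts to averaging saliency over each segmentation region. A minimal sketch, assuming the region labels (e.g. from EDISON's mean shift) are already given as an integer map:

```python
import numpy as np

def refine_by_segments(sal, labels):
    """Eqn (12): assign every pixel the mean saliency of the segmentation
    region it belongs to. `labels` holds one integer region id per pixel."""
    flat_sal = sal.astype(float).ravel()
    flat_lab = labels.ravel()
    region_sum = np.bincount(flat_lab, weights=flat_sal)   # grouped sums
    region_cnt = np.bincount(flat_lab)                     # region sizes
    region_mean = region_sum / np.maximum(region_cnt, 1)
    return region_mean[flat_lab].reshape(labels.shape)
```

`np.bincount` makes this a two-pass, loop-free operation even for thousands of regions.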
Based on the above considerations, we propose a fusion framework that automatically picks out saliency maps of high quality according to the quality evaluation score defined below. In the framework, we first compute the evaluation score of each saliency map according to three novel heuristic rules we propose. Then, every saliency map whose evaluation score is higher than a pre-defined threshold is selected. Finally, we fuse the selected saliency maps as a linear weighted sum, with the corresponding evaluation scores as weights.

Figure 1. Refined saliency maps of each individual color channel and the whole color space, built on both the contrast hypothesis and the spatial compactness hypothesis. The first row, from left to right: original image and the four contrast-based maps. The second row, from left to right: ground truth and the four compactness-based maps. The evaluation scores of the first four saliency maps are respectively 1.83, 2.24, 1.79 and ; those of the last four are respectively 2.82, 2.82, 2.24 and . The saliency maps with the highest scores are selected.

Definition of the evaluation scores. According to the characteristics of salient objects, we propose three novel heuristic rules to evaluate the quality of a saliency map: the center-surround ratio, the distribution compactness, and the saliency variance. The final evaluation score, computed from these three rules, measures how accurately and uniformly the saliency map highlights the salient object. Based on the fact that salient objects usually lie near the center of an image, and the finding that human eyes also tend to fixate on the center when viewing scenes [20], we choose the center-surround ratio as the first assessment rule. It is defined as the ratio of the sum of saliency values in the central region to that in the surrounding region,

$E_{cs}(S) = \sum_{(x,y) \in C} S(x,y) \Big/ \sum_{(x,y) \notin C} S(x,y)$, (13)

where $C$ denotes a central region of the $w \times h$ saliency map, with $w$ and $h$ its width and height. A high center-surround ratio reflects high quality. However, salient objects do not always lie in the center of the image, so we need other rules to enhance the reliability of the final evaluation score.
Because salient objects are spatially compact, as described by the spatial compactness hypothesis, in a high-quality saliency map the pixels with high saliency values should be spatially close as well. So we take the distribution compactness as the second assessment rule,

$E_{dc}(S) = \frac{1}{Z} \exp\left( -\left( \frac{\sigma_x^2}{w^2} + \frac{\sigma_y^2}{h^2} \right) \right)$, (14)

$\bar{x} = \frac{\sum_{x,y} x \, S(x,y)}{\sum_{x,y} S(x,y)}, \quad \bar{y} = \frac{\sum_{x,y} y \, S(x,y)}{\sum_{x,y} S(x,y)}$, (15)

$\sigma_x^2 = \frac{\sum_{x,y} (x - \bar{x})^2 \, S(x,y)}{\sum_{x,y} S(x,y)}$, (16)

$\sigma_y^2 = \frac{\sum_{x,y} (y - \bar{y})^2 \, S(x,y)}{\sum_{x,y} S(x,y)}$, (17)

where $Z$ is a normalization factor, $\bar{x}$ and $\bar{y}$ denote the coordinates of the gravity center, and $w$ and $h$ are the width and height of the saliency map. Many observations in our experiments show that a saliency map that agrees well with the ground truth usually has a high pixel variance, so we employ the pixel variance of the saliency map as the last assessment rule,

$E_v(S) = \frac{1}{wh} \sum_{x,y} \left( S(x,y) - \mu \right)^2$, (18)

$\mu = \frac{1}{wh} \sum_{x,y} S(x,y)$, (19)

where $\mu$ is the mean value of the saliency map. With the above three assessment rules, we define the final quality evaluation score as

$E(S) = \frac{E_{cs}(S)}{Z_{cs}} + \frac{E_{dc}(S)}{Z_{dc}} + \frac{E_v(S)}{Z_v}$, (20)
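The three assessment rules can be sketched as follows. This is our own hedged rendering: the 25% central-region margin and the coordinate scaling are assumed choices, since the paper's exact constants are not reproduced here.

```python
import numpy as np

def center_surround_ratio(sal, margin=0.25):
    """Cf. Eqn (13): saliency mass inside a central window over the mass
    outside it (the 25% margin is an assumed choice)."""
    h, w = sal.shape
    t, l = int(margin * h), int(margin * w)
    center = sal[t:h - t, l:w - l].sum()
    return center / (sal.sum() - center + 1e-12)

def distribution_compactness(sal):
    """Cf. Eqns (14)-(17): Gaussian of the saliency-weighted spatial variance
    around the gravity center, with coordinates scaled by the map size."""
    h, w = sal.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = sal.sum() + 1e-12
    cy, cx = (ys * sal).sum() / total, (xs * sal).sum() / total
    var = ((((ys - cy) / h) ** 2 + ((xs - cx) / w) ** 2) * sal).sum() / total
    return float(np.exp(-var))

def saliency_variance(sal):
    """Eqns (18)-(19): pixel variance of the saliency map."""
    return float(((sal - sal.mean()) ** 2).mean())

def evaluation_score(sal, z_cs, z_dc, z_v):
    """Eqn (20): sum of the three rules, each normalized by its maximum
    over all candidate maps (z_cs, z_dc, z_v)."""
    return (center_surround_ratio(sal) / z_cs
            + distribution_compactness(sal) / z_dc
            + saliency_variance(sal) / z_v)
```

A map with a centered, compact, high-contrast blob scores high on all three rules, matching the intuition behind the design.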
where $Z_{cs}$, $Z_{dc}$ and $Z_v$ are normalization factors, respectively the maxima of $E_{cs}$, $E_{dc}$ and $E_v$ over all candidate maps, i.e.,

$Z_{cs} = \max_m E_{cs}(S_m)$, (21)

$Z_{dc} = \max_m E_{dc}(S_m)$, (22)

$Z_v = \max_m E_v(S_m)$. (23)

Figure 2. Three examples of qualitative comparison results between our method (CC-OURS) and 8 state-of-the-art methods (FT-CVPR09 [9], CA-CVPR10 [10], LIU-PAMI11 [11], RC-CVPR11 [12], BS-TIP12 [13], LR-CVPR12 [14], SF-CVPR12 [15] and GS-ECCV12 [16]), with the original image in the first column and the ground truth in the last.

Combination of saliency maps. After computing the quality evaluation score of each saliency map, we combine the selected saliency maps as a linear weighted sum, formulated as

$S_{final} = \sum_m \alpha_m \, E(S_m) \, S_m$, (24)

$\alpha_m = \begin{cases} 1, & E(S_m) \geq \tau \\ 0, & E(S_m) < \tau \end{cases}$, (25)

where $\tau$ is an adaptive threshold defined in Eqn. (26), computed from the evaluation scores of the candidate maps. A saliency map is discarded if its quality evaluation score is smaller than $\tau$. Finally, we normalize the final saliency map as in Eqn. (11). It is worth mentioning that the proposed fusion framework makes it easy to integrate new features and hypotheses into our method.

3 Experiments and results

We test our method on a publicly available dataset containing 1000 images along with ground truth (GT) in the form of human-labeled masks for salient objects, provided by Achanta et al. [9]. We compare the proposed method (CC-OURS) with 8 state-of-the-art object-level saliency detection methods, selected for popularity (FT-CVPR09 [9], CA-CVPR10 [10], LIU-PAMI11 [11] and RC-CVPR11 [12]) and recency (BS-TIP12 [13], LR-CVPR12 [14], SF-CVPR12 [15] and GS-ECCV12 [16]). The saliency maps of FT-CVPR09 [9] and RC-CVPR11 [12] are provided by [12]. The saliency maps of LIU-PAMI11 [11] are generated with the parameters reported in their paper using the code provided by the authors. The saliency maps of the other methods are obtained from the corresponding authors' homepages. In our experiments, both qualitative and quantitative comparisons are performed.
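The selection-and-fusion rule of Eqns (24)-(25) can be sketched as below; treating the adaptive threshold as the mean evaluation score is our assumption, since the definition in Eqn (26) is not reproduced here.

```python
import numpy as np

def fuse_saliency_maps(maps, scores):
    """Eqns (24)-(25): weighted sum of the maps whose evaluation score reaches
    the adaptive threshold tau (assumed here to be the mean score); the result
    is stretched back to [0, 1] as in Eqn (11)."""
    scores = np.asarray(scores, dtype=float)
    tau = scores.mean()                               # assumed adaptive threshold
    fused = np.zeros_like(maps[0], dtype=float)
    for m, e in zip(maps, scores):
        if e >= tau:                                  # Eqn (25): keep or discard
            fused += e * m                            # Eqn (24): score-weighted sum
    span = fused.max() - fused.min()
    return (fused - fused.min()) / span if span > 0 else fused
```

Because low-scoring maps are dropped entirely rather than merely down-weighted, a single badly misled channel cannot drag the fused result toward the background.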
Three examples of qualitative comparison results are shown in Fig. 2. These three examples were selected as the most difficult stimuli by [17], because they represent three typical tough cases of the object-level saliency detection task: a large salient object, a textured background, and low contrast between the salient object and the background. Our method consistently produces saliency maps close to the ground truth, while every other method fails in at least one case. So our method is more robust than the other 8 state-of-the-art methods and handles the three tough cases well. For the quantitative comparison, similar to [9], we employ the recall-precision curve. With saliency values in the range [0, 255], we binarize the saliency maps at thresholds varying from 0 to 255, and recalls and precisions are obtained by comparing the binary results with the ground truth. Concretely, the precision and recall at a certain threshold $t$ are defined as

$Precision(t) = \frac{1}{K} \sum_{k=1}^{K} \frac{|B_t^k \cap G^k|}{|B_t^k|}$, (27)

$Recall(t) = \frac{1}{K} \sum_{k=1}^{K} \frac{|B_t^k \cap G^k|}{|G^k|}$, (28)

where $B_t^k$ is the binary mask of the $k$-th image under threshold $t$, $G^k$ is the corresponding ground truth, and $K$ is the number of test images. The recall-precision curves of the 8 state-of-the-art methods and our method are shown in Fig. 3 (a)(b). Our method outperforms all the other 8 methods: it predicts the most salient regions accurately, with a maximum precision of about 0.96, and it keeps high precision at high recall (a precision over 0.9 at a recall of 0.8, and over 0.8 at a recall of 0.9). In addition, we also evaluate incomplete versions of our method. Each of them eliminates
one key component from the full version of our method: the version without the contrast hypothesis, the version without the spatial compactness hypothesis, and the version without the novel fusion framework. The last one is implemented by computing the contrast values and spatial compactness values only in the whole color space and simply adding the saliency maps together without weights. The evaluation results are shown in Fig. 3 (c). The proposed fusion framework plays an important role in our method: relying on it, the two incomplete versions that each employ only one hypothesis both achieve satisfactory results. The results also show that, compared with the version without the spatial compactness hypothesis, the version without the contrast hypothesis achieves higher precision, especially at high recall. This is mainly because the spatial compactness hypothesis can usually highlight the whole salient object, even when the object is large or does not present high contrast, and suppress the background, even when it is textured, while the contrast hypothesis is not good at these tough cases. So the spatial compactness hypothesis is a nice complement to the contrast hypothesis. By integrating all three key components, the full version of our method achieves the best performance.

Figure 3. (a)(b) The recall-precision curves of 8 state-of-the-art methods and our method. (c) Evaluation of the versions without the contrast hypothesis, without the spatial compactness hypothesis, without the fusion framework, and the full version of our method.

Figure 4. The recall-precision curves of the proposed method and of the results generated by manually selecting the saliency maps of high quality in the fusion stage.

Figure 5. The failure cases of our method. The first column is the original image, the second column is the saliency map of our method, and the third column is the ground truth.
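The evaluation protocol of Eqns (27)-(28) can be sketched as follows (our own rendering, assuming 8-bit saliency maps and boolean ground-truth masks):

```python
import numpy as np

def pr_curve(sal_maps, gt_masks, thresholds=range(0, 256)):
    """Eqns (27)-(28): binarize every saliency map at each threshold and
    average precision and recall over the whole test set."""
    precisions, recalls = [], []
    for t in thresholds:
        p, r = [], []
        for sal, gt in zip(sal_maps, gt_masks):
            binary = sal >= t
            tp = np.logical_and(binary, gt).sum()
            p.append(tp / max(binary.sum(), 1))       # precision of one image
            r.append(tp / max(gt.sum(), 1))           # recall of one image
        precisions.append(float(np.mean(p)))
        recalls.append(float(np.mean(r)))
    return precisions, recalls
```

Sweeping the threshold from 0 to 255 traces the full recall-precision curve; averaging per-image scores (rather than pooling pixels) matches the protocol of [9].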
4 Conclusions

In this paper, we propose an object-level saliency detection method that integrates the popular contrast hypothesis and the effective spatial compactness hypothesis. The experimental results show that our method achieves satisfactory performance and outperforms all state-of-the-art methods in both qualitative and quantitative comparisons. In addition, we propose a fusion framework for multiple saliency maps that automatically selects saliency maps of high quality. The experimental results demonstrate that this fusion framework is very effective and plays an important role in our method. However, although the proposed three assessment rules help pick out the saliency maps of high quality accurately, as shown in Fig. 4 there is still room for improvement compared with manual selection, so we intend to find new assessment rules that make the automatic selection more accurate. As shown in Fig. 5, although the contrast hypothesis and the spatial compactness hypothesis can handle most cases in cooperation, there are still some tough cases that neither of them can deal with effectively, so we may incorporate other effective hypotheses into our framework. As introduced before, we will also try to extend our method to video by incorporating video-specific cues.

5 Acknowledgement

This work was supported by the National Natural Science Foundation of China.

6 References

[1] L. Itti, C. Koch, and E. Niebur, A model of saliency-based visual attention for rapid scene analysis, TPAMI, vol. 20, no. 11.
[2] N.D.B. Bruce and J.K. Tsotsos, Saliency based on information maximization, in NIPS.
[3] X. Hou and L. Zhang, Saliency detection: A spectral residual approach, in CVPR, 2007.
[4] J. Harel, C. Koch, and P. Perona, Graph-based visual saliency, in NIPS.
[5] U. Rutishauser, D. Walther, C. Koch, and P. Perona, Is bottom-up attention useful for object recognition?, in CVPR.
[6] A. Santella, M. Agrawala, D. DeCarlo, D. Salesin, and M. Cohen, Gaze-based interaction for semi-automatic photo cropping, in Proceedings of the SIGCHI Conference on Human Factors.
[7] H. Wu, Y.S. Wang, K.C. Feng, T.T. Wong, T.Y. Lee, and P.A. Heng, Resizing by symmetry-summarization, ACM Trans. Graph., vol. 29, no. 6, pp. 159:1-159:10.
[8] D. Comaniciu and P. Meer, Mean shift: A robust approach toward feature space analysis, TPAMI, vol. 24, no. 5.
[9] R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk, Frequency-tuned salient region detection, in CVPR.
[10] S. Goferman, L. Zelnik-Manor, and A. Tal, Context-aware saliency detection, in CVPR.
[11] T. Liu, Z.J. Yuan, J. Sun, J.D. Wang, N.N. Zheng, X.O. Tang, and H.Y. Shum, Learning to detect a salient object, TPAMI, vol. 33, no. 2.
[12] M.M. Cheng, G.X. Zhang, N.J. Mitra, X.L. Huang, and S.M. Hu, Global contrast based salient region detection, in CVPR.
[13] Y. Xie, H. Lu, and M.-H. Yang, Bayesian saliency via low and mid level cues, TIP, vol. 22, no. 2.
[14] F. Perazzi, P. Krähenbühl, Y. Pritch, and A. Hornung, Saliency filters: Contrast based filtering for salient region detection, in CVPR.
[15] X.H. Shen and Y. Wu, A unified approach to salient object detection via low rank matrix recovery, in CVPR.
[16] Y.C. Wei, F. Wen, W.J. Zhu, and J. Sun, Geodesic saliency using background priors, in ECCV.
[17] A. Borji, D.N. Sihite, and L. Itti, Salient object detection: A benchmark, in ECCV.
[18] E. Rahtu, J. Kannala, M. Salo, and J. Heikkilä, Segmenting salient objects from images and videos, in ECCV.
[19] L.J. Duan, C.P. Wu, J. Miao, L.Y. Qing, and Y. Fu, Visual saliency detection by spatially weighted dissimilarity, in CVPR.
[20] B.W. Tatler, The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions, Journal of Vision, vol. 7, no. 14, pp. 1-17, 2007.
More informationAn Attentional Framework for 3D Object Discovery
An Attentional Framework for 3D Object Discovery Germán Martín García and Simone Frintrop Cognitive Vision Group Institute of Computer Science III University of Bonn, Germany Saliency Computation Saliency
More informationComputational modeling of visual attention and saliency in the Smart Playroom
Computational modeling of visual attention and saliency in the Smart Playroom Andrew Jones Department of Computer Science, Brown University Abstract The two canonical modes of human visual attention bottomup
More informationSaliency aggregation: Does unity make strength?
Saliency aggregation: Does unity make strength? Olivier Le Meur a and Zhi Liu a,b a IRISA, University of Rennes 1, FRANCE b School of Communication and Information Engineering, Shanghai University, CHINA
More informationA Hierarchical Visual Saliency Model for Character Detection in Natural Scenes
A Hierarchical Visual Saliency Model for Character Detection in Natural Scenes Renwu Gao 1(B), Faisal Shafait 2, Seiichi Uchida 3, and Yaokai Feng 3 1 Information Sciene and Electrical Engineering, Kyushu
More informationOn the implementation of Visual Attention Architectures
On the implementation of Visual Attention Architectures KONSTANTINOS RAPANTZIKOS AND NICOLAS TSAPATSOULIS DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING NATIONAL TECHNICAL UNIVERSITY OF ATHENS 9, IROON
More informationAn Evaluation of Motion in Artificial Selective Attention
An Evaluation of Motion in Artificial Selective Attention Trent J. Williams Bruce A. Draper Colorado State University Computer Science Department Fort Collins, CO, U.S.A, 80523 E-mail: {trent, draper}@cs.colostate.edu
More informationOn the role of context in probabilistic models of visual saliency
1 On the role of context in probabilistic models of visual saliency Date Neil Bruce, Pierre Kornprobst NeuroMathComp Project Team, INRIA Sophia Antipolis, ENS Paris, UNSA, LJAD 2 Overview What is saliency?
More informationDynamic Visual Attention: Searching for coding length increments
Dynamic Visual Attention: Searching for coding length increments Xiaodi Hou 1,2 and Liqing Zhang 1 1 Department of Computer Science and Engineering, Shanghai Jiao Tong University No. 8 Dongchuan Road,
More informationImproving Saliency Models by Predicting Human Fixation Patches
Improving Saliency Models by Predicting Human Fixation Patches Rachit Dubey 1, Akshat Dave 2, and Bernard Ghanem 1 1 King Abdullah University of Science and Technology, Saudi Arabia 2 University of California
More informationA Model of Saliency-Based Visual Attention for Rapid Scene Analysis
A Model of Saliency-Based Visual Attention for Rapid Scene Analysis Itti, L., Koch, C., Niebur, E. Presented by Russell Reinhart CS 674, Fall 2018 Presentation Overview Saliency concept and motivation
More informationHierarchical Convolutional Features for Visual Tracking
Hierarchical Convolutional Features for Visual Tracking Chao Ma Jia-Bin Huang Xiaokang Yang Ming-Husan Yang SJTU UIUC SJTU UC Merced ICCV 2015 Background Given the initial state (position and scale), estimate
More informationWhat is and What is not a Salient Object? Learning Salient Object Detector by Ensembling Linear Exemplar Regressors
What is and What is not a Salient Object? Learning Salient Object Detector by Ensembling Linear Exemplar Regressors Changqun Xia 1, Jia Li 1,2, Xiaowu Chen 1, Anlin Zheng 1, Yu Zhang 1 1 State Key Laboratory
More informationSaliency in Crowd. 1 Introduction. Ming Jiang, Juan Xu, and Qi Zhao
Saliency in Crowd Ming Jiang, Juan Xu, and Qi Zhao Department of Electrical and Computer Engineering National University of Singapore Abstract. Theories and models on saliency that predict where people
More informationRegion Proposals. Jan Hosang, Rodrigo Benenson, Piotr Dollar, Bernt Schiele
Region Proposals Jan Hosang, Rodrigo Benenson, Piotr Dollar, Bernt Schiele Who has read a proposal paper? 2 Who has read a proposal paper? Who knows what Average Recall is? 2 Who has read a proposal paper?
More informationComputer-Aided Quantitative Analysis of Liver using Ultrasound Images
6 JEST-M, Vol 3, Issue 1, 2014 Computer-Aided Quantitative Analysis of Liver using Ultrasound Images Email: poojaanandram @gmail.com P.G. Student, Department of Electronics and Communications Engineering,
More informationCONTEXTUAL INFORMATION BASED VISUAL SALIENCY MODEL. Seungchul Ryu, Bumsub Ham and *Kwanghoon Sohn
CONTEXTUAL INFORMATION BASED VISUAL SALIENCY MODEL Seungchul Ryu Bumsub Ham and *Kwanghoon Sohn Digital Image Media Laboratory (DIML) School of Electrical and Electronic Engineering Yonsei University Seoul
More informationModels of Attention. Models of Attention
Models of Models of predictive: can we predict eye movements (bottom up attention)? [L. Itti and coll] pop out and saliency? [Z. Li] Readings: Maunsell & Cook, the role of attention in visual processing,
More informationANALYSIS AND DETECTION OF BRAIN TUMOUR USING IMAGE PROCESSING TECHNIQUES
ANALYSIS AND DETECTION OF BRAIN TUMOUR USING IMAGE PROCESSING TECHNIQUES P.V.Rohini 1, Dr.M.Pushparani 2 1 M.Phil Scholar, Department of Computer Science, Mother Teresa women s university, (India) 2 Professor
More informationExperiment Presentation CS Chris Thomas Experiment: What is an Object? Alexe, Bogdan, et al. CVPR 2010
Experiment Presentation CS 3710 Chris Thomas Experiment: What is an Object? Alexe, Bogdan, et al. CVPR 2010 1 Preliminaries Code for What is An Object? available online Version 2.2 Achieves near 90% recall
More informationSaliency Inspired Modeling of Packet-loss Visibility in Decoded Videos
1 Saliency Inspired Modeling of Packet-loss Visibility in Decoded Videos Tao Liu*, Xin Feng**, Amy Reibman***, and Yao Wang* *Polytechnic Institute of New York University, Brooklyn, NY, U.S. **Chongqing
More informationObject-based Saliency as a Predictor of Attention in Visual Tasks
Object-based Saliency as a Predictor of Attention in Visual Tasks Michal Dziemianko (m.dziemianko@sms.ed.ac.uk) Alasdair Clarke (a.clarke@ed.ac.uk) Frank Keller (keller@inf.ed.ac.uk) Institute for Language,
More informationA Locally Weighted Fixation Density-Based Metric for Assessing the Quality of Visual Saliency Predictions
FINAL VERSION PUBLISHED IN IEEE TRANSACTIONS ON IMAGE PROCESSING 06 A Locally Weighted Fixation Density-Based Metric for Assessing the Quality of Visual Saliency Predictions Milind S. Gide and Lina J.
More informationMulti-attention Guided Activation Propagation in CNNs
Multi-attention Guided Activation Propagation in CNNs Xiangteng He and Yuxin Peng (B) Institute of Computer Science and Technology, Peking University, Beijing, China pengyuxin@pku.edu.cn Abstract. CNNs
More informationRecurrent Refinement for Visual Saliency Estimation in Surveillance Scenarios
2012 Ninth Conference on Computer and Robot Vision Recurrent Refinement for Visual Saliency Estimation in Surveillance Scenarios Neil D. B. Bruce*, Xun Shi*, and John K. Tsotsos Department of Computer
More informationChapter 1. Introduction
Chapter 1 Introduction 1.1 Motivation and Goals The increasing availability and decreasing cost of high-throughput (HT) technologies coupled with the availability of computational tools and data form a
More informationCancer Cells Detection using OTSU Threshold Algorithm
Cancer Cells Detection using OTSU Threshold Algorithm Nalluri Sunny 1 Velagapudi Ramakrishna Siddhartha Engineering College Mithinti Srikanth 2 Velagapudi Ramakrishna Siddhartha Engineering College Kodali
More informationA Comparison of Collaborative Filtering Methods for Medication Reconciliation
A Comparison of Collaborative Filtering Methods for Medication Reconciliation Huanian Zheng, Rema Padman, Daniel B. Neill The H. John Heinz III College, Carnegie Mellon University, Pittsburgh, PA, 15213,
More informationLearning a Combined Model of Visual Saliency for Fixation Prediction Jingwei Wang, Ali Borji, Member, IEEE, C.-C. Jay Kuo, and Laurent Itti
1566 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 25, NO. 4, APRIL 2016 Learning a Combined Model of Visual Saliency for Fixation Prediction Jingwei Wang, Ali Borji, Member, IEEE, C.-C. Jay Kuo, and Laurent
More informationAdding Shape to Saliency: A Computational Model of Shape Contrast
Adding Shape to Saliency: A Computational Model of Shape Contrast Yupei Chen 1, Chen-Ping Yu 2, Gregory Zelinsky 1,2 Department of Psychology 1, Department of Computer Science 2 Stony Brook University
More informationSaliency in Crowd. Ming Jiang, Juan Xu, and Qi Zhao
Saliency in Crowd Ming Jiang, Juan Xu, and Qi Zhao Department of Electrical and Computer Engineering National University of Singapore, Singapore eleqiz@nus.edu.sg Abstract. Theories and models on saliency
More informationComparative Study of K-means, Gaussian Mixture Model, Fuzzy C-means algorithms for Brain Tumor Segmentation
Comparative Study of K-means, Gaussian Mixture Model, Fuzzy C-means algorithms for Brain Tumor Segmentation U. Baid 1, S. Talbar 2 and S. Talbar 1 1 Department of E&TC Engineering, Shri Guru Gobind Singhji
More informationType II Fuzzy Possibilistic C-Mean Clustering
IFSA-EUSFLAT Type II Fuzzy Possibilistic C-Mean Clustering M.H. Fazel Zarandi, M. Zarinbal, I.B. Turksen, Department of Industrial Engineering, Amirkabir University of Technology, P.O. Box -, Tehran, Iran
More informationAdvertisement Evaluation Based On Visual Attention Mechanism
2nd International Conference on Economics, Management Engineering and Education Technology (ICEMEET 2016) Advertisement Evaluation Based On Visual Attention Mechanism Yu Xiao1, 2, Peng Gan1, 2, Yuling
More informationWebpage Saliency. National University of Singapore
Webpage Saliency Chengyao Shen 1,2 and Qi Zhao 2 1 Graduate School for Integrated Science and Engineering, 2 Department of Electrical and Computer Engineering, National University of Singapore Abstract.
More informationINCORPORATING VISUAL ATTENTION MODELS INTO IMAGE QUALITY METRICS
INCORPORATING VISUAL ATTENTION MODELS INTO IMAGE QUALITY METRICS Welington Y.L. Akamine and Mylène C.Q. Farias, Member, IEEE Department of Computer Science University of Brasília (UnB), Brasília, DF, 70910-900,
More informationThe discriminant center-surround hypothesis for bottom-up saliency
Appears in the Neural Information Processing Systems (NIPS) Conference, 27. The discriminant center-surround hypothesis for bottom-up saliency Dashan Gao Vijay Mahadevan Nuno Vasconcelos Department of
More informationReveal Relationships in Categorical Data
SPSS Categories 15.0 Specifications Reveal Relationships in Categorical Data Unleash the full potential of your data through perceptual mapping, optimal scaling, preference scaling, and dimension reduction
More informationComputational Cognitive Science
Computational Cognitive Science Lecture 19: Contextual Guidance of Attention Chris Lucas (Slides adapted from Frank Keller s) School of Informatics University of Edinburgh clucas2@inf.ed.ac.uk 20 November
More informationLearning Saliency Maps for Object Categorization
Learning Saliency Maps for Object Categorization Frank Moosmann, Diane Larlus, Frederic Jurie INRIA Rhône-Alpes - GRAVIR - CNRS, {Frank.Moosmann,Diane.Larlus,Frederic.Jurie}@inrialpes.de http://lear.inrialpes.fr
More informationEARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE
EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE SAKTHI NEELA.P.K Department of M.E (Medical electronics) Sengunthar College of engineering Namakkal, Tamilnadu,
More informationCan Saliency Map Models Predict Human Egocentric Visual Attention?
Can Saliency Map Models Predict Human Egocentric Visual Attention? Kentaro Yamada 1, Yusuke Sugano 1, Takahiro Okabe 1 Yoichi Sato 1, Akihiro Sugimoto 2, and Kazuo Hiraki 3 1 The University of Tokyo, Tokyo,
More informationI. INTRODUCTION VISUAL saliency, which is a term for the pop-out
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 27, NO. 6, JUNE 2016 1177 Spatiochromatic Context Modeling for Color Saliency Analysis Jun Zhang, Meng Wang, Member, IEEE, Shengping Zhang,
More informationTop down saliency estimation via superpixel-based discriminative dictionaries
KOCAK ET AL.: TOP DOWN SALIENCY ESTIMATION 1 Top down saliency estimation via superpixel-based discriminative dictionaries Aysun Kocak aysunkocak@cs.hacettepe.edu.tr Kemal Cizmeciler kemalcizmeci@gmail.com
More informationExperiences on Attention Direction through Manipulation of Salient Features
Experiences on Attention Direction through Manipulation of Salient Features Erick Mendez Graz University of Technology Dieter Schmalstieg Graz University of Technology Steven Feiner Columbia University
More informationAvailable online at ScienceDirect. Procedia Computer Science 54 (2015 )
Available online at www.sciencedirect.com ScienceDirect Procedia Computer Science 54 (2015 ) 756 763 Eleventh International Multi-Conference on Information Processing-2015 (IMCIP-2015) Analysis of Attention
More informationPutting Context into. Vision. September 15, Derek Hoiem
Putting Context into Vision Derek Hoiem September 15, 2004 Questions to Answer What is context? How is context used in human vision? How is context currently used in computer vision? Conclusions Context
More informationNatural Scene Statistics and Perception. W.S. Geisler
Natural Scene Statistics and Perception W.S. Geisler Some Important Visual Tasks Identification of objects and materials Navigation through the environment Estimation of motion trajectories and speeds
More informationA Review on Brain Tumor Detection in Computer Visions
International Journal of Information & Computation Technology. ISSN 0974-2239 Volume 4, Number 14 (2014), pp. 1459-1466 International Research Publications House http://www. irphouse.com A Review on Brain
More informationEvaluation of the Impetuses of Scan Path in Real Scene Searching
Evaluation of the Impetuses of Scan Path in Real Scene Searching Chen Chi, Laiyun Qing,Jun Miao, Xilin Chen Graduate University of Chinese Academy of Science,Beijing 0009, China. Key Laboratory of Intelligent
More informationA Model for Automatic Diagnostic of Road Signs Saliency
A Model for Automatic Diagnostic of Road Signs Saliency Ludovic Simon (1), Jean-Philippe Tarel (2), Roland Brémond (2) (1) Researcher-Engineer DREIF-CETE Ile-de-France, Dept. Mobility 12 rue Teisserenc
More informationMulti-Scale Salient Object Detection with Pyramid Spatial Pooling
Multi-Scale Salient Object Detection with Pyramid Spatial Pooling Jing Zhang, Yuchao Dai, Fatih Porikli and Mingyi He School of Electronics and Information, Northwestern Polytechnical University, China.
More informationEvaluating Visual Saliency Algorithms: Past, Present and Future
Journal of Imaging Science and Technology R 59(5): 050501-1 050501-17, 2015. c Society for Imaging Science and Technology 2015 Evaluating Visual Saliency Algorithms: Past, Present and Future Puneet Sharma
More informationLook, Perceive and Segment: Finding the Salient Objects in Images via Two-stream Fixation-Semantic CNNs
Look, Perceive and Segment: Finding the Salient Objects in Images via Two-stream Fixation-Semantic CNNs Xiaowu Chen 1, Anlin Zheng 1, Jia Li 1,2, Feng Lu 1,2 1 State Key Laboratory of Virtual Reality Technology
More informationDevelopment of novel algorithm by combining Wavelet based Enhanced Canny edge Detection and Adaptive Filtering Method for Human Emotion Recognition
International Journal of Engineering Research and Development e-issn: 2278-067X, p-issn: 2278-800X, www.ijerd.com Volume 12, Issue 9 (September 2016), PP.67-72 Development of novel algorithm by combining
More informationFacial expression recognition with spatiotemporal local descriptors
Facial expression recognition with spatiotemporal local descriptors Guoying Zhao, Matti Pietikäinen Machine Vision Group, Infotech Oulu and Department of Electrical and Information Engineering, P. O. Box
More informationMRI Image Processing Operations for Brain Tumor Detection
MRI Image Processing Operations for Brain Tumor Detection Prof. M.M. Bulhe 1, Shubhashini Pathak 2, Karan Parekh 3, Abhishek Jha 4 1Assistant Professor, Dept. of Electronics and Telecommunications Engineering,
More informationComputational Models of Visual Attention: Bottom-Up and Top-Down. By: Soheil Borhani
Computational Models of Visual Attention: Bottom-Up and Top-Down By: Soheil Borhani Neural Mechanisms for Visual Attention 1. Visual information enter the primary visual cortex via lateral geniculate nucleus
More informationDesign of Palm Acupuncture Points Indicator
Design of Palm Acupuncture Points Indicator Wen-Yuan Chen, Shih-Yen Huang and Jian-Shie Lin Abstract The acupuncture points are given acupuncture or acupressure so to stimulate the meridians on each corresponding
More informationThe Role of Top-down and Bottom-up Processes in Guiding Eye Movements during Visual Search
The Role of Top-down and Bottom-up Processes in Guiding Eye Movements during Visual Search Gregory J. Zelinsky, Wei Zhang, Bing Yu, Xin Chen, Dimitris Samaras Dept. of Psychology, Dept. of Computer Science
More informationAutomated Brain Tumor Segmentation Using Region Growing Algorithm by Extracting Feature
Automated Brain Tumor Segmentation Using Region Growing Algorithm by Extracting Feature Shraddha P. Dhumal 1, Ashwini S Gaikwad 2 1 Shraddha P. Dhumal 2 Ashwini S. Gaikwad ABSTRACT In this paper, we propose
More informationAUTOMATIC MEASUREMENT ON CT IMAGES FOR PATELLA DISLOCATION DIAGNOSIS
AUTOMATIC MEASUREMENT ON CT IMAGES FOR PATELLA DISLOCATION DIAGNOSIS Qi Kong 1, Shaoshan Wang 2, Jiushan Yang 2,Ruiqi Zou 3, Yan Huang 1, Yilong Yin 1, Jingliang Peng 1 1 School of Computer Science and
More informationShu Kong. Department of Computer Science, UC Irvine
Ubiquitous Fine-Grained Computer Vision Shu Kong Department of Computer Science, UC Irvine Outline 1. Problem definition 2. Instantiation 3. Challenge and philosophy 4. Fine-grained classification with
More informationProgressive Attention Guided Recurrent Network for Salient Object Detection
Progressive Attention Guided Recurrent Network for Salient Object Detection Xiaoning Zhang, Tiantian Wang, Jinqing Qi, Huchuan Lu, Gang Wang Dalian University of Technology, China Alibaba AILabs, China
More informationAn Integrated Model for Effective Saliency Prediction
Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17) An Integrated Model for Effective Saliency Prediction Xiaoshuai Sun, 1,2 Zi Huang, 1 Hongzhi Yin, 1 Heng Tao Shen 1,3
More informationAnnouncements. Perceptual Grouping. Quiz: Fourier Transform. What you should know for quiz. What you should know for quiz
Announcements Quiz on Tuesday, March 10. Material covered (Union not Intersection) All lectures before today (March 3). Forsyth and Ponce Readings: Chapters 1.1, 4, 5.1, 5.2, 5.3, 7,8, 9.1, 9.2, 9.3, 6.5.2,
More informationNon-Local Spatial Redundancy Reduction for Bottom-Up Saliency Estimation
Non-Local Spatial Redundancy Reduction for Bottom-Up Saliency Estimation Jinjian Wu a, Fei Qi a, Guangming Shi a,1,, Yongheng Lu a a School of Electronic Engineering, Xidian University, Xi an, Shaanxi,
More informationLearning to Predict Saliency on Face Images
Learning to Predict Saliency on Face Images Mai Xu, Yun Ren, Zulin Wang School of Electronic and Information Engineering, Beihang University, Beijing, 9, China MaiXu@buaa.edu.cn Abstract This paper proposes
More informationRecognizing Scenes by Simulating Implied Social Interaction Networks
Recognizing Scenes by Simulating Implied Social Interaction Networks MaryAnne Fields and Craig Lennon Army Research Laboratory, Aberdeen, MD, USA Christian Lebiere and Michael Martin Carnegie Mellon University,
More informationFEATURE EXTRACTION USING GAZE OF PARTICIPANTS FOR CLASSIFYING GENDER OF PEDESTRIANS IN IMAGES
FEATURE EXTRACTION USING GAZE OF PARTICIPANTS FOR CLASSIFYING GENDER OF PEDESTRIANS IN IMAGES Riku Matsumoto, Hiroki Yoshimura, Masashi Nishiyama, and Yoshio Iwai Department of Information and Electronics,
More informationLocal Image Structures and Optic Flow Estimation
Local Image Structures and Optic Flow Estimation Sinan KALKAN 1, Dirk Calow 2, Florentin Wörgötter 1, Markus Lappe 2 and Norbert Krüger 3 1 Computational Neuroscience, Uni. of Stirling, Scotland; {sinan,worgott}@cn.stir.ac.uk
More informationOutlier Analysis. Lijun Zhang
Outlier Analysis Lijun Zhang zlj@nju.edu.cn http://cs.nju.edu.cn/zlj Outline Introduction Extreme Value Analysis Probabilistic Models Clustering for Outlier Detection Distance-Based Outlier Detection Density-Based
More information196 IEEE TRANSACTIONS ON AUTONOMOUS MENTAL DEVELOPMENT, VOL. 2, NO. 3, SEPTEMBER 2010
196 IEEE TRANSACTIONS ON AUTONOMOUS MENTAL DEVELOPMENT, VOL. 2, NO. 3, SEPTEMBER 2010 Top Down Gaze Movement Control in Target Search Using Population Cell Coding of Visual Context Jun Miao, Member, IEEE,
More informationRelative Influence of Bottom-up & Top-down Attention
Relative Influence of Bottom-up & Top-down Attention Matei Mancas 1 1 Engineering Faculty of Mons (FPMs) 31, Bd. Dolez, 7000 Mons, Belgium Matei.Mancas@fpms.ac.be Abstract. Attention and memory are very
More informationAutomated Volumetric Cardiac Ultrasound Analysis
Whitepaper Automated Volumetric Cardiac Ultrasound Analysis ACUSON SC2000 Volume Imaging Ultrasound System Bogdan Georgescu, Ph.D. Siemens Corporate Research Princeton, New Jersey USA Answers for life.
More informationGroup-Wise FMRI Activation Detection on Corresponding Cortical Landmarks
Group-Wise FMRI Activation Detection on Corresponding Cortical Landmarks Jinglei Lv 1,2, Dajiang Zhu 2, Xintao Hu 1, Xin Zhang 1,2, Tuo Zhang 1,2, Junwei Han 1, Lei Guo 1,2, and Tianming Liu 2 1 School
More information