Color representation in CNNs: parallelisms with biological vision


Ivet Rafegas & Maria Vanrell
Computer Vision Center, Universitat Autònoma de Barcelona
Bellaterra, Barcelona (Spain)
{ivet.rafegas,

Abstract

Convolutional Neural Networks (CNNs) trained for object recognition tasks present representational capabilities approaching those of primate visual systems [1]. This provides a computational framework to explore how image features are efficiently represented. Here, we dissect a trained CNN [2] to study how color is represented. We borrow a classical methodology from physiology: measuring the selectivity index of individual neurons to specific features. Using ImageNet [20] images and synthetic versions of them, we quantify the color tuning properties of artificial neurons to provide a classification of the network population. We identify three main levels of color representation showing parallelisms with biological visual systems: (a) a decomposition in a circular hue space to represent single color regions, with wider hue sampling beyond the first layer (V2); (b) the emergence of opponent low-dimensional spaces in early stages to represent color edges (V1); and (c) a strong entanglement between color and shape patterns representing object parts (e.g. the wheel of a car), object shapes (e.g. faces) or object-surround configurations (e.g. blue sky surrounding an object) in deeper layers (V4 or IT).

1. Introduction

Deep learning techniques and, more specifically, Convolutional Neural Networks (CNNs) have become central in the computer vision field due to their impressive performance in solving a diversity of vision tasks, such as object recognition or object detection. They consist of several stacked layers that successively transform the visual representation, as the result of automatic training that learns the weights (neurons) that best represent the input and achieve the best performance on a specific visual task. These artificial architectures have also been proposed as a suitable framework to model biological vision [10, 11, 1], and some trained CNNs have shown representational capabilities that rival the primate visual system on a visual recognition task [1]. Although they can present an undesirable black-box nature, several works have proposed techniques to understand intermediate CNN representations and visualize learned features [29, 14, 25, 24, 28, 16], showing how images are represented at each level of representation and giving some representational insights.

In this work we use an artificial neural network with a high level of representational capability to understand how color can be efficiently represented for recognition tasks. We work on a CNN trained by [2], whose architecture has been shown to rival the primate visual system in performance [1]. We analyze how the trained neurons of this network encode color through the different convolutional layers, and we relate the conclusions to known evidence about how color is represented in biological visual systems. To perform the analysis we use a selectivity index that draws inspiration from physiological methodologies. The network was trained for image classification based on labeled content and without any constraint on the filter weights.
From this training, three different levels of color representation emerge: (a) homogeneous color regions are encoded by a set of neurons selective to one single hue; (b) color edges are represented by color-opponent neurons acting as low-dimensional spaces; and (c) most color selective neurons entangle color and shape together in a pattern-matching approach. Remarkably, these representation levels agree with evidence on how color is encoded along the visual pathway in biological systems: the existence of a small number of primary colors and a three-dimensional color-opponent space in early stages of the visual system [12, 13, 7, 3], a denser sampling of different hues in V2 (hue maps) [27, 26, 8], and a clear dependence between shape and color in higher-level neurons [5, 21].

2. Color neurons

In order to understand how color is represented in CNNs, we estimate a color selectivity index for all the network neurons. Color neurons are those that are highly activated when a particular color appears in the input image and, conversely, present a low activation when this color is absent [22, 6, 19, 18]. One way to measure this selectivity is to compute the variation of the neuron activation between color images and their gray-level versions [18]; the gray-level images are channel-wise replicated to fit the input constraints of the network. Following this idea, the color selectivity property is measured on the N-top image patches of the training set that most activate a specific neuron. The Area Under the Activation Curve (AUAC) of a neuron is obtained from the top activations, both for the color images and for their gray-level versions, and the two are combined to compute the color selectivity index as

α = 1 − (Σ_{j=1..N} ŵ_j) / (Σ_{j=1..N} w_j),    (1)

where {w_j}_{j=1:N} are the neuron activation values for the original N-top ranked image patches and {ŵ_j}_{j=1:N} are the activation values of the same neuron for the gray-level versions of the N-top images. A small α indicates that the neuron is weakly color selective, while a large α indicates high color selectivity. (We refer to the Activation Curve as the set of activation values provided by a neuron over a series of input stimuli varying in a specific property.)

The above index allows ranking the set of neurons by how much their activation depends on a specific color. Using this index, we propose a classification of the network neurons based on: (a) the degree of selectivity (non color selective, low color selective or high color selective); (b) the number of colors a neuron is selective to (single or double); and (c) the opponency property of double color neurons (opponent or non-opponent), which is related to the hue angular distance between the color pair (opponent means 180°). Below we outline the details of the classification used in the experiments.

First, neurons are classified by their color selectivity index α: non color selective neurons are those with α < th_1, high color selective neurons those with α > th_2 (th_2 > th_1), and low color selective neurons those in between. Second, high color selective neurons are classified as single neurons if they are selective to only one specific color, or as double neurons if they are selective to a pair of colors. This step uses the Neuron Feature (NF) image [19], an approximation of the intrinsic spatio-chromatic feature that activates the neuron, obtained as a weighted average of the N-top image patches that activate the neuron (here, N = 100), where the weights are proportional to the activation each patch produces in this neuron. The NF color distribution is computed on the hue dimension of an opponent color space (the OPP space proposed by Plataniotis et al. [17], with the three axes normalized and shifted to the range [−1, 1]) and fitted with a Gaussian mixture model using the Expectation-Maximization (EM) algorithm. Different numbers of Gaussians are tried, and the final number is the minimum one that differs by less than 10% in global mean squared error from the distribution. This step yields one representative hue for each single color neuron, or a pair of hues for each double color neuron (selectivity to three or four colors was never found in our experiments, but may appear in other networks). Finally, double color neurons are classified as opponent neurons if the pair of colors the neuron is selective to has an angular distance close to 180°.
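To make the procedure concrete, here is a minimal sketch of Eq. (1) and the threshold classification, assuming a hypothetical activation(patch) function (e.g. a forward hook returning one neuron's scalar response) and using a simple channel mean as the gray-level conversion, which may differ from the exact conversion used in the experiments.

```python
import numpy as np

def color_selectivity_index(activation, patches):
    """alpha = 1 - sum(gray activations) / sum(color activations),
    following Eq. (1); `activation` maps an RGB patch (H, W, 3)
    to one neuron's scalar response."""
    w = np.array([activation(p) for p in patches])            # color responses
    gray = [np.repeat(p.mean(axis=2, keepdims=True), 3, axis=2)
            for p in patches]                                 # channel-wise replicated gray
    w_hat = np.array([activation(g) for g in gray])           # gray responses
    return 1.0 - w_hat.sum() / w.sum()

def classify(alpha, th1=0.1, th2=0.25):
    # thresholds as used later in the experiments (Section 3)
    if alpha < th1:
        return "non color selective"
    return "high color selective" if alpha > th2 else "low color selective"
```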
The angular distances above are measured with respect to the center of the O_2–O_3 chromatic plane. To study the set of axes that emerge in each layer, we group the double neurons according to their pair of colors. For this clustering we use the means of the Gaussians that fit the hue distribution of each neuron, so each double neuron can be represented as a point in a two-dimensional hue space. We use K-means to group neurons that share a pair of colors, testing from K = 3 to 7 and selecting the best number of clusters with the elbow method, which is based on the ratio of the between-group variance to the total variance. Each cluster is used to infer the main axes that emerge in each layer. For easy reference, we manually labeled each cluster by its colors. All clusters that emerge from the detected double color selective neurons can be seen in Table 2, where each row joins neurons belonging to the same cluster. This classification provides a general map of color neurons through layers, giving insight into how color is represented in the CNN. In the next section we analyze different properties of the color neurons found in the trained CNN.
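As a minimal sketch of this clustering step, the following code groups double neurons by the two Gaussian means of their hue distributions (a hypothetical hue_pairs array) and picks K with an elbow criterion implemented as an explained-variance ratio; the 0.05 gain cutoff is an illustrative assumption, and treating hue pairs as Euclidean points ignores the circularity of hue.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_double_neurons(hue_pairs, k_range=range(3, 8)):
    """Group double color neurons by their (hue1, hue2) pairs and pick K
    by an elbow criterion on between-group vs. total variance."""
    X = np.asarray(hue_pairs)                     # shape (n_neurons, 2)
    total_var = ((X - X.mean(axis=0)) ** 2).sum()
    best_k, best_model, prev_ratio = None, None, 0.0
    for k in k_range:
        model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        ratio = 1.0 - model.inertia_ / total_var  # explained (between-group) variance
        if best_k is not None and ratio - prev_ratio < 0.05:
            break                                 # elbow: marginal gain has flattened
        best_k, best_model, prev_ratio = k, model, ratio
    return best_k, best_model.labels_
```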

3. Results and discussion

In this section we summarize the results of exploring how color information is represented across the layers of the CNN VGG-M, trained by Chatfield et al. [2] on the ImageNet dataset [20] for an object categorization task. We selected this network for its similarity to the one reported to rival primate representational performance [1]. It consists of five convolutional layers (with square filters of sizes 7, 5, 3, 3 and 3, respectively) followed by three fully connected layers. To reduce the representation size and introduce some scale invariance, it also incorporates three max-pooling layers, placed after the first, second and fifth convolutional layers. The network expects input images of 224×224 pixels in RGB, and its convolutional layers analyze receptive fields of sizes 7×7×3, 27×27×3, 75×75×3, 107×107×3 and 139×139×3, from the first to the fifth convolutional layer. The version of the ImageNet dataset used by Chatfield et al., which we also use in this work, is ILSVRC12, consisting of around 1.2M images classified into 1000 categories according to the lexical WordNet hierarchy [15] (the images, as well as the concepts (synsets) that describe them, can be browsed online).

Figure 1: Color selectivity index distribution along the five convolutional layers.

The distribution of the color selectivity indexes for all neurons in the convolutional layers of the network is shown in Fig. 1. We set th_1 = 0.1 and th_2 = 0.25 to classify each neuron as non, low or high color selective; in our experience, variations of these thresholds lead to similar conclusions. Shallower layers have neurons with extreme color index values (either very high or very low), while deeper neurons mostly present intermediate index values. From this figure it can be concluded that, although the highest α values decrease through layers, color remains an important feature in deeper layers, where non color selective neurons have almost disappeared.

In Table 1 we summarize the global results of classifying all the neurons of the convolutional layers of the network, giving the number of neurons in each class and the corresponding percentage within the layer. A visualization of these classified color neurons through layers is shown in Table 2, where for each group we show a subset of NFs belonging to it. Observing the NFs, complexity clearly increases through layers. First, simple features such as homogeneous regions, oriented color edges, and gray-level edges selective to different frequencies and orientations are found in Conv1. Second, slightly more complex features such as blobs, curved edges and textured regions appear in Conv2. Finally, we group Conv3, Conv4 and Conv5 into a third level presenting a similar shape complexity: object-surround configurations and complex objects like humans, dog faces or buildings. The main differences between these three layers lie in object size: we found similar shapes activating different neurons at different layers, and thus at different image resolutions.

Figure 2: Distribution of color selective neurons along the hue dimension (black line), overlaid on the ImageNet color distribution along the hue dimension (colored bars). The dashed line indicates the dataset bias along an Orangish–Bluish axis.

We grouped into a common histogram all color selective neurons (α > 0.25) of the whole network according to the colors they are selective to. Each neuron contributes counts on the hue axis (the hue of a single color neuron is counted twice, while each color of a double color neuron is counted once). The resulting histogram is the black line in Fig. 2, overlaid on the color distribution of the ImageNet dataset, shown with colored bars. A clear bias toward the Orangish and Bluish regions can be observed, both in the dataset and in the color neuron distribution.
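The comparison in Fig. 2 can be sketched as follows, assuming a hypothetical list neuron_hues of per-neuron hue tuples (one hue for single neurons, two for double neurons) and a precomputed dataset histogram dataset_hue_hist with the same binning; the 10° bins are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

def neuron_hue_histogram(neuron_hues, n_bins=36):
    """Histogram of color selective neurons over hue (degrees).
    A single neuron's hue is counted twice; each hue of a double
    neuron is counted once, matching the weighting used for Fig. 2."""
    counts = np.zeros(n_bins)
    for hues in neuron_hues:                      # e.g. (30.0,) or (30.0, 210.0)
        weight = 2 if len(hues) == 1 else 1
        for h in hues:
            counts[int(h % 360) * n_bins // 360] += weight
    return counts

# correlation with the dataset's hue distribution (same binning assumed):
# r, _ = pearsonr(neuron_hue_histogram(neuron_hues), dataset_hue_hist)
```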
Thus, the two distributions are highly correlated, as measured by the Pearson correlation coefficient (r). In the following subsections we analyze each level of color representation in more detail and hypothesize about the parallelisms of these three levels with known evidence from the human visual system.

3.1. First level: single color regions

A first level to study is how regions of a single color are represented by the network. We observe that they are represented as a composition of basic primary colors encoded by single color selective neurons. These neurons are mainly found in Conv1 and Conv2.

Table 1: Number of neurons in each class (% of neurons of the group within the layer).

Selectivity      | Conv1       | Conv2        | Conv3        | Conv4        | Conv5
Non Color        | 56 (58.33%) | 118 (53.88%) | 225 (43.95%) | 113 (22.07%) | 52 (10.16%)
Low Color Sel.   | 2 (2.08%)   | 28 (12.79%)  | 69 (33.01%)  | 255 (49.80%) | 250 (48.83%)
Color Sel.       | 38 (39.58%) | 73 (33.33%)  | 118 (23.05%) | 144 (28.13%) | 210 (41.02%)
  Single Color   | 12 (12.50%) | 49 (22.37%)  | 102 (19.92%) | 134 (26.16%) | 198 (38.67%)
  Double Color   | 26 (27.08%) | 24 (10.96%)  | 16 (3.13%)   | 10 (1.95%)   | 12 (2.34%)
    Opponent     | 19 (19.79%) | 14 (6.39%)   | 8 (1.56%)    | 1 (0.20%)    | 1 (0.20%)
    Non-opponent | 7 (7.29%)   | 10 (4.57%)   | 8 (1.56%)    | 9 (1.76%)    | 11 (2.15%)

Table 2: Examples of NFs for each convolutional layer, classified into groups by color properties (non color selective; low color selective; and high color selective: single, double opponent and double non-opponent).

In the first convolutional layer, single color neurons specialize in the following basic hues: Red, Green, Blue, Magenta, Cyan and Yellow, a fairly standard and balanced sampling of the hue circle whose combination allows all hues to be represented. In Fig. 3(a) we show the activation curves of these basic neurons to homogeneous regions along the hue dimension; some primary hues are covered by more than one neuron. Color representation becomes more detailed in the second convolutional layer, where a denser sampling of primary hues is covered by the set of single neurons, so the color of a region can be described more precisely in this more extensive basis. In Fig. 3(b) we show the activation curves of single neurons in the second layer. Activation curves were computed over rotated-hue versions of each neuron's top activating image.

We also measured the density of the hue sampling performed by the single color neurons. In Table 3 we show the results of computing a sparsity measure, l_0 = #{j : C_j = 0}, studied in [9], over the hue histogram for different bin sizes. From these measurements we can clearly see that Conv2 presents the least sparse distribution of sampled colors.
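A minimal sketch of this sparsity computation over the hues of the single color neurons; reading the measure as a count of empty hue bins follows the definition above, and the default bin size is illustrative.

```python
import numpy as np

def l0_sparsity(hues_deg, bin_size=10):
    """Count empty bins in a hue histogram: fewer empty bins means a
    denser (less sparse) sampling of the hue circle, as found in Conv2."""
    n_bins = 360 // bin_size
    counts, _ = np.histogram(np.asarray(hues_deg) % 360,
                             bins=n_bins, range=(0, 360))
    return int((counts == 0).sum())
```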

Figure 3: Activation curves of single neurons in (a) Conv1 and (b) Conv2 over series of images varying along the hue dimension. Stimuli are rotated-hue versions of the maximum activating image for each neuron.

In Fig. 4 we show how single color neurons are distributed along the hue dimension. In shallower layers the distribution across hue is fairly equitable; in deeper layers, however, the majority of single neurons concentrate on the Orangish and Bluish regions, aligned with the color bias of the training dataset.

Here we want to highlight a clear parallelism of this level of representation with what is known of the primate visual system. A denser sampling of hue has been reported in several studies [27, 26, 8] with the existence of hue maps in V2 cortical areas, where continuous color tuning along the hue is found, providing more precise color perception. The role of V2 as an intermediate layer between the low-dimensional color space of primary colors found in V1 and the entanglement of color and shape in posterior areas is nicely reviewed in [4]. We can observe in Fig. 4 how this intermediate role is played by Conv2 in the artificial network: it sits between the low-dimensional representation of color in Conv1, with six primaries, and the emergence of an oversampled region around the orangish hue, linked to the color of the majority of objects, which starts in Conv2 and is clearly the essence of color representation in posterior layers.

Table 3: Hue sparsity (l_0 measure) of the set of highly single-color-selective neurons for each layer and for different bin sizes. Minima (in bold in the original) are found in Conv2 for all tested bin sizes.

3.2. Second level: color edges

In this section we explore a second level of representation: how color edges are represented by the network. Color edges are low-level features mainly represented by double color neurons in layers Conv1 and Conv2. The first convolutional layer plays a key role in this representation due to the strong opponency shown by all clusters of double color neurons. They represent a three-dimensional opponent space based on three chromatic axes, Red-Cyan, Blue-Yellow and Magenta-Green, where the colors in each pair are separated by an angular distance (the opponency property) between 160° and 166.74°.

In Fig. 5 we show the activation curves of three double color neurons in Conv1 oriented at 45° (one for each color axis) to different synthetic color edges in the same orientation. We compute the activation on four types of edges, composed of pairs of colors holding different angular distances between them (45°, 90°, 135° and 180°), each type generated along a sampling of the entire hue circle. In this figure, for each type of edge we plot the color edge achieving the maximum activation for each neuron. For opponent edges (180°, the maximum color-pair contrast), the maximal neuron activations are achieved when the edge coincides with the NF pattern (see the legend for a visualization of the NFs of the studied neurons). The remaining color edges are represented by different triplets of weights on this neuron basis. In this way we show how any color edge is represented by the three main groups of opponent axes emerging in Conv1. The emergence of an opponent space defined by four opponent axes (adding Black-White) shows a strong correlation with early stages of the primate visual system.
On the one hand, the existence of opponent neurons along the Black-White, Red-Green (Red-Cyan in our case) and Blue-Yellow axes was initially established by the findings of Derrington et al. [7], and later by Lennie et al. [12, 13].

Figure 4: Number of single color neurons per hue for the five convolutional layers, (a) Conv1 through (e) Conv5. Shallower layers (Conv1 and Conv2) present a more equitable distribution along the hue dimension. Deeper layers (Conv3, Conv4 and Conv5) concentrate single neurons on Orangish and Bluish regions, following the color bias of the training dataset.

Figure 5: Representation of color edges. Activation curves of three different double opponent color neurons in Conv1 to four types of edges, classified by the opponency of the color pair in the tested edges: opponent edges (180°) and 135°, 90° and 45° color edges.

Figure 6: Orange-blue edge representation in Conv1. (a) Activation values of some neurons in Conv1 (NFs in the first row) to a set of images (first column), on a color scale from blue (minimum) to red (maximum). (b) Double neurons in Conv1 over the hue space; an Orange-Blue image (dashed line) is represented between two main neurons in Conv1 (Blue-Yellow and Red-Cyan).

On the other hand, an additional fourth opponent axis in the direction of Magenta-Green could coincide with later studies [3]. We find it quite remarkable that this artificial architecture shows such a coincidence with biological vision, which supports hypotheses about the efficiency of neural information in representing natural image statistics. Further correlations are given by the fact that these double color neurons are selective to oriented, low spatial-frequency edges, while black-white neurons are selective to a wider variety of spatial frequencies of oriented edges, as reviewed in [22].

In the rest of the layers, different axes emerge from the double color neurons, but they move away from the opponency property and toward shapes more complex than edges. The only exception is the Bluish-Orangish axis, which is present in all these layers and holds an angular distance greater than 160° between its two colors. This axis is a rotation of the Blue-Yellow axis found in Conv1 that matches the bias of the training dataset. In Fig. 6(a) we show how several image patches that highly activate a Bluish-Orangish neuron of Conv2 are represented in Conv1. The NFs of the first convolutional layer most activated by these Orange-Blue edge stimuli (oriented at 135°) are plotted in the first row; the tested images are shown in the first column, and activation values are shown on a scale from blue (minimum) to red (maximum). In Fig. 6(b) we show how double color neurons are distributed in the chromaticity plane of the opponent color space, including the representation of a Blue-Orange image. This example lets us speculate about the hierarchy behind the Bluish-Orangish population code in Conv2: such a neuron is activated when a Red-Cyan neuron of Conv1 with a 135° edge is highly activated jointly with a Blue-Yellow neuron (also in Conv1) with a 90° oriented edge. (Note that no Conv1 Blue-Yellow neuron with a 135° oriented edge, with yellow in the bottom-left and blue in the top-right, was found; in its absence, the vertical-edge neuron is the most similar.)
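As an illustration of how such synthetic edge stimuli can be generated, the sketch below builds an oriented two-color edge from a base hue and an angular hue distance (45°, 90°, 135° or 180°); the HSV rendering at full saturation and value is our simplifying assumption, not the paper's exact stimulus pipeline.

```python
import numpy as np
from colorsys import hsv_to_rgb

def color_edge(hue_deg, delta_deg, orientation_deg, size=64):
    """Synthetic color edge: one half-plane filled with `hue_deg`, the
    other with `hue_deg + delta_deg` (delta = 180 gives an opponent edge),
    split along a line at `orientation_deg`."""
    c1 = hsv_to_rgb(hue_deg / 360.0, 1.0, 1.0)
    c2 = hsv_to_rgb(((hue_deg + delta_deg) % 360) / 360.0, 1.0, 1.0)
    yy, xx = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    theta = np.deg2rad(orientation_deg)
    mask = (xx * np.sin(theta) - yy * np.cos(theta)) > 0   # side of the edge line
    img = np.where(mask[..., None], c1, c2)
    return img.astype(np.float32)                          # (size, size, 3) in [0, 1]
```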

3.3. Third level: color-shape entanglement

As seen in previous sections, color is an important property that may characterize the activity of a neuron. Nevertheless, this property always appears strongly linked to the intrinsic shape that also activates the neuron. In this section we analyze the entanglement of the two properties, shape and color, which can be understood as a template-matching scheme underlying the activity of any neuron in the artificial network. By color-shape entanglement we mean that the activation of a single neuron requires the appearance of a specific color in an appropriate configuration or shape. This entanglement appears through all the layers.

Regarding shallower layers, we plot in Fig. 7 the activation curves of two color neurons, in (a) Conv1 and (b) Conv2, both representing colored edges oriented at 135°. Each activation curve is obtained from a set of oriented edge images sharing the same orientation but with different opponent color pairs. Different edge orientations are tested for both neurons, and maximum activations are achieved when both the color pair and the orientation match the input image (see the neuron NF visualization in the top-left corner).

In the same vein, neurons in deeper layers responsible for detecting more complex shapes also present this color-shape dependence. As seen in Table 2, NFs in layers Conv3, Conv4 and Conv5 present patterns of recognizable objects (bodies, mushrooms, ladybugs, etc.), object parts (wheels of cars, dog faces, semicircles, etc.) or object-surround configurations (a blue object in a green surround, a squared object under a blue sky, etc.). For this kind of neuron we show two examples of activation curves where either color or shape is varied. First, in Fig. 8 we show a neuron in Conv5 with a high color selectivity index that is selective to ladybugs. We show the neuron's activation curve over differently colored versions of the same image, obtained by rotating the image pixels on the chromaticity plane, i.e., along the hue dimension. Note that this neuron is selective exclusively to orangish ladybugs. Second, for another neuron we study the activation curve when shape is varied. In Fig. 9 we plot several activation curves of a neuron found in Conv4 with a high color selectivity index, which is very selective to faces of a specific size in the usual vertical orientation. The curves show how the activation varies when the face images that highly activate this neuron are spatially rotated. Note that spatial rotations of the same image modify the activity of the neuron, even though the color appearance is unchanged.

The strong entanglement between color and shape in the human visual system has been established in several works, reviewed by Shapley et al. [23]. Although deeper visual areas of the human visual system are not as well characterized as earlier ones, color and complex-shape selectivity has been found in V4 and PIT (posterior inferior temporal cortex) [5]. Moreover, [21] states that neurons in V4 are selective to a large range of colors and to white surfaces, and are also sensitive to surrounds, which may participate in the separation between object and background.
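The hue-rotation manipulation used for Fig. 8 can be sketched as a rotation of the two chromatic channels of an opponent representation; the RGB-to-opponent transform below is a generic one chosen for illustration, not necessarily the OPP space of [17].

```python
import numpy as np

def rotate_hue(img_rgb, angle_deg):
    """Rotate the chromatic content of an RGB image (values in [0, 1])
    on the chromaticity plane of a simple opponent space, leaving the
    achromatic channel untouched."""
    o1 = img_rgb.sum(axis=2) / 3.0                     # achromatic (intensity)
    o2 = (img_rgb[..., 0] - img_rgb[..., 1]) / 2.0     # red-green opponent
    o3 = (img_rgb[..., 0] + img_rgb[..., 1]) / 4.0 - img_rgb[..., 2] / 2.0  # yellow-blue
    t = np.deg2rad(angle_deg)
    o2r = np.cos(t) * o2 - np.sin(t) * o3              # rotate the chromatic plane
    o3r = np.sin(t) * o2 + np.cos(t) * o3
    # invert the opponent transform back to RGB
    r = o1 + o2r + 2.0 * o3r / 3.0
    g = o1 - o2r + 2.0 * o3r / 3.0
    b = o1 - 4.0 * o3r / 3.0
    return np.clip(np.stack([r, g, b], axis=2), 0.0, 1.0)
```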
4. Conclusions

Artificial neural networks have been a revolutionary technique due to their impressive performance in solving different visual tasks. Although they are easily trained on large datasets, the intermediate representations learned by the network architecture are not fully understood. In this work we focus on analyzing how color in particular is encoded through the different convolutional layers, and we find important parallelisms with the human visual system. To this end, we use a methodology inspired by physiological neural research, based on estimating a color selectivity index for each neuron. We measure several color properties that allow us to classify all the network neurons: (a) the color selectivity index of each neuron; (b) the number of distinct colors a neuron is selective to, which defines single (one color) and double (two color) neurons; (c) the opponency property between the pair of colors of double neurons; and (d) the sparsity of the sampling of the color neurons along the hue dimension.

We found color selective neurons in all convolutional layers, although shallower layers present higher color selectivity indexes. We define three main levels of color representation. First, we study single color neurons representing single color regions; this representation encodes all colors in a primary basis of six basic colors in Conv1, while a denser hue sampling appears in Conv2. This way of encoding basic colors and the increase in hue sampling correlates with evidence from the primate visual system [27, 26, 8]. Second, we focus on double color neurons devoted to color edge detection. Here the network learns a subset of neurons in Conv1 whose pairs define a four-dimensional opponent space (Red-Cyan, Blue-Yellow, Magenta-Green and Black-White), presenting a clear correlation with known results in the primary visual cortex [7, 12, 13, 3]. Finally, we find that color and shape are strongly entangled at all levels of the network, whether for simple features such as edges, bars or textures in shallower layers, or for more complex features such as faces, bodies or cars in deeper layers. This high level of entanglement has also been suggested in several physiological studies [5, 21].

Figure 7: Neuron activations of two edge-oriented double color selective neurons, (a) in Conv1 (α = 0.99) and (b) in Conv2 (α = 0.44). Activation curves for different synthetic oriented color edges, varying in hue pair and orientation. The blue line connects activations corresponding to images with the same orientation but different color pairs. Neuron NFs are framed in blue in the top-left corners.

Figure 8: Activation curve of a ladybug-color-selective neuron in Conv5 (α = 0.94) for different versions of the same image obtained by rotating hues. The maximum activation corresponds to a perfect match with the original color of the image patch that maximally activates this neuron. The neuron NF is framed in blue (top-left).

Figure 9: Activation curves of a face-color-selective neuron in Conv4 (α = 0.59) for different versions of the same images obtained by spatially rotating the set of top image patches. Maximum activations correspond to the original orientation of each image patch. Examples of the tested images are shown at the top of the plot, and the NF of the neuron is framed in blue at the bottom.

5. Acknowledgements

Project partially funded by MINECO Ref. TIN (TIN R) and Generalitat de Catalunya Ref. SGR-669.

References

[1] C. F. Cadieu, H. Hong, D. L. K. Yamins, N. Pinto, D. Ardila, E. A. Solomon, N. J. Majaj, and J. J. DiCarlo. Deep neural networks rival the representation of primate IT cortex for core visual object recognition. PLoS Computational Biology, 10, Dec. 2014.
[2] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In BMVC, 2014.
[3] B. R. Conway. Spatial structure of cone inputs to color cells in alert macaque primary visual cortex (V-1). The Journal of Neuroscience, 21(8), 2001.
[4] B. R. Conway. Colour vision: A clue to hue in V2. Current Biology, 13(8):R308–R310, 2003.

[5] B. R. Conway, S. Chatterjee, G. D. Field, G. D. Horwitz, E. N. Johnson, K. Koida, and K. Mancuso. Advances in color science: from retina to behavior. The Journal of Neuroscience, 30(45), 2010.
[6] B. R. Conway and D. Y. Tsao. Color-tuned neurons are spatially clustered according to color preference within alert macaque posterior inferior temporal cortex. Proceedings of the National Academy of Sciences USA, 106(42), 2009.
[7] A. M. Derrington, J. Krauskopf, and P. Lennie. Chromatic mechanisms in lateral geniculate nucleus of macaque. Journal of Physiology, 357:241–265, 1984.
[8] H. Lim, Y. Wang, Y. Xiao, M. Hu, and D. J. Felleman. Organization of hue selectivity in macaque V2 thin stripes. Journal of Neurophysiology, 102(5), 2009.
[9] N. Hurley and S. Rickard. Comparing measures of sparsity. IEEE Transactions on Information Theory, 55(10), Oct. 2009.
[10] N. Kriegeskorte. Deep neural networks: A new framework for modeling biological vision and brain information processing. Annual Review of Vision Science, 1, 2015.
[11] N. Krüger, P. Janssen, S. Kalkan, M. Lappe, A. Leonardis, J. Piater, A. J. Rodríguez-Sánchez, and L. Wiskott. Deep hierarchies in the primate visual cortex: What can we learn for computer vision? IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), Aug. 2013.
[12] P. Lennie and M. D'Zmura. Mechanisms of color vision. In CRC Critical Reviews in Neurobiology, chapter 3, 1988.
[13] P. Lennie, J. Krauskopf, and G. Sclar. Chromatic mechanisms in striate cortex of macaque. The Journal of Neuroscience, 10:649–669, 1990.
[14] Y. Li, J. Yosinski, J. Clune, H. Lipson, and J. E. Hopcroft. Convergent learning: Do different neural networks learn the same representations? In ICLR, 2016.
[15] G. A. Miller. WordNet: A lexical database for English. Communications of the ACM, 38(11):39–41, Nov. 1995.
[16] A. Nguyen, J. Yosinski, and J. Clune. Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks. In Workshop on Visualization for Deep Learning, ICML, June 2016.
[17] K. Plataniotis and A. Venetsanopoulos. Color Image Processing and Applications. Springer, 2000.
[18] I. Rafegas and M. Vanrell. Color encoding in biologically-inspired convolutional neural networks. Technical report, Computer Vision Center, Universitat Autònoma de Barcelona.
[19] I. Rafegas, M. Vanrell, and L. A. Alexandre. Understanding trained CNNs by indexing neuron selectivity. arXiv e-prints, Feb. 2017.
[20] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
[21] S. J. Schein and R. Desimone. Spectral properties of V4 neurons in the macaque. The Journal of Neuroscience, 10, 1990.
[22] R. Shapley and M. Hawken. Color in the cortex: Single- and double-opponent cells. Vision Research, 51(7):701–717, 2011.
[23] R. Shapley and M. Hawken. Color in the cortex: Single- and double-opponent cells. Vision Research, 51(7):701–717, 2011.
[24] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop, 2014.
[25] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. A. Riedmiller. Striving for simplicity: The all convolutional net. In ICLR Workshop, 2015.
[26] M. A. Webster, Y. Mizokami, and S. M. Webster. Hue maps in primate striate cortex. NeuroImage, 35(2), 2007.
[27] Y. Xiao, Y. Wang, and D. J. Felleman. A spatially organized representation of colour in macaque cortical area V2. Nature, 421, 2003.
[28] J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson. Understanding neural networks through deep visualization. In Deep Learning Workshop, ICML, 2015.
[29] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.


More information

Shu Kong. Department of Computer Science, UC Irvine

Shu Kong. Department of Computer Science, UC Irvine Ubiquitous Fine-Grained Computer Vision Shu Kong Department of Computer Science, UC Irvine Outline 1. Problem definition 2. Instantiation 3. Challenge and philosophy 4. Fine-grained classification with

More information

Deep learning and non-negative matrix factorization in recognition of mammograms

Deep learning and non-negative matrix factorization in recognition of mammograms Deep learning and non-negative matrix factorization in recognition of mammograms Bartosz Swiderski Faculty of Applied Informatics and Mathematics Warsaw University of Life Sciences, Warsaw, Poland bartosz_swiderski@sggw.pl

More information

EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE

EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE SAKTHI NEELA.P.K Department of M.E (Medical electronics) Sengunthar College of engineering Namakkal, Tamilnadu,

More information

Nonlinear processing in LGN neurons

Nonlinear processing in LGN neurons Nonlinear processing in LGN neurons Vincent Bonin *, Valerio Mante and Matteo Carandini Smith-Kettlewell Eye Research Institute 2318 Fillmore Street San Francisco, CA 94115, USA Institute of Neuroinformatics

More information

Competing Frameworks in Perception

Competing Frameworks in Perception Competing Frameworks in Perception Lesson II: Perception module 08 Perception.08. 1 Views on perception Perception as a cascade of information processing stages From sensation to percept Template vs. feature

More information

Competing Frameworks in Perception

Competing Frameworks in Perception Competing Frameworks in Perception Lesson II: Perception module 08 Perception.08. 1 Views on perception Perception as a cascade of information processing stages From sensation to percept Template vs. feature

More information

Reading Assignments: Lecture 18: Visual Pre-Processing. Chapters TMB Brain Theory and Artificial Intelligence

Reading Assignments: Lecture 18: Visual Pre-Processing. Chapters TMB Brain Theory and Artificial Intelligence Brain Theory and Artificial Intelligence Lecture 18: Visual Pre-Processing. Reading Assignments: Chapters TMB2 3.3. 1 Low-Level Processing Remember: Vision as a change in representation. At the low-level,

More information

Object recognition and hierarchical computation

Object recognition and hierarchical computation Object recognition and hierarchical computation Challenges in object recognition. Fukushima s Neocognitron View-based representations of objects Poggio s HMAX Forward and Feedback in visual hierarchy Hierarchical

More information

The Impact of Visual Saliency Prediction in Image Classification

The Impact of Visual Saliency Prediction in Image Classification Dublin City University Insight Centre for Data Analytics Universitat Politecnica de Catalunya Escola Tècnica Superior d Enginyeria de Telecomunicacions de Barcelona Eric Arazo Sánchez The Impact of Visual

More information

Information Processing During Transient Responses in the Crayfish Visual System

Information Processing During Transient Responses in the Crayfish Visual System Information Processing During Transient Responses in the Crayfish Visual System Christopher J. Rozell, Don. H. Johnson and Raymon M. Glantz Department of Electrical & Computer Engineering Department of

More information

lateral organization: maps

lateral organization: maps lateral organization Lateral organization & computation cont d Why the organization? The level of abstraction? Keep similar features together for feedforward integration. Lateral computations to group

More information

Skin cancer reorganization and classification with deep neural network

Skin cancer reorganization and classification with deep neural network Skin cancer reorganization and classification with deep neural network Hao Chang 1 1. Department of Genetics, Yale University School of Medicine 2. Email: changhao86@gmail.com Abstract As one kind of skin

More information

Improving the Interpretability of DEMUD on Image Data Sets

Improving the Interpretability of DEMUD on Image Data Sets Improving the Interpretability of DEMUD on Image Data Sets Jake Lee, Jet Propulsion Laboratory, California Institute of Technology & Columbia University, CS 19 Intern under Kiri Wagstaff Summer 2018 Government

More information

Group-Wise FMRI Activation Detection on Corresponding Cortical Landmarks

Group-Wise FMRI Activation Detection on Corresponding Cortical Landmarks Group-Wise FMRI Activation Detection on Corresponding Cortical Landmarks Jinglei Lv 1,2, Dajiang Zhu 2, Xintao Hu 1, Xin Zhang 1,2, Tuo Zhang 1,2, Junwei Han 1, Lei Guo 1,2, and Tianming Liu 2 1 School

More information

IAT 355 Visual Analytics. Encoding Information: Design. Lyn Bartram

IAT 355 Visual Analytics. Encoding Information: Design. Lyn Bartram IAT 355 Visual Analytics Encoding Information: Design Lyn Bartram 4 stages of visualization design 2 Recall: Data Abstraction Tables Data item (row) with attributes (columns) : row=key, cells = values

More information

arxiv: v2 [cs.cv] 8 Mar 2018

arxiv: v2 [cs.cv] 8 Mar 2018 Automated soft tissue lesion detection and segmentation in digital mammography using a u-net deep learning network Timothy de Moor a, Alejandro Rodriguez-Ruiz a, Albert Gubern Mérida a, Ritse Mann a, and

More information

Nature Neuroscience: doi: /nn Supplementary Figure 1. Behavioral training.

Nature Neuroscience: doi: /nn Supplementary Figure 1. Behavioral training. Supplementary Figure 1 Behavioral training. a, Mazes used for behavioral training. Asterisks indicate reward location. Only some example mazes are shown (for example, right choice and not left choice maze

More information

VISUAL PERCEPTION & COGNITIVE PROCESSES

VISUAL PERCEPTION & COGNITIVE PROCESSES VISUAL PERCEPTION & COGNITIVE PROCESSES Prof. Rahul C. Basole CS4460 > March 31, 2016 How Are Graphics Used? Larkin & Simon (1987) investigated usefulness of graphical displays Graphical visualization

More information

2/3/17. Visual System I. I. Eye, color space, adaptation II. Receptive fields and lateral inhibition III. Thalamus and primary visual cortex

2/3/17. Visual System I. I. Eye, color space, adaptation II. Receptive fields and lateral inhibition III. Thalamus and primary visual cortex 1 Visual System I I. Eye, color space, adaptation II. Receptive fields and lateral inhibition III. Thalamus and primary visual cortex 2 1 2/3/17 Window of the Soul 3 Information Flow: From Photoreceptors

More information

CS294-6 (Fall 2004) Recognizing People, Objects and Actions Lecture: January 27, 2004 Human Visual System

CS294-6 (Fall 2004) Recognizing People, Objects and Actions Lecture: January 27, 2004 Human Visual System CS294-6 (Fall 2004) Recognizing People, Objects and Actions Lecture: January 27, 2004 Human Visual System Lecturer: Jitendra Malik Scribe: Ryan White (Slide: layout of the brain) Facts about the brain:

More information

Continuous transformation learning of translation invariant representations

Continuous transformation learning of translation invariant representations Exp Brain Res (21) 24:255 27 DOI 1.17/s221-1-239- RESEARCH ARTICLE Continuous transformation learning of translation invariant representations G. Perry E. T. Rolls S. M. Stringer Received: 4 February 29

More information

POC Brain Tumor Segmentation. vlife Use Case

POC Brain Tumor Segmentation. vlife Use Case Brain Tumor Segmentation vlife Use Case 1 Automatic Brain Tumor Segmentation using CNN Background Brain tumor segmentation seeks to separate healthy tissue from tumorous regions such as the advancing tumor,

More information

arxiv: v1 [stat.ml] 23 Jan 2017

arxiv: v1 [stat.ml] 23 Jan 2017 Learning what to look in chest X-rays with a recurrent visual attention model arxiv:1701.06452v1 [stat.ml] 23 Jan 2017 Petros-Pavlos Ypsilantis Department of Biomedical Engineering King s College London

More information

Two Visual Contrast Processes: One New, One Old

Two Visual Contrast Processes: One New, One Old 1 Two Visual Contrast Processes: One New, One Old Norma Graham and S. Sabina Wolfson In everyday life, we occasionally look at blank, untextured regions of the world around us a blue unclouded sky, for

More information

Contextual Influences in Visual Processing

Contextual Influences in Visual Processing C Contextual Influences in Visual Processing TAI SING LEE Computer Science Department and Center for Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA Synonyms Surround influence;

More information