Investigating training, transfer and viewpoint effects resulting from recurrent CBT of X-Ray image interpretation


J Transp Secur (2008) 1

Investigating training, transfer and viewpoint effects resulting from recurrent CBT of X-Ray image interpretation

Saskia M. Koller & Diana Hardmeier & Stefan Michel & Adrian Schwaninger

Received: 26 October 2007 / Accepted: 1 November 2007 / Published online: 9 January 2008
© Springer Science + Business Media, LLC 2007

Abstract X-ray screening of passenger bags is an essential task at airport security checkpoints. In this study we investigated how well airport security screeners can detect guns, knives, improvised explosive devices (IEDs) and other threat objects in X-ray images of passenger bags before and after 3 and 6 months of recurrent (about 20 min per week) computer-based training (CBT). Two experiments conducted at different airports gave very similar results. Training with X-ray Tutor (XRT), an individually adaptive CBT, resulted in large performance increases, especially for detecting IEDs. While performance for detecting IEDs was initially substantially lower than for guns, IEDs could be detected as well as guns after several months of training. A large transfer effect was observed as well: training with XRT helped screeners recognize new threat objects that were similar in shape to the trained objects. Threat recognition depended on the rotation of the objects: prohibited items depicted from an unusual viewpoint were more difficult to recognize. The results were compared to two conventional (not adaptive) CBT systems. For one system no training and transfer effects were observed, whereas small training and transfer effects were found for the other conventional CBT system.

Keywords Airport security · Human computer interaction · Human factors · Object recognition · Perceptual learning · X-ray screening

S. M. Koller · D. Hardmeier · S. Michel · A. Schwaninger
Department of Psychology, University of Zurich, Binzmühlestrasse 14/22, 8050 Zurich, Switzerland

S. Michel · A. Schwaninger (*)
Department Bülthoff, Max Planck Institute for Biological Cybernetics, Spemannstraße 38, Tübingen, Germany
e-mail: a.schwaninger@psychologie.uzh.ch

Introduction

The importance of aviation security has increased dramatically in recent years. As a consequence of the new threat situation, large investments were made in modern security technology. State-of-the-art X-ray screening equipment offers good image quality, high resolution and many image enhancement functions. However, the decision whether or not an X-ray image of a passenger bag contains a prohibited item is still taken by a human operator, i.e. an airport security screener. Object shapes that are not similar to ones stored in visual memory are difficult to recognize (e.g., Graf et al. 2002; Schwaninger 2004, 2005). Schwaninger et al. (2005) have shown that X-ray screener performance depends on knowledge-based and image-based factors. A prerequisite for good X-ray detection performance is knowledge about which objects are prohibited and what they look like in X-ray images. Such knowledge is acquired by computer-based, classroom and on-the-job training (knowledge-based factors). Image-based factors refer to image difficulty resulting from viewpoint variation of threat objects, superposition of threat objects by other objects in a bag, and bag complexity depending on the number and type of other objects in the bag. The ability to cope with image-based factors is related to individual visual-cognitive abilities rather than being a mere result of training (Hardmeier et al. 2006). Computer-based training is expected to be a very important determinant of X-ray image interpretation competency, because many threat objects are not known from everyday experience and because objects look quite different in X-ray images than in reality. This is illustrated in Fig. 1 with two examples. Schwaninger and Hofer (2004) and Schwaninger et al. (2007) could show that detection of improvised explosive devices (IEDs) in hold baggage screening (HBS) can be significantly improved if people are trained with an individually adaptive training system such as X-ray Tutor (XRT). Schwaninger et al. (2005) compared detection performance of novices with that of aviation security screeners. Rather poor recognition of unfamiliar object shapes (e.g., self-defense gas spray, electric shock device, etc.) in X-ray images was found for novices. For experienced aviation security personnel, much higher recognition performance was observed. McCarley et al. (2004) reported that novices detected knives in X-ray images better after training.

Fig. 1 Different types of prohibited items in X-ray images of passenger bags. a Electric shock device, b self-defense gas spray "Guardian Angel"

When one takes into account the myriad of views that can be produced by a single object, the question arises how the human brain stores and recognizes objects even if they are presented in unusual views. In the object recognition literature, two types of theories can be distinguished: structural description theories and view-based theories. The former assume that objects are stored in visual memory by their component parts and their spatial relationships. An object-centered description of this nature was described by Marr and Nishihara (1978), who proposed that objects are hierarchically decomposed into their parts and spatial relations relative to object-centered coordinates in order to access an object-centered 3D model in visual memory. In Biederman's (1987) recognition-by-components (RBC) theory, nonaccidental properties like vertices, parallel vs. non-parallel lines, straight vs. curved lines etc. (see Lowe, 1985, 1987) are extracted from a line drawing representation of objects to define basic geometric primitives ("geometrical ions", or geons) that are relatively orientation-invariant. A geon structural description (GSD) in memory is activated by extracting geons from the visual input and matching geon properties and their spatial relationships with the GSD (Hummel and Biederman 1992). For view-based theories, different approaches have been proposed. Examples are recognition by alignment to a 3D representation (Lowe 1987), recognition by linear combination of 2D views (Ullman 1998), recognition by view interpolation (e.g., using RBF networks) proposed by Poggio and Edelman (1990), and storing of multiple views for each object plus performing transformations (Tarr and Pinker 1989). What view-based theories have in common is the assumption that objects are not stored in memory as rotation-invariant structural descriptions but instead in a viewer-centered format. A more detailed discussion of structural description theories vs. view-based theories and more recent hybrid theories is beyond the scope of this paper (for reviews see for example Graf et al. 2002; Hayward 2003; Kosslyn 1994; Peissig and Tarr 2007; Schwaninger 2005; Tarr and Bülthoff 1998). However, it should be pointed out that empirical results seem to be correlated with the required level of recognition (Bülthoff et al. 1995, p. 5, 13; Tarr 1995): if the object has to be recognized at entry level, behavioral measures are less affected by changes in perspective. However, in the case of subordinate recognition, in which fine discriminations are typically required, both response times and accuracy are more sensitive to the specific viewpoint used. Furthermore, differences in the task a subject has to perform (Lawson 1999) and the specific paradigm that is used (Verfaillie 1992) can influence which level of representation is tapped (see also Logothetis and Sheinberg 1996). The first aim of this study is to investigate how well airport security screeners can detect guns, knives, IEDs and other prohibited items in X-ray images of passenger bags. The second aim is to examine whether screener detection performance can be increased by conducting recurrent CBT. To this end, screeners conducted weekly recurrent CBT (about 20 min per week). Detection performance was tested with the X-ray Competency Assessment Test (X-ray CAT) by Koller and Schwaninger (2006). This test measures how well people detect threat items in X-ray images of passenger bags.
It was conducted at the beginning and then after 3 and 6 months of training. In addition to training effects, the X-ray CAT allows measuring transfer effects, i.e. to what extent visual knowledge that was gained through CBT can be transferred to other threat items (see below).

In the X-ray CAT all prohibited items are depicted from a canonical (easily recognizable) perspective (Palmer et al. 1981) and from an unusual perspective, which allows investigating viewpoint effects. The study was conducted at two mid-size European airports. At airport 1 (experiment 1) one group of screeners used adaptive CBT (XRT) whereas the other group of screeners (control group) used a conventional (not adaptive) CBT. At airport 2 (experiment 2) the same experimental design was used, except that the control group used another conventional CBT system. This allows investigating whether a training effect depends on the type of CBT system used.

Experiment 1

Method

Participants

A total of 209 airport security screeners of a mid-size European airport participated in experiment 1 and conducted the X-ray CAT three times with an interval of 3 months between the measurements. The adaptive CBT group (XRT group) consisted of 97 screeners who conducted weekly recurrent CBT using X-ray Tutor (XRT) CBS 2.0 Standard Edition between all three test measurements. The control group consisted of 112 screeners who used a conventional (not adaptive) CBT. According to the security organization and their appropriate authority, airport security screeners of both groups conducted about 20 min CBT per week. Analysis of XRT training use showed that on average, each screener trained min (SD = 3.65 min) per week.

Materials and procedure

X-ray Competency Assessment Test (X-ray CAT)

The X-ray CAT consists of 256 trials based on 128 different color X-ray images of passenger bags. Each of the bag images is used once containing a prohibited item (threat image) and once without any threat object (non-threat image). Figure 2 displays examples of the stimuli. Note that in the test, the images are displayed in color.

Fig. 2 Example images from the X-ray CAT. Left: harmless bag (non-threat image); right: same bag with a prohibited item at the top right corner (threat image). The prohibited item (gun) is also shown separately at the bottom right

Prohibited objects can be assigned to four categories as defined in Doc 30 of the European Civil Aviation Conference (ECAC): guns, IEDs, knives and other prohibited items (e.g., self-defense gas spray, chemicals, grenades, etc.). The threat objects have been selected and prepared in collaboration with experts of Zurich State Police, Airport Division, to be representative and realistic. For each threat category 16 exemplars are used (eight pairs). Each pair consists of two prohibited items that are similar in shape (see Fig. 3). These were distributed randomly into two sets, sets A and B. Prohibited items of set A (non-threat bag images) are contained in the XRT CBS 2.x SE training whereas the items of set B are not. This allows testing for transfer effects. Every item is depicted from two different viewpoints. The easy viewpoint refers to the canonical (i.e. easily recognizable) perspective (Palmer et al. 1981). The difficult viewpoint shows the threat item with an 85° horizontal rotation or an 85° vertical rotation relative to the canonical view (see Fig. 3 for examples). In each threat category, half of the prohibited items of the difficult viewpoint are rotated vertically, the other half horizontally. Sets A and B are equalized concerning the rotations of the prohibited objects.

Fig. 3 Example of two X-ray images of similar-looking threat objects used in the test. Left: a gun of set A. Right: the corresponding gun of set B

Every threat item is combined with a bag in such a manner that the degree of superposition by other objects is similar for both viewpoints. This was achieved using a function that calculates the difference between the pixel intensity values of the bag image with the threat object and the bag image without the threat object using the following formula:

SP = \frac{\sqrt{\sum_{x,y} \left[ I_{SN}(x,y) - I_{N}(x,y) \right]^{2}}}{\mathit{ObjectSize}}

where SP is the superposition, I_SN is the grayscale intensity of the SN (signal plus noise) image (which contains a prohibited item), I_N is the grayscale intensity of the N (noise) image (which contains no prohibited item), and ObjectSize is the number of pixels of the prohibited item where R, G and B are < 253.

Using this equation (division by object size), the superposition value is independent of the size of the prohibited item. This value can be kept relatively constant for the two views of a threat object, independent of the degree of clutter in a bag, when combining the bag image and the prohibited item.
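For illustration, the superposition measure defined above could be computed along the following lines. This is a minimal Python sketch, not the authors' implementation; the function and argument names are hypothetical.

```python
import numpy as np

def superposition(sn_gray, n_gray, threat_rgb):
    """Superposition (SP) of a threat object, following the formula above.

    sn_gray    -- 2-D array: grayscale intensities of the bag image that
                  contains the threat (signal-plus-noise image, I_SN)
    n_gray     -- 2-D array: grayscale intensities of the same bag image
                  without the threat (noise image, I_N)
    threat_rgb -- H x W x 3 array: RGB image of the isolated threat object,
                  used only to estimate the object size
    """
    # Squared pixel-wise intensity differences between the SN and N images
    diff_sq = (sn_gray.astype(float) - n_gray.astype(float)) ** 2

    # Object size: number of pixels where R, G and B are all below 253
    object_size = np.count_nonzero(np.all(threat_rgb < 253, axis=-1))

    # Dividing by the object size makes SP independent of the threat's size
    return np.sqrt(diff_sq.sum()) / object_size
```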

The bag images were visually inspected by aviation security experts to ensure they do not contain any other prohibited items. Harmless bags were assigned to the different categories and viewpoints of the threat objects in a way that their difficulty was balanced across all categories.¹ The false alarm rate (the rate at which screeners wrongly judged a harmless bag as containing a threat item) for each bag image served as the measure of difficulty, based on a pilot study with 192 screeners of another airport.

¹ The eight categories of test images (four threat categories in two viewpoints each) are similar in terms of the difficulty of the harmless bags. This means that a difference in detection performance between categories or viewpoints cannot be due to differences in the difficulty of the bag images.

The X-ray CAT takes about min to complete. Each image is shown for a maximum of 10 s on the screen. Screeners have to judge whether the bag is OK (contains no prohibited item) or Not OK (contains a prohibited item). Additionally, screeners have to indicate the perceived difficulty of each image on a 100-point scale (difficulty rating).² The X-ray CAT is built into the XRT training system (see below). The interface of the X-ray CAT is the same as in XRT, except that there is no feedback and screeners do not have to click on the image to identify the threat object.

² The difficulty ratings were not analyzed in this study.

X-ray Tutor (XRT) training system

X-ray Tutor (XRT) is an individually adaptive training system for aviation security screeners. It contains a large image library with hundreds of different threat objects depicted in up to 72 views, more than 6,000 bag images and many millions of possible threat object to bag combinations (see Schwaninger 2004 for details). The individually adaptive training algorithm of XRT starts with showing threat objects depicted from easy viewpoints, with little superposition by other objects, and in bags of low complexity. Based on each individual screener's learning progress, threat objects are shown in more difficult views, in more complex bags and with more superposition. These parameters are adapted automatically by a scientifically validated algorithm for each screener and threat object, while taking into account automatic image processing algorithms as explained in Schwaninger et al. (2007). XRT first presents prohibited objects in easy (canonical) views. The individually adaptive training algorithm determines for each screener which views are difficult to recognize and adapts the training so that the trainee becomes able to detect threat items reliably even if prohibited objects are substantially rotated away from the easiest view. During the next difficulty levels, first superposition and then bag complexity is increased, so that the trainee becomes able to detect threat items reliably even if they are superimposed by other objects or if the complexity of a bag is very high (for more information on XRT see Schwaninger 2003, 2004, and 2005a).
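The adaptive progression just described (views first, then superposition, then bag complexity) can be sketched roughly as follows. This is only an illustration of the general idea, not the actual XRT algorithm (which is described in Schwaninger et al. 2007); all names, levels and thresholds below are hypothetical.

```python
from dataclasses import dataclass

MAX_VIEW_LEVEL = 5           # hypothetical number of difficulty steps per dimension
MAX_SUPERPOSITION_LEVEL = 5
MAX_COMPLEXITY_LEVEL = 5

@dataclass
class TraineeState:
    view_level: int = 0           # 0 = canonical views only
    superposition_level: int = 0  # 0 = little superposition by other objects
    complexity_level: int = 0     # 0 = simple bags

def adapt(state: TraineeState, hit_rate: float, promote_at: float = 0.9) -> TraineeState:
    """Raise one difficulty dimension at a time once the trainee detects
    threats reliably (hit rate above an assumed threshold) at the current level."""
    if hit_rate >= promote_at:
        if state.view_level < MAX_VIEW_LEVEL:
            state.view_level += 1            # rotate threats further away from canonical views
        elif state.superposition_level < MAX_SUPERPOSITION_LEVEL:
            state.superposition_level += 1   # increase superposition by other objects
        elif state.complexity_level < MAX_COMPLEXITY_LEVEL:
            state.complexity_level += 1      # increase bag complexity
    return state
```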

During a training session each image is displayed for 15 s on the screen. Within this time screeners can use image enhancement functions which are also available when working with the X-ray machine (e.g. grayscale, negative image, edge enhancement, etc.). If the image contains a prohibited item, screeners have to click on it and then click on the Not OK button. If the bag is harmless, they have to click on the OK button. After providing a confidence rating using a slider control, feedback is shown to inform the trainee whether the image has been judged correctly or not (see Fig. 4). If the bag contains a threat item, it is highlighted by flickering and the trainee has the possibility to display information about the threat item (see bottom left of Fig. 4). By clicking on the continue button, the next image is shown. As a default setting, one training session takes 20 min. During this time screeners see between 150 and 300 images.

Procedure

As explained above, two groups of screeners participated in experiment 1. The XRT training group conducted weekly recurrent CBT using XRT CBS 2.0 Standard Edition. The control group used a conventional (not adaptive) CBT. In order to avoid potential negative consequences, we decided not to mention the exact CBT product in this article. However, it can be mentioned that this CBT is also widely used at many airports worldwide. It has a much smaller threat image library than XRT, threat objects are not displayed in many different views, threat objects are not matched with different bags on the fly, and there is no individually adaptive training algorithm. The XRT training group and the control group took the X-ray CAT before, after three, and after six months of weekly CBT. This allows testing the effectiveness of both CBT systems for increasing the X-ray image interpretation competency of airport security screeners. As explained above, half of the prohibited items in the X-ray CAT are also contained in the XRT training system (although presented in different bags).

Fig. 4 Screenshot of the XRT CBS 2.0 training system during training. At the bottom right, feedback is provided after each response. If a bag contains a prohibited item, an information window can be displayed (see bottom left of the screen)

The other half of the prohibited items of the X-ray CAT are not part of the XRT training library. This allows testing for transfer effects, i.e. testing whether training the detection of certain prohibited items helps increase the detection of other prohibited items. Finally, as specified above in the section on the X-ray CAT, all prohibited items are depicted in an easy and a difficult view, which allows testing effects of viewpoint on screener detection performance.

Results and discussion

Detection performance was calculated using the signal detection measure d' (Green and Swets 1966), which takes into account the hit rate (threat images correctly judged as being Not OK) and the false alarm rate (harmless bags wrongly judged as being Not OK). d' is calculated using the following formula:

d' = z(H) - z(FA)

where H is the hit rate, FA the false alarm rate, and z refers to the z transformation. Performance values are not reported due to security reasons. However, effect sizes are reported for all relevant analyses and interpreted based on Cohen (1988), see Table 1. For t tests, d between 0.20 and 0.49 represents a small effect size, d between 0.50 and 0.79 a medium effect size, and d of 0.80 or higher a large effect size. For analysis of variance (ANOVA) statistics, η² between 0.01 and 0.05 represents a small effect size, η² between 0.06 and 0.13 a medium effect size, and η² of 0.14 or higher a large effect size.

Table 1 Classification of effect sizes based on Cohen (1988)

Effect size   d            η²
Small         0.20-0.49    0.01-0.05
Medium        0.50-0.79    0.06-0.13
Large         ≥ 0.80       ≥ 0.14

Figure 5 shows the detection performance of the first, second and third measurement for both screener groups. As can be seen in the figure, there was a large improvement as a result of training in the XRT training group, while there was no improvement in the control group. These results were confirmed by an ANOVA for repeated measures using d' scores with the within-participant factor measurement (first, second and third) and the between-participants factor group (XRT training group and control group). There were large main effects of measurement, η² = 0.28, F(2, 414) = 81.04, p < 0.001, and group, η² = 0.19, F(1, 207) = 47.62, p < 0.001. There was also a large interaction of measurement and group, η² = 0.25, F(2, 414) = 68.67, p < 0.001, which is consistent with Fig. 5 showing large performance increases as a result of training only for the XRT training group but not for the control group. Separate pairwise t tests of detection performance d' revealed no significant difference at the baseline measurement between the two groups, t(177) = 0.91, p = 0.363, d = 0.13, but already a significant difference at the second measurement, i.e. after three months of training, t(207) = 7.52, p < .001, d = . Additional paired-samples t tests revealed significant differences for the XRT training group between all three test measurements but no significant differences for the control group (see Table 2).
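The d' measure defined above can be computed directly from the hit rate and the false alarm rate. The short Python sketch below illustrates the z transformation using SciPy's inverse normal; the use of SciPy is an assumption about tooling, not the authors' implementation.

```python
from scipy.stats import norm

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """d' = z(H) - z(FA), where z is the inverse of the standard normal CDF."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Example: a hit rate of .90 and a false alarm rate of .20 give d' of about 2.12
print(round(d_prime(0.90, 0.20), 2))
```

Note that hit or false alarm rates of exactly 0 or 1 are usually adjusted slightly before applying the z transformation, since the inverse normal is undefined at those values.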

Fig. 5 Detection performance with standard deviations for the XRT training group (left) vs the control group (right) comparing first, second and third measurement

Figure 6 shows the detection performance of both screener groups broken up by prohibited item category and the three test measurements. A repeated-measures ANOVA with the within-participant factors measurement (first, second and third) and threat category (guns, IEDs, knives and other), and the between-participants factor group (XRT training vs control) revealed the significant main effects and significant interactions given in Table 3, a. In addition to the effects that were already found in the previous ANOVA, the factor threat (or prohibited item) category was also significant. As can be seen in Fig. 6, at the first test measurement guns were detected best, followed by knives, other prohibited items and IEDs. There was a highly significant interaction between threat category and measurement. As can be seen in Fig. 6, detection of IEDs was initially much lower than gun detection. After 6 months of training, screeners of the XRT training group could detect IEDs even slightly better than guns. This result implies that IED detection is not difficult per se but rather a matter of the right training. Note that in this study all IEDs contained a detonator, wires, explosive, a triggering device and a power source. Thus our conclusions are only applicable to the detection of such multi-component IEDs. Large performance increases were also found for other prohibited items in this group, while for knives, only a small improvement as a result of training was found. Note that after 6 months of training, detection performance for knives is lower than that for any other threat category in the XRT training group, although at the baseline measurement it was higher than the detection performance for IEDs or other threat objects. The interaction between threat category, group and measurement is also worth mentioning. As can be seen in Fig. 6, this results from the fact that there was no training effect for the control group. Their detection performance remains at about the same level for each threat category even after 6 months of training with the conventional (not adaptive) CBT system. Separate pairwise t tests were conducted to compare detection performance at the first and the second measurement for both groups and each threat category separately (Table 4).

Table 2 Results of the t tests comparing the detection performance of the first (t1), second (t2) and third (t3) measurement

XRT training group (t1-t2): t(96) = 9.80, p <
XRT training group (t2-t3): t(96) = 3.95, p <
Control group (t1-t2): t(111) = 0.54, p =
Control group (t2-t3): t(111) = 1.89, p =
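A measurement × group analysis of the kind reported above can be reproduced in a few lines with standard statistics software. The sketch below uses the third-party pingouin package on hypothetical long-format data; the paper does not state which software was actually used, so both the package and the file name are assumptions.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per screener and measurement, with
# columns screener, group ("XRT" or "control"), measurement (1, 2, 3), d_prime.
df = pd.read_csv("xray_cat_scores.csv")  # file name is an assumption

# Mixed-design ANOVA: measurement as within-participant factor,
# group as between-participants factor, partial eta squared as effect size.
aov = pg.mixed_anova(data=df, dv="d_prime", within="measurement",
                     subject="screener", between="group", effsize="np2")
print(aov.round(3))
```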

Fig. 6 Detection performance with standard deviations for the XRT training group vs the control group broken up by prohibited item category and test measurement

The XRT training group showed a significant increase in detection performance at the second measurement for the categories guns, IEDs and other threat objects. For knives, a significant difference could be found only at the third measurement. The comparison of the effect size d between the t tests of the four threat categories confirms the earlier mentioned conclusion that the training effect was particularly big for IEDs and rather small for knives. Detection performance of the control group did not differ significantly between the measurements, confirming that the conventional CBT did not result in an increase of threat detection performance. The results of the analyses considering the two prohibited item sets of the X-ray CAT, set A and set B, are shown in Figs. 7 and 8. As explained above, set A are X-ray CAT images which contain prohibited items that are part of the XRT image library. Set B are X-ray CAT images which contain prohibited items that are not part of the XRT image library. By comparing training effects for sets A and B, transfer effects can be investigated, i.e. whether training with XRT improves not only the detection of prohibited items that are part of the XRT image library (set A) but also the detection of other prohibited items that are visually similar (set B). Figure 7 shows the detection performance for both screener groups broken up by test set for all three measurements. It shows a clear increase in detection performance for the XRT training group, especially at the second measurement, after the first 3 months of training. For the control group, as in the previous analysis, no training effect is evident. The results of the repeated measures ANOVA with the within-participant factors measurement (first, second and third) and set (A vs B) and the between-participants factor group (XRT training group vs. control group) can be seen in Table 3, b. There was a significant effect of set in this analysis, which would imply a different detection performance for set A vs set B. However, the effect is very small, as the effect size of η² = 0.02 clearly shows, which makes the difference quasi negligible. This is also supported by the small effect size for the interaction between set and measurement, η² = 0.04. Pairwise t tests showed a significant increase in detection performance at the second measurement for both sets for the XRT training group: set A, t(96) = 10.27, p < .001, d = 1.19; set B, t(96) = 7.68, p < .001, d = . These results indicate a large transfer effect, i.e. visual knowledge regarding the visual appearance of the prohibited objects of the XRT image library helped screeners to detect similar-looking, but untrained objects in the X-ray CAT (set B).

Table 3 Results of the ANOVAs in experiment 1

Factor / df / F / η² / p value

(a)
Measurement (M)  2,  <0.001
Threat category (T)  3,  <0.001
Group (G)  1,  <0.001
M x G  2,  <0.001
T x G  3,  <0.001
M x T  6,  <0.001
M x T x G  6,  <0.001

(b)
Measurement (M)  2,  <0.001
Set (S)  1,  <0.05
Group (G)  1,  <0.001
M x G  2,  <0.001
M x S  2,  <0.001
S x G  1,  <0.001
M x S x G  2,  <0.001

(c)
Measurement (M)  2,  <0.001
Set (S)  1,  =0.13
Threat category (T)  3,  <0.001
Group (G)  1,  <0.001
M x G  2,  <0.001
M x T  6,  <0.001
M x S  2,  <0.001
S x G  1,  <0.001
S x T  3,  <0.001
T x G  3,  <0.001
M x T x G  6,  <0.001
M x S x G  2,  <0.001
M x S x T  6,  <0.01
S x T x G  3,  <0.01
M x S x T x G  6,  <0.01

(d)
Measurement (M)  2,  <0.001
View (V)  1,  <0.001
Threat category (T)  3,  <0.001
Group (G)  1,  <0.001
M x G  2,  <0.001
M x T  6,  <0.001
M x V  2,  =0.13
V x G  1,  =0.07
V x T  3,  <0.001
T x G  3,  <0.001
M x T x G  6,  <0.001
M x V x G  2,  <0.05
M x V x T  6,  <0.001
V x T x G  3,  <0.05
M x V x T x G  6,  <0.05

Consistent with previous analyses, there was no training effect for the control group, neither for set A, t(111) = 0.76, p = 0.45, d = 0.08, nor for set B, t(111) = 0.28, p = .78, d = . Pairwise t tests comparing both sets within one group at the first measurement revealed a significant difference between the two sets only for the control group, t(111) = 2.82, p < .01, d = 0.17, but not for the XRT training group, t(96) = 0.42, p = .68, d = . However, note that an effect size of d = 0.17 is very small, which supports the assumption that the two sets are in fact very similar in their difficulty level.

Table 4 Results of the t tests comparing the detection performance of the four categories between the first (t1), second (t2) and third (t3) measurement

Comparison / t(96) / t(111) / df / p value / d

XRT training group
Guns t1-t2  <
IEDs t1-t2  <
Knives t1-t2  =
Other t1-t2  <
Guns t1-t3  <
IEDs t1-t3  <
Knives t1-t3  <
Other t1-t3  <

Control group
Guns t1-t2  =
IEDs t1-t2  =
Knives t1-t2  =
Other t1-t2  =
Guns t1-t3  =
IEDs t1-t3  =
Knives t1-t3  =
Other t1-t3  =

Figure 8 also includes the threat category in the analysis. The increase in detection performance for the XRT training group can also be seen in the different threat categories. Pairwise t tests between the first and second measurement confirmed a significant (p < .001, all d > 0.62) increase in detection performance for the XRT training group for all threat categories per set except for knives (set A: p = .12, d = 0.19; set B: p = .32, d = 0.12). In Fig. 8, detection performance in set A for guns shows a decrease between the second and third measurement. However, this difference was not significant (p = .13, d = 0.17). For the control group, detection performance between the first and third measurement was compared in order to maximize the chances of finding a significant training effect. Even here, for all categories in each set, detection performance between the first and third measurement did not differ significantly (all p > .12, d < 0.18). The extended ANOVA with the additional within-participant factor threat category revealed the main effects and interactions specified in Table 3, c. The main effect of set was not significant but there were significant interactions with set (see Table 3, c). However, as can be seen in Fig. 8, these interactions are rather small, which implies large transfer effects.

Fig. 7 Detection performance with standard deviations for the XRT training group vs the control group comparing first, second and third measurement for set A and set B separately
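The pairwise comparisons reported in Table 4 and throughout the results combine a paired-samples t test with Cohen's d. A minimal sketch is given below; the paper does not state which variant of d was computed, so the version based on the standard deviation of the difference scores is an assumption.

```python
import numpy as np
from scipy.stats import ttest_rel

def paired_t_with_d(scores_t1, scores_t2):
    """Paired-samples t test plus an effect size for a t1 vs t2 comparison."""
    scores_t1 = np.asarray(scores_t1, dtype=float)
    scores_t2 = np.asarray(scores_t2, dtype=float)
    t, p = ttest_rel(scores_t2, scores_t1)
    diff = scores_t2 - scores_t1
    d = diff.mean() / diff.std(ddof=1)  # Cohen's d for dependent samples (assumed variant)
    return t, p, d
```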

Fig. 8 Detection performance with standard deviations for the XRT training group vs the control group comparing first, second and third measurement for sets A and B and each threat category separately

Figure 9 shows the results of the viewpoint analysis. An ANOVA was conducted on d' scores with the within-participant factors measurement, threat category and viewpoint and the between-participants factor group. It showed significant main effects of measurement, category, viewpoint and group.

Fig. 9 Detection performance with standard deviations for the XRT training group vs the control group comparing first, second and third measurement for both views and each threat category separately

For details and interactions see Table 3, d. The large main effect of viewpoint indicates a higher detection performance for objects in the easy (canonical) viewpoint compared to objects presented in a difficult (rotated) view (cf. Fig. 9). However, no significant interaction between viewpoint and training could be found. This suggests that the viewpoint effect is unaffected by the training and could not be decreased. Pairwise t tests showed a significant increase in detection performance at the second measurement for both views in all categories for the XRT training group, with the exception of knives in the easy view (p = .53, d = 0.07). All other comparisons were significant (p < .05, d > 0.31). For the control group no significant increase in detection performance could be found (all p > .10, d < 0.19); see Table 5 for details. Training with XRT has an effect not only on the objects in the easy view but also on those in the difficult view. The screeners could make the association between the rotated object they detected during training and the canonical view of the object, which is displayed in the object information in XRT. In summary, a large and significant training effect was found for the group who trained with XRT for 3 and 6 months compared to a control group who used another CBT for the same time. A significant training effect was observed for all four threat categories (guns, knives, IEDs and other), whereas the extent of the effect varied between categories. A large transfer of the acquired knowledge about the visual appearance of trained objects (set A) to untrained but similar-looking objects (set B) was found for the XRT training group but not for the control group. This means that training with XRT helped screeners to detect other prohibited items which were not part of the XRT training. Substantial effects of viewpoint could be observed, i.e. unusual views of prohibited objects were much harder to detect than canonical views.

Table 5 Results of the t tests comparing the detection performance of the four categories for the easy view (V1) and the difficult view (V2) between the first (t1) and second (t2) measurement

Comparison / t(96) / t(111) / p value / d

XRT training group
Guns: V1t1-V1t2  <
IEDs: V1t1-V1t2  <
Knives: V1t1-V1t2  =
Other: V1t1-V1t2  <
Guns: V2t1-V2t2  <
IEDs: V2t1-V2t2  <
Knives: V2t1-V2t2  <
Other: V2t1-V2t2  <

Control group
Guns: V1t1-V1t2  =
IEDs: V1t1-V1t2  =
Knives: V1t1-V1t2  =
Other: V1t1-V1t2  =
Guns: V2t1-V2t2  =
IEDs: V2t1-V2t2  =
Knives: V2t1-V2t2  =
Other: V2t1-V2t2  =

Experiment 2

The main aim of experiment 2 was to replicate the results of experiment 1 at another European airport. In addition, another conventional CBT was used for the control group. Thus it could be investigated whether conventional CBTs differ from each other regarding training effectiveness compared to XRT.

Method

Participants

A total of 163 airport security screeners of another mid-size European airport participated in experiment 2. All screeners conducted the X-ray CAT three times with an interval of three months between the measurements. The adaptive CBT group (XRT group) consisted of 84 screeners who conducted weekly recurrent CBT using X-ray Tutor (XRT) CBS 2.0 Standard Edition between all three test measurements. The control group consisted of 79 screeners, who used another conventional CBT than the control group of experiment 1. As in experiment 1, according to the security organization and their appropriate authority, airport security screeners of both groups conducted about 20 min CBT per week. Analysis of XRT training use showed that on average, each screener trained 20.92 min (SD = 2.87 min) per week.

Materials and procedure

Materials and procedure in experiment 2 were the same as in experiment 1. Again, all screeners took the X-ray CAT at the beginning and after 3 and 6 months of CBT. The only difference was the CBT for the control group, which was a different one than in experiment 1. In order to avoid potential negative consequences, we decided not to mention the exact CBT product for experiment 2 in this article either. However, it can be mentioned that this CBT is also widely used at many airports worldwide. Like the conventional CBT used in experiment 1, this CBT has a much smaller threat image library than XRT, threat objects are not displayed in many different views, threat objects are not matched with different bags automatically on the fly, and there is no individually adaptive training algorithm.

Results and discussion

This section is structured in the same way as in experiment 1. Figure 10 shows the detection performance d' for both groups and all three test measurements. As in experiment 1, individual d' scores were subjected to a repeated measures ANOVA with the within-participant factor measurement (first, second and third) and the between-participants factor group (XRT training group and control group). Again, there were large main effects of measurement, η² = 0.50, F(2, 322) = , p < .001, and group, η² = 0.26, F(1, 161) = 56.34, p < .001, and a significant interaction of measurement and group, η² = 0.33, F(2, 322) = 78.40, p < 0.001. The large interaction is consistent with Fig. 10 showing a much larger performance increase as a result of training for the XRT training group when compared to the control group.

Fig. 10 Detection performance with standard deviations for the XRT training group vs the control group comparing first, second and third measurement

This was confirmed by independent samples t tests. There was no significant difference between the two groups at the first measurement, t(161) = 0.22, p = 0.83, d = 0.03, but a highly significant difference already at the second measurement, t(161) = 6.66, p < 0.001, d = 1.05, after 3 months of training. As in experiment 1, additional paired-samples t tests revealed significant differences for the XRT training group between all measurements. In contrast to experiment 1, there were also significant differences for the control group between the first and second measurement, although not between the second and third measurement (see Table 6). Thus, the conventional CBT used in experiment 2 also resulted in increased detection performance, although substantially less than XRT. Figure 11 shows the detection performance of both screener groups broken up by prohibited item category and the three test measurements. Again, a clear effect of training on the detection performance can be seen for the XRT training group, with the largest increase after the first 3 months of training. However, the control group also shows a slight increase in detection performance, at least at the second measurement. The analysis of variance (ANOVA) with threat category as additional within-participant factor showed significant main effects and significant interactions (for details see Table 7, a). The results are comparable to those in experiment 1. Most importantly, detection of guns was best initially, while detection of IEDs was much lower. After 6 months of recurrent adaptive CBT, screeners of the XRT training group could detect IEDs even slightly better than guns. This replication of the results obtained in experiment 1 clearly shows that IED detection is not difficult per se but only a matter of the right training. As mentioned above, all IEDs used in this study contained a detonator, wires, explosive, a triggering device and a power source. Thus our conclusions are only applicable to the detection of such multi-component IEDs. As shown in Table 8, t tests between the first and second measurement revealed significant training effects for the XRT training group for all threat categories, with large effect sizes (all d > 0.80).

Table 6 Results of the t tests comparing the detection performance of the first (t1), second (t2) and third (t3) measurement

XRT training group (t1-t2): t(83) = , p <
XRT training group (t2-t3): t(83) = 7.07, p <
Control group (t1-t2): t(78) = 3.67, p <
Control group (t2-t3): t(78) = 0.91, p =

Fig. 11 Detection performance with standard deviations for the XRT training group vs the control group comparing first, second and third measurement for each threat category separately

In contrast to experiment 1, there were also significant effects for the control group, although with rather low effect sizes (all d < 0.56). Thus the conventional CBT used in experiment 2 also resulted in performance increases, although much less than XRT. By means of an ANOVA with measurement and set as within-participant factors and group as between-participants factor, we investigated whether training effects can also be shown for threat objects which were not included in the training sessions. There were main effects and interactions for all factors, showing similar results as in experiment 1 (see Table 7, b for details). As in experiment 1, a large transfer effect was found (see Fig. 12). Not only for the prohibited items of set A, which were included in the training library of XRT, but also for the untrained prohibited objects of set B, screeners of the XRT training group showed a large increase in detection performance after training. Paired-samples t tests between the first and second measurement showed training effects for both sets and for both groups, whereas again large effect sizes were found for the XRT training group and small effect sizes for the control group (trained group set A: t(83) = 13.10, p < 0.001, d = 1.77 and set B: t(83) = 9.53, p < 0.001, d = 1.24; control group set A: t(78) = 2.32, p < 0.05, d = 0.24 and set B: t(78) = 3.00, p < 0.01, d = 0.32). Pairwise t tests showed no significant difference in the difficulty of set A and set B for both groups at the first measurement (XRT training group: t(83) = 1.16, p = 0.25, d = 0.10; control group: t(78) = 1.93, p = 0.06, d = 0.19). Figure 13 also includes the threat category in the analysis. Paired-samples t tests were calculated in order to investigate whether the training effect between the first and second measurement was significant for each category in both sets for the XRT training group. Results revealed significant effects for all categories in each set (p < 0.01, d = 0.51 for knives in set B; p < 0.001, d > 0.74 for all other categories). Thus, as in experiment 1, XRT resulted in large detection performance increases even for prohibited objects that are not part of the XRT image library (X-ray CAT image set B). For the control group, the difference between the first and third measurement was calculated in order to maximize the chances of finding a significant training effect. The following t tests were significant: IEDs for both sets, knives only for set A, and other threat objects for both sets (p < 0.05, d > 0.23). All other values were not significant (p > 0.06, d < 0.28) and reveal no effect of training between the different measurements.

Table 7 Results of the ANOVAs in experiment 2

Factor / df / F / η² / p value

(a)
Measurement (M)  2,  <0.001
Threat category (T)  3,  <0.001
Group (G)  1,  <0.001
M x G  2,  <0.001
T x G  3,  <0.001
M x T  6,  <0.001
M x T x G  6,  <0.001

(b)
Measurement (M)  2,  <0.001
Set (S)  1,  <0.001
Group (G)  1,  <0.001
M x G  2,  <0.001
M x S  2,  <0.001
S x G  1,  <0.001
M x S x G  2,  <0.001

(c)
Measurement (M)  2,  <0.001
Set (S)  1,  <0.001
Threat category (T)  3,  <0.001
Group (G)  1,  <0.001
M x G  2,  <0.001
M x T  6,  <0.001
M x S  2,  <0.001
S x G  1,  <0.001
S x T  3,  <0.001
T x G  3,  <0.001
M x T x G  6,  <0.001
M x S x G  2,  <0.001
M x S x T  6,  =0.18
S x T x G  3,  <0.05
M x S x T x G  6,  <0.05

(d)
Measurement (M)  2,  <0.001
View (V)  1,  <0.001
Threat category (T)  3,  <0.001
Group (G)  1,  <0.001
M x G  2,  <0.001
M x T  6,  <0.001
M x V  2,  =0.05
V x G  1,  =0.43
V x T  3,  <0.001
T x G  3,  <0.001
M x T x G  6,  <0.001
M x V x G  2,  =0.30
M x V x T  6,  <0.05
V x T x G  3,  =0.17
M x V x T x G  6,  =0.08

As in experiment 1, individual d' scores were subjected to an extended ANOVA with the within-participant factors measurement, X-ray CAT image set and threat category and the between-participants factor group. All main effects and interactions were significant except the interaction between measurement, set and threat category (see Table 7, c, for details). In contrast to experiment 1, the ANOVA revealed a main effect of set and significant interactions with set. However, as can be seen in Fig. 13, these were rather small, which implies large transfer effects. As in experiment 1, the results clearly show a training effect for each category and in both sets.

Table 8 Results of the t tests comparing the categories between the first (t1), second (t2) and third (t3) measurement

Comparison / t / df / p value / d

XRT training group
Guns t1-t2  <
IEDs t1-t2  <
Knives t1-t2  <
Other t1-t2  <
Guns t1-t3  <
IEDs t1-t3  <
Knives t1-t3  <
Other t1-t3  <

Control group
Guns t1-t2  <
IEDs t1-t2  <
Knives t1-t2  <
Other t1-t2  <
Guns t1-t3  <
IEDs t1-t3  <
Knives t1-t3  <
Other t1-t3  <

This is consistent with the results of the t tests explained above. The training effect that was found for the control group also revealed itself in the sets, that is, there was a transfer effect for the control group, too. Last, the effect of viewpoint was investigated by calculating a four-way ANOVA. The results show clear main effects of measurement, view, threat category and group. For details on the interactions please refer to Table 7, d. Detection performance is clearly much higher for objects that are shown in the easy view (view 1) than for objects that are shown from an unusual viewpoint (see Fig. 14). This effect is valid for all threat categories and for the XRT training group as well as for the control group. However, the viewpoint effect is not the same for different threat categories. The graphs in Fig. 14 suggest that the largest viewpoint effect can be observed for the detection of knives, the smallest one for IEDs. As in experiment 1, pairwise t tests showed a significant increase in detection performance at the second measurement for both views for the XRT training group for all four threat categories (p < 0.01, d > ). For the easy view, the control group showed a significant effect for IEDs only (p < 0.05, d = 0.32); all other t tests were not significant (p > 0.07, d < 0.25). For the difficult view, all t tests with one exception were significant for the control group (p < 0.05, d > 0.26).

Fig. 12 Detection performance with standard deviations for the XRT training group vs the control group comparing first, second and third measurement for sets A and B separately

Only the training effect of knives in the rotated view was not significant, p = 0.07, d = 0.24 (see Table 9 for details). The results show that although some significant effects in the control group were observed, effect sizes were small compared to those of the XRT training group. In summary, very similar results as in experiment 1 were found in experiment 2. A large and significant training effect was observed for the group who trained with XRT

Fig. 13 Detection performance with standard deviations for the XRT training group vs the control group comparing first, second and third measurement for sets A and B and each threat category separately

Fig. 14 Detection performance with standard deviations for the XRT training group vs the control group comparing first, second and third measurement for both views and each threat category separately


More information

THE PERCEPTION AND MEMORY OF STEREOSCOPIC DEPTH INFORMATION IN NATURALISTIC OBJECTS

THE PERCEPTION AND MEMORY OF STEREOSCOPIC DEPTH INFORMATION IN NATURALISTIC OBJECTS THE PERCEPTION AND MEMORY OF STEREOSCOPIC DEPTH INFORMATION IN NATURALISTIC OBJECTS Technical Report June 20, 1996 Thomas A. Busey, Ph.D. Indiana University Please send correspondence to: Thomas A. Busey

More information

Supplementary Materials

Supplementary Materials Supplementary Materials 1. Material and Methods: Our stimuli set comprised 24 exemplars for each of the five visual categories presented in the study: faces, houses, tools, strings and false-fonts. Examples

More information

Differing views on views: comments on Biederman and Bar (1999)

Differing views on views: comments on Biederman and Bar (1999) Vision Research 40 (2000) 3895 3899 www.elsevier.com/locate/visres Differing views on views: comments on Biederman and Bar (1999) William G. Hayward a, *, Michael J. Tarr b,1 a Department of Psychology,

More information

To what extent do unique parts influence recognition across changes in viewpoint?

To what extent do unique parts influence recognition across changes in viewpoint? Psychological Science 1997, Vol. 8, No. 4, 282-289 To what extent do unique parts influence recognition across changes in viewpoint? Michael J. Tarr Brown University Heinrich H. Bülthoff, Marion Zabinski,

More information

Max-Planck-Institut für biologische Kybernetik Arbeitsgruppe Bülthoff

Max-Planck-Institut für biologische Kybernetik Arbeitsgruppe Bülthoff Max-Planck-Institut für biologische Kybernetik Arbeitsgruppe Bülthoff Spemannstraße 38 72076 Tübingen Germany Technical Report No. 43 November 1996 An Introduction to Object Recognition Jerey C. Liter

More information

Role of Featural and Configural Information in Familiar and Unfamiliar Face Recognition

Role of Featural and Configural Information in Familiar and Unfamiliar Face Recognition Schwaninger, A., Lobmaier, J. S., & Collishaw, S. M. (2002). Lecture Notes in Computer Science, 2525, 643-650. Role of Featural and Configural Information in Familiar and Unfamiliar Face Recognition Adrian

More information

Object recognition under sequential viewing conditions: evidence for viewpoint-specific recognition procedures

Object recognition under sequential viewing conditions: evidence for viewpoint-specific recognition procedures Perception, 1994, volume 23, pages 595-614 Object recognition under sequential viewing conditions: evidence for viewpoint-specific recognition procedures Rebecca Lawson, Glyn W Humphreys, Derrick G Watson

More information

Competing Frameworks in Perception

Competing Frameworks in Perception Competing Frameworks in Perception Lesson II: Perception module 08 Perception.08. 1 Views on perception Perception as a cascade of information processing stages From sensation to percept Template vs. feature

More information

Competing Frameworks in Perception

Competing Frameworks in Perception Competing Frameworks in Perception Lesson II: Perception module 08 Perception.08. 1 Views on perception Perception as a cascade of information processing stages From sensation to percept Template vs. feature

More information

Task and object learning in visual recognition

Task and object learning in visual recognition A.I. Memo No. 1348 C.B.1.P Memo No. 63 MASSACHUSETTS INSTITUTE OF TECHNOLOGY ARTIFICIAL INTELLIGENCE LABORATORY and CENTER FOR BIOLOGICAL INFORMATION PROCESSING WHITAKER COLLEGE Task and object learning

More information

Supplemental Information. Varying Target Prevalence Reveals. Two Dissociable Decision Criteria. in Visual Search

Supplemental Information. Varying Target Prevalence Reveals. Two Dissociable Decision Criteria. in Visual Search Current Biology, Volume 20 Supplemental Information Varying Target Prevalence Reveals Two Dissociable Decision Criteria in Visual Search Jeremy M. Wolfe and Michael J. Van Wert Figure S1. Figure S1. Modeling

More information

Object recognition and hierarchical computation

Object recognition and hierarchical computation Object recognition and hierarchical computation Challenges in object recognition. Fukushima s Neocognitron View-based representations of objects Poggio s HMAX Forward and Feedback in visual hierarchy Hierarchical

More information

Seeing Things From a Different Angle: The Pigeon's Recognition of Single Geons Rotated in Depth

Seeing Things From a Different Angle: The Pigeon's Recognition of Single Geons Rotated in Depth Journal of Experimental Psychology: Animal Behavior Processes 2. Vol. 26, No. 2,-2 Copyright 2 by the American Psychological Association, Inc. 97-7/^. DO: I.7/W97-7.26.2. Seeing Things From a Different

More information

(Visual) Attention. October 3, PSY Visual Attention 1

(Visual) Attention. October 3, PSY Visual Attention 1 (Visual) Attention Perception and awareness of a visual object seems to involve attending to the object. Do we have to attend to an object to perceive it? Some tasks seem to proceed with little or no attention

More information

PSYCHOLOGICAL SCIENCE

PSYCHOLOGICAL SCIENCE PSYCHOLOGICAL SCIENCE Research Article Visual Skills in Airport-Security Screening Jason S. McCarley, 1 Arthur F. Kramer, 1 Christopher D. Wickens, 1,2 Eric D. Vidoni, 1 and Walter R. Boot 1 1 Beckman

More information

Intelligent Object Group Selection

Intelligent Object Group Selection Intelligent Object Group Selection Hoda Dehmeshki Department of Computer Science and Engineering, York University, 47 Keele Street Toronto, Ontario, M3J 1P3 Canada hoda@cs.yorku.ca Wolfgang Stuerzlinger,

More information

Evidence for Holistic Representations of Ignored Images and Analytic Representations of Attended Images

Evidence for Holistic Representations of Ignored Images and Analytic Representations of Attended Images Journal of Experimental Psychology: Human Perception and Performance 2004, Vol. 30, No. 2, 257 267 Copyright 2004 by the American Psychological Association 0096-1523/04/$12.00 DOI: 10.1037/0096-1523.30.2.257

More information

Grouped Locations and Object-Based Attention: Comment on Egly, Driver, and Rafal (1994)

Grouped Locations and Object-Based Attention: Comment on Egly, Driver, and Rafal (1994) Journal of Experimental Psychology: General 1994, Vol. 123, No. 3, 316-320 Copyright 1994 by the American Psychological Association. Inc. 0096-3445/94/S3.00 COMMENT Grouped Locations and Object-Based Attention:

More information

Short article Detecting objects is easier than categorizing them

Short article Detecting objects is easier than categorizing them THE QUARTERLY JOURNAL OF EXPERIMENTAL PSYCHOLOGY 2008, 61 (4), 552 557 Short article Detecting objects is easier than categorizing them Jeffrey S. Bowers and Keely W. Jones University of Bristol, Bristol,

More information

Running Head: ATTENTION AND INVARIANCE WITH LEFT-RIGHT REFLECTION

Running Head: ATTENTION AND INVARIANCE WITH LEFT-RIGHT REFLECTION Attention and Reflection 1 Stankiewicz, B.J., Hummel, J.E. & Cooper, E.E. (1998). The role of attention in priming for left-right reflections of object images: Evidence for a dual representation of object

More information

Visual Categorization: How the Monkey Brain Does It

Visual Categorization: How the Monkey Brain Does It Visual Categorization: How the Monkey Brain Does It Ulf Knoblich 1, Maximilian Riesenhuber 1, David J. Freedman 2, Earl K. Miller 2, and Tomaso Poggio 1 1 Center for Biological and Computational Learning,

More information

Psy201 Module 3 Study and Assignment Guide. Using Excel to Calculate Descriptive and Inferential Statistics

Psy201 Module 3 Study and Assignment Guide. Using Excel to Calculate Descriptive and Inferential Statistics Psy201 Module 3 Study and Assignment Guide Using Excel to Calculate Descriptive and Inferential Statistics What is Excel? Excel is a spreadsheet program that allows one to enter numerical values or data

More information

Encoding of Elements and Relations of Object Arrangements by Young Children

Encoding of Elements and Relations of Object Arrangements by Young Children Encoding of Elements and Relations of Object Arrangements by Young Children Leslee J. Martin (martin.1103@osu.edu) Department of Psychology & Center for Cognitive Science Ohio State University 216 Lazenby

More information

Normative Representation of Objects: Evidence for an Ecological Bias in Object Perception and Memory

Normative Representation of Objects: Evidence for an Ecological Bias in Object Perception and Memory Normative Representation of Objects: Evidence for an Ecological Bias in Object Perception and Memory Talia Konkle (tkonkle@mit.edu) Aude Oliva (oliva@mit.edu) Department of Brain and Cognitive Sciences

More information

PSYCHOLOGY 300B (A01) One-sample t test. n = d = ρ 1 ρ 0 δ = d (n 1) d

PSYCHOLOGY 300B (A01) One-sample t test. n = d = ρ 1 ρ 0 δ = d (n 1) d PSYCHOLOGY 300B (A01) Assignment 3 January 4, 019 σ M = σ N z = M µ σ M d = M 1 M s p d = µ 1 µ 0 σ M = µ +σ M (z) Independent-samples t test One-sample t test n = δ δ = d n d d = µ 1 µ σ δ = d n n = δ

More information

MULTIPLE LINEAR REGRESSION 24.1 INTRODUCTION AND OBJECTIVES OBJECTIVES

MULTIPLE LINEAR REGRESSION 24.1 INTRODUCTION AND OBJECTIVES OBJECTIVES 24 MULTIPLE LINEAR REGRESSION 24.1 INTRODUCTION AND OBJECTIVES In the previous chapter, simple linear regression was used when you have one independent variable and one dependent variable. This chapter

More information

Differing views on views: response to Hayward and Tarr (2000)

Differing views on views: response to Hayward and Tarr (2000) Vision Research 40 (2000) 3901 3905 www.elsevier.com/locate/visres Differing views on views: response to Hayward and Tarr (2000) Irving Biederman a, *, Moshe Bar b a Department of Psychology and Neuroscience

More information

4 Diagnostic Tests and Measures of Agreement

4 Diagnostic Tests and Measures of Agreement 4 Diagnostic Tests and Measures of Agreement Diagnostic tests may be used for diagnosis of disease or for screening purposes. Some tests are more effective than others, so we need to be able to measure

More information

Image-based object recognition in man, monkey and machine

Image-based object recognition in man, monkey and machine 1 Image-based object recognition in man, monkey and machine Michael J. Tarr 3 -*, Heinrich H. Bulthoff b *Brown University, Department of Cognitive and Linguistic Sciences, P.O. Bon: 1978, Providence,

More information

CHAPTER ONE CORRELATION

CHAPTER ONE CORRELATION CHAPTER ONE CORRELATION 1.0 Introduction The first chapter focuses on the nature of statistical data of correlation. The aim of the series of exercises is to ensure the students are able to use SPSS to

More information

Sensation & Perception PSYC420 Thomas E. Van Cantfort, Ph.D.

Sensation & Perception PSYC420 Thomas E. Van Cantfort, Ph.D. Sensation & Perception PSYC420 Thomas E. Van Cantfort, Ph.D. Objects & Forms When we look out into the world we are able to see things as trees, cars, people, books, etc. A wide variety of objects and

More information

How does attention spread across objects oriented in depth?

How does attention spread across objects oriented in depth? Attention, Perception, & Psychophysics, 72 (4), 912-925 doi:.3758/app.72.4.912 How does attention spread across objects oriented in depth? Irene Reppa Swansea University, Swansea, Wales Daryl Fougnie Vanderbilt

More information

PERCEIVING FUNCTIONS. Visual Perception

PERCEIVING FUNCTIONS. Visual Perception PERCEIVING FUNCTIONS Visual Perception Initial Theory Direct Perception of Function Indirect Affordances Categorization Flat surface Horizontal Strong = Sittable Sittable J. J. Gibson All others Slide

More information

To What Extent Can the Recognition of Unfamiliar Faces be Accounted for by the Direct Output of Simple Cells?

To What Extent Can the Recognition of Unfamiliar Faces be Accounted for by the Direct Output of Simple Cells? To What Extent Can the Recognition of Unfamiliar Faces be Accounted for by the Direct Output of Simple Cells? Peter Kalocsai, Irving Biederman, and Eric E. Cooper University of Southern California Hedco

More information

Supplemental Material. Experiment 1: Face Matching with Unlimited Exposure Time

Supplemental Material. Experiment 1: Face Matching with Unlimited Exposure Time Supplemental Material Experiment : Face Matching with Unlimited Exposure Time Participants. In all experiments, participants were volunteer undergraduate students enrolled at The University of Texas at

More information

The Number of Trials with Target Affects the Low Prevalence Effect

The Number of Trials with Target Affects the Low Prevalence Effect The Number of Trials with Target Affects the Low Prevalence Effect Fan Yang 1, 2, Xianghong Sun 1, Kan Zhang 1, and Biyun Zhu 1,2 1 Institute of Psychology, Chinese Academy of Sciences, Beijing 100101,

More information

Do you have to look where you go? Gaze behaviour during spatial decision making

Do you have to look where you go? Gaze behaviour during spatial decision making Do you have to look where you go? Gaze behaviour during spatial decision making Jan M. Wiener (jwiener@bournemouth.ac.uk) Department of Psychology, Bournemouth University Poole, BH12 5BB, UK Olivier De

More information

Time Course of Processes and Representations Supporting Visual Object Identification and Memory

Time Course of Processes and Representations Supporting Visual Object Identification and Memory Time Course of Processes and Representations Supporting Visual Object Identification and Memory Haline E. Schendan 1 and Marta Kutas 2 Abstract & Event-related potentials (ERPs) were used to delineate

More information

One-Way Independent ANOVA

One-Way Independent ANOVA One-Way Independent ANOVA Analysis of Variance (ANOVA) is a common and robust statistical test that you can use to compare the mean scores collected from different conditions or groups in an experiment.

More information

Categorization of Natural Scenes: Local versus Global Information and the Role of Color

Categorization of Natural Scenes: Local versus Global Information and the Role of Color Categorization of Natural Scenes: Local versus Global Information and the Role of Color JULIA VOGEL University of British Columbia ADRIAN SCHWANINGER Max Planck Institute for Biological Cybernetics and

More information

Object Recognition. John E. Hummel. Department of Psychology University of Illinois. Abstract

Object Recognition. John E. Hummel. Department of Psychology University of Illinois. Abstract Hummel, J. E. (in press). Object recognition. In D. Reisburg (Ed.) Oxford Handbook of Cognitive Psychology. Oxford, England: Oxford University Press. Object Recognition John E. Hummel Department of Psychology

More information

Rare targets rarely missed in correctable search

Rare targets rarely missed in correctable search Fleck, M. S., & Mitroff, S. R. (in press). Rare targets rarely missed in correctable search. Psychological Science. Rare targets rarely missed in correctable search Mathias S. Fleck and Stephen R. Mitroff

More information

Effect of Automation Management Strategies and Sensory Modalities as Applied to a Baggage Screening Task

Effect of Automation Management Strategies and Sensory Modalities as Applied to a Baggage Screening Task Theses - Daytona Beach Dissertations and Theses Spring 2005 Effect of Automation Management Strategies and Sensory Modalities as Applied to a Baggage Screening Task Michelle Ann Leach Embry-Riddle Aeronautical

More information

The Optimum Choice for Implantologist

The Optimum Choice for Implantologist The Optimum Choice for Implantologist What is essential for your practice? What s the best way to choose a 3D X-ray machine for implant treatment planning? 02 Doctor says.. There are diagnostic limitations

More information

Interaction techniques for radiology workstations: impact on users' productivity

Interaction techniques for radiology workstations: impact on users' productivity Interaction techniques for radiology workstations: impact on users' productivity Abstract Moise, Adrian; Atkins, M. Stella {amoise, stella }@cs.sfu.ca Simon Fraser University, Department of Computing Science

More information

Integration of Multiple Views of Scenes. Monica S. Castelhano, Queen s University. Alexander Pollatsek, University of Massachusetts, Amherst.

Integration of Multiple Views of Scenes. Monica S. Castelhano, Queen s University. Alexander Pollatsek, University of Massachusetts, Amherst. Integrating Viewpoints in Scenes 1 in press, Perception and Psychophysics Integration of Multiple Views of Scenes Monica S. Castelhano, Queen s University Alexander Pollatsek, University of Massachusetts,

More information

Detection Theory: Sensitivity and Response Bias

Detection Theory: Sensitivity and Response Bias Detection Theory: Sensitivity and Response Bias Lewis O. Harvey, Jr. Department of Psychology University of Colorado Boulder, Colorado The Brain (Observable) Stimulus System (Observable) Response System

More information

The Interplay between Feature-Saliency and Feedback Information in Visual Category Learning Tasks

The Interplay between Feature-Saliency and Feedback Information in Visual Category Learning Tasks The Interplay between Feature-Saliency and Feedback Information in Visual Category Learning Tasks Rubi Hammer 1 (rubih@stanford.edu) Vladimir Sloutsky 2 (sloutsky@psy.ohio-state.edu) Kalanit Grill-Spector

More information

Sheila Barron Statistics Outreach Center 2/8/2011

Sheila Barron Statistics Outreach Center 2/8/2011 Sheila Barron Statistics Outreach Center 2/8/2011 What is Power? When conducting a research study using a statistical hypothesis test, power is the probability of getting statistical significance when

More information

Research methods in sensation and perception. (and the princess and the pea)

Research methods in sensation and perception. (and the princess and the pea) Research methods in sensation and perception (and the princess and the pea) Sensory Thresholds We can measure stuff in the world like pressure, sound, light, etc. We can't easily measure your psychological

More information

Business Statistics Probability

Business Statistics Probability Business Statistics The following was provided by Dr. Suzanne Delaney, and is a comprehensive review of Business Statistics. The workshop instructor will provide relevant examples during the Skills Assessment

More information

Does scene context always facilitate retrieval of visual object representations?

Does scene context always facilitate retrieval of visual object representations? Psychon Bull Rev (2011) 18:309 315 DOI 10.3758/s13423-010-0045-x Does scene context always facilitate retrieval of visual object representations? Ryoichi Nakashima & Kazuhiko Yokosawa Published online:

More information

Identify these objects

Identify these objects Pattern Recognition The Amazing Flexibility of Human PR. What is PR and What Problems does it Solve? Three Heuristic Distinctions for Understanding PR. Top-down vs. Bottom-up Processing. Semantic Priming.

More information

EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE

EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE SAKTHI NEELA.P.K Department of M.E (Medical electronics) Sengunthar College of engineering Namakkal, Tamilnadu,

More information

General recognition theory of categorization: A MATLAB toolbox

General recognition theory of categorization: A MATLAB toolbox ehavior Research Methods 26, 38 (4), 579-583 General recognition theory of categorization: MTL toolbox LEOL. LFONSO-REESE San Diego State University, San Diego, California General recognition theory (GRT)

More information

Changing Viewpoints During Dynamic Events

Changing Viewpoints During Dynamic Events Changing Viewpoints During Dynamic Events Bärbel Garsoffky (b.garsoffky@iwm-kmrc.de) Knowledge Media Research Center, Konrad-Adenauer-Str. 40 72072 Tübingen, Germany Markus Huff (m.huff@iwm-kmrc.de) Knowledge

More information

the remaining half of the arrays, a single target image of a different type from the remaining

the remaining half of the arrays, a single target image of a different type from the remaining 8 the remaining half of the arrays, a single target image of a different type from the remaining items was included. Participants were asked to decide whether a different item was included in the array,

More information

Feasibility Evaluation of a Novel Ultrasonic Method for Prosthetic Control ECE-492/3 Senior Design Project Fall 2011

Feasibility Evaluation of a Novel Ultrasonic Method for Prosthetic Control ECE-492/3 Senior Design Project Fall 2011 Feasibility Evaluation of a Novel Ultrasonic Method for Prosthetic Control ECE-492/3 Senior Design Project Fall 2011 Electrical and Computer Engineering Department Volgenau School of Engineering George

More information

Multiple routes to object matching from different viewpoints: Mental rotation versus invariant features

Multiple routes to object matching from different viewpoints: Mental rotation versus invariant features Perception, 2001, volume 30, pages 1047 ^ 1056 DOI:10.1068/p3200 Multiple routes to object matching from different viewpoints: Mental rotation versus invariant features Jan Vanrie, Bert Willems, Johan

More information

Modeling the Deployment of Spatial Attention

Modeling the Deployment of Spatial Attention 17 Chapter 3 Modeling the Deployment of Spatial Attention 3.1 Introduction When looking at a complex scene, our visual system is confronted with a large amount of visual information that needs to be broken

More information

Priming for symmetry detection of three-dimensional figures: Central axes can prime symmetry detection separately from local components

Priming for symmetry detection of three-dimensional figures: Central axes can prime symmetry detection separately from local components VISUAL COGNITION, 2006, 13 (3), 363±397 Priming for symmetry detection of three-dimensional figures: Central axes can prime symmetry detection separately from local components Takashi Yamauchi Psychology

More information

A Memory Model for Decision Processes in Pigeons

A Memory Model for Decision Processes in Pigeons From M. L. Commons, R.J. Herrnstein, & A.R. Wagner (Eds.). 1983. Quantitative Analyses of Behavior: Discrimination Processes. Cambridge, MA: Ballinger (Vol. IV, Chapter 1, pages 3-19). A Memory Model for

More information

Spatial Orientation Using Map Displays: A Model of the Influence of Target Location

Spatial Orientation Using Map Displays: A Model of the Influence of Target Location Gunzelmann, G., & Anderson, J. R. (2004). Spatial orientation using map displays: A model of the influence of target location. In K. Forbus, D. Gentner, and T. Regier (Eds.), Proceedings of the Twenty-Sixth

More information

Scale Invariance and Primacy and Recency Effects in an Absolute Identification Task

Scale Invariance and Primacy and Recency Effects in an Absolute Identification Task Neath, I., & Brown, G. D. A. (2005). Scale Invariance and Primacy and Recency Effects in an Absolute Identification Task. Memory Lab Technical Report 2005-01, Purdue University. Scale Invariance and Primacy

More information

Framework for Comparative Research on Relational Information Displays

Framework for Comparative Research on Relational Information Displays Framework for Comparative Research on Relational Information Displays Sung Park and Richard Catrambone 2 School of Psychology & Graphics, Visualization, and Usability Center (GVU) Georgia Institute of

More information

Rapid fear detection relies on high spatial frequencies

Rapid fear detection relies on high spatial frequencies Supplemental Material for Rapid fear detection relies on high spatial frequencies Timo Stein, Kiley Seymour, Martin N. Hebart, and Philipp Sterzer Additional experimental details Participants Volunteers

More information

August 30, Alternative to the Mishkin-Ungerleider model

August 30, Alternative to the Mishkin-Ungerleider model 1 Visual Cognition August 30, 2007 2 3 Overview of Visual Cognition Visual system: mission critical Multivariate inputs, unitary experience Multiple types of vision means distributed visual network Segregating

More information