Characterizing Visual Attention during Driving and Non-driving Hazard Perception Tasks in a Simulated Environment

Title: Characterizing Visual Attention during Driving and Non-driving Hazard Perception Tasks in a Simulated Environment
Authors: Mackenzie, A.K., & Harris, J.M.
Journal: ETRA '14: Proceedings of the Symposium on Eye Tracking Research and Applications (ACM Digital Library), pp. 127-130.
The publisher PDF of this manuscript can be obtained from DOI: 10.1145/2578153.2578171

Characterizing Visual Attention during Driving and Non-driving Hazard Perception Tasks in a Simulated Environment

Andrew K. Mackenzie* & Julie M. Harris
School of Psychology & Neuroscience, The University of St. Andrews, Scotland
email: *akm9@st-andrews.ac.uk

Abstract

Research into driving skill, particularly hazard perception, often involves studies in which participants either view pictures of driving scenarios or watch movies of driving. However, oculomotor strategies tend to change between active and passive tasks, and attentional limitations are introduced during real driving. Here we present a study using eye tracking methods to contrast oculomotor behaviour across a passive, video-based hazard perception task and an active, simulated-driving hazard perception task. The differences we find highlight a requirement to study driving skill under more active conditions, where the participant is engaged with a driving task. Our results suggest that more standard, passive tests may have limited utility when developing visual models of driving behaviour. The results have implications for driver safety measures and provide further insight into how vision and action interact during natural activity.

CR Categories: J.4 [Computer Applications]: Social and Behavioral Sciences - Psychology.

Keywords: eye movements and cognition, visual search behaviour, scanning strategies, natural scene perception

1 Introduction / Overview

Many studies of driving and its associated visual behaviours involve participants either viewing pictures of driving scenarios or watching movies of driving scenes (e.g. Savage et al., 2013). A typical hazard perception experiment, for example, might involve participants watching a series of video clips filmed from a driver's viewpoint and pressing a button when a hazard is detected, often while their eye movements are tracked. A similar approach is used in the United Kingdom to assess hazard perception ability before individuals acquire a valid driver's licence.

It has been argued that the role of vision during natural tasks that incorporate visuomotor control can only usefully be studied during the performance of the action itself (Land & Tatler, 2009). Controlling action is rather different from passively perceiving, and the neural substrates for these visual processes are thought to be at least partially separate (Ungerleider & Pasternak, 2004). Furthermore, differences have been found in eye movement behaviour when action is removed. For example, a series of experiments reviewed by Steinman (2003) compared the oculomotor strategies used to complete a tapping search task with those used in a search task where observers were simply asked to look at the target object. The strategies employed were largely dissimilar across the two tasks, suggesting that the way we deploy the oculomotor system during a task involving natural action differs from the way we move our eyes when merely looking.

In driving, it is therefore possible that video-based hazard perception tasks generate visual behaviour that does not accurately represent the visual behaviour observed during a more naturalistic driving task. Underwood et al. (2011) hypothesized that the interactivity of real, or simulated, driving places more demand upon the visual system than the more passive video-based hazard perception tasks.
Certain locations that need to be fixated by the driver in order to control the car successfully become much more important in an active driving task. As a consequence, the search for hazards could potentially be interrupted. When using tasks such as viewing movies of driving scenes, where only eye movements and a button-press response are required, there might be important differences in behaviour that do not reflect behaviour under more active conditions.

We test that hypothesis here by studying visual behaviour when driving in a simulated setting that incorporates active control of a vehicle, compared with perception during passive movie viewing. Our aim was to quantify possible differences in the pattern of eye movements generated across non-driving and active driving hazard perception conditions. We took care to make conditions as similar as possible for the two tasks: the simulated driving environment and the type of hazardous stimuli were identical. The active 'driving' task consisted of driving around a number of courses in a driving simulator programme and responding to hazards. The more passive non-driving task involved watching a series of video clips generated from the same driving software and responding to the hazards. We recorded eye movements throughout.

2 Methods

2.1 Apparatus and Driving Environment

For the driving condition, the driving simulator software used was Driving Simulator 2011 (Excalibur Publishing Limited, 2011). With this software, the driving environment could be controlled. Hazards were created by altering the 'Artificial Intelligence' of the other road users so that they would, at certain points, collide with each other. To control the car, a Thrustmaster 5 Axes RGT Force Feedback steering wheel (with left and right indicators) and pedal combination was used. The button press for participants to indicate that they had spotted a hazard was located on the wheel where the participant's right thumb would naturally rest. The simulated car was fully automatic, controlled by the gas pedal for acceleration and the brake pedal for deceleration. The stimulus display monitor was a 22-inch CRT, set at an optimal resolution of 1280x1024 (see Figure 1). An SR Research EyeLink 1000 eye tracker with tower mount (Figure 1) was used to record eye movements during the experiment, sampling at 1000 Hz. Heads were stabilised using a chin rest so that the virtual environment was viewed at a distance of 60 cm in both the driving and non-driving conditions (horizontal viewing angle of 38.5 degrees).

Figure 1. Apparatus set-up of monitor (left), and steering wheel, pedals and eye tracker (right).

2.2 Participants

Thirty-four participants took part in the study (5 males), with an age range of 19 to 31 years (mean age 21.2 years). All participants had normal or corrected-to-normal vision. Permission to conduct the study was given by the University of St. Andrews University Teaching and Research Ethics Committee (UTREC).

2.3 Procedures

One group of participants were told they would be performing a hazard perception task while driving in a virtual environment (driving task, n=17), with their eye movements being tracked. They were shown how to navigate through the virtual environment whilst obeying all traffic laws. Participants drove a total of eight courses, consisting of both suburban and urban routes. Four of the courses contained pre-determined hazards (both abrupt and distant hazards) in the form of a collision between two (or more) other vehicles. Participants were told to press the button when they detected such an event.

The other group of participants were told they would be performing a hazard perception task in which they would watch a series of video clips of driving situations and press the button on the steering wheel when they detected a hazardous event (non-driving task, n=17). They were told their eye movements would be tracked and were instructed to watch the videos as if they were the driver. Eight different videos were shown, of which only four contained hazardous situations.

3 Results

In our analyses we considered eye movements from only the four courses that did not contain hazards, to avoid hazard-specific artifacts.

3.1 x- and y-axis fixation locations: scanning behaviour

To investigate road scanning behaviour, we measured the standard deviations of the x-axis (horizontal plane) and y-axis (vertical plane) fixation locations for each course individually. The area of interest is the roadway; this excludes vehicle-specific areas, e.g. exterior or interior mirrors. A larger standard deviation equates to a larger spread of overt visual attention. Typical distributions of fixation locations are illustrated in Figure 2.

Figure 2. Example (from one observer) of density heat maps showing the distribution of fixations for the driving (left) and non-driving (right) conditions, where red represents a greater proportion of fixations.
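The spread measure itself is straightforward to compute. Below is a minimal sketch in Python/pandas, assuming a table of roadway fixations with hypothetical column names (participant, condition, course, and fixation position in degrees); it illustrates the measure only and is not the analysis code used in the study.

```python
# Minimal sketch (not the authors' code): per-course spread of fixation
# locations, assuming a pandas DataFrame of roadway-only fixations with
# hypothetical columns: participant, condition ('driving'/'non-driving'),
# course (1-4), x_deg and y_deg (fixation position in degrees).
import pandas as pd

def fixation_spread(fixations: pd.DataFrame) -> pd.DataFrame:
    """Standard deviation of fixation x and y positions per participant,
    condition and course; larger values = wider spread of overt attention."""
    spread = (fixations
              .groupby(["participant", "condition", "course"])[["x_deg", "y_deg"]]
              .std()
              .rename(columns={"x_deg": "sd_x", "y_deg": "sd_y"})
              .reset_index())
    return spread

# Example usage: mean spread per condition and course, as summarised in
# Figures 3 and 4.
# spread = fixation_spread(roadway_fixations)
# print(spread.groupby(["condition", "course"])[["sd_x", "sd_y"]].mean())
```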
Two MANOVAs (one for horizontal and one for vertical fixation locations) were conducted to identify any significant differences in the standard deviations of fixation locations across the conditions (driving and non-driving). The standard deviations of the fixation distributions for each of the separate courses were used as the dependent variables.
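For illustration, a minimal sketch of this type of analysis in Python using statsmodels follows, assuming one row per participant with hypothetical columns sd_c1 to sd_c4 (the per-course standard deviations) and condition; the original analysis may well have been run in other software.

```python
# Sketch of a one-factor MANOVA with per-course SDs as dependent variables,
# followed by univariate follow-up ANOVAs; column names are hypothetical.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
import statsmodels.formula.api as smf
import statsmodels.api as sm

def manova_with_followups(df: pd.DataFrame) -> None:
    # Multivariate test: Pillai's trace is reported in the mv_test output.
    maov = MANOVA.from_formula("sd_c1 + sd_c2 + sd_c3 + sd_c4 ~ condition", data=df)
    print(maov.mv_test())

    # Univariate follow-up ANOVAs, one per course.
    for dv in ["sd_c1", "sd_c2", "sd_c3", "sd_c4"]:
        fit = smf.ols(f"{dv} ~ condition", data=df).fit()
        print(dv)
        print(sm.stats.anova_lm(fit, typ=2))
```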

There was an overall significant effect of condition on the standard deviations in the horizontal plane (V=0.53, F(4,29)=4.91, p<0.001) using Pillai's Trace. Separate univariate ANOVAs on the dependent variables revealed significant effects of condition on the standard deviations for Course 1 (F(1,32)=30.52, p<0.001), Course 2 (F(1,32)=6.03, p<0.05) and Course 3 (F(1,32)=11.83, p<0.01), but not for Course 4 (F(1,32)=0.1, p>0.05). From Figure 3 we see that the standard deviation of fixations in the horizontal plane was larger for the non-driving condition than for the driving condition.

Figure 3. Standard deviations of horizontal fixations in degrees (and SEMs) for the driving (grey) and non-driving (blue) conditions, by course.

There was an overall significant effect of task on the standard deviations of fixations in the vertical plane (V=0.57, F(4,29)=9.46, p<0.001) using Pillai's Trace. Separate univariate ANOVAs on the dependent variables revealed significant effects of task on the standard deviations for Course 1 (F(1,32)=20.87, p<0.001), Course 2 (F(1,32)=16.71, p<0.001), Course 3 (F(1,32)=7.55, p<0.01) and Course 4 (F(1,32)=6.22, p<0.05). Figure 4 shows that the standard deviation of vertical-plane fixations was larger for the non-driving condition than for the active driving condition for Courses 1-3, but shows the opposite effect for Course 4.

Figure 4. Standard deviations of vertical fixations in degrees (and SEMs) for the driving (grey) and non-driving (blue) conditions, by course.

It is hypothesised that the null effect on the x-axis fixation distribution and the reversed effect on the y-axis fixation distribution for Course 4 can largely be accounted for by the environment. This course contained many corners, which may have influenced the horizontal scanning pattern, and also contained many overhead traffic lights, which are likely to draw the visual attention of an active driver vertically more frequently in order to respond to the signals, and thus may have influenced vertical scanning.

3.2 Average y-axis fixation position

To investigate how far along the road participants fixated, we measured the mean y-axis fixation location. After conversion from screen pixels, a larger mean y-axis value indicates that individuals were looking lower in the image, and thus closer to the front of the vehicle. A MANOVA showed an overall significant effect of condition on the mean y-axis fixation positions (V=0.70, F(4,29)=16.83, p<0.001) using Pillai's Trace. Separate univariate ANOVAs on the dependent variables revealed significant effects of task on the mean vertical positions for Course 1 (F(1,32)=41.65, p<0.001), Course 2 (F(1,32)=22.67, p<0.001), Course 3 (F(1,32)=10.56, p<0.01) and Course 4 (F(1,32)=9.30, p<0.01). Figure 5 shows that, for each course, participants fixated lower in the image, and thus closer to the front of the vehicle, in the driving condition (grey bars) than in the non-driving condition (blue bars).

Figure 5. Mean y-axis fixation location in degrees (and SEMs) for the driving (grey) and non-driving (blue) conditions, by course.

3.3 Time to fixate hazard

We measured the average time taken to detect ('time to see') hazards by calculating the latency from when the hazard first appeared on screen to when participants first fixated the hazard (means shown in Figure 6).

Figure 6. The average latencies in seconds (and SEMs) to first fixate the hazards in the driving and non-driving conditions.
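A minimal sketch of how such a latency can be computed is given below, assuming a hypothetical fixation record and a hazard area of interest (AOI) with a known onset time; in practice the AOI for a moving hazard would be defined frame by frame.

```python
# Sketch: time to first fixate a hazard = onset of the first fixation that
# lands inside the hazard AOI, minus the hazard's appearance time.
# Hypothetical data structures, for illustration only.
from typing import Iterable, NamedTuple, Optional

class Fixation(NamedTuple):
    onset_s: float   # fixation start time, seconds from trial start
    x: float         # gaze position (same units as the AOI)
    y: float

class HazardAOI(NamedTuple):
    appear_s: float  # time the hazard first appears on screen
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, f: Fixation) -> bool:
        return self.x_min <= f.x <= self.x_max and self.y_min <= f.y <= self.y_max

def time_to_first_fixation(fixations: Iterable[Fixation],
                           hazard: HazardAOI) -> Optional[float]:
    """Latency (s) from hazard onset to the first fixation inside its AOI,
    or None if the hazard was never fixated."""
    for f in sorted(fixations, key=lambda fx: fx.onset_s):
        if f.onset_s >= hazard.appear_s and hazard.contains(f):
            return f.onset_s - hazard.appear_s
    return None
```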

We found that individuals took significantly longer (by around 1 second) to first fixate the hazards during the driving task (F(1,32)=7.25, p<0.05; Figure 6).

4 Discussion

Our primary aim was to measure, under controlled conditions, whether there were any differences in eye movement behaviour between a passive hazard perception task (i.e. non-driving) and an active simulated driving task. We found that individuals potentially search the road more, both up and down and side to side, during the non-driving hazard perception task.

One explanation could be the need to locate the focus of expansion (FoE) when driving. The FoE is the apparent point from which motion vectors flow, and it normally corresponds to the direction of heading (Gibson, 1979). The FoE is thought to provide useful information about vehicle direction with which to control the vehicle; here, this information is likely to come from the roadway. If we are not actively controlling the vehicle, as in a video-based non-driving task, there is little need to fixate primarily near the FoE, because direction information is less critical when we are not actively navigating the environment. It is therefore possible that observers can dedicate eye movements to searching the virtual environment more exhaustively for hazards during the non-driving task. Such a hypothesis would explain the distribution of fixation locations and the latencies presented here (a brief formal statement of the FoE is given at the end of this section).

One may argue that the search strategies observed during active driving are impoverished; this is highlighted by our finding that individuals tend to fixate closer to the vehicle in the active task. To detect hazards more efficiently, individuals would benefit from fixating further along the roadway. It is possible that these search strategies are the cause of the increased latency to first fixate the hazards: if we scan the road less, it follows that we take longer to fixate them.

The results might also suggest a cognitive-load imbalance between the two tasks. An increase in central processing has been found to narrow spatial attention in driving (Crundall et al., 1999). The presumed higher cognitive demand of the driving task may therefore have restricted the extent of the search of the road, and this may also explain the latency difference in detecting the hazards; increasing mental workload has been shown to impair hazard detection in other studies (e.g. Recarte & Nunes, 2003). A failure to scan the roadway fully could result in a collision (Lee, 2008). We therefore argue that future research should focus on effective ways of maximising the efficiency of visual search while driving.

Our results may have implications for current UK driving protocol, whereby individuals undertake a (passive) hazard perception test before acquiring a full driver's licence. That test is similar to the non-driving task conducted here, in which videos are viewed and a button is pressed when a hazard is spotted. If the behaviour assessed during such tests is not representative of typical driving behaviour, as suggested here, then this has potential implications for the everyday assessment tools used for hazard perception. One could argue that more active, naturalistic assessment tools should be introduced.
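For reference, the FoE can be stated formally under a standard pinhole-camera model with focal length \(f\) and pure observer translation; this is a textbook formulation rather than part of the original analysis. A rigid scene point at depth \(Z\), imaged at \((x, y)\), moves in the image with velocity
\[
\dot{x} = \frac{x\,T_z - f\,T_x}{Z}, \qquad \dot{y} = \frac{y\,T_z - f\,T_y}{Z},
\]
for observer translation \((T_x, T_y, T_z)\). The flow vanishes only at
\[
(x_{\mathrm{FoE}},\, y_{\mathrm{FoE}}) = \left(\frac{f\,T_x}{T_z},\; \frac{f\,T_y}{T_z}\right),
\]
the focus of expansion, from which all flow vectors radiate; for forward translation this point coincides with the heading direction that a driver must monitor in order to steer.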
5 Conclusions

Here, using the active behaviour of driving, we have demonstrated that the way in which we employ our oculomotor system changes depending on the nature of the task. These differences highlight the need not only to train, but also to assess, driving behaviour under more ecologically valid conditions, where individuals are engaged in an active driving task. Ideally, of course, this should go beyond simulation training, given the number of factors that influence driving behaviour in a real driving situation.

6 Acknowledgments

The research was funded by a studentship from the UK Engineering and Physical Sciences Research Council (EPSRC).

References

Crundall, D., Underwood, G., & Chapman, P. (1999). Driving experience and the functional field of view. Perception, 28, 1075-1087.

Driving Simulator 2011 [Computer software]. (2011). Excalibur Publishing Limited.

Gibson, J.J. (1979). The Ecological Approach to Visual Perception. Boston, MA: Houghton Mifflin.

Land, M.F., & Tatler, B.W. (2009). Looking and Acting: Vision and Eye Movements in Natural Behaviour. Oxford: Oxford University Press.

Lee, J.D. (2008). Fifty years of driving safety research. Human Factors, 50, 521-528.

Recarte, M.A., & Nunes, L.M. (2003). Mental workload while driving: Effects on visual search, discrimination, and decision making. Journal of Experimental Psychology: Applied, 9(2), 119-137.

Savage, S.W., Potter, D.D., & Tatler, B.W. (2013). Does preoccupation impair hazard perception? A simultaneous EEG and eye tracking study. Transportation Research Part F: Traffic Psychology and Behaviour, 17, 52-62.

Steinman, R.M. (2003). Gaze control under natural conditions. In L.M. Chalupa & J.S. Werner (Eds.), The Visual Neurosciences. Cambridge, MA: MIT Press.

Underwood, G., Crundall, D., & Chapman, P. (2011). Driving simulator validation with hazard perception. Transportation Research Part F: Traffic Psychology and Behaviour, 14, 435-446.

Ungerleider, L.G., & Pasternak, T. (2004). Ventral and dorsal cortical processing streams. In L.M. Chalupa & J.S. Werner (Eds.), The Visual Neurosciences (Vol. 1, pp. 541-562). Cambridge, MA: MIT Press.