Humans perceive object motion in world coordinates during obstacle avoidance


Journal of Vision (2013) 13(8):25, 1–13

Brett R. Fajen, Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA
Melissa S. Parade, Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA
Jonathan S. Matthis, Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA

A fundamental question about locomotion in the presence of moving objects is whether movements are guided based upon perceived object motion in an observer-centered or world-centered reference frame. The former captures object motion relative to the moving observer and depends on both observer and object motion. The latter captures object motion relative to the stationary environment and is independent of observer motion. Subjects walked through a virtual environment (VE) viewed through a head-mounted display and indicated whether they would pass in front of or behind a moving obstacle that was on course to cross their future path. Subjects' movement through the VE was manipulated such that object motion in observer coordinates was affected while object motion in world coordinates was the same. We found that when moving observers choose routes around moving obstacles, they rely on object motion perceived in world coordinates. This entails a process, which has been called flow parsing (Rushton & Warren, 2005; Warren & Rushton, 2009a), that recovers the component of optic flow due to object motion independent of self-motion. We found that when self-motion is real and actively generated, the process by which object motion is recovered relies on both visual and nonvisual information to factor out the influence of self-motion. The remaining component contains information about object motion in world coordinates that is needed to guide locomotion.

Introduction

Many locomotor tasks involve interactions with moving objects.
People weave through crowds in shopping malls, athletes dodge their opponents on the playing field, and animals chase prey in the wild. Such tasks comprise a family of actions that require humans and other animals to coordinate their movements with the movements of other objects. Attempts to understand how locomotion is guided in the presence of moving objects often begin with an analysis of information in optic flow. Figure 1A depicts the optic flow field for a moving observer with an object moving from right to left across the observer's future path. The local optical motion of the moving object (depicted by the yellow vector) reflects the motion of the object relative to the moving observer, that is, object motion in an observer-centered reference frame. As such, the same local optical motion results from different combinations of observer and object motion with the same relative motion. The optic flow field depicted in Figure 1A can be parsed into two components: a self-motion component (Figure 1B), which reflects the motion of the observer independent of the motion of other objects, and an object-motion component (Figure 1C), which reflects the motion of objects independent of the motion of the observer. That is, the optic flow field is the vector sum of the self-motion component and the object-motion component. Whereas the local optical motion of the moving object in Figure 1A reflects object motion in a reference frame that moves with the observer (i.e., observer coordinates), the motion of the moving object in Figure 1C reflects object motion in a stationary reference frame (i.e., world coordinates). A question of major theoretical significance for models of visually guided interception and obstacle avoidance is whether observers rely upon object motion perceived in an observer-centered reference frame or a

Citation: Fajen, B. R., Parade, M. S., & Matthis, J. S. (2013). Humans perceive object motion in world coordinates during obstacle avoidance.
Journal of Vision, 13(8):25, 1–13. Received September 28, 2012; published July 25, 2013. © 2013 ARVO

world-centered reference frame. The former allows for the possibility that the local optical motion of the moving object in the optic flow field is sufficient to guide locomotion. For example, the leftward drift of the moving object in Figure 1A specifies that the object will pass in front of the observer if current speed and direction are maintained. If the object is a target to be intercepted, the observer should increase speed and/or turn to the left. Conversely, if the object is an obstacle to be avoided and is not laterally drifting in the optic flow field, it is on a collision course and evasive action is called for. This strategy and minor variations of it for using optical motion have been proposed as accounts of collision detection, obstacle avoidance, and interception in both humans (Chardenon, Montagne, Laurent, & Bootsma, 2005; Cutting, Vishton, & Braren, 1995; Fajen & Warren, 2004, 2007; Lenoir, Musch, Thiery, & Savelsbergh, 2002; Ni & Andersen, 2008; Rushton & Allison, 2013; Rushton, Harris, Lloyd, & Wann, 1998) and nonhuman animals (Collett & Land, 1978; Lanchester & Mark, 1975; Olberg, Worthington, & Venator, 2000).

Figure 1. Optic flow field and decomposition into self-motion and object-motion components. (A) Optic flow field generated by an observer moving over a ground surface and an object (yellow dot) moving from right to left. (B) Component of optic flow due to self-motion independent of object motion. (C) Component of optic flow due to object motion independent of self-motion. The optic flow field in (A) is the vector sum of the self-motion (B) and object-motion (C) components. From Fajen, B. R., & Matthis, J. S. (2013). Visual and non-visual contributions to the perception of object motion during self-motion. PLoS One, 8(2), used under a Creative Commons Attribution License.
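The vector-sum decomposition described above can be made concrete with a small numerical sketch. Because perspective projection is linear in velocity, the image motion produced by the relative velocity (object minus observer) splits exactly into a self-motion component and an object-motion component. All positions, speeds, and variable names below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def image_velocity(p, p_dot):
    """Image velocity of a point under pinhole projection x = X/Z."""
    X, Z = p
    X_dot, Z_dot = p_dot
    return (X_dot * Z - X * Z_dot) / Z**2

# Hypothetical numbers (m, m/s): an object ahead and to the right,
# crossing leftward, while the observer walks straight forward.
p = np.array([1.5, 5.0])        # object position relative to the observer (X, Z)
v_obs = np.array([0.0, 1.4])    # observer velocity in world coordinates
v_obj = np.array([-0.8, 0.0])   # object velocity in world coordinates

total = image_velocity(p, v_obj - v_obs)   # full optic flow vector (cf. Fig. 1A)
self_comp = image_velocity(p, -v_obs)      # self-motion component (cf. Fig. 1B)
obj_comp = image_velocity(p, v_obj)        # object-motion component (cf. Fig. 1C)

# Projection is linear in velocity, so the flow is exactly the vector sum
# of the self-motion and object-motion components:
assert np.isclose(total, self_comp + obj_comp)
```

Note that `total` depends only on the relative velocity `v_obj - v_obs`, which is why the same local optical motion can arise from many different combinations of observer and object motion.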
Because the object's lateral motion in the optic flow field reflects the relative motion between the object and the observer, such models imply that interception and obstacle avoidance are guided by object motion perceived in observer coordinates. Alternatively, guiding locomotion in the presence of moving objects may require recovering the object-motion component of optic flow, that is, the component that reflects the motion of objects in world coordinates independent of the motion of the observer (Fajen & Matthis, 2011). Because the optic flow field is influenced by both the motion of the observer and the motion of other objects, recovering the object-motion component requires factoring out the influence of self-motion (Wallach, 1987). This process is known as flow parsing (Rushton & Warren, 2005; Warren & Rushton, 2009a). Studies of flow parsing demonstrate that humans are capable of recovering the object-motion component and perceiving object motion in world coordinates (Matsumiya & Ando, 2009). However, it remains unclear whether this process actually plays any role in visually guided interception and obstacle avoidance. Models based on the lateral motion of the object in the optic flow field suggest that flow parsing is superfluous because the lateral optical motion is sufficient to guide locomotion. Yet there are several important aspects of interception and avoidance of moving objects that these models cannot capture (Fajen, 2013; Fajen & Matthis, 2013): (a) they treat objects and the observer as points without physical extent (but see Rushton, Wen, & Allison, 2002 for an attempt to address this problem), (b) they ignore the fact that there are limits to how fast one can move, and (c) they offer no account of how speed and direction of locomotion are coordinated during interception and obstacle avoidance. An alternative model presented by Fajen and Matthis (Fajen, 2013; Fajen & Matthis, 2013) provides a basis for

addressing these limitations, but it is based on object motion in world coordinates and therefore requires the visual system to recover the object-motion component of optic flow. The primary aim of the present study was to test two competing hypotheses about the perception of object motion during obstacle avoidance: humans rely on information in optic flow, which reflects object motion in observer coordinates (Hypothesis 1), versus humans must recover object motion in world coordinates (Hypothesis 2). We considered two versions of Hypothesis 2: one that relies entirely on visual self-motion information to recover object motion in world coordinates (Hypothesis 2A) and one that relies on both visual and nonvisual self-motion information (Hypothesis 2B).

Figure 2. Screenshot and task. (A) Screenshot of the virtual environment viewed through the HMD. (B) Plan view of the observer moving straight ahead and the object moving from right to left toward an unmarked location 3, 4, or 5 m from the home location. (C) Lateral shift manipulation applied in Session A-Shift and Session B-Shift trials. The observer's position in the virtual environment was shifted to the left by 20% of his or her forward displacement.

Figure 3. Schematic diagram of the experimental design. The four main quadrants represent trials with 0% lateral shift (red) and trials with 20% lateral shift (blue) in Sessions A and B. Session A comprised 120 trials with 0% lateral shift (solid red) and 24 randomly interspersed catch trials with 20% lateral shift (checkered blue). Session B comprised 120 trials with 20% lateral shift (solid blue) and 24 randomly interspersed catch trials with 0% lateral shift (checkered red).

Choosing routes around moving obstacles

Subjects performed a route decision task in an ambulatory virtual environment that was viewed through a head-mounted display (Figure 2A).
They walked straight ahead from a home position while a virtual obstacle moved from right to left for 1.4 s, at which time it disappeared (Figure 2B). Subjects quickly judged whether they could have avoided the obstacle by passing in front of it before it reached their locomotor path. They were instructed to base their judgments on whether they could have passed in front if they had been allowed to walk as quickly as possible but not run.¹ The obstacle moved along one of 15 unique trajectories, which were defined by two factors: the location along an imaginary line extending forward from the subjects' initial position toward which the obstacle moved (indicated in Figure 2B), and the amount of time it would have taken for the obstacle to reach that point had it not disappeared (i.e., time-to-crossing [TTC]). The top-left quadrant of Figure 3 shows the specific values of location and TTC. The trajectories were chosen to yield a range of responses varying from easily passable even at slow walking speeds (i.e., when the obstacle moved toward a nearby location and TTC was long) to definitely not passable even at a fast walking speed (i.e., when the obstacle moved toward a distant location and TTC was short).
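The two design factors fully determine an obstacle trajectory: the obstacle travels at constant velocity from its start position to the crossing point on the subject's path, arriving there TTC seconds after motion onset. A hypothetical reconstruction, with illustrative numbers rather than the paper's design values:

```python
import numpy as np

# Illustrative reconstruction of one obstacle trajectory from the two design
# factors. Coordinates are (x, z) in meters: x right of the midline, z depth.
start = np.array([1.75, 5.75])      # obstacle start: right of midline, in depth
crossing_location = 4.0             # crossing point: 4 m ahead on the midline
ttc = 3.0                           # reaches the midline 3 s after motion onset

crossing_point = np.array([0.0, crossing_location])
velocity = (crossing_point - start) / ttc        # constant obstacle velocity

# The obstacle is visible for only 1.4 s before it disappears:
pos_at_disappearance = start + 1.4 * velocity
time_remaining = ttc - 1.4          # time left to reach the midline (unseen)
```

With these assumed values the obstacle vanishes 1.6 s before crossing, comparable to the 0.9 to 1.7 s range reported for the experiment.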

Each trajectory was repeated eight times for a total of 120 trials. In addition, there were 24 randomly interspersed catch trials in which subjects' position in the virtual environment was laterally shifted to the left on each frame by 20% of their forward displacement, which corresponds to a shift of approximately 11° in the locomotor path (see Figure 2C). The lateral shift manipulation was similar to that used by Warren, Kay, Zosh, Duchon, and Sahuc (2001). The initial conditions in catch trials matched a subset of the initial conditions in normal trials (see bottom-left quadrant of Figure 3). The 120 normal trials without a lateral shift and 24 randomly interspersed catch trials with a lateral shift comprised Session A (left column of Figure 3). Subjects also completed a second session (Session B) within 4 days of Session A. The order of sessions was counterbalanced across subjects. Session B was identical to Session A with two exceptions. First, there were 120 trials with a 20% lateral shift and 24 randomly interspersed catch trials with a 0% lateral shift (i.e., the reverse of Session A). Second, the set of initial conditions used in catch trials in the two sessions differed (compare initial conditions for trials indicated by checkered blocks in Figure 3). As we will explain in the Results section below, this design allowed us to determine whether route judgments were based on object motion in observer coordinates or world coordinates.

Methods

Subjects

Eleven subjects (six men, five women; mean age: 19.0 years) participated in the experiment. Subjects were compensated for participation with extra credit.

Equipment

The experiment was conducted in a 6.5 m × 9 m ambulatory virtual environment laboratory.
Subjects wore an NVIS nVisor SX111 stereoscopic HMD with a resolution of 1280 × 1024 pixels per eye and a wide diagonal field of view (NVIS, Inc., Reston, VA). Head position and orientation were tracked using an InterSense IS-900 motion tracking system (InterSense, Billerica, MA). Data from the tracking system were used to update the position and orientation of the simulated viewpoint. The virtual environment was created using the Vizard Virtual Reality Toolkit (WorldViz LLC, Santa Barbara, CA) running on an Alienware Area-51 PC (Dell, Inc., Round Rock, TX).

Virtual environment and procedure

The virtual environment consisted of a green, grass-textured ground surface, a black sky, and an array of randomly distributed bamboo-textured posts (Figure 2A). Subjects began each trial by walking to a designated home location, which was a rectangular box in the virtual environment that changed color from translucent red to translucent yellow when the subject's head was inside the box. Subjects also turned to face an alignment marker, which appeared as a thin vertical line in the distance. Once they were properly positioned and aligned, they pressed a button on a handheld remote mouse, which triggered the appearance of a stationary, yellow obstacle (a cylinder 2.0 m tall × 0.1 m in diameter). The initial position of the obstacle varied randomly between 5.5 and 6.0 m in depth and between 1.5 and 2.0 m to the right of the midline. After a 0.5 s delay, the home box and alignment marker disappeared, the obstacle began moving leftward, and an auditory go signal cued subjects to begin walking. Subjects were instructed to walk straight ahead in the direction that they were facing. The trajectory of the obstacle was determined by the location along the imaginary line extending forward from the subjects' initial position toward which the obstacle moved and the amount of time it would have taken to arrive at that point (TTC).
The 15 conditions (3 locations × 5 TTCs) used in normal trials and the six conditions (3 locations × 2 TTCs) used in catch trials in both sessions are shown in Figure 3. The obstacle disappeared 1.4 s after it began moving, which was between 0.9 and 1.7 s before it reached the midline, depending on the value of TTC in that trial. Subjects pressed one of two buttons on the handheld mouse to indicate whether they would have avoided the obstacle by passing in front of it before it reached their locomotor path or passing behind it after it crossed their locomotor path. Judgments had to be entered within a response window that began 1.0 s after the trial began and lasted 1.6 s, or else the trial was aborted and repeated later in the session. Odd-numbered and even-numbered trials were performed while walking in opposite directions in the lab. Therefore, after subjects entered the response, the start box for the next trial appeared in front of them. Subjects walked into the box and turned 180° to face the alignment marker in preparation for the next trial. The distance between the start boxes for odd-numbered and even-numbered trials (and hence the approximate distance that subjects walked between trials) was 3 m. The virtual environment remained visible as subjects walked to the start box, with the same value of lateral shift (0% or 20%) that was applied before the response was entered.
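The per-frame lateral shift can be sketched as a simple mapping from the tracked head position to the rendered viewpoint. This is a minimal hypothetical reconstruction, not the authors' code; the function name and coordinate convention are assumptions.

```python
import math

# Minimal sketch of the lateral shift manipulation: on shift trials the
# rendered viewpoint is displaced leftward by 20% of the subject's forward
# displacement since trial onset.
SHIFT_GAIN = 0.20

def virtual_position(real_x, real_z, start_z, shift_on):
    """Map the tracked head position to the rendered viewpoint position."""
    forward = real_z - start_z                            # forward displacement (m)
    offset = -SHIFT_GAIN * forward if shift_on else 0.0   # negative x = leftward
    return real_x + offset, real_z

# Walking 3 m straight ahead under the shift yields a 0.6 m leftward offset,
# i.e., a locomotor path rotated by atan(0.2), roughly 11 degrees:
x, z = virtual_position(0.0, 3.0, 0.0, shift_on=True)
assert math.isclose(x, -0.6)
assert round(math.degrees(math.atan(SHIFT_GAIN)), 1) == 11.3
```

Because the offset grows with forward displacement rather than being a fixed translation, the manipulation rotates the path through the virtual environment while leaving the real-world walking direction unchanged.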

Figure 4. Summary of results. (A) and (C) show the subset of conditions used for the analyses shown in (B) and (D), respectively. Error bars represent ±1 SE and asterisks denote statistically significant differences.

Before starting the experiment, subjects completed a short warm-up session designed to familiarize themselves with moving through the virtual environment and performing the judgment task. In Session B, the warm-up session was completed with the lateral shift to ensure that subjects were properly adapted to the conditions encountered in normal trials in that session. The Institutional Review Board at Rensselaer Polytechnic Institute approved the experimental protocol.

Data analyses

The dependent measure was the percentage of trials in which subjects judged that they would pass in front of the object, averaged across various subsets of conditions, as explained in the following text. We refer to this as % passable. The logic of the analyses assumed that there were no systematic differences in walking behavior in the real world across sets of trials from different conditions. To confirm this assumption, we measured subjects' head position in real-world coordinates 1 s after the trial began. We did not consider head position after 1 s because this was the moment at which subjects could begin to enter their responses; because we were concerned with walking trajectories up until responses were entered, 1 s was the last possible moment at which we could be certain that subjects had not yet entered a response. For each analysis considered (see also Figure 4), we calculated the difference in mean head position at 1 s between the

two sets of trials. The mean differences along the x-axis were 1.4 cm (Session A-No Shift/Session B-No Shift), 2.3 cm (Session A-Shift/Session B-Shift), 2.7 cm (Session B-Shift/Session B-No Shift), and 4.1 cm (Session A-No Shift/Session B-Shift), and along the z-axis were 1.2 cm (Session A-No Shift/Session B-No Shift), 0.7 cm (Session A-Shift/Session B-Shift), 1.0 cm (Session B-Shift/Session B-No Shift), and 0.2 cm (Session A-No Shift/Session B-Shift). These differences indicate that subjects followed nearly identical walking trajectories in each set of trials. The fact that walking trajectories were so similar in all four conditions may seem inconsistent with previous studies of locomotor adaptation. For example, subjects in Bruggeman, Zosh, and Warren (2007) walked to a visible goal while the focus of expansion was offset from the actual direction of locomotion, similar to the lateral shift manipulation used in the present study. Within a few trials, they adapted to the offset such that they followed a different walking trajectory to the goal. Given that the lateral shift was present in ~83% of trials in Session B, one might expect walking trajectories in that session to differ from those in Session A. As we already explained, this was not the case. The reason is that in the present study, subjects did not walk to a visible target but rather walked straight ahead in the direction that they were already facing, which was the same in all four conditions as determined by the alignment marker and start box. Therefore, the similarity of walking trajectories across conditions was not inconsistent with previous studies or with adaptation to the lateral shift.
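The dependent measure and the paired comparisons used below can be sketched as follows. The per-subject values here are hypothetical placeholders, not the study's data; only the structure (one % passable value per subject, paired t-test with df = 10) mirrors the analyses.

```python
import numpy as np

# Illustrative sketch of the dependent measure and test. "% passable" is the
# percentage of matched trials judged "pass in front," one value per subject,
# compared across two conditions with a paired t-test. Numbers are hypothetical.
pct_a = np.array([75., 67., 71., 79., 83., 63., 71., 75., 67., 79., 71.])  # e.g., Session A-No Shift
pct_b = np.array([58., 54., 63., 67., 71., 50., 58., 63., 54., 67., 58.])  # e.g., Session B-No Shift

d = pct_a - pct_b                                   # paired differences (n = 11)
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))    # paired t statistic, df = 10
```

The paired design removes between-subject variability in overall willingness to pass in front, which is why each subject contributes one value to each condition.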
Results

Hypothesis 1: Object motion is perceived in observer coordinates

According to Hypothesis 1, observers base their judgments on object motion perceived in observer coordinates using information that is directly available in the optic flow field. Therefore, Hypothesis 1 predicts that judgments should be similar in conditions in which the visual information that is available to subjects as they move is the same. We tested this prediction by comparing No Shift trials in Session A with No Shift trials in Session B. For this analysis, we focused on the subset of Session A-No Shift trials with initial conditions that matched those in Session B-No Shift trials (see dark red blocks in upper left quadrant of Figure 4A). Therefore, differences in judgments could not be attributed to differences in initial conditions. We also confirmed that there were no systematic differences in walking behavior across conditions (see Data analyses section of Methods). As such, we can assume that for a pair of trials from different sets (i.e., one Session A-No Shift trial and one Session B-No Shift trial) with identical initial conditions (i.e., the same location and initial TTC), the visual information that was available to subjects as they moved was effectively the same. If judgments were based on information in optic flow that reflects object motion in observer coordinates, judgments in Session A-No Shift trials should be similar to judgments in Session B-No Shift trials. The findings were inconsistent with this prediction (see solid and checkered red bars in Figure 4B). Subjects were significantly more likely to perceive that they could pass in front in Session A-No Shift trials compared with Session B-No Shift trials, t(10) = 4.04, p < 0.01. Hypothesis 1 makes a similar prediction about Session A-Shift trials and the subset of Session B-Shift trials with the same initial conditions (solid and checkered blue blocks in Figure 4A).
The same lateral shift manipulation that was applied in Session A-Shift trials was also applied in Session B-Shift trials, and walking behavior was nearly identical in these two conditions. Therefore, the visual information that was available to subjects in these two conditions was effectively the same. Contrary to the predictions of Hypothesis 1, subjects were significantly more likely to perceive that they could pass in front in Session A-Shift trials compared with Session B-Shift trials, t(10) = 3.74, p < 0.01 (see solid and checkered blue bars in Figure 4B). Taken together, the first set of analyses demonstrates that judgments differed under conditions in which object motion in observer coordinates was the same, which is inconsistent with the predictions of Hypothesis 1. Next, we tested Hypothesis 2, which predicts that observers base their judgments on perceived object motion in world coordinates, which involves flow parsing. We considered two versions of this hypothesis that differ in terms of the contributions of visual and nonvisual self-motion information to flow parsing.

Hypothesis 2A: Object motion is perceived in world coordinates and is recovered using visual self-motion information

The first version of Hypothesis 2, which we labeled Hypothesis 2A, states that the self-motion component of optic flow (i.e., the component that must be factored out) is based entirely on visual self-motion information with no contribution of nonvisual information. Several previous studies have investigated the influence of visual self-motion information by presenting stationary observers with stimuli simulating combined self-motion and object motion (Matsumiya & Ando, 2009; Royden

& Connors, 2010; Royden & Moore, 2012; Royden, Wolfe, & Klempen, 2001; Rushton, Bradshaw, & Warren, 2007; Rushton & Warren, 2005; Warren & Rushton, 2007, 2008, 2009b). Perceived object motion was influenced by global optic flow simulating self-motion, indicating that visual information can be used to factor out the self-motion component of optic flow. Here we asked whether humans rely entirely on visual self-motion information even when self-motion is real and actively generated, as it was in the present study; that is, when nonvisual self-motion information is also available. Like Hypothesis 1, Hypothesis 2A predicts that judgments should be similar in normal trials in one session and catch trials in the opposite session (i.e., the two comparisons in Figure 4A), because both the local optical motion of the object and the global optic flow specifying self-motion were the same in these two conditions. As already noted, subjects were significantly more likely to perceive that they could pass in front in Session A-No Shift trials compared with Session B-No Shift trials, and in Session A-Shift trials compared with Session B-Shift trials (Figure 4B). Thus, the analyses of normal trials in one session and catch trials in the opposite session do not support the predictions of Hypothesis 2A either.

Hypothesis 2B: Object motion is perceived in world coordinates and is recovered using visual and nonvisual self-motion information

These analyses are, however, consistent with the second version of this hypothesis (Hypothesis 2B), which states that observers rely on perceived object motion in world coordinates and use both visual and nonvisual self-motion information for flow parsing. We illustrate this point for Session A-No Shift trials and Session B-No Shift trials (i.e., the conditions indicated by red bars in Figure 4A).
Because there was no lateral shift in either set of trials and because walking behavior was nearly identical in both trial types, the optic flow field in pairs of trials with matching initial conditions was effectively the same. However, Session B-No Shift trials were randomly interspersed within a larger set of Session B-Shift trials with a 20% leftward shift. Consequently, as subjects walked through the virtual environment in Session B-Shift trials, nonvisual self-motion information generated by walking in one direction was accompanied by global optic flow corresponding to walking approximately 11° to the left. Because the lateral shift was applied in the majority of trials in Session B, subjects should adapt to this change in the relation between nonvisual self-motion information and global optic flow. When adaptation occurred, the perceived direction of self-motion based on nonvisual information shifted leftward toward the visually specified direction of locomotion (similar to the effect reported in Bruggeman et al., 2007). This is illustrated in Figure 5, which shows how the perceived direction of self-motion based on nonvisual information shifted as the observer adapted to the leftward-shifted flow field. Although the lateral shift manipulation was not applied in Session B-No Shift trials, the large majority (83%) of trials in Session B were Session B-Shift trials with the lateral shift. As such, the effects of adaptation on perceived direction of self-motion based on nonvisual information should carry over to Session B-No Shift trials as well. Even though the optically specified direction of self-motion was straight ahead in Session B-No Shift trials, perceived direction of self-motion based on nonvisual information was shifted to the left (see right side of Figure 5). Thus, in Session A-No Shift trials, both visual and nonvisual information about the direction of self-motion were consistent with the actual direction of self-motion.

Figure 5. Adaptation of perceived direction of self-motion based on nonvisual information in Session B. Over repeated Session B-Shift trials with the optic flow field shifted leftward, perceived direction based on nonvisual information (NV) shifted leftward toward the optically specified direction of self-motion (V). The effects of adaptation carried over to Session B-No Shift trials, in which the lateral shift was not applied.

In contrast, in Session B-No Shift trials, visual information was consistent with the actual direction of self-motion but the perceived direction based on nonvisual information was perturbed to the left. If nonvisual information contributes to flow parsing, then the component of optic flow that is attributed to self-motion and factored out should differ in Session A-No Shift trials compared with Session B-No Shift trials. The logic of this prediction is illustrated in Figure 6A through D, which depict the flow parsing process for Session A-No Shift trials and Session B-No Shift trials, assuming that subjects relied at least partly on nonvisual self-motion information. Figure 6A and C depict the optic flow fields on Session A-No Shift and Session B-No Shift trials, respectively. The flow fields are effectively the same because there was no lateral shift and because walking behavior was so similar. Note that the perceived directions of self-motion based on visual and nonvisual information (indicated by the white bars labeled V and NV, respectively) are aligned in Figure 6A but not in Figure 6C due to adaptation to the lateral shift in Session B-Shift trials. In Figure 6B and D, which depict the flow parsing process, the local optical motion of the object (solid yellow lines) is also the same. However, if subjects adapted to the lateral shift in Session B and if they relied partly on nonvisual self-motion information for flow parsing, then the component attributed to self-motion (dashed lines) would have more lateral motion. The remaining component, which was attributed to object motion (dotted line), points farther to the left.
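The logic above can be sketched at the vector level: flow parsing recovers object motion by subtracting the estimated self-motion component from the object's local optical motion, so a shifted self-motion estimate changes the recovered object motion. All numbers below are illustrative assumptions.

```python
# Vector-level sketch of flow parsing (illustrative numbers). Lateral optical
# motion only; negative = leftward. The recovered object-motion component is
# the local optical motion minus the component attributed to self-motion.
local_motion = -0.076      # object's lateral optical drift, same in both sessions

# Session A-No Shift: visual and nonvisual heading estimates agree, and the
# flow attributed to self-motion at the object's location is, say:
self_est_aligned = 0.084
obj_aligned = local_motion - self_est_aligned       # recovered object motion

# Session B-No Shift: adapted nonvisual information signals a leftward-shifted
# heading, so more rightward flow at the object's location is attributed to
# self-motion:
self_est_adapted = 0.120
obj_adapted = local_motion - self_est_adapted       # points farther left

# The recovered object motion is more leftward, so the object appears to move
# leftward faster and should be judged less passable:
assert obj_adapted < obj_aligned < 0
```

Note that the same local optical motion yields different recovered object motion purely because the self-motion estimate differs, which is the predicted signature of a nonvisual contribution to flow parsing.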
Therefore, if nonvisual information contributed to flow parsing, then subjects should perceive that the object was moving leftward at a faster rate in Session B-No Shift trials and should be less likely to perceive that the object was passable, when compared with Session A-No Shift trials. As already explained and as illustrated by the red bars in Figure 4B, the results were consistent with this prediction, providing support for Hypothesis 2B.

Figure 6. Flow parsing with and without lateral shift. Optic flow fields with moving object for the no lateral shift (A, C) and 20% leftward shift (E) conditions. (B, D, F) show the parsing of the local optical motion of the moving object (solid line) into self-motion (dashed lines) and object-motion (dotted lines) components. V and NV indicate the perceived direction of self-motion based on visual and nonvisual information, respectively.

As the previous analysis shows, Hypothesis 2B explains why judgments can be different even under conditions in which the available visual information is the same. The next analysis tested a strong prediction of Hypothesis 2B: that under certain circumstances, judgments can be similar in trials with and without the lateral shift despite the available visual information being quite different. Understanding this prediction requires a brief discussion of the lateral shift manipulation and how it affected object motion in observer coordinates and world coordinates. Recall that when the lateral shift manipulation was applied, subjects' movements through the virtual environment were shifted to the left. Therefore, the motion of the object in observer coordinates differed in these two conditions. This difference is illustrated in Figure 6A and E, which depict the unparsed optic flow fields with 0% and 20% lateral shift, respectively. Nonetheless, because the lateral shift manipulation affected the movement of the observer and not the movement of the object, object motion in world coordinates was the same. (Note that object motion is the same in Figure 2B and C, which depict observer and object motion in world coordinates without and with the lateral shift manipulation, respectively.) Therefore, if route decisions are based on object motion perceived in world coordinates, then it should be possible for judgments in trials with and without the lateral shift to be similar as long as the available self-motion information is sufficient for accurate flow parsing. This is illustrated in Figure 6B and F, which show the flow parsing process with 0% and 20% lateral shift, respectively.
The local optical motion of the object differed in these two conditions (compare solid yellow lines). However, if subjects accurately perceived their self-motion in the virtual environment, then the component of optic flow that was attributed to self-motion should also differ (compare dashed lines). In particular, the estimated self-motion component should have more lateral motion when the lateral shift was applied. Therefore, although the local optical motion in Figure 6B (with no lateral shift) differs from that in Figure 6F (with the lateral shift), the difference should be canceled out by the difference in the estimated self-motion component. The resultant vector, which reflects the object-motion component, should be the same (compare dotted lines in Figure 6B and F). Next, let us consider the conditions in which this prediction (i.e., similar judgments with and without the lateral shift) should hold. According to Hypothesis 2B, observers rely on both visual and nonvisual self-motion information. Therefore, subjects should accurately recover the object-motion component only when perceived self-motion based on visual information and perceived self-motion based on nonvisual information are consistent with each other. Such was the case in Session A-No Shift trials because both visual and nonvisual information were aligned with the actual direction of self-motion (see white bars in Figure 6B). Likewise, perceptions of self-motion based on visual and nonvisual information were consistent in Session B-Shift trials (i.e., both were perturbed to the left, assuming that subjects adapted to the lateral shift; see white bars in Figure 6F). In contrast, in Session B-No Shift trials, visual self-motion information was aligned with the actual direction of self-motion, whereas perceived self-motion based on nonvisual information was perturbed to the left (see white bars in Figure 6D).
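The cancellation argument above reduces to a vector subtraction. Here is a minimal flow-parsing sketch (all numbers are hypothetical, in deg/s): the object-motion component is recovered by subtracting the component of optic flow attributed to self-motion from the object's local optical motion.

```python
import numpy as np

def parse_flow(local_optical_motion, est_self_motion_component):
    """Object-motion component = local optical motion - estimated self-motion component."""
    return np.asarray(local_optical_motion) - np.asarray(est_self_motion_component)

# No-shift condition (hypothetical values).
obj_no_shift = parse_flow([3.0, 1.0], [1.0, 1.0])

# Shift condition: the local optical motion gains extra lateral motion, but so
# does the estimated self-motion component, so the differences cancel and the
# recovered object-motion component is unchanged.
obj_shift = parse_flow([4.5, 1.0], [2.5, 1.0])

assert np.allclose(obj_no_shift, obj_shift)  # same recovered object motion
```

The prediction of similar judgments with and without the shift holds exactly when the estimated self-motion component tracks the manipulation, which in turn requires consistent visual and nonvisual self-motion estimates.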
Therefore, visual and nonvisual estimates of self-motion were consistent in Session A-No Shift trials, consistent in Session B-Shift trials, but in conflict in Session B-No Shift trials. If Hypothesis 2B is correct, then judgments on Session A-No Shift and Session B-Shift trials should be similar to each other but should differ from judgments in Session B-No Shift trials. To test this prediction, we used the subset of initial conditions that were common to all three sets of trials (see Figure 4C, D). As predicted, judgments on Session A-No Shift trials were significantly different from judgments on Session B-No Shift trials, t(10) = 4.04, p < 0.01, but not significantly different from judgments in Session B-Shift trials, t(10) = 1.96, ns.²

Discussion

The comparisons in Figure 4C, D provide the crucial test for whether object motion is perceived in observer coordinates or world coordinates. The triad of conditions (Session A-No Shift, Session B-Shift, and Session B-No Shift) includes one pairing in which object motion was the same in observer coordinates (e.g., Session A-No Shift and Session B-No Shift) and another pairing in which object motion was the same in world coordinates but different in observer coordinates (e.g., Session A-No Shift and Session B-Shift). The fact that the pairing with the similar judgments is Session A-No Shift trials and Session B-Shift trials provides compelling evidence that when people choose routes around moving obstacles, they rely on information that reflects object motion in world coordinates rather than in observer coordinates. The results also provide strong evidence that the process of recovering object motion in world coordinates relies on both visual and nonvisual self-motion information. In previous studies of flow parsing

(Matsumiya & Ando, 2009; Royden & Connors, 2010; Royden & Moore, 2012; Royden et al., 2001; Rushton et al., 2007; Rushton & Warren, 2005; Warren & Rushton, 2007, 2008, 2009a, 2009b), reliable nonvisual self-motion information was not available because self-motion was simulated and viewed on a computer monitor or projection screen by a stationary observer. When self-motion is real and actively generated, as it was in the present study, nonvisual information, which is known to contribute to the perception of self-motion (Campos, Byrne, & Sun, 2010; Harris, Jenkin, & Zikovitz, 2000; Mittelstaedt & Mittelstaedt, 2001), is also available. Our findings show that nonvisual information also plays a role in flow parsing. This highlights the multisensory nature of the flow parsing problem (Calabro, Soto-Faraco, & Vaina, 2011; Dyde & Harris, 2008; MacNeilage, Zhang, DeAngelis, & Angelaki, 2012) and adds to the growing body of literature demonstrating that what people perceive can be affected by their own self-generated movement (Wexler & van Boxtel, 2005).

On the possibility that adaptation affected perceived egocentric direction

Our interpretation of the effects is based on the assumption that the lateral shift manipulation in Session B shifted the perceived direction of self-motion based on nonvisual information and thereby affected the component of optic flow that was attributed to self-motion. However, one might wonder whether adaptation to the lateral shift manipulation affected the perceived egocentric direction of the obstacle and whether this could also explain the results. There are two reasons why we think the findings cannot be attributed to a change in the perceived egocentric direction of the object.
First, although perceived straight ahead can be realigned using prisms (e.g., Herlihey & Rushton, 2012; Redding & Wallace, 1997), laterally displacing optic flow in a virtual environment (as in the present study) does not affect perceived straight ahead (Bruggeman & Warren, 2010; Bruggeman et al., 2007). Second, even if the lateral shift manipulation had affected perceived straight ahead, it would have shifted it in the same direction as the shift in optic flow (i.e., to the left). The obstacle, which always appeared on the right, would have been perceived as lying farther to the right than it actually was. This would have biased subjects toward perceiving that they could pass in front of the obstacle. However, the actual effect is in the opposite direction. As shown in Figure 4B, subjects were less likely to perceive that they could pass in front when they were adapted to the lateral shift (i.e., on Session B-No Shift trials) compared to when they were not (i.e., on Session A-No Shift trials). Therefore, the findings cannot be attributed to an effect of the lateral shift manipulation on the perceived egocentric direction of the obstacle.

Guiding locomotion in the presence of moving objects

The findings of this study help to bridge a gap in the literature between studies of the perception of object motion during self-motion on the one hand and studies of visually guided locomotion on the other. The former (Matsumiya & Ando, 2009; Royden & Connors, 2010; Royden & Moore, 2012; Rushton & Warren, 2005; Warren & Rushton, 2007, 2008, 2009a, 2009b) establish that moving observers can perceive object motion in world coordinates and provide details about the mechanisms that underlie this process. The latter (Chardenon et al., 2005; Cutting et al., 1995; Fajen & Warren, 2004, 2007; Lenoir et al., 2002; Ni & Andersen, 2008) generally assume that collision detection, obstacle avoidance, and interception are based on object motion perceived in observer coordinates.
Thus, one might conclude that the perception of object motion during self-motion and the visual guidance of locomotion in the presence of moving objects rely on different reference frames. Our findings suggest that this is not the case: both rely on object motion perceived in a world-centered reference frame, and both require flow parsing to recover object motion independent of self-motion. Why do observers need to perceive object motion in world coordinates to guide locomotion in the presence of moving objects? After all, the lateral optical motion of a moving object, which reflects object motion in observer coordinates, specifies whether one will collide with a moving object if current locomotor speed and direction are maintained. The answer is that there is more to interception and obstacle avoidance than knowing whether one's current locomotor velocity will result in a collision. Observers also need to know how fast they would need to move to intercept, pass in front of, or pass behind a moving object. More specifically, observers need to know how fast to move independent of how fast they are currently moving, and they need to know how fast to move in relation to how fast they are capable of moving. To illustrate this point, consider a pedestrian moving at a comfortable walking speed and an obstacle (e.g., a bicycle) moving from right to left across the pedestrian's future path. Even if the pedestrian would pass behind the bicycle if her current speed were maintained, she may choose to speed up to pass in front of the obstacle (e.g., if she is in a hurry). However, if the speed needed to pass in front is faster than the speed that the pedestrian is capable of moving (or willing to move), then she should not attempt to pass in front.
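The pedestrian-and-bicycle scenario can be made concrete with a worked example (all values assumed, not from the paper): the minimum speed needed to pass in front is the speed at which the pedestrian just reaches the crossing point as the obstacle arrives, and the route decision compares that speed with the pedestrian's maximum (or preferred) speed. Both quantities are defined in world coordinates, independent of how fast the pedestrian happens to be moving.

```python
def min_speed_to_pass_in_front(dist_to_crossing_m, obstacle_dist_m, obstacle_speed_mps):
    """Speed needed to reach the crossing point just as the obstacle arrives."""
    time_until_obstacle_arrives = obstacle_dist_m / obstacle_speed_mps
    return dist_to_crossing_m / time_until_obstacle_arrives

# Pedestrian is 6 m from the crossing point; a bicycle 8 m away approaches the
# pedestrian's path at 4 m/s, so it arrives in 2 s.
required = min_speed_to_pass_in_front(6.0, 8.0, 4.0)  # 6 m in 2 s -> 3.0 m/s

MAX_WALKING_SPEED = 2.5  # m/s, an assumed capability limit
decision = "pass in front" if required <= MAX_WALKING_SPEED else "pass behind"
```

In this example the required speed (3.0 m/s) exceeds the assumed capability (2.5 m/s), so the sensible route is to pass behind, even though a faster pedestrian facing the identical optical situation could choose to pass in front.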

The decision to pass in front or pass behind requires the ability to perceive the minimum speed needed to pass in front in relation to the maximum speed that the observer is capable of moving (or willing to move). Although information about the sufficiency of one's current speed is available in the optic flow field (i.e., the combined self-motion plus object-motion components), information about how fast one needs to move is not. Instead, such information is found in the object-motion component of optic flow, that is, the component that reflects object motion in world coordinates (Fajen & Matthis, 2013). Therefore, the ability to perceive object motion in world coordinates, which was demonstrated in this study, plays an essential role in guiding locomotion in the presence of moving objects.

Keywords: optic flow, moving objects, obstacle avoidance, flow parsing, locomotion

Acknowledgments

This research was supported by a grant from the National Institutes of Health (1R01EY019317). The authors thank Kevin Todisco for creating the virtual environments used in this experiment.

Commercial relationships: none.
Corresponding author: Brett Fajen.
Email: fajenb@rpi.edu.
Address: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY.

Footnotes

1 In this particular study, the accuracy of judgments was less important than the effects of manipulations of self-motion information. Therefore, the experiment was not designed to measure the accuracy with which subjects judged whether they were capable of passing in front of the moving obstacle. However, the accuracy of judgments was measured in an earlier experiment in which subjects judged whether they could pass in front on some trials and actually attempted to pass in front on other trials (Fajen, Diaz, & Cramer, 2011).
In that experiment, there was a close match between judgments and actions, with no evidence of a systematic bias to overestimate or underestimate one's ability to pass in front.

2 Although the difference between Session A-No Shift trials and Session B-Shift trials was not statistically significant, there did appear to be a trend toward a higher percentage of passable judgments in Session B-Shift trials. This can be attributed to incomplete adaptation to the lateral shift in Session B. If subjects only partially adapted to the lateral shift, perceived self-motion based on nonvisual information would be shifted to the left, but by less than the optically specified shift of ~11.8°. The component of optic flow that is attributed to self-motion would have more lateral motion in Session B than in Session A (as depicted by the dashed lines in Figure 6F and B). However, the difference in the component attributed to self-motion would be less than if subjects had completely adapted. Therefore, the difference in the local optical motion between Session A-No Shift trials and Session B-Shift trials would not have been completely canceled out by the difference in the component attributed to self-motion, resulting in a small difference in the perceived motion of the object. Thus, the possible trend toward a higher percentage of passable judgments on Session B-Shift trials could be explained by incomplete adaptation to the lateral shift manipulation.

References

Bruggeman, H., & Warren, W. H. (2010). The direction of walking but not throwing or kicking is adapted by optic flow. Psychological Science, 21(7).
Bruggeman, H., Zosh, W., & Warren, W. H. (2007). Optic flow drives human visuo-locomotor adaptation. Current Biology, 17(23).
Calabro, F. J., Soto-Faraco, S., & Vaina, L. M. (2011). Acoustic facilitation of object movement detection during self-motion. Proceedings of the Royal Society B: Biological Sciences, 278(1719).
Campos, J. L., Byrne, P., & Sun, H. J. (2010).
The brain weights body-based cues higher than vision when estimating walked distances. European Journal of Neuroscience, 31(10).
Chardenon, A., Montagne, G., Laurent, M., & Bootsma, R. J. (2005). A robust solution for dealing with environmental changes in intercepting moving balls. Journal of Motor Behavior, 37(1).
Collett, T. S., & Land, M. F. (1978). How hoverflies compute interception courses. Journal of Comparative Physiology, 125.
Cutting, J. E., Vishton, P. M., & Braren, P. A. (1995). How we avoid collisions with stationary and moving objects. Psychological Review, 102(4).
Dyde, R. T., & Harris, L. R. (2008). The influence of retinal and extra-retinal motion cues on perceived

object motion during self-motion. Journal of Vision, 8(14):5, 1-10. [PubMed] [Article]
Fajen, B. R. (2013). Guiding locomotion in complex and dynamic environments. Frontiers in Behavioral Neuroscience, manuscript accepted for publication.
Fajen, B. R., Diaz, G., & Cramer, C. (2011). Reconsidering the role of movement in perceiving action-scaled affordances. Human Movement Science, 30(3).
Fajen, B. R., & Matthis, J. M. (2013). Visual and nonvisual contributions to the perception of object motion during self-motion. PLoS One, 8(2).
Fajen, B. R., & Matthis, J. S. (2011). Direct perception of action-scaled affordances: The shrinking gap problem. Journal of Experimental Psychology: Human Perception and Performance, 37(5).
Fajen, B. R., & Warren, W. H. (2007). Behavioral dynamics of intercepting a moving target. Experimental Brain Research, 180(2).
Fajen, B. R., & Warren, W. H. (2004). Visual guidance of intercepting a moving target on foot. Perception, 33(6).
Harris, L. R., Jenkin, M., & Zikovitz, D. C. (2000). Visual and non-visual cues in the perception of linear self motion. Experimental Brain Research, 135(1).
Herlihey, T. A., & Rushton, S. K. (2012). The role of discrepant retinal motion during walking in the realignment of egocentric space. Journal of Vision, 12(3):4, 1-11. [PubMed] [Article]
Lanchester, B. S., & Mark, R. F. (1975). Pursuit and prediction in tracking of moving food by a teleost fish (Acanthaluteres spilomelanurus). Journal of Experimental Biology, 63(3).
Lenoir, M., Musch, E., Thiery, E., & Savelsbergh, G. J. (2002). Rate of change of angular bearing as the relevant property in a horizontal interception task during locomotion. Journal of Motor Behavior, 34(4).
MacNeilage, P. R., Zhang, Z., DeAngelis, G. C., & Angelaki, D. E. (2012). Vestibular facilitation of optic flow parsing. PLoS One, 7(7).
Matsumiya, K., & Ando, H. (2009).
World-centered perception of 3D object motion during visually guided self-motion. Journal of Vision, 9(1):15, 1-13. [PubMed] [Article]
Mittelstaedt, M. L., & Mittelstaedt, H. (2001). Idiothetic navigation in humans: Estimation of path length. Experimental Brain Research, 139(3).
Ni, R., & Andersen, G. J. (2008). Detection of collision events on curved trajectories: Optical information from invariant rate-of-bearing change. Attention, Perception & Psychophysics, 70(7).
Olberg, R. M., Worthington, A. H., & Venator, K. R. (2000). Prey pursuit and interception in dragonflies. Journal of Comparative Physiology A: Sensory, Neural, and Behavioral Physiology, 186(2).
Redding, G. M., & Wallace, B. (1997). Adaptive spatial alignment. Mahwah, NJ: Erlbaum.
Royden, C. S., & Connors, E. M. (2010). The detection of moving objects by moving observers. Vision Research, 50(11).
Royden, C. S., & Moore, K. D. (2012). Use of speed cues in the detection of moving objects by moving observers. Vision Research, 59.
Royden, C. S., Wolfe, J. M., & Klempen, N. (2001). Visual search asymmetries in motion and optic flow fields. Perception & Psychophysics, 63(3).
Rushton, S. K., & Allison, R. S. (2013). Biologically-inspired heuristics for human-like walking trajectories toward targets and around obstacles. Displays, 34(2).
Rushton, S. K., Bradshaw, M. F., & Warren, P. A. (2007). The pop out of scene-relative object movement against retinal motion due to self-movement. Cognition, 105(1).
Rushton, S. K., Harris, J. M., Lloyd, M., & Wann, J. P. (1998). Guidance of locomotion on foot uses perceived target location rather than optic flow. Current Biology, 8.
Rushton, S. K., & Warren, P. A. (2005). Moving observers, relative retinal motion and the detection of object movement. Current Biology, 15(14), R542-R543.
Rushton, S. K., Wen, J., & Allison, R. S. (2002). Egocentric direction and the visual guidance of robot locomotion: Background, theory and implementation. In S.-W. Lee, H. H.
Bülthoff, T. A. Poggio, & C. Wallraven (Eds.), Biologically motivated computer vision: Lecture notes in computer science. Berlin, Heidelberg: Springer.
Wallach, H. (1987). Perceiving a stable environment


Representational Momentum Beyond Internalized Physics CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE Representational Momentum Beyond Internalized Physics Embodied Mechanisms of Anticipation Cause Errors in Visual Short-Term Memory Dirk Kerzel University of

More information

Visual & Auditory Skills Lab

Visual & Auditory Skills Lab Visual & Auditory Skills Lab Name: Score: Introduction This lab consists of a series of experiments that explore various perceptual, vision, and balance skills that help us understand how we perform motor

More information

The Horizontal Vertical Illusion: An Investigation in Context and Environment

The Horizontal Vertical Illusion: An Investigation in Context and Environment The Horizontal Vertical Illusion: An Investigation in Context and Environment, Theresa C. Cook, Lawrence Rosenblum Department of Psychology University of California, Riverside A B S T R A C T The Horizontal

More information

Quantifying the Coherence of Pedestrian Groups

Quantifying the Coherence of Pedestrian Groups Quantifying the Coherence of Pedestrian Groups Adam W. Kiefer (adam_kiefer@brown.edu) Stéphane Bonneaud (stephane_bonneaud@brown.edu) Kevin Rio (kevin_rio@brown.edu) William H. Warren (bill_warren@brown.edu)

More information

Look before you leap: Jumping ability affects distance perception

Look before you leap: Jumping ability affects distance perception Perception, 2009, volume 38, pages 1863 ^ 1866 doi:10.1068/p6509 LAST BUT NOT LEAST Look before you leap: Jumping ability affects distance perception David A Lessard, Sally A Linkenauger, Dennis R Proffitt

More information

(Visual) Attention. October 3, PSY Visual Attention 1

(Visual) Attention. October 3, PSY Visual Attention 1 (Visual) Attention Perception and awareness of a visual object seems to involve attending to the object. Do we have to attend to an object to perceive it? Some tasks seem to proceed with little or no attention

More information

Do you have to look where you go? Gaze behaviour during spatial decision making

Do you have to look where you go? Gaze behaviour during spatial decision making Do you have to look where you go? Gaze behaviour during spatial decision making Jan M. Wiener (jwiener@bournemouth.ac.uk) Department of Psychology, Bournemouth University Poole, BH12 5BB, UK Olivier De

More information

7 Grip aperture and target shape

7 Grip aperture and target shape 7 Grip aperture and target shape Based on: Verheij R, Brenner E, Smeets JBJ. The influence of target object shape on maximum grip aperture in human grasping movements. Exp Brain Res, In revision 103 Introduction

More information

The Effects of Action on Perception. Andriana Tesoro. California State University, Long Beach

The Effects of Action on Perception. Andriana Tesoro. California State University, Long Beach ACTION ON PERCEPTION 1 The Effects of Action on Perception Andriana Tesoro California State University, Long Beach ACTION ON PERCEPTION 2 The Effects of Action on Perception Perception is a process that

More information

Traffic Sign Detection and Identification

Traffic Sign Detection and Identification University of Iowa Iowa Research Online Driving Assessment Conference 2013 Driving Assessment Conference Jun 19th, 12:00 AM Traffic Sign Detection and Identification Vaughan W. Inman SAIC, McLean, VA Brian

More information

Aging and the Detection of Collision Events in Fog

Aging and the Detection of Collision Events in Fog University of Iowa Iowa Research Online Driving Assessment Conference 2009 Driving Assessment Conference Jun 23rd, 12:00 AM Aging and the Detection of Collision Events in Fog Zheng Bian University of California,

More information

Hierarchical Bayesian Modeling of Individual Differences in Texture Discrimination

Hierarchical Bayesian Modeling of Individual Differences in Texture Discrimination Hierarchical Bayesian Modeling of Individual Differences in Texture Discrimination Timothy N. Rubin (trubin@uci.edu) Michael D. Lee (mdlee@uci.edu) Charles F. Chubb (cchubb@uci.edu) Department of Cognitive

More information

Cognition 119 (2011) Contents lists available at ScienceDirect. Cognition. journal homepage:

Cognition 119 (2011) Contents lists available at ScienceDirect. Cognition. journal homepage: Cognition 119 (2011) 419 429 Contents lists available at ScienceDirect Cognition journal homepage: www.elsevier.com/locate/cognit Spatial updating according to a fixed reference direction of a briefly

More information

When Walking Makes Perception Better Frank H. Durgin

When Walking Makes Perception Better Frank H. Durgin CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE When Walking Makes Perception Better Frank H. Durgin Swarthmore College ABSTRACT When we move, the visual world moves toward us. That is, self-motion normally

More information

A model of parallel time estimation

A model of parallel time estimation A model of parallel time estimation Hedderik van Rijn 1 and Niels Taatgen 1,2 1 Department of Artificial Intelligence, University of Groningen Grote Kruisstraat 2/1, 9712 TS Groningen 2 Department of Psychology,

More information

Object Substitution Masking: When does Mask Preview work?

Object Substitution Masking: When does Mask Preview work? Object Substitution Masking: When does Mask Preview work? Stephen W. H. Lim (psylwhs@nus.edu.sg) Department of Psychology, National University of Singapore, Block AS6, 11 Law Link, Singapore 117570 Chua

More information

What is visual anticipation and how much does it. rely on the dorsal stream?

What is visual anticipation and how much does it. rely on the dorsal stream? What is visual anticipation and how much does it rely on the dorsal stream? Gilles Montagne, Julien Bastin and David M. Jacobs UMR Mouvement et Perception Université de la Méditerranée & CNRS Correspondence

More information

TOC: VE examples, VE student surveys, VE diagnostic questions Virtual Experiments Examples

TOC: VE examples, VE student surveys, VE diagnostic questions Virtual Experiments Examples TOC: VE examples, VE student surveys, VE diagnostic questions Virtual Experiments Examples Circular Motion In this activity, students are asked to exert a force on an object, which has an initial velocity,

More information

Active Gaze, Visual Look-Ahead, and Locomotor Control

Active Gaze, Visual Look-Ahead, and Locomotor Control Journal of Experimental Psychology: Human Perception and Performance 28, Vol. 34, No. 5, 115 1164 Copyright 28 by the American Psychological Association 96-1523/8/$12. DOI: 1.137/96-1523.34.5.115 Active

More information

Intelligent Object Group Selection

Intelligent Object Group Selection Intelligent Object Group Selection Hoda Dehmeshki Department of Computer Science and Engineering, York University, 47 Keele Street Toronto, Ontario, M3J 1P3 Canada hoda@cs.yorku.ca Wolfgang Stuerzlinger,

More information

Correspondence between the DRDP (2015) and the California Preschool Learning Foundations. Foundations (PLF)

Correspondence between the DRDP (2015) and the California Preschool Learning Foundations. Foundations (PLF) 1 Desired Results Developmental Profile (2015) [DRDP (2015)] Correspondence to California Foundations: Physical Development Health (PD-HLTH) and the Overall, the Physical Development Health (PD-HLTH) domain

More information

Correspondence between the DRDP (2015) and the California Preschool Learning Foundations. Foundations (PLF)

Correspondence between the DRDP (2015) and the California Preschool Learning Foundations. Foundations (PLF) 1 Desired Results Developmental Profile (2015) [DRDP (2015)] Correspondence to California Foundations: Physical Development Health (PD-HLTH) and the Overall, the Physical Development Health (PD-HLTH) domain

More information

Chapter 5: Perceiving Objects and Scenes

Chapter 5: Perceiving Objects and Scenes PSY382-Hande Kaynak, PhD 2/13/17 Chapter 5: Perceiving Objects and Scenes 1 2 Figure 5-1 p96 3 Figure 5-2 p96 4 Figure 5-4 p97 1 Why Is It So Difficult to Design a Perceiving Machine? The stimulus on the

More information

Adapting internal statistical models for interpreting visual cues to depth

Adapting internal statistical models for interpreting visual cues to depth Journal of Vision (2010) 10(4):1, 1 27 http://journalofvision.org/10/4/1/ 1 Adapting internal statistical models for interpreting visual cues to depth Anna Seydell David C. Knill Julia Trommershäuser Department

More information

The perception of motion transparency: A signal-to-noise limit

The perception of motion transparency: A signal-to-noise limit Vision Research 45 (2005) 1877 1884 www.elsevier.com/locate/visres The perception of motion transparency: A signal-to-noise limit Mark Edwards *, John A. Greenwood School of Psychology, Australian National

More information

Supplementary Materials

Supplementary Materials Supplementary Materials Supplementary Figure S1: Data of all 106 subjects in Experiment 1, with each rectangle corresponding to one subject. Data from each of the two identical sub-sessions are shown separately.

More information

2012 Course : The Statistician Brain: the Bayesian Revolution in Cognitive Science

2012 Course : The Statistician Brain: the Bayesian Revolution in Cognitive Science 2012 Course : The Statistician Brain: the Bayesian Revolution in Cognitive Science Stanislas Dehaene Chair in Experimental Cognitive Psychology Lecture No. 4 Constraints combination and selection of a

More information

OLIVER HÖNER: DEVELOPMENT AND EVALUATION OF A GOALBALL-SPECIFIC PERFORMANCE TEST 1

OLIVER HÖNER: DEVELOPMENT AND EVALUATION OF A GOALBALL-SPECIFIC PERFORMANCE TEST 1 OLIVER HÖNER: DEVELOPMENT AND EVALUATION OF A GOALBALL-SPECIFIC PERFORMANCE TEST 1 Abstract Whereas goalball coaches can adopt performance diagnostics from other competitive sport games to test the physical

More information

(2) In each graph above, calculate the velocity in feet per second that is represented.

(2) In each graph above, calculate the velocity in feet per second that is represented. Calculus Week 1 CHALLENGE 1-Velocity Exercise 1: Examine the two graphs below and answer the questions. Suppose each graph represents the position of Abby, our art teacher. (1) For both graphs above, come

More information

Examining Effective Navigational Learning Strategies for the Visually Impaired

Examining Effective Navigational Learning Strategies for the Visually Impaired Examining Effective Navigational Learning Strategies for the Visually Impaired Jeremy R. Donaldson Department of Psychology, University of Utah This study focuses on navigational learning strategies and

More information

Effects of gaze on vection from jittering, oscillating, and purely radial optic flow

Effects of gaze on vection from jittering, oscillating, and purely radial optic flow University of Wollongong Research Online Faculty of Health and Behavioural Sciences - Papers (Archive) Faculty of Science, Medicine and Health 2009 Effects of gaze on vection from jittering, oscillating,

More information

Time Experiencing by Robotic Agents

Time Experiencing by Robotic Agents Time Experiencing by Robotic Agents Michail Maniadakis 1 and Marc Wittmann 2 and Panos Trahanias 1 1- Foundation for Research and Technology - Hellas, ICS, Greece 2- Institute for Frontier Areas of Psychology

More information

Physical Education K-5 Essential Learning for Physical Education

Physical Education K-5 Essential Learning for Physical Education K-5 Essential Learning for Kindergarten *Demonstrates mature walking form and maintains a rhythmic pattern while stationary *Demonstrates stability while jumping, walking and starting and stopping movement

More information

BRAIN-CENTERED PERFORMANCE: Understanding How the Brain Works, So We Can Work More Safely.

BRAIN-CENTERED PERFORMANCE: Understanding How the Brain Works, So We Can Work More Safely. BRAIN-CENTERED PERFORMANCE: Understanding How the Brain Works, So We Can Work More Safely. CONTENTS Introduction. The human brain - the next frontier in workplace safety. Fast Brain, Slow Brain. Neuroscience

More information

Supplementary Figure 1. Psychology Experiment Building Language (PEBL) screen-shots

Supplementary Figure 1. Psychology Experiment Building Language (PEBL) screen-shots 1 1 2 3 4 5 Supplementary Figure 1. Psychology Experiment Building Language (PEBL) screen-shots including Pursuit Rotor (A), Time-Wall (B), Trail-Making Test (C), Digit Span (D), Wisconsin (Berg) Card

More information

Systematic perceptual distortion of 3D slant by disconjugate eye movements

Systematic perceptual distortion of 3D slant by disconjugate eye movements Vision Research 46 (2006) 2328 2335 www.elsevier.com/locate/visres Systematic perceptual distortion of 3D slant by disconjugate eye movements Hyung-Chul O. Li * Department of Industrial Psychology, Kwangwoon

More information

Validity of Haptic Cues and Its Effect on Priming Visual Spatial Attention

Validity of Haptic Cues and Its Effect on Priming Visual Spatial Attention Validity of Haptic Cues and Its Effect on Priming Visual Spatial Attention J. Jay Young & Hong Z. Tan Haptic Interface Research Laboratory Purdue University 1285 EE Building West Lafayette, IN 47907 {youngj,

More information

The eyes fixate the optimal viewing position of task-irrelevant words

The eyes fixate the optimal viewing position of task-irrelevant words Psychonomic Bulletin & Review 2009, 16 (1), 57-61 doi:10.3758/pbr.16.1.57 The eyes fixate the optimal viewing position of task-irrelevant words DANIEL SMILEK, GRAYDEN J. F. SOLMAN, PETER MURAWSKI, AND

More information

Biologically-Inspired Human Motion Detection

Biologically-Inspired Human Motion Detection Biologically-Inspired Human Motion Detection Vijay Laxmi, J. N. Carter and R. I. Damper Image, Speech and Intelligent Systems (ISIS) Research Group Department of Electronics and Computer Science University

More information

Physical Education: Pre-K

Physical Education: Pre-K Physical Education: Pre-K Movement Forms/Motor Skills and Movement Patterns Grade Level Expectation: A physically educated person demonstrates competency in motor skills and movement patterns needed to

More information

Building Better Balance

Building Better Balance Building Better Balance The Effects of MS on Balance Individuals with MS experience a decline in their balance due to various MS related impairments. Some of these impairments can be improved with exercise

More information

Introductory Motor Learning and Development Lab

Introductory Motor Learning and Development Lab Introductory Motor Learning and Development Lab Laboratory Equipment & Test Procedures. Motor learning and control historically has built its discipline through laboratory research. This has led to the

More information

The dependence of motion repulsion and rivalry on the distance between moving elements

The dependence of motion repulsion and rivalry on the distance between moving elements Vision Research 40 (2000) 2025 2036 www.elsevier.com/locate/visres The dependence of motion repulsion and rivalry on the distance between moving elements Nestor Matthews a, Bard J. Geesaman b, Ning Qian

More information

Learning Bayesian priors for depth perception

Learning Bayesian priors for depth perception Learning Bayesian priors for depth perception David C. Knill Center for Visual Science, University of Rochester, Rochester, NY, 14627 How the visual system learns the statistical regularities (e.g. symmetry)

More information

Subjective randomness and natural scene statistics

Subjective randomness and natural scene statistics Psychonomic Bulletin & Review 2010, 17 (5), 624-629 doi:10.3758/pbr.17.5.624 Brief Reports Subjective randomness and natural scene statistics Anne S. Hsu University College London, London, England Thomas

More information

Pupil Dilation as an Indicator of Cognitive Workload in Human-Computer Interaction

Pupil Dilation as an Indicator of Cognitive Workload in Human-Computer Interaction Pupil Dilation as an Indicator of Cognitive Workload in Human-Computer Interaction Marc Pomplun and Sindhura Sunkara Department of Computer Science, University of Massachusetts at Boston 100 Morrissey

More information

The Simon Effect as a Function of Temporal Overlap between Relevant and Irrelevant

The Simon Effect as a Function of Temporal Overlap between Relevant and Irrelevant University of North Florida UNF Digital Commons All Volumes (2001-2008) The Osprey Journal of Ideas and Inquiry 2008 The Simon Effect as a Function of Temporal Overlap between Relevant and Irrelevant Leslie

More information

Journal of Experimental Psychology: Human Perception and Performance

Journal of Experimental Psychology: Human Perception and Performance Journal of Experimental Psychology: Human Perception and Performance Eye Movements Reveal how Task Difficulty Moulds Visual Search Angela H. Young and Johan Hulleman Online First Publication, May 28, 2012.

More information

CRITICALLY APPRAISED PAPER (CAP)

CRITICALLY APPRAISED PAPER (CAP) CRITICALLY APPRAISED PAPER (CAP) Padula, W. V., Nelson, C. A., Padula, W. V., Benabib, R., Yilmaz, T., & Krevisky, S. (2009). Modifying postural adaptation following a CVA through prismatic shift of visuo-spatial

More information

Principals of Object Perception

Principals of Object Perception Principals of Object Perception Elizabeth S. Spelke COGNITIVE SCIENCE 14, 29-56 (1990) Cornell University Summary Infants perceive object by analyzing tree-dimensional surface arrangements and motions.

More information

Interference with spatial working memory: An eye movement is more than a shift of attention

Interference with spatial working memory: An eye movement is more than a shift of attention Psychonomic Bulletin & Review 2004, 11 (3), 488-494 Interference with spatial working memory: An eye movement is more than a shift of attention BONNIE M. LAWRENCE Washington University School of Medicine,

More information

TEMPORAL CHANGE IN RESPONSE BIAS OBSERVED IN EXPERT ANTICIPATION OF VOLLEYBALL SPIKES

TEMPORAL CHANGE IN RESPONSE BIAS OBSERVED IN EXPERT ANTICIPATION OF VOLLEYBALL SPIKES TEMPORAL CHANGE IN RESPONSE BIAS OBSERVED IN ANTICIPATION OF VOLLEYBALL SPIKES Tomoko Takeyama, Nobuyuki Hirose 2, and Shuji Mori 2 Department of Informatics, Graduate School of Information Science and

More information

Sound Localization PSY 310 Greg Francis. Lecture 31. Audition

Sound Localization PSY 310 Greg Francis. Lecture 31. Audition Sound Localization PSY 310 Greg Francis Lecture 31 Physics and psychology. Audition We now have some idea of how sound properties are recorded by the auditory system So, we know what kind of information

More information

LEA Color Vision Testing

LEA Color Vision Testing To The Tester Quantitative measurement of color vision is an important diagnostic test used to define the degree of hereditary color vision defects found in screening with pseudoisochromatic tests and

More information

Competing Frameworks in Perception

Competing Frameworks in Perception Competing Frameworks in Perception Lesson II: Perception module 08 Perception.08. 1 Views on perception Perception as a cascade of information processing stages From sensation to percept Template vs. feature

More information

Competing Frameworks in Perception

Competing Frameworks in Perception Competing Frameworks in Perception Lesson II: Perception module 08 Perception.08. 1 Views on perception Perception as a cascade of information processing stages From sensation to percept Template vs. feature

More information