Systematic perceptual distortion of 3D slant by disconjugate eye movements

Vision Research 46 (2006) 2328–2335
www.elsevier.com/locate/visres
doi:10.1016/j.visres.2005.11.032

Hyung-Chul O. Li *
Department of Industrial Psychology, Kwangwoon University, 447-1 Wolgae-Dong, Nowon-Gu, Seoul, Republic of Korea
* Fax: +82 2 941 5429. E-mail address: hyung@kw.ac.kr

Received 20 October 2004; received in revised form 12 November 2005

Abstract

When an observer pursues an object moving away from him or her, the two eyes rotate in opposite directions, and this type of disconjugate eye movement can generate eye movement-induced disparities for dynamic objects present around the pursuit object. Such disparities are not usually generated by conjugate eye movement. The aim of this study was to determine whether eye movement-induced disparities could be calibrated with eye position information. Observers were requested to judge the slant of an object defined by the spatiotemporal pattern of occlusion during disconjugate eye movement. Interestingly, the observers' perception of the slant of the target object was systematically distorted, although the perceptual distortion decreased somewhat in the presence of a salient reference around the target. This suggests that eye movement-induced disparities are not calibrated properly with eye position information.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Perceptual distortion; Disconjugate eye movement; 3D slant

1. Introduction

The effect of pursuit eye movement on spatial vision has been extensively examined in previous studies, using various types of tasks: (a) whether the static background is perceived as moving during pursuit eye movement (Ehrenstein, Mateef, & Hohnsbein, 1986; Mack & Herman, 1973), (b) whether pursuit eye movement affects the perceived speed of an object (Brenner & van den Berg, 1994), and (c) whether the perceived positions of flashes are affected by pursuit eye movement (Brenner & Cornelissen, 2000; Mateeff, Yakimoff, & Dimitrov, 1981; Mita, Hironaka, & Koike, 1950). It should be noted that observers performed conjugate eye movements in all of the above studies. Conjugate eye movement, however, is not the only eye movement performed in real life. We frequently encounter situations in which a disconjugate eye movement needs to be performed to pursue an object that is translating in the depth dimension.

One way to evaluate the effect of pursuit eye movement on perceptual judgments is to present retinal information sequentially while the subject's eyes are moving (Brenner & Cornelissen, 2000; Stoper, 1967). Li, Brenner, Cornelissen, and Kim (2002) developed a new paradigm of this kind for studying the effect of pursuit eye movement on 2D shape perception. In Li et al., the 2D shape of a target object was defined by making the contour of the object unfold sequentially. Rather than having a contour of its own that would appear to move, the object was defined by the sequential pattern of occlusion of a moving line. Fig. 1A shows an opaque object with the same luminance as the background. In this situation the object is normally invisible. However, if a luminance-defined horizontal line moves downward behind the opaque object, the part of the line that passes behind the object is occluded, and the object thereby becomes visible.

When observers are asked to pursue a dot that moves to the right, both eyes perform conjugate eye movements and the pursuit dot would project to the fovea. The occluded portion of the horizontal line, however, would be projected to a different location on the retina, and the retinal image would be quite different from the screen image. Li et al. showed that observers perceived a parallelogram corresponding to the retinal image rather than the rectangle that was the actual physical stimulus on the screen. This indicates that the visual system simply ignores certain types of extraretinal information.

When observers were asked to pursue a dot translating in a 2D frontal plane (see Fig. 1B), both eyes rotate in the same direction and the images formed on the two retinas are the same. On the other hand, when observers perform disconjugate eye movements to pursue a dot moving away from them, the two eyes rotate in opposite directions and the images projected onto the two retinas would be different parallelograms, generating non-zero disparities between the top and the bottom sides of the parallelogram (see Fig. 1C). From here on, the disparity generated by disconjugate eye movement will be referred to as eye movement-induced disparity. The direction of the eye movement-induced disparity is reversed when observers perform disconjugate eye movement in the opposite direction, as shown in Fig. 1D.

Fig. 1. The target object employed by Li et al., defined by the spatiotemporal pattern of occlusion, and the variation of the retinal images depending on the type of pursuit eye movement. (A) A target square is defined by the spatiotemporal pattern of occlusion of a horizontal line moving vertically behind the target while observers pursue the pursuit dot. (B) The retinal images in the two eyes when the stimulus in (A) is presented while subjects pursue a dot moving to the right in a frontal plane. Both eyes rotate in the same direction and the images formed on the two retinas are identical. (C) The retinal images in the two eyes when the stimulus in (A) is presented while subjects pursue a dot moving away from the observer. The eyes rotate in opposite directions and the images formed on the two retinas differ. (D) The retinal images in the two eyes when the direction of disconjugate eye movement is opposite to that in (C).
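To make this geometry concrete, the sketch below (an illustrative model under simplifying assumptions, not the author's code) approximates the retinal position of each revealed contour row as its screen position minus the current gaze direction of the corresponding eye, in degrees. Under conjugate pursuit both eyes shift identically, so rows revealed at different times carry no disparity; under disconjugate pursuit the shifts have opposite signs in the two eyes, so the later-revealed row acquires a non-zero disparity relative to the earlier one. The 0.335 deg/s speed and the 353 ms revelation interval are taken from Section 2.1.2; the sign conventions and the assignment of that speed to each eye are assumptions.

    # Minimal sketch: sequential revelation plus disconjugate pursuit yields an
    # eye movement-induced disparity between the top and bottom of the object.
    # Small-angle approximation; positions and gaze directions in degrees.

    def retinal_x(screen_x, t, eye_velocity):
        """Horizontal retinal coordinate of a contour row revealed at time t (s)."""
        return screen_x - eye_velocity * t

    def top_bottom_disparity(t_top, t_bottom, v_left, v_right, screen_x=0.0):
        """Left-minus-right retinal offset of rows revealed at t_top and t_bottom."""
        d_top = retinal_x(screen_x, t_top, v_left) - retinal_x(screen_x, t_top, v_right)
        d_bottom = retinal_x(screen_x, t_bottom, v_left) - retinal_x(screen_x, t_bottom, v_right)
        return d_top, d_bottom

    # Conjugate pursuit (both eyes drift rightward): no disparity at either row.
    print(top_bottom_disparity(0.0, 0.353, v_left=0.67, v_right=0.67))     # (0.0, 0.0)
    # Disconjugate pursuit (eyes rotate oppositely at 0.335 deg/s): the bottom row,
    # revealed 353 ms later, carries roughly 0.24 deg of disparity relative to the top.
    print(top_bottom_disparity(0.0, 0.353, v_left=0.335, v_right=-0.335))  # (0.0, ~-0.237)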

There are two possibilities for what observers would perceive when eye movement-induced disparities are generated. One is the veridical slant of the object; that is, observers would perceive the top and the bottom sides of the object to be at the same depth. The other is that the top and bottom sides of the object appear to lie at different depths. The first possibility assumes that the visual system can compensate for the eye movement-induced disparity with extraretinal information about the disconjugate eye movement. The second possibility assumes that the visual system fails to compensate properly for the eye movement-induced disparity. These possibilities were examined by measuring observers' perception of the slant (i.e., the relative depth of the top and the bottom sides) of a spatiotemporally defined object, as illustrated in Fig. 1A.

2. Experiment 1

If the pursuit dot translates in the depth dimension to the extent that the disconjugate eye movements generate an eye movement-induced disparity exceeding Panum's fusional area, binocular rivalry would occur and it would be very difficult for subjects to judge the slant of the object. To prevent binocular rivalry, the movement of the pursuit dot had to be restricted to a limited range. Experiment 1 was designed to examine whether eye movement-induced disparity was generated when subjects performed a restricted amount of disconjugate eye movement.

2.1. Methods

2.1.1. Subjects

Three observers who had no knowledge of the purpose of the research and one author participated in the experiment. All of the observers had normal or corrected-to-normal vision.

2.1.2. The stimuli

The stimuli were generated with a PowerMac G4/450 and displayed on a 17-inch LG Flatron 795FT Plus video monitor (1268 (H) × 768 (V) pixel resolution; 85 Hz frame rate), using Matlab and the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). The target object was defined by the spatiotemporal pattern of occlusion, as in the study reported by Li et al. A horizontal line (5.9 deg, 116.4 cd/m²) passed vertically behind a square-shaped object (1.8 deg × 1.8 deg) with the same luminance (56.8 cd/m²) as the background, at a speed of 6.7 deg/s. To simulate a pursuit dot moving in the depth dimension, the pursuit dots presented to the two eyes were made to move in opposite directions along the horizontal dimension (see Fig. 2A). The pursuit dot moved horizontally 0.197, 0.394, or 0.985 deg around the center of the target object for 588 ms, i.e., at a speed of 0.335, 0.67, or 1.68 deg/s. The movement of the horizontal line and the pursuit dot were synchronized. The stimulus comprised 50 frames (588 ms), and 353 ms were required for the target object to be defined by the vertical movement of the horizontal line. In each trial, the target object was randomly presented to one eye while the pursuit dot was presented to both eyes. To make fusion of the two images easier, all the stimuli were presented inside a luminance-defined rectangle (7.9 deg × 11.8 deg).

2.1.3. Procedures

A session comprised 60 trials, and each session was repeated four times for each subject: two directions of pursuit eye movement (toward vs. away from the subject) × two eyes of target presentation (left vs. right) × three amounts of pursuit dot movement (0.197, 0.394, and 0.985 deg) × five replications. All the conditions within each session were randomized. In each trial, the static pursuit dot was presented first, and subjects pressed a keyboard button whenever they were ready to pursue the dot. Immediately after the button press, the pursuit dot moved in the depth dimension and the subjects pursued it. After performing the disconjugate eye movement, they reported the perceived 2D shape of the target object in a two-alternative forced-choice (2AFC) task: they pressed 1 if the top side of the object, compared to the bottom side, appeared inclined toward the left, and 2 in the opposite case. Within a trial, the subjects were permitted to observe the stimulus for as long as they wished. The next trial started automatically when a trial ended. A chin rest was used to minimize head movement, and the viewing distance was 45 cm. The vergence angle to the screen was about 8 deg.
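As a consistency check on these parameters (a reconstruction from the reported numbers, not the original Matlab/Psychophysics Toolbox script), the pursuit-dot displacements follow directly from speed × duration, and the reported vergence angle of about 8 deg at 45 cm is consistent with a typical interocular distance of roughly 6.3 cm, a value assumed here rather than stated in the paper:

    from math import atan, degrees

    FRAME_RATE_HZ = 85
    N_FRAMES = 50
    DURATION_S = N_FRAMES / FRAME_RATE_HZ          # ~0.588 s, as reported
    PURSUIT_SPEEDS = (0.335, 0.67, 1.68)           # deg/s, from Section 2.1.2

    for v in PURSUIT_SPEEDS:
        # total horizontal excursion of the pursuit dot over one trial
        print(f"{v} deg/s -> {v * DURATION_S:.3f} deg")   # ~0.197, 0.394, 0.988

    # vergence angle to the screen at 45 cm, assuming a 6.3 cm interocular distance
    print(degrees(2 * atan(3.15 / 45)))            # ~8.0 deg

The third displacement comes out slightly above the 0.985 deg quoted in the text, presumably because the reported speed is rounded.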
The subjects viewed the kinematic stereograms through a mirror-type stereoscope. The first session was regarded as a practice session, and only the data from the last three sessions were included in the analysis.

Fig. 2. The schematic stereograms employed in Experiments 1 and 2. (A) Schematic stereograms employed in Experiment 1, showing only the case in which the pursuit dot moves away from the observer. The target object, drawn here as a dotted square, was presented randomly to one eye. (B) Schematic stereograms employed in Experiment 2. The pursuit dot in this example moves away from the observer, and the target object illustrated in Fig. 1 was presented to both eyes.

2.2. Results and discussion

In the data analysis, we calculated the proportion of trials in which the target appeared as a slanted rectangle consistent with the retinal image. This proportion was quite similar for the two pursuit directions, and the pattern of results was quite similar across subjects.

Fig. 3 shows the proportion of reports of a perceptual distortion of the 2D shape, averaged over the subjects, as a function of the amount of pursuit dot movement. The subjects' performance was well above the 50% chance level over the whole range of eye movements, which implies that the retinal image is distorted by pursuit eye movement and that eye movement-induced disparities are generated by disconjugate eye movement.

Fig. 3. Results of Experiment 1: the proportion of trials in which the target appeared as a slanted rectangle consistent with the retinal image, as a function of the amount of pursuit dot movement (0.197, 0.394, and 0.985 deg). The target object was presented to one eye.

The latency of vergence eye movements made in response to a 2 deg step change in stimulus disparity is known to be between 130 and 250 ms (Rashbass & Westheimer, 1961). This latency decreases, however, when subjects themselves initiate the motion of the pursuit target and can therefore anticipate its motion (Erkelens, Van der Steen, Steinman, & Collewijn, 1989), because in that situation vergence eye movements start even before the stimulus moves. Some vergence latency was possible in the present research; however, the target shape was defined during only part of the disconjugate eye movement in each trial, and subjects initiated the motion of the pursuit target and were allowed to observe the stimulus for as long as they wished. Thus, it is very unlikely that vergence latency had any effect on the results.

3. Experiment 2

Based on the results of Experiment 1, we infer that an eye movement-induced disparity between the top and bottom sides of the object (Fig. 1A) is generated by disconjugate eye movement. In Experiment 2, we examined whether eye movement-induced disparities could be compensated for with eye position information.

3.1. Methods

3.1.1. Subjects

The subjects who had participated in Experiment 1 also participated in Experiment 2.

3.1.2. The stimuli

The stimuli in Experiment 2 were basically the same as those in Experiment 1, except that the target object was presented to both eyes (see Fig. 2B).

3.1.3. Procedures

The procedures in Experiment 2 were basically the same as those in Experiment 1, except for the subjects' task, which was a 2AFC judgment of the 3D slant of the target object: they pressed 1 if the top side of the object appeared inclined toward them, and 2 in the opposite case. An experimental session was composed of 30 trials, and the session was repeated four times for each subject: two directions of pursuit eye movement × three amounts of pursuit dot movement × five replications. Within each session, all the conditions were randomized.

3.2. Results and discussion

If the visual system fails to compensate for eye movement-induced disparities, the subjects would perceive the target object as slanted around the horizontal axis. Because the direction of the eye movement-induced disparity depends on the pursuit direction, the direction of the perceived slant of the target object would depend on the direction of the disconjugate eye movement. We calculated the proportion of trials in which the target object appeared slanted in the direction consistent with the eye movement-induced disparity. No significant difference was found in the subjects' performance between the two directions of disconjugate eye movement.
Fig. 4 shows the proportion of trials showing a perceptual distortion of 3D slant, averaged over the subjects. This proportion was significantly higher than the 50% chance level for all amounts of pursuit dot movement. These results imply that eye movement-induced disparities are not compensated for by extraretinal eye position information.

Fig. 4. Results of Experiment 2: the proportion of trials in which the target surface appeared slanted in the direction consistent with the eye movement-induced disparity, as a function of the amount of pursuit dot movement.
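For readers who want to reproduce this kind of analysis, the sketch below is purely illustrative; the paper reports proportions against the 50% chance level but does not specify its statistical procedure. It codes each 2AFC response as consistent or inconsistent with the sign of the eye movement-induced disparity and compares the proportion of consistent responses with chance using an exact binomial test:

    from math import comb

    def consistent_proportion(responses, predicted):
        """responses, predicted: sequences of +1/-1 slant-direction codes."""
        hits = sum(r == p for r, p in zip(responses, predicted))
        return hits / len(responses)

    def binomial_p_at_least(k, n, p=0.5):
        """One-sided probability of k or more successes out of n under chance."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Example: 27 of 30 trials judged in the direction predicted by the disparity.
    print(consistent_proportion([+1] * 27 + [-1] * 3, [+1] * 30))  # 0.9
    print(binomial_p_at_least(27, 30))                             # ~4.2e-6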

4. Experiment 3

Experiment 2 showed that disconjugate eye movement leads to a perceptual distortion of the 3D slant of an object. Yet, although disconjugate eye movement is not a rare event in everyday life, we do not usually experience such distortions of 3D slant. What could be the reason for this? One possibility is that the target object and the background had the same luminance in Experiment 2, whereas they usually differ in real life; this unusual luminance arrangement might have caused the perceptual distortion of 3D slant. A second possibility is that, in natural situations, various reference objects are present around a target object. Note that when a static reference rectangle is present around a target defined by a sequentially unfolding contour, the momentary horizontal location of each part of the unfolding contour relative to the reference rectangle remains constant in the retinal image regardless of the subject's eye movements. This is true even though the retinal shape of the object varies with eye movement: although the final retinal shape of the sequentially defined target is deformed by eye movement and differs from that of the reference rectangle, each part of the unfolding contour is generated at a constant horizontal position relative to the reference rectangle on the retina, because the reference rectangle translates on the retina as the target shape is being deformed. If the visual system can exploit this constant relative horizontal location between the reference and the target contour, veridical slant perception would be achieved. These possibilities were examined in Experiment 3.

4.1. Methods

4.1.1. Subjects

The subjects who had participated in Experiments 1 and 2 also participated in Experiment 3.

4.1.2. The stimuli

The stimuli employed in Experiment 3 were similar to those used in Experiment 2, except that the target object could have a luminance different from that of the background and that a salient reference could be present near the target object. The luminance of the target object was controlled by manipulating the luminance of the occluded part of the horizontal line: either the same as (64 cd/m²) or different from (94 cd/m²) that of the background (see Fig. 5). Conditions with and without a reference stimulus were examined, independently of the luminance manipulation of the target. The distance between the reference and the target object was manipulated at two levels (0 or 0.39 deg), and the luminance of the reference was manipulated at three levels to control its visibility (66, 83, or 109 cd/m², with the background luminance fixed at 64 cd/m²). These last two variables were included to explore which reference properties affect the perceptual distortion of 3D slant during disconjugate eye movement.

4.1.3. Procedures

An experimental session was composed of 112 trials; the reference was present in 96 of them (two target luminance levels × two distance levels × three reference luminance levels × two pursuit directions × two pursuit amounts × two directions of horizontal-line motion) and absent in the other 16. The conditions were randomized within each session. The subjects' task was exactly the same as in Experiment 2: a 2AFC report of the perceived slant of the target. Each subject repeated the session five times, and the first session was regarded as a practice session.
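The session composition can be verified by crossing the six factors; in the snippet below the level values are placeholders where the text does not list them (for instance, the exact pursuit amounts used in Experiment 3 are not stated):

    from itertools import product

    factors = {
        "target_luminance": ("same", "different"),
        "reference_distance_deg": (0.0, 0.39),
        "reference_luminance_cdm2": (66, 83, 109),
        "pursuit_direction": ("toward", "away"),
        "pursuit_amount": ("small", "large"),   # two amounts; exact values not stated
        "line_direction": ("down", "up"),       # two directions; labels assumed
    }
    reference_present = list(product(*factors.values()))
    print(len(reference_present))        # 96 reference-present trials
    print(len(reference_present) + 16)   # 112 trials per session in total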
Fig. 5. Schematic diagram of a target object with a luminance different from that of the background, defined by the spatiotemporal pattern of occlusion of the vertically moving horizontal line. The occluded part of the line was given a luminance different from that of the background (left), generating a target object whose luminance differs from the background (right). The right panel shows the subjects' percept of the stimulus illustrated on the left.

4.2. Results and discussion

Fig. 6A shows the proportion of trials in which the target surface appeared slanted in the direction consistent with the eye movement-induced disparity in Experiment 3, averaged over the subjects. Whether or not the luminance of the target was the same as that of the background had no effect on the perceptual distortion, which implies that the target need not have the same luminance as the background for a perceptual distortion of 3D slant to be observed. Interestingly, the proportion of perceptual distortion was much higher in the absence of a salient reference, and it increased when the reference was located farther from the target and when the luminance of the reference was low (see Fig. 6B). The proportion of perceptual distortion was about 68% when the reference was present, compared with 95% when it was absent. This implies that eye movement-induced disparities can be compensated for with reference information in the processing of 3D information, and that the amount of compensation may depend on the visibility of the reference and on the distance between the reference and the target.

Fig. 6. The proportion of trials in which the target surface appeared slanted in the direction consistent with the eye movement-induced disparity, as a function of the presence of the reference (A) and of the reference luminance (B) in Experiment 3.

5. General discussion

The purpose of the present research was to determine whether the visual system compensates for eye movement-induced disparities using eye position information. Experiment 1 confirmed that eye movement-induced disparity is generated by disconjugate eye movement. In Experiment 2, a target object lying in a frontal plane appeared to be slanted around a horizontal axis, consistent with the eye movement-induced disparities. These results imply that eye movement-induced disparities are not properly compensated for with eye position information.

The binocular disparity associated with the depth of an object does not provide absolute depth information: for a given depth interval, it decreases in proportion to the square of the distance between the object and the observer. The visual system therefore needs to combine viewing distance with disparity in order to perceive constant depth. Some researchers have argued that a constant depth representation can be obtained if the visual system recovers disparity curvature, the second derivative of disparity, which is constant for a given depth regardless of viewing distance (Rogers & Cagenello, 1989). In contrast, others have suggested that disparity information must be calibrated with viewing distance for depth constancy to be achieved (Cormack & Fox, 1984; Johnston, 1991; Ono & Comerford, 1977; Ritter, 1977; Wallach & Zuckerman, 1963) and that a variety of information could be employed for this calibration, depending on the viewing distance. Ono and Comerford showed that manipulating accommodation and vergence affects perceived depth and that vergence can be a valid cue for distances of up to 2 m in the absence of other cues. Johnston showed that incorrect estimation of the viewing distance can result in an underestimation of the depth of an object.
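The distance dependence referred to here follows from the standard small-angle approximation, which is not spelled out in the paper: for two points separated in depth by a small interval \Delta d at viewing distance D, seen with an interocular separation I, the relative disparity is approximately

    \eta \approx \frac{I \, \Delta d}{D^{2}}

so a fixed depth interval yields a disparity that falls off with the square of the viewing distance, and the disparity signal must either be rescaled by an estimate of D or be replaced by a distance-invariant quantity such as disparity curvature for depth to be perceived veridically.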
Consistent with Johnston's suggestion, Patterson and Martin (1992) proposed that factors that disrupt the scaling of disparity by distance information, as well as the disparity computation itself, can lead to nonveridical depth perception. The possibility that eye movement-induced disparities are calibrated with eye position information in a manner similar to that suggested by Johnston cannot be excluded. The systematic perceptual distortion of 3D slant observed in the present research implies that, even if eye movement-induced disparities are calibrated by eye position information, this calibration is not perfect. The present research does not conclude that a Johnston-type compensation mechanism does not operate, but rather that the compensation of eye movement-induced disparity is not sufficient for veridical slant perception. Whether the compensation of eye movement-induced disparity requires a binocular mechanism or two monocular mechanisms remains to be resolved in future research.

Misjudgment of the 3D slant of a frontoparallel surface has been reported in other studies as well as in the present research, but there are distinctive differences. Ogle (1938, 1939) conducted a series of experiments on the perception of the slant of a surface. When the image of a surface lying in a frontal plane was horizontally magnified in one eye relative to the image in the other eye, the surface appeared to slant away from the eye seeing the smaller image. The opposite direction of perceptual slant distortion was observed when the image that had been made smaller along the horizontal meridian was instead made smaller along the vertical meridian. Ogle referred to the former as the geometric effect and to the latter as the induced effect (see Howard & Rogers, 1995, for reviews of these effects). However, there are some distinctive differences between the present research and Ogle's. In the present research the surface appeared slanted around the horizontal axis, whereas in Ogle's studies it appeared slanted around the vertical axis. More importantly, the phenomenon in the present research is based on eye movement-induced disparity generated by disconjugate eye movements, whereas Ogle's effects are unrelated to eye movement and are based on magnification of the horizontal or vertical size of the image projected to one eye relative to the other. It should also be noted that eye movement-induced disparity could in principle be calibrated with eye position information to yield veridical slant perception, while the other two effects cannot.

As indicated previously, when we pursue an object moving in 3D space, any dynamic object might produce eye movement-induced disparities. Yet, unlike the results of Experiment 2, which show a systematic perceptual distortion of 3D slant, observers usually do not experience this type of distortion in everyday life. What is the mechanism of this veridical perception of 3D slant in situations involving disconjugate eye movement? As shown in Experiment 3, the amount of perceptual distortion decreased in the presence of a reference, but the distortion did not disappear completely. This implies that the visual system can exploit reference information. However, neither the calibration mechanism nor the use of reference information completely explains the veridical perception of 3D slant during disconjugate eye movement in everyday life.

One might argue that another possible reason for veridical 3D slant perception during disconjugate eye movement in everyday life is that objects are not typically defined by sequential implicit information in the manner created in the present research. This possibility, however, does not answer the question of why we veridically perceive the 3D slant of the path of a moving object during disconjugate eye movement, even though eye movement-induced disparity is generated for that path by the disconjugate eye movement. Another issue is why we usually do not experience perceptual distortion when watching a raster-based display such as a TV set, which also defines objects sequentially. It usually takes about 17 ms for a TV set to draw a screen image, whereas about 353 ms was required for the target object to be defined in the present research. To examine whether this temporal parameter plays a critical role in the perceptual distortion of 3D slant during disconjugate eye movements, the time required to define the target object was manipulated in a control experiment: 169, 112, 79, and 45 ms. Three subjects participated.
The average proportions of trials in which the target surface appeared slanted in the direction consistent with the eye movement-induced disparity were 95%, 83%, 93%, and 96% for the 169, 112, 79, and 45 ms conditions, respectively. These proportions are quite similar to those obtained in the main experiments. This implies that the temporal parameter does not play a critical role in the phenomenon observed in the present research, and suggests that further research will be required to explain the veridical perception of 3D slant during disconjugate eye movement in everyday life.

Acknowledgments

This research was supported by a grant (M103KV010021-03k2201-02140) from the Brain Research Center of the 21st Century Frontier Research Program funded by the Ministry of Science and Technology of the Republic of Korea, and by a research grant from Kwangwoon University in 2004.

References

Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Brenner, E., & Cornelissen, F. W. (2000). Separate simultaneous processing of egocentric and relative positions. Vision Research, 40, 2557–2563.
Brenner, E., & van den Berg, A. V. (1994). Judging object velocity during smooth pursuit eye movements. Experimental Brain Research, 99, 316–324.
Cormack, R. H., & Fox, R. (1984). The computation of disparity and depth in stereograms. Perception and Psychophysics, 38, 375–380.
Ehrenstein, W. H., Mateef, S., & Hohnsbein, J. (1986). Temporal aspects of position constancy during ocular pursuit. Pflügers Archiv, 406, R15, 47.
Erkelens, C. J., Van der Steen, J., Steinman, R. M., & Collewijn, H. (1989). Ocular vergence under natural conditions. I. Continuous changes of target distance along the median plane. Proceedings of the Royal Society B, 236, 417–440.
Howard, I. P., & Rogers, B. J. (1995). Binocular vision and stereopsis. New York: Oxford University Press.
Johnston, E. B. (1991). Systematic distortions of shape from stereopsis. Vision Research, 31, 1351–1360.
Li, H.-C. O., Brenner, E., Cornelissen, F. W., & Kim, E.-S. (2002). Systematic distortion of 2D shape during pursuit eye-movements. Vision Research, 42, 2569–2575.
Mack, A., & Herman, E. (1973). Position constancy during pursuit eye movements: An investigation of the Filehne illusion. Quarterly Journal of Experimental Psychology, 25, 71–84.
Mateeff, S., Yakimoff, N., & Dimitrov, G. (1981). Localization of brief visual stimuli during pursuit eye movements. Acta Psychologica, 48, 133–140.
Mita, T., Hironaka, K., & Koike, I. (1950). The influence of retinal adaptation and location on the Empfindungszeit. The Tohoku Journal of Experimental Medicine, 52, 397–405.
Ogle, K. N. (1938). Induced size effect I. A new phenomenon in binocular space-perception associated with the relative sizes of the images of the two eyes. Archives of Ophthalmology, 20, 604–623.

Ogle, K. N. (1939). Induced size effect II. An experimental study of the phenomenon with restricted fusion stimuli. Archives of Ophthalmology, 21, 604–625.
Ono, H., & Comerford, J. (1977). Stereoscopic depth constancy. In W. Epstein (Ed.), Stability and constancy in visual perception (pp. 91–128). Toronto: Wiley.
Patterson, R., & Martin, W. L. (1992). Human stereopsis. Human Factors, 34, 669–692.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.
Rashbass, C., & Westheimer, G. (1961). Disjunctive eye movements. Journal of Physiology, 159, 339–360.
Ritter, M. (1977). Effect of disparity and viewing distance on perceived depth. Perception and Psychophysics, 22, 400–407.
Rogers, B. J., & Cagenello, R. B. (1989). Disparity curvature and the perception of three-dimensional surfaces. Nature, 339, 135–137.
Stoper, A. E. (1967). Vision during pursuit movement: The role of oculomotor information. Unpublished doctoral dissertation, Brandeis University.
Wallach, H., & Zuckerman, C. (1963). The constancy of stereoscopic depth. American Journal of Psychology, 76, 191–204.