EFFECTS OF D-AMPHETAMINE ON SIGNALED AND UNSIGNALED DELAYS TO REINFORCEMENT. Lee Davis Thomas


EFFECTS OF D-AMPHETAMINE ON SIGNALED AND UNSIGNALED DELAYS TO REINFORCEMENT

Lee Davis Thomas

A Thesis Submitted to the University of North Carolina Wilmington in Partial Fulfillment of the Requirements for the Degree of Master of Arts

Department of Psychology

2009

Approved by Advisory Committee: Christine Hughes, Carol Pilgrim, Raymond Pitts, Jr. (Chair)

Approved by: Dean, Graduate School

TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGEMENTS
DEDICATION
LIST OF TABLES
LIST OF FIGURES
INTRODUCTION
  Choice and Preference
  Self-Control Choices
  Delay Discounting
  Psychomotor Stimulants and Self-Control
  The Present Study
METHOD
  Subjects
  Apparatus
  Procedure
  Data Analysis
RESULTS
  Performance During Control Conditions
  Performance for Saline, Control, and Drug Conditions
DISCUSSION
  Performance During Control Conditions
  Effects of d-amphetamine
REFERENCES

ABSTRACT

Four pigeons responded under a progressive-delay procedure. In a signaled-delay condition, a chained variable-interval (VI) 30-s progressive-time (PT) 4-s schedule was arranged; in an unsignaled-delay condition, a tandem VI 30-s PT 4-s schedule was arranged. Two pigeons experienced a signaled-unsignaled-signaled delay sequence, whereas 2 pigeons experienced an unsignaled-signaled-unsignaled delay sequence. Effects of saline and d-amphetamine were determined under each condition. At intermediate doses (1.0 and 1.78 mg/kg), delay functions were shallower, area under the curve (AUC) was increased, and, when possible, break points were increased compared to saline; these effects were not systematically related to signaling conditions. These effects on control by delay often were accompanied by decreased response rates at 0 s. These results suggest that stimulus conditions associated with the delay may not play a crucial role in effects of d-amphetamine and other stimulants on behavior controlled by reinforcement delay.

ACKNOWLEDGEMENTS

I give my thanks to Carol Rothstein, who was my Latin teacher in high school; she made me the student that I am today by never accepting anything but the best. She will be sorely missed in the teaching profession. I also would like to thank my mentors, Dr. Raymond Pitts, Jr. and Dr. Christine Hughes, whose continual pursuit of scholastic excellence has been invaluable. They have tremendously aided my understanding of topics in the field of Behavior Analysis. Without their support this process would have been virtually impossible. I would like to thank all of the undergraduates who spent many hours in the lab helping with the research presented in this thesis. A special thanks goes to my parents and my sister. They continued to push me along even though, at times, the end appeared never to be in sight. Finally, I would like to thank my thesis committee. Thank you all for the many hours spent revising, and for the guidance during this process.

DEDICATION

I would like to dedicate this thesis to my mother, Rose Thomas, whose continual support through this long and arduous journey has been fathomless. She has always inspired me, and will continue to inspire me, to conquer all obstacles that I have faced and will face.

LIST OF TABLES

Table 1. Break points for all pigeons, signaling conditions, and doses (1.0 and 1.78 mg/kg d-amphetamine).

LIST OF FIGURES

Figure 1. Mean response rates plotted as a function of the nominal delay.

Figure 2. Mean responses per minute in the VI as a function of the nominal delay (s) after administration of saline (filled symbols) and 1.0 mg/kg d-amphetamine (unfilled symbols) during the signaled and unsignaled conditions for each pigeon.

Figure 3. Mean responses per minute in the VI as a function of the nominal delay (s) after administration of saline (filled symbols) and 1.78 mg/kg d-amphetamine (unfilled symbols) during the signaled and unsignaled conditions for each pigeon.

Figure 4. Mean area under the curve (AUC, left column) and mean response rate in the first interval of the session, when the delay was 0 s (right column), calculated from sessions before which 1.0 mg/kg (white bars) and 1.78 mg/kg (striped bars) d-amphetamine were administered in the signaled and unsignaled conditions, for each pigeon.

Figure 5. Group mean area under the curve (AUC, upper panel) and mean response rate in the first interval of the session, when the delay was 0 s (lower panel), calculated from sessions before which 1.0 mg/kg (white bars) and 1.78 mg/kg (striped bars) d-amphetamine were administered in the signaled and unsignaled conditions.

Figure 6. Log (drug rate/control rate) plotted as a function of log control rate for sessions before which 1.0 mg/kg d-amphetamine was administered, in the signaled and unsignaled conditions, for each pigeon.

Figure 7. Log (drug rate/control rate) plotted as a function of log control rate for sessions before which 1.78 mg/kg d-amphetamine was administered, in the signaled and unsignaled conditions, for each pigeon.

INTRODUCTION

Choice and Preference

Choice is ubiquitous; most behavior occurs in a context in which multiple alternatives are available. For example, students enrolled at any university frequently face the choice between studying for a test and going out with friends. Someone trying to quit smoking can smoke a cigarette or refrain from smoking. An individual may be faced with the choice of buying a fancy sports car or buying a fuel-efficient car. These examples illustrate that, conceptualized this way, much of our daily behavior involves choice. According to a behavior-analytic view, choices are determined primarily by the consequences associated with the alternatives; that is, choice involves operant behavior. Given that choice is an important component of behavior, it is understandable that it has been a major focus of experimental study over the last 30-40 years (see Davison & McCarthy, 1988; de Villiers, 1977; Mazur, 2006).

The concurrent (conc) schedule of reinforcement is a popular procedure for studying choice in the laboratory. In a concurrent schedule, two or more operant responses are available simultaneously, and each alternative is associated with its own contingency of reinforcement. The responses often are independent, in that responding on one alternative does not affect the consequences associated with other alternatives. In a seminal study, Herrnstein (1961) exposed pigeons to concurrent variable-interval (VI) schedules in which the relative reinforcement rates were varied across alternatives. Herrnstein reported an extremely important effect: the proportion of responses allocated to a given alternative was approximately equal to the proportion of reinforcers obtained via that alternative. This effect has been replicated extensively, and it holds across a variety of species, response topographies, and reinforcer types (see Davison & McCarthy, 1988). This effect can be

described by the mathematical formulation now known as the matching law, written as follows:

B_L / (B_L + B_R) = R_L / (R_L + R_R). (1)

In this equation, B refers to responses and R refers to reinforcers, and the L and R subscripts indicate the left- and right-hand alternatives, respectively. This formulation is relatively straightforward. For example, it states that, if 80% of the total reinforcers were obtained via the left alternative and 20% via the right alternative, then approximately 80% and 20% of the responses would be allocated to the left and right alternatives, respectively. The matching equation provides a quantitative description of behavioral allocation under multi-operant conditions.

Self-Control Choices

Sometimes individuals are faced with choices that involve conflicting consequences. Fried foods taste extremely good, but eating them could have long-term adverse consequences. Spending money now may lead to the acquisition of goods; however, not saving money could have unfavorable effects down the road. Taking certain drugs may result in reinforcing effects, but repeated use can lead to harmful future consequences. It is important to note that, in these examples, there likely are several factors that affect which option will ultimately be chosen. Nevertheless, they appear to have one key similarity: these situations involve a choice between an alternative that delivers something immediately and an alternative that provides something better in the long run. That is, perhaps these sorts of choices can be described as "good now" or "better later."

Rachlin (1974) and Ainslie (1974) suggested that one key variable affecting such choices is the temporal aspect associated with each alternative. That is, the fact that the better option is delayed decreases the likelihood of choosing that option. Choices for the "good now" option are

described as being impulsive. They are labeled impulsive because typically they deliver something immediately, but at a cost in the long term. Choices of the alternative that delivers something better later could be characterized as self-control choices. These choices deliver consequences at a later point in time; the long-term effects of selecting these alternatives, however, could be extremely positive.

Rachlin and Green (1972) studied effects of the availability of commitment responses on self-control choices. They provided pigeons with the opportunity to emit a response that would commit them to the larger, more delayed reinforcer (i.e., this response eliminated the option providing the smaller, more immediate reinforcer). The study included a two-choice procedure in which selection of one alternative delivered the larger reinforcer after a fixed delay; responses on this key also eliminated the opportunity to select the smaller, more immediate reinforcer. In addition, selection of the alternative that provided the larger reinforcer could be made only at the beginning of each trial. Selection of the other key produced a delay, after which a choice situation was presented: the availability of a larger, more delayed reinforcer and a smaller, more immediate reinforcer. They found that subjects consistently emitted the commitment response if the opportunity was provided well in advance of the choice point between the larger and smaller reinforcers. That is, if the overall delay to reinforcement for both alternatives was relatively short, then subjects were more likely to choose the smaller, more immediate reinforcer; however, if the overall delay to reinforcement was relatively long for both alternatives, then subjects were more likely to choose the larger, more delayed reinforcer.
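Before moving on, the matching relation in Equation 1 can be illustrated with a short numerical sketch. The values below are purely illustrative and are not data from any study cited here:

```python
def matching_proportion(r_left: float, r_right: float) -> float:
    """Predicted proportion of left-key responses under the matching law:
    B_L / (B_L + B_R) = R_L / (R_L + R_R)."""
    return r_left / (r_left + r_right)

# If 80 of 100 reinforcers are obtained via the left key, the matching law
# predicts that about 80% of responses will be allocated to the left key.
print(matching_proportion(80, 20))  # 0.8
```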

Delay Discounting

As mentioned above, it appears that an important factor producing impulsive choices is the longer delay associated with the larger reinforcer. Delay discounting describes how the effectiveness of a reinforcer is controlled by the delay between the choice and the delivery of the reinforcer. A great deal of attention has been devoted to elucidating the form of the function relating the effectiveness of a reinforcer (its "value") to its delay. Mazur (1987, 1988), and others (e.g., Richards, Mitchell, de Wit, & Seiden, 1997), have presented substantial evidence suggesting that a hyperbolic function provides the best description of this relation. Mazur's (1987) hyperbolic equation is as follows:

V = A / (1 + kD), (2)

where V refers to the value, or effectiveness, of the reinforcer, A refers to the amount of the reinforcer, D refers to the delay associated with the reinforcer, and k is a fitted parameter that characterizes the rate at which the function decreases to asymptote; k typically is referred to as the discounting parameter. The higher the k value, the faster the function reaches asymptote (i.e., the higher the value of k, the more the delay discounts reinforcer value). Delay discounting can be affected by several factors, such as past experience with delay to reinforcement (Mazur & Logue, 1978) and administration of psychoactive substances (Charrier & Thiebot, 1996; de Wit, Enggasser, & Richards, 2002; Evenden & Ryan, 1996; Logue, 1992; Pietras, Cherek, Lane, Tcheremissine, & Steinberg, 2003; Pitts & Febbo, 2004; Pitts & McKinney, 2005; Richards et al., 1997; Richards, Sabol, & de Wit, 1999; Wade, de Wit, & Richards, 2000). Mazur and Logue (1978) demonstrated that reinforcement history can alter effects of reinforcement delay. The researchers used two groups of subjects (an experimental and a control group); both groups chose between a smaller and a larger reinforcer. The

experimental group initially had a 6-s delay for both the smaller and the larger reinforcers, but the delay for the smaller reinforcer was faded (i.e., gradually reduced) to 0 s over the course of several sessions. The control group had a 0-s delay for the smaller reinforcer and a 5.5-s delay for the larger reinforcer (i.e., the delay for the smaller reinforcer was not faded). Following the fading procedure, subjects in the experimental condition chose the larger, more delayed reinforcer at substantially higher rates than subjects in the control condition when the delay to the smaller reinforcer was 0 s. This finding illustrates that delay discounting and, hence, preference for the larger, more delayed reinforcer can be altered by reinforcement history.

Recently, investigators have been interested in studying effects of drugs on self-control choices (Charrier & Thiebot, 1996; de Wit et al., 2002; Evenden & Ryan, 1996; Logue, 1992; Pietras et al., 2003; Pitts & Febbo, 2004; Pitts & McKinney, 2005; Richards et al., 1997; Richards et al., 1999; Wade et al., 2000). In particular, psychomotor stimulants have received considerable attention, likely due in part to the clinical prescription of these drugs to treat individuals with attention-deficit disorder and attention-deficit/hyperactivity disorder (Greenhill, 2001; Murray & Kollins, 2000). A diagnostic characteristic of these disorders is impulsive behavior patterns. Therefore, it is exceedingly important that the behavioral patterns modified by these drugs, and the behavioral mechanisms underlying those modifications, be studied in the laboratory.

Psychomotor Stimulants and Self-Control

Early research on effects of psychomotor stimulants on self-control choices indicated that these drugs increased preference for the smaller, more immediate reinforcer (i.e., increased impulsive choices).
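The hyperbolic function in Equation 2 also shows why escalating the delay within a session, as in the procedures described below, can reverse preference. The following sketch is purely illustrative; the k value and reinforcer amounts are arbitrary choices, not values from any study cited here:

```python
def discounted_value(amount: float, delay: float, k: float) -> float:
    """Mazur's (1987) hyperbolic equation: V = A / (1 + k * D)."""
    return amount / (1 + k * delay)

K = 0.2  # arbitrary discounting parameter, chosen only for illustration

# One pellet available immediately is worth its full amount:
v_small = discounted_value(1, 0, K)         # 1.0

# Five pellets still dominate at a 10-s delay...
v_large_short = discounted_value(5, 10, K)  # 5 / 3, about 1.67

# ...but fall below the immediate pellet at a 40-s delay;
# this crossover is the within-session preference reversal:
v_large_long = discounted_value(5, 40, K)   # 5 / 9, about 0.56
```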
For example, Evenden and Ryan (1996) used a discrete-trials choice procedure in which the delay to the larger reinforcer escalated across blocks of trials. The

subjects were presented with 8 or 12 trials per block; each block had a different delay value associated with the larger reinforcer, but the delay to the smaller reinforcer was fixed at 0 s for the entire session. The delay to the larger reinforcer increased from 0 s to 60 s across blocks. Choosing the larger alternative yielded five pellets of food, and choosing the smaller alternative yielded one pellet of food. Within each session, choice of the larger alternative decreased as a function of the increasing delays: as the delay to reinforcement increased, subjects were more likely to choose the smaller, more immediate alternative. During control conditions, when the delays were relatively short, subjects preferred the larger, more delayed reinforcer; as the delay increased across the session, however, the subjects' preference switched towards the smaller, more immediate reinforcer. Moderate doses of d-amphetamine increased choice of the smaller, more immediate reinforcer; that is, the function relating reinforcer effectiveness to delay was shifted to the left. Similarly, Charrier and Thiebot (1996) also reported that d-amphetamine increased choices of a smaller, more immediate reinforcer.

In contrast to the data reported by Evenden and Ryan (1996) and Charrier and Thiebot (1996), later research (Pietras et al., 2003; Pitts & Febbo, 2004; Pitts & McKinney, 2005; Richards et al., 1997; Richards et al., 1999; Wade et al., 2000) demonstrated that psychomotor stimulants increased choices of a larger, more delayed reinforcer. For example, Richards et al. (1999) used an adjusting-amount procedure to examine effects of methamphetamine on self-control choices. Their adjusting-amount procedure consisted of two alternatives. One alternative (the standard) provided a larger reinforcer presented after a fixed delay.
The other (adjusting) alternative provided an immediate reinforcer, the size of which adjusted as a function of the subjects' choices. Each choice of the standard alternative increased the amount of reinforcement for the adjusting alternative by 10% on the next trial, whereas each choice of the

adjusting alternative decreased its amount by 10% on the next trial. Responding in this procedure yielded an indifference point for each subject, defined as the amount of reinforcement on the adjusting alternative when the subject chose each option approximately 50% of the time. It should be noted that the indifference points are the result of the interaction between reinforcement magnitude and delay; they could be conceptualized as the point at which the reinforcer values of the alternatives are equal. In this procedure, lower indifference points are said to show impulsive behavior and higher indifference points are said to show self-control. That is, repeated selection of the smaller, more immediate reinforcer drives the indifference point down, and repeated selection of the larger, more delayed alternative drives the indifference point up. Richards et al. reported that moderate doses of methamphetamine increased the indifference points compared to baseline. That is, at these doses, subjects selected the larger, more delayed alternative to a much greater degree, which increased the indifference point. These data suggest that the administration of methamphetamine increased the effectiveness of the larger, more delayed reinforcer (see also Wade et al., 2000).

Pitts and McKinney (2005) used a procedure similar to Evenden and Ryan's (1996) within-session, escalating-delay procedure to test effects of methylphenidate on self-control choice. Pitts and McKinney's procedure included five blocks of trials in which the delay to the larger reinforcer escalated across blocks. Each block consisted of two initial forced-choice trials and five free-choice trials. Subjects selected between two alternatives, with the delay to the smaller reinforcer held constant at 0 s and the delay to the larger reinforcer increasing from 0 s to 50 s.
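The adjusting-amount logic described above for Richards et al. (1999) can be sketched as a minimal simulation. This is a hypothetical illustration, not the authors' implementation: the simulated subject simply chooses whichever alternative has the higher hyperbolic value (Equation 2), and all parameter values are arbitrary:

```python
def discounted_value(amount: float, delay: float, k: float) -> float:
    """V = A / (1 + k * D), Equation 2."""
    return amount / (1 + k * delay)

def adjusting_amount(standard_amount: float = 10.0,
                     standard_delay: float = 8.0,
                     k: float = 0.25,
                     adjusting: float = 5.0,
                     trials: int = 200) -> float:
    """Run the 10%-up / 10%-down adjustment and return the final adjusting
    amount, which approximates the indifference point."""
    v_standard = discounted_value(standard_amount, standard_delay, k)
    for _ in range(trials):
        if v_standard > adjusting:
            adjusting *= 1.10   # standard chosen: adjusting amount rises
        else:
            adjusting *= 0.90   # adjusting alternative chosen: amount falls
    return adjusting

# The final amount hovers near the discounted value of the standard,
# 10 / (1 + 0.25 * 8) = 10/3, i.e., roughly 3.3.
print(adjusting_amount())
```

Raising k (steeper discounting) lowers the simulated indifference point, which is exactly the sense in which lower indifference points index more impulsive choice.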
Selection of the larger, more delayed reinforcer initiated a signaled delay; the delay was signaled by the blinking of the light above the lever associated with the larger, more delayed alternative. During control conditions, subjects selected the larger-reinforcer alternative almost

exclusively when the delays were equal, and as the delays increased, subjects were more likely to select the smaller, more immediate reinforcer. Pitts and McKinney reported that, in most cases, intermediate doses of methylphenidate increased selection of the larger, more delayed alternative compared to baseline responding. That is, they found that the function relating reinforcer effectiveness to delay was shifted to the right.

Pitts and Febbo (2004) exposed 4 pigeons to a concurrent-chains procedure with escalating delays across blocks. During the initial links, two alternatives (red or green keys) were presented simultaneously, and a single random-interval (RI) 60-s schedule controlled contact with the terminal links. The houselight and side keys (either red or green) were illuminated during the initial link. Once the initial-link schedule was satisfied, the houselight and side keys were turned off, and the color of the active key was re-illuminated on the houselight. One terminal link provided 1.5-s access to reinforcement (grain) delayed by a fixed time (FT) of 2 s. The other terminal link provided 4.5-s access to reinforcement delayed by an increasing FT that ranged from 2 to 40 s. There were five 10-min blocks. For each subject, intermediate doses of methamphetamine reduced sensitivity to the delay. That is, at these doses, subjects chose the larger, more delayed reinforcer even when delays were relatively large. A similar effect has been replicated with humans (de Wit et al., 2002; Pietras et al., 2003).

Why are there discrepancies between the effects of psychomotor stimulants on self-control choices across the earlier and later studies? It is unclear at this point why earlier studies indicated that psychomotor stimulants decreased self-control choices (Charrier & Thiebot, 1996; Evenden & Ryan, 1996; Logue,
1992), but later studies indicated that psychomotor stimulants increased self-control choices (Pietras et al., 2003; Pitts & Febbo, 2004; Pitts & McKinney, 2005; Richards et al., 1999; Wade et al., 2000). Several procedural differences

across these studies have been noted (see Pitts & McKinney, 2005; Richards et al., 1999). One conspicuous difference has to do with the signaling conditions associated with the delay. Most of the early studies did not include an explicit and unique stimulus associated with the delay to the larger reinforcer; that is, the same stimulus conditions were present during both the delay and the inter-trial interval. In contrast, virtually all of the more recent studies, in which stimulants increased self-control choices, provided explicit and unique stimulus conditions during the delay to the larger reinforcer. This suggests that the differential results between earlier and later studies could be due to the presence or absence of a unique stimulus associated with the delay to the larger reinforcer.

Noting the differential signaling conditions across the earlier and later studies, Cardinal, Robbins, and Everitt (2000) examined effects of signaled delays of reinforcement. Their experimental procedure was similar to the one used by Evenden and Ryan (1996): the smaller reinforcer (1 food pellet) was presented immediately, and the delay to the larger reinforcer (5 food pellets) increased within each session across blocks of trials. Each choice retracted the levers and extinguished the houselight; however, there were two different signaling conditions associated with the delay to the larger reinforcer. In the cue condition, the stimulus light above the lever associated with the larger reinforcer was illuminated during the entire duration of the delay. In the no-cue condition, the stimulus light above the lever was not illuminated during the delay to the larger reinforcer. That is, in the no-cue condition, the stimulus conditions (blackout) during the delay were the same as those during the inter-trial interval, whereas in the cue condition a delay-correlated stimulus was presented following a response.
They reported that moderate doses of d-amphetamine increased choices of the larger, more delayed reinforcer in the cue condition. That is, they reported that the delay functions in the cue condition shifted slightly to the right. The opposite effect was found for d-amphetamine in the no-cue condition; the drug decreased choices of the larger, more delayed reinforcer. That is, they reported that the delay functions in the no-cue condition shifted to the left, which was indicative of greater delay discounting. These findings suggested that effects of d-amphetamine were modulated by the particular signaling conditions associated with the delay to the larger reinforcer. One interpretation of the results provided by Cardinal et al. (2000) is that psychomotor stimulants increased the efficacy of the conditioned reinforcer associated with the delay. In theory, conditioned, or secondary, reinforcers acquire the capacity to strengthen behavior by being paired with primary reinforcers (Hill, 1970; Robbins, 1975). Cardinal et al. postulated that signals present during the reinforcement delay served as conditioned reinforcers, and that d-amphetamine increased choices of the larger, more delayed reinforcer in their cue condition by enhancing the conditioned-reinforcing effectiveness of the signal. This interpretation is supported by previous studies investigating effects of stimulant drugs on conditioned reinforcement (Branch & Walker, 1996; Files, Branch, & Clody, 1989; Hill, 1970; Robbins, 1975). For example, Files et al. (1989) assessed effects of methylphenidate on responding when a brief stimulus complex was presented with the presentation of a primary reinforcer and during standard operant extinction (i.e., when the brief stimulus complex was presented without the primary reinforcer). They used a second-order random-ratio (RR) 2 (VI 30-s) schedule, in which completion of each VI 30-s component resulted in presentation of a brief stimulus, and completion of an average of two VI 30-s components resulted in food presentation. This schedule was in effect during the first portion of each session, and food extinction was in effect during the second portion.
Two extinction conditions (during which no food was presented) were used during the second portion of each session; these two conditions were presented irregularly across experimental sessions. One condition consisted of standard food extinction, in which responses had no scheduled consequences. In the other food-extinction condition, the brief stimulus complex that had previously been paired with food delivery was presented according to a VI 30-s schedule. After stable performance was established in each condition, effects of saline and several doses of methylphenidate were determined. Methylphenidate increased response rates during the second portion of the session for both extinction conditions, but rates were increased to a greater extent in the condition with the brief stimulus than in the standard extinction condition. These data suggest that methylphenidate increased the conditioned-reinforcing effectiveness of the paired brief stimuli. Although an account of Cardinal et al.'s (2000) data in terms of conditioned reinforcement is a viable one, it should be noted that there are some potential limitations of the Cardinal et al. study. At this point, theirs is the only experiment demonstrating differential effects of the signaling conditions on self-control choices. Moreover, the effects of d-amphetamine in both the cue and no-cue conditions were extremely small. At the 20-s delay in the cue condition, only one dose (0.3 mg/kg) produced an increase in selection of the larger, more delayed reinforcer when compared to baseline responding. That is, the shifts of the functions, in either direction, were not particularly impressive considering that Cardinal et al.'s conclusions were based entirely on group data. Finally, in their cue condition, the presence or absence of an explicit cue was confounded with the presence or absence of a delay. That is, there was no delay associated with the smaller reinforcer and, thus, there was no opportunity to provide a unique signal for this reinforcer.
Thus, further study seems needed on the modulation of the effects of d-amphetamine by the signaling conditions associated with reinforcement delay.

The Present Study

Reilly and Lattal (2004) developed a potentially useful procedure to assess effects of drugs on behavior maintained by delayed reinforcement and the modulating effect of delay-associated stimuli. They used a progressive-delay procedure to generate within-session delay gradients. In this procedure, responding is reinforced on a single schedule, and the delay to reinforcement increases by a fixed duration following each reinforcer presentation. The progressive nature of the procedure allowed the researchers to assess how a large range of delays affects behavior. Delay gradients were obtained by plotting response rates within each session as a function of the progressively increasing delay; the gradients typically showed a decreasing, hyperbolic-like function. In their first experiment, Reilly and Lattal (2004) used a progressive-delay procedure that included a VI or a fixed-interval (FI) schedule during the initial link and a delay that progressively increased by a fixed duration in the terminal link. That is, responding was reinforced according to a tandem VI (or FI) 30-s progressive-time (PT) 2-s schedule of food presentation. Key pecks that satisfied the interval schedule initiated an unsignaled, non-resetting delay that began at 0 s and increased by 2 s following each reinforcer presentation. When a subject failed to respond for 7 min, the session was terminated, and the delay value in effect was considered the break point. On alternate sessions, yoked-interval schedules were arranged based upon the inter-food intervals from the immediately preceding progressive-delay session. Response rates under the delay condition decreased across the session to a greater extent than under the yoked-interval schedule. Thus, Reilly and Lattal were able to isolate the effects of reinforcement delay, independent of the effects of reinforcement rate.
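The progressive-delay logic just described can be sketched in a few lines. This is an illustrative simulation, not the original study's code: the hyperbolic-decay parameters and the break-point threshold below are assumed values, chosen only to show the shape of a within-session delay gradient.

```python
# Sketch of a progressive-delay session (parameters assumed): the delay
# starts at 0 s and grows by a fixed step after each reinforcer, and the
# response rate is modeled as a hyperbolic decay, B = B0 / (1 + k * D) --
# the form of the function typically fit to obtained delay gradients.

def delay_sequence(step_s=2.0, n_reinforcers=30):
    """Delay (s) in effect before each successive reinforcer."""
    return [step_s * i for i in range(n_reinforcers)]

def hyperbolic_rate(delay_s, b0=60.0, k=0.25):
    """Hypothetical response rate (resp/min) at a given delay."""
    return b0 / (1.0 + k * delay_s)

delays = delay_sequence()
rates = [hyperbolic_rate(d) for d in delays]

# Stand-in break-point rule for the sketch: the first delay at which the
# predicted rate falls below a low threshold (the study's actual criterion
# was a sustained pause in responding, not a rate threshold).
break_point = next(d for d, r in zip(delays, rates) if r < 5.0)

print(rates[0])     # 60.0 resp/min at the 0-s delay
print(break_point)  # 46.0
```

Plotting `rates` against `delays` reproduces the decreasing, hyperbolic-like gradient described above; a smaller k yields a shallower gradient and, under this stand-in rule, a later break point.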

In their second experiment, Reilly and Lattal (2004) used the progressive-delay procedure to compare effects of signaled and unsignaled delays to reinforcement. Under the signaled-delay condition, a chained VI 30-s PT 2-s schedule was arranged, and the delays were signaled by a blackout. When the subject pecked the center key after an average of 30 s, the houselight and keylight were extinguished and a blackout occurred for the entire delay interval. Under the unsignaled-delay condition, a tandem VI 30-s PT 2-s schedule was arranged. When the subject pecked the center key after an average of 30 s, the delay began, but the houselight and keylight remained illuminated until the delay was over. Under both conditions, the delay to reinforcement increased by 2 s following each reinforcer presentation. Signaled and unsignaled delays to reinforcement were examined using an ABA design; the order of conditions (signaled-unsignaled-signaled or unsignaled-signaled-unsignaled) was counterbalanced across subjects. In addition, delay gradients were assessed by plotting response rates at each delay. Delay gradients were analyzed with a modified version of Mazur's (1987) hyperbolic function:

B = B_I / (1 + kD), (3)

where B denotes the rate of responding at a given delay (D), B_I indicates the rate of responding at the 0-s delay, and k is a free parameter indexing the steepness of the gradient. Reilly and Lattal found that break points and response rates were higher for the signaled condition. In addition, they reported lower k values for the signaled condition, which is indicative of greater behavioral maintenance. These, and other, findings suggest that the presence of a signal during the delay increases response rates maintained by delayed reinforcement. A procedure similar to that of Reilly and Lattal (2004) could be an extremely effective method for examining the effects of drugs on behavior maintained by signaled and unsignaled delays to reinforcement. The manner in which the delay progressively increases within each session allows for determining efficiently how drugs affect the entire delay function. Furthermore, self-control procedures involve differences in both the delay and the magnitude of reinforcement; the present procedure may provide a method to assess drug effects on delayed reinforcement without the confounding influence of magnitude. The purpose of the present study was to compare effects of the stimulant d-amphetamine on behavior maintained by signaled and unsignaled delays to reinforcement. Procedures similar to those described by Reilly and Lattal (2004) were used to generate within-session delay gradients under signaled (chained) and unsignaled (tandem) conditions. Several predictions could be made based upon the previous literature. For instance, if the data are consistent with Reilly and Lattal (2004), then we could expect greater behavioral maintenance (i.e., shallower delay gradients and higher break points) during the signaled condition, and the opposite effect (i.e., steeper delay gradients and lower break points) during the unsignaled condition. In addition, if Cardinal et al.'s (2000) results are due to a unique signal presented during the delay, then we would expect d-amphetamine to produce an increase in the area under the delay gradient (AUC) and a rightward shift of the function during the signaled condition; during the unsignaled condition, we would predict a decrease in AUC and a leftward shift of the function. If the effect is not due to signaling, then for both conditions we would expect d-amphetamine to produce comparable changes in AUC and similar shifts of the function.

Method

Subjects

Four male White Carneau pigeons were used (75, 358, 1871, 1985). Two of the pigeons (75, 358) were experimentally naïve, whereas the other two pigeons (1871, 1985) had

previously experienced choice procedures that included drug regimens. The pigeons were kept at approximately 80% of their free-feeding weights via post-session feeding as needed. All of the subjects had free access to water and health grit. The subjects were housed individually in a vivarium under a 12:12 light/dark cycle; the vivarium's lights were extinguished at 7 p.m. and illuminated at 7 a.m.

Apparatus

Four operant-conditioning chambers (BRS-LVE model SEC-002), each with interior dimensions of 35.0 cm deep, 30.5 cm wide, and 36 cm high, were used. Two of the chambers were equipped with three keys on the right wall adjacent to the entrance of the chamber. The keys were 8.5 cm apart (center to center) and could be illuminated red, yellow, or green. The keys were located 26 cm from the floor of the chamber, and each side key was located 9 cm from its adjacent wall; only the center key was used. The shielded houselight used a 1.2-W bulb and was located 6.5 cm above the center key. A green stimulus light was located 5 cm to the left of the houselight, and a red stimulus light was located 5 cm to its right; these lights were not used during the experiment. The other two chambers were identical to those just described, except that they contained only a single (centered) key and did not contain the green and red stimulus lights. Each peck on the illuminated center key that exceeded a force of 0.25 N was counted as a response. A food hopper containing mixed grain was accessed via a 5 cm by 6 cm opening located 11 cm directly underneath the center key. Reinforcement consisted of 4-s presentations of the hopper, during which the opening was illuminated and the keylight and houselight were extinguished. Ambient white noise was delivered by four speakers placed in the corners of the room. Each chamber contained a ventilation fan. Experimental events were controlled and data were collected by a computer running MED-PC IV and connected to MED Associates interfacing; the computer and interface equipment were located in an adjoining room.

Procedure

After Pigeons 358 and 75 were habituated to the operant chambers and magazine trained, pecking the yellow center key was shaped by differentially reinforcing successive approximations. The other two pigeons (1985, 1871) were not experimentally naïve, so they were placed directly on an FR 1 schedule. For all pigeons, once pecking occurred reliably on the FR 1, the response requirement was raised to FR 5. After 3 to 5 sessions at FR 5, a VI 5-s schedule was arranged. The value of the VI was raised gradually across sessions to 30 s. Once the training phase was completed, the pigeons were assigned to one of two experimental conditions such that each condition (signaled or unsignaled delays) included a naïve and an experienced pigeon. The reinforcement contingency in the signaled condition was a chained VI 30-s PT 4-s schedule. The cycle began with the illumination of the houselight and the yellow keylight. The first keypeck after an average of 30 s turned off the houselight and keylight and initiated a progressively increasing, non-resetting delay. After the delay elapsed, reinforcement was presented. The first reinforcer of each session was presented immediately (0-s delay), after which the delay to reinforcement increased in increments of 4 s. That is, the delay to the second reinforcer was 4 s, the delay to the third reinforcer was 8 s, and so on. Keypecks during the delay were counted, but had no programmed consequences. The reinforcement contingency in the unsignaled condition was a tandem VI 30-s PT 4-s schedule. The first keypeck after an average of 30 s initiated a progressively increasing, non-resetting unsignaled delay. The houselight and keylight remained illuminated until the delay was

over, after which the houselight and keylight were extinguished and reinforcement was presented. The delay value increased within each session in the same fashion as described for the signaled condition. As in the signaled condition, keypecks during the delay had no programmed consequences. In both conditions, experimental sessions were terminated when a pigeon failed to peck the center key for 5 min (300 s) or when the session exceeded a time limit of 120 min. The delay value in effect at the time the session ended was considered the break point. Once stable performance was reached under the progressive-delay procedure, drug testing began. For any block of 10 sessions, behavior was considered stable when the average break points for the first 5 sessions and the last 5 sessions of the block differed by less than 10% of the mean of all 10 sessions. During the drug regimen, injections were usually administered on Tuesdays and Fridays. Data from days that directly preceded injections served as the no-injection control data. Injections were made intramuscularly in the pigeon's breast area; injection sites alternated between the left and right sides of the sternum. The pre-treatment time was 15 min, during which the pigeons were placed into their homecages; this interval allowed the drug to be absorbed. Physiological saline was used as the injection vehicle. Dose-effect curves were determined for each of the subjects. The drug regimen included administration of 0.3, 1.0, 1.78, 3.0, and 5.6 mg/kg d-amphetamine sulfate. This range included a dose that produced little or no effect on overall response rates and a dose that completely or nearly eliminated responding. The curves consisted of at least two injections of each dose for each individual pigeon. The order of exposure to the doses was mixed, with the constraint that no dose was given a second time before all doses were given once. For each pigeon, once all

doses and saline had been given twice, one or two doses ("effective doses") were determined through visual inspection of the data. A dose was considered an effective dose if it affected the break point and/or the shape of the delay gradient without dramatically suppressing overall response rates. For all pigeons, the effective doses were 1.0 and 1.78 mg/kg. For Pigeons 1871 and 75, 1.0 and 1.78 mg/kg were administered 3 additional times to establish the reliability of their effects; however, for Pigeon 358, 1.0 mg/kg was not tested any additional times, and for Pigeon 1985, 1.0 mg/kg was tested only 1 additional time. The reason for not testing 1.0 mg/kg further for Pigeons 358 and 1985 was that this dose had little or no effect on the shape of their delay gradients. After the above drug regimen was completed, the condition for each pigeon was reversed: the pigeons in the unsignaled condition (358 and 1985) were placed in the signaled condition, and vice versa. Once stable performance was achieved following the reversal, the same drug regimen described for the initial conditions was tested again, and effective doses were re-determined. For all pigeons during the second phase, 1.0 and 1.78 mg/kg were determined to be the effective doses, and these doses were administered 1-3 additional times. After the drug regimen was completed during the second phase, the pigeons were returned to the original condition (reversal). After stability was achieved during the return to the baseline condition, saline and the effective doses that were determined during the previous phases (1.0 and 1.78 mg/kg) were administered 5 times each. Thus, under this ABA experimental design, two pigeons (1871 and 75) experienced a signaled-unsignaled-signaled sequence, and the other two pigeons (358 and 1985) experienced an unsignaled-signaled-unsignaled sequence.

Data Analysis

Response rates were expressed as responses per minute.
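The two quantitative analyses described in this Data Analysis section — the normalized trapezoidal AUC and the rate-dependency transform — can be sketched as follows. The function names and data values are invented for illustration; only the arithmetic follows Equations 6 and 7.

```python
import math

# Sketch of the two analyses (illustrative data, hypothetical function names):
# (1) area under the delay gradient via the trapezoidal method, after
#     normalizing delays by the maximum delay and rates by the 0-s rate;
# (2) the rate-dependency transform, log10(drug rate / saline rate).

def normalized_auc(delays, rates, max_delay=196.0):
    """Trapezoidal AUC on normalized axes (cf. Equation 6)."""
    xs = [d / max_delay for d in delays]
    ys = [r / rates[0] for r in rates]
    # Each successive pair of points forms one trapezoid; sum the areas.
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for x1, x2, y1, y2 in zip(xs, xs[1:], ys, ys[1:]))

def rate_dependency(drug_rates, saline_rates):
    """log10 drug/saline ratio at each delay (cf. Equation 7)."""
    return [math.log10(d / s) for d, s in zip(drug_rates, saline_rates)]

# Made-up mean response rates (resp/min) at five delays:
delays = [0, 49, 98, 147, 196]
saline = [60.0, 30.0, 15.0, 8.0, 4.0]
drug   = [50.0, 35.0, 22.0, 14.0, 9.0]

print(round(normalized_auc(delays, saline), 3))
print([round(v, 2) for v in rate_dependency(drug, saline)])
```

Because both axes are normalized, the AUC falls between 0 and 1, which makes it comparable across pigeons and conditions; a shallower gradient yields a larger AUC.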
Response rates were calculated by dividing the total number of responses within each interval (i.e., at each delay) by the total time spent in each VI 30-s interval. Thus, for each session, a delay-of-reinforcement gradient was constructed by plotting response rate during the VI as a function of delay. In addition, mean response rates at the 0-s delay were obtained by averaging the response rate at 0 s across each determination of saline and d-amphetamine. Break points were obtained for each experimental session. The break point was the delay value in effect when the session was terminated, either via the time limit or when no responses occurred for 300 s (5 min). To characterize the effects of d-amphetamine on the function relating response rate to reinforcement delay, the area under the curve (AUC) was analyzed (Myerson, Green, & Warusawitharana, 2004). The area under the curve was calculated using:

AUC = Σ (x2 - x1)[(y1 + y2)/2], (6)

where the x1 values are any given delay and the x2 values represent the next delay in the sequence. The delays (x-values) were normalized by representing each delay as a proportion of the maximum delay (i.e., 196 s); that is, each delay was divided by the maximum delay. The y1 values were the mean response rates at a given delay, and the y2 values were the mean response rates at the next delay in the sequence. Response rates (y-values) were normalized by representing the mean response rate at each delay as a proportion of the mean response rate at the 0-s delay; that is, the mean response rate at each delay was divided by the mean response rate at the 0-s delay. Trapezoids were thus formed by each pair of successive delay values, the area of each trapezoid was computed, and the areas of all trapezoids were summed. The AUC values obtained via Equation 6 were used to create dose-effect curves. A drug-induced increase in the AUC would indicate an attenuated effect of the delay (i.e., less delay discounting),
whereas a decrease in the AUC would indicate an enhanced effect of the delay (i.e., greater delay discounting). Rate-dependency analyses also were performed on the mean response rates after administration of 1.0 and 1.78 mg/kg d-amphetamine with respect to responding after administration of saline. Mean response rates were transformed to log10, and rate dependency was calculated as follows:

log10(mean response rate at a given dose / mean response rate after saline). (7)

Once these values were obtained via Equation 7, they were plotted as a function of the log10-transformed mean saline response rates, and a linear regression line was fitted to the data.

Results

Performance During Control Conditions

Figure 1 shows mean responses per minute during control sessions for each pigeon as a function of the nominal (programmed) delay under the signaled and unsignaled phases of the experiment. Under both signaled and unsignaled conditions, response rates were a decreasing function of delay. The functions for the signaled condition, however, were much shallower and reached asymptote less quickly than those for the unsignaled condition. That is, behavior was better maintained by delayed reinforcement under the signaled condition than under the unsignaled condition. Greater behavioral maintenance was also indicated by higher break points when subjects were in the signaled phase. In 5 out of 6 instances in the signaled phase, the maximum break point of 196 s was reached. In contrast, in the unsignaled condition, none of the subjects reached the maximum break point (i.e., all subjects failed to reach the 196-s delay; see Table 1). In general, control performance was recovered in the return to the baseline conditions for all subjects. That is, for the most part, when the baseline conditions

[Figure 1 appears here: four rows of panels, one per pigeon (1871, 75, 358, 1985), with one column per phase (e.g., 1871: Signaled, Unsignaled, Signaled; 358: Unsignaled, Signaled, Unsignaled), each plotting mean responses per minute against nominal delay (0-200 s).]

Figure 1. Mean response rates for control days are plotted as a function of the nominal delay.

Table 1. Mean (range) break points (in seconds) reached by each pigeon in each condition from all sessions before which saline, 1.0 mg/kg, or 1.78 mg/kg was administered. Means are from five sessions, unless indicated otherwise.

Pigeon  Dose          Phase 1             Phase 2             Phase 3

                      Signaled            Unsignaled          Signaled
1871    Saline        196 (a)             33.6 (20-48)        188 (164-196)
        1.0 mg/kg     196                 155.2 (60-196)      196
        1.78 mg/kg    196                 120.0 (72-176)      196
75      Saline        196                 112.8 (24-188)      196
        1.0 mg/kg     196                 156.0 (72-196)      196
        1.78 mg/kg    196                 100.8 (32-196)      196

                      Unsignaled          Signaled            Unsignaled
358     Saline        20.8 (12-28)        75.2 (60-92)        28.2 (12-40)
        1.0 mg/kg     42.0 (40-44) (b)    196                 155.2 (44-188)
        1.78 mg/kg    84.0 (16-152)       196                 101.6 (36-184)
1985    Saline        41.6 (16-80)        171.2 (88-196)      35.2 (20-48)
        1.0 mg/kg     45.3 (16-84) (b)    196                 42.0 (16-72)
        1.78 mg/kg    39.2 (12-124)       196                 30.4 (20-48)

(a) No range after a mean indicates that the break point was the same for all sessions.
(b) In this condition, Pigeons 358 and 1985 received 2 and 3 administrations of 1.0 mg/kg, respectively.

were reinstated, the delay functions and break points were similar to those previously obtained in the signaled condition (Pigeons 1871 and 75) and in the unsignaled condition (Pigeons 358 and 1985). In some cases, there appeared to be slight carry-over effects. For example, break points in the signaled condition for the 2 pigeons that started in the unsignaled condition were lower (M = 132.4 s) than those for the 2 pigeons that started in the signaled condition (M = 196 s).

Performance During Saline, Control, and Drug Conditions

Figure 2 shows mean responses per minute after administration of saline (closed circles) and 1.0 mg/kg d-amphetamine (open circles) as a function of the nominal delay for each condition. In addition, the curves were extended to the mean break points for each condition in both Figures 2 and 3. Although several doses of d-amphetamine were tested, most of the analyses presented will focus on effects of the 1.0 and 1.78 mg/kg doses. The lowest dose (0.3 mg/kg) did not produce a reliable effect on performance, and the higher doses (3.0 and 5.6 mg/kg) typically suppressed overall response rates substantially. For Pigeons 1871 and 75, which experienced a signaled-unsignaled-signaled sequence, the delay functions during the initial signaled condition shifted upwards after injections of 1.0 mg/kg when compared to saline. At delays over 50 s, response rates during the initial signaled condition increased compared to saline; at shorter delays, response rates either were relatively unchanged (1871) or slightly decreased (75). The delay functions for Pigeons 1871 and 75 during the unsignaled condition shifted upwards (i.e., higher response rates) and to the right (i.e., higher break points) following administrations of 1.0 mg/kg. Break points were 4.62 and 1.38 times greater after administrations of 1.0 mg/kg than after saline for Pigeons 1871 and 75, respectively. Note that

[Figure 2 appears here: four rows of panels, one per pigeon (1871, 75, 358, 1985), across the three phases, plotting mean responses per minute against nominal delay (0-200 s) after saline and 1.0 mg/kg d-amphetamine.]

Figure 2. Mean responses per minute in the VI as a function of the nominal delay (s) after administration of saline (filled symbols) and 1.0 mg/kg d-amphetamine (unfilled symbols) during the signaled and unsignaled conditions for each pigeon. Each function is the mean of data from five sessions and extends to the mean break point. Note that the y-axes differ across pigeons.

for Pigeon 1871, during the unsignaled condition, 1.0 mg/kg produced a very large shift of the function to the right; indeed, this dose increased the break point from 33.6 s to 155.2 s. For Pigeon 358 during the unsignaled condition, although 1.0 mg/kg slightly increased response rates at several delays, response rates remained relatively low at delays greater than 12 s. For Pigeons 1871 and 75, 1.0 mg/kg shifted the function upwards, and break points remained at the maximum during the initial signaled condition. During the return to the signaled condition, however, response rates for these two pigeons were largely unaffected by this dose. Specifically, response rates at delays greater than 50 s were elevated following administration of 1.0 mg/kg during the first, but not during the second, exposure to the signaled condition. Thus, the effects of the drug that were produced during the initial signaled condition were not completely recovered during the return to the signaled phase. For Pigeons 358 and 1985 (which experienced an unsignaled-signaled-unsignaled sequence), the delay functions shifted slightly to the right following administration of 1.0 mg/kg during the initial unsignaled condition. In addition, this dose decreased response rates at the 0-s delay for both pigeons. The break points for Pigeons 358 and 1985 during the initial unsignaled condition were 2.02 and 1.09 times greater, respectively, after administrations of 1.0 mg/kg. For Pigeons 358 and 1985 during the signaled condition, the delay functions shifted upwards and, where possible (Pigeon 358), to the right. For Pigeon 358, the break points were 2.61 times higher following administration of 1.0 mg/kg than after saline. Interestingly, 1.0 mg/kg produced rather large increases in both break points and response rates during the signaled condition for Pigeon 358.
The delay function for Pigeon 1985 during the return to the unsignaled condition was similar to the function during the initial unsignaled condition (i.e., low mean break point,