CONTINGENCY VALUES OF VARYING STRENGTH AND COMPLEXITY


CONTINGENCY VALUES OF VARYING STRENGTH AND COMPLEXITY

By

ANDREW LAWRENCE SAMAHA

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2008 Andrew Lawrence Samaha

To my dog, Enzo.

ACKNOWLEDGMENTS

I would like to thank the faculty, staff, and students of the University of Florida Psychology Department for contributing toward my education in psychology and behavior analysis and challenging me to become a better student and scientist. Above all others, I would like to express my sincerest gratitude to Dr. Timothy Vollmer for his support, guidance, and patience as my faculty advisor. I would also like to thank the members of my dissertation committee, for whose attention and feedback I am extremely grateful and honored: Dr. Timothy Hackenberg, Dr. Brian Iwata, Dr. David Smith, and Dr. Colette St. Mary. I would also like to acknowledge Stephen Haworth and Dr. Frans van Haaren for helping to establish the lab in which this research was conducted, Dr. Jonathan Pinkston and Dr. Jin Yoon for their innumerable contributions during my early development as a student, and Dr. Gregory Hanley and Dr. Rachael Thompson for encouraging me to pursue a career in Behavior Analysis.

TABLE OF CONTENTS

ACKNOWLEDGMENTS...4
LIST OF TABLES...7
LIST OF FIGURES...8
ABSTRACT...10

CHAPTER

1 INTRODUCTION...12
    Brief History of Reinforcement...12
    Considering the Occurrence and Nonoccurrence of Behavior...18
    An Analogy in Respondent Conditioning...20
    Previous Research on Complex Contingencies of (Operant) Reinforcement...23
    Translational Research...26
    Goals of the Current Research

2 EXPERIMENT 1...29
    Purpose...29
    Method...29
        Subjects...29
        Apparatus...30
        Procedures...30
        Conditions...31
    Results and Discussion

3 EXPERIMENT 2...37
    Purpose...37
    Method...37
        Subjects and Apparatus...37
        Procedures...37
        Conditions...37
    Results and Discussion

4 EXPERIMENT 3...42
    Purpose...42
    Method...42
        Subjects and Apparatus...42
        Procedures...42
        Conditions...43
    Results and Discussion

5 EXPERIMENT 4...49
    Purpose...49
    Methods...49
        Subjects and Apparatus...49
        Procedures...50
        Conditions...50
    Results and Discussion...51
        Subject 1
        Subject 2
        Subject 3
        Subject 4
        Subject 5

6 GENERAL DISCUSSION...65

LIST OF REFERENCES...72

BIOGRAPHICAL SKETCH

LIST OF TABLES

1-1 Contingencies for each condition in Hammond (1980)
    Contingencies for each condition of Experiment

LIST OF FIGURES

2-1 Experiment 1: All Sessions
    Experiment 2: All Sessions
    Experiment 3: All Sessions
    Experiment 4: Sequence of Conditions
    Experiment 4: Subject 1
    Experiment 4: Subject 2
    Experiment 4: Subject 3
    Experiment 4: Subject 4
    Experiment 4: Subject 5

LIST OF ABBREVIATIONS

DRO  Differential reinforcement of other behavior. This is a common treatment for problem behavior whereby reinforcers are arranged to follow some period of time in which problem behavior does not occur.

FI  Fixed-interval schedule. This is a schedule of reinforcement whereby a reinforcer is delivered following the first instance of behavior after a fixed amount of time has elapsed. For example, FI-30 would mean that the first response after 30 s would be reinforced.

FR  Fixed-ratio schedule. This is a schedule of reinforcement whereby a reinforcer is delivered following the nth instance of behavior. For example, FR-30 would mean that the 30th response would produce a reinforcer.

NCR  Noncontingent reinforcement. This is a common treatment for problem behavior whereby reinforcers are arranged independent of behavior, usually according to the passage of time (e.g., every 30 s).

VI  Variable-interval schedule. This is a schedule of reinforcement whereby a reinforcer is delivered following the first response after some variable interval of time has elapsed. That amount of time centers around an average determined by an experimenter-set distribution.

VR  Variable-ratio schedule. This is a schedule of reinforcement whereby a reinforcer is delivered following, on average, the nth instance of behavior. The exact response requirement changes from trial to trial according to some experimenter-set distribution.
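The four basic schedules above amount to small decision rules, which the following Python sketch illustrates. It is a minimal illustration only: the function names are invented here, and the particular distributions chosen for the variable schedules (normal for VR, exponential for VI) are assumptions rather than part of any procedure described in this document.

```python
import random

def fr_reinforced(response_count, ratio=30):
    # FR-30: the 30th, 60th, ... response produces a reinforcer
    return response_count > 0 and response_count % ratio == 0

def fi_reinforced(responded, elapsed_s, interval_s=30.0):
    # FI-30: the first response after 30 s is reinforced
    return responded and elapsed_s >= interval_s

def next_vr_requirement(mean_ratio, rng):
    # VR: draw the next response requirement from an experimenter-set
    # distribution centered on the mean (a truncated normal, as one choice)
    return max(1, round(rng.normalvariate(mean_ratio, mean_ratio / 4)))

def next_vi_interval(mean_s, rng):
    # VI: draw the next interval; an exponential distribution gives a
    # constant-probability ("random-interval") variant
    return rng.expovariate(1.0 / mean_s)
```

Each reinforcer delivery under VR or VI would be followed by a fresh draw from `next_vr_requirement` or `next_vi_interval`, which is what makes the requirement unpredictable from trial to trial while its average stays constant.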

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

CONTINGENCY VALUES OF VARYING STRENGTH AND COMPLEXITY

Chair: Timothy R. Vollmer
Major: Psychology

By Andrew Lawrence Samaha
August 2008

Precise control over the reinforcers that follow behavior and the reinforcers that are presented in the absence of behavior may help to provide a clearer understanding of the role of response-dependent and response-independent reinforcers. Four experiments examined lever pressing in rats as a function of a contingency for the delivery of sucrose pellets. Contingencies were arranged by manipulating the probability of a reinforcer given a response and the probability of a reinforcer given no response. Experiment 1 examined acquisition and maintenance of lever pressing during positive contingencies (where the probability of a reinforcer given a response was higher than the probability of a reinforcer given no response) and complex positive contingencies (a positive contingency where the probability of a reinforcer given no response was greater than zero). Results indicated that lever pressing was not acquired under the complex positive contingency, was acquired under the positive contingency, and then persisted during a return to the complex positive contingency for all three subjects. In Experiment 2, subjects were exposed to the same sequence of conditions as subjects in Experiment 1, but after first experiencing negative (.00/.10) and complex negative contingencies (.05/.10). In general, results of Experiment 2 were similar to the results of Experiment 1, except that responding did not persist during the second exposure to .10/.05 for two subjects and, for one subject, acquisition during the positive contingency was more difficult to obtain than for any of the subjects in Experiment 1. In Experiment 3, a two-component multiple schedule was arranged where one component was associated with early exposure to a negative contingency while the other component was associated with only positive contingencies. Results indicated that, overall, the multiple schedule method did not detect differences in subsequent responding. In Experiment 4, the effects of a gradual shift from a positive to a negative contingency were examined. Results indicated that lever pressing decreased accordingly as contingencies became more negative. In addition, maintenance under negative contingencies was more likely when smaller contingency changes were made from one condition to another. All of the results are discussed in terms of understanding naturally occurring schedules of reinforcement in the acquisition and maintenance of appropriate and problematic human behavior.

CHAPTER 1
INTRODUCTION

The following experiments examined acquisition and maintenance of lever pressing in rats. The purpose of the research was to investigate contingencies of reinforcement in which reinforcers are presented both following behavior and following periods of time in which behavior did not occur. Although reinforcement contingencies are commonly arranged experimentally such that a response must occur to produce a reinforcer, based on prior research in applied behavior analysis, it is likely that in nature events follow both the occurrence and the nonoccurrence of behavior. A better understanding of such contingencies has important implications for understanding the acquisition of both problem and appropriate behavior in the development of human behavioral repertoires.

Brief History of Reinforcement

In Schedules of Reinforcement (1957), Ferster and Skinner categorized hundreds of variations on relations between behavior and environment, known as reinforcement schedules. Investigating such relations involved arranging contingencies between behavior and features of the apparatus that could exert control over behavior. The contingencies took the form of if-then relations, with some specification of behavior (or of time and behavior) affecting some feature of the environment. For example, if a response key in a pigeon chamber was pecked 25 times, a solenoid was then activated to raise a hopper filled with grain. Or, the first response after 15 s resulted in hopper access. These procedures have come to be known as fixed-ratio (FR) and fixed-interval (FI) schedules of reinforcement, respectively. Fixed-ratio schedules specify that reinforcers are to be delivered following some fixed number of responses. Examples of FR schedules include piece-work reimbursement systems in which workers are paid for completing a set amount of work. Fixed-interval schedules specify that reinforcers are to be delivered following the first response after some fixed period of time. For example, an FI-10 min schedule specifies that the first response after 10 min will produce a reinforcer. In addition to FR and FI schedules, Ferster and Skinner also examined the effects of varying the response requirement around some average following every reinforcer delivery. These were referred to as variable-ratio (VR) and variable-interval (VI) schedules. Variable-ratio schedules specify that reinforcers are delivered following, on average, the nth response, but the exact number of responses necessary to produce each individual reinforcer is unpredictable. The exact distribution of response requirements is controlled by the experimenter. Variable-interval schedules specify that reinforcers are delivered following the first response after the passage of some variable length of time. Similar to VR schedules, the length of time is centered on some average value but varies unpredictably from reinforcer to reinforcer.

Reynolds (1968) made an important distinction that further extends the notion of reinforcement schedules. In his text, A Primer of Operant Conditioning, Reynolds wrote about the difference between dependencies and contingencies. According to Reynolds, dependencies describe relations in which some consequence occurs if and only if behavior occurs. All of the schedules described in Schedules of Reinforcement arranged dependencies. For example, the mechanical delivery of grain in an operant chamber may be dependent on a key press. Turning on the light in one's office is dependent upon hitting the light switch. And, according to Reynolds, contingencies describe the obtained relations found in the environment, including those that occur as a result of dependencies and those that occur for other reasons. For example, a reinforcer may be programmed to occur every 60 s whether or not behavior happens. Suppose that, by accident, a response occurs at second 59. This accidental contingency may produce a reinforcement effect, and the relation may be expressed as a reinforcement contingency despite the fact that there is no dependency between behavior and the delivery of reinforcers.

In our day-to-day lives, behavior can enter into relations that likely consist of a blend of dependencies, accidental pairings, and events that follow periods with no behavior. In order to understand these kinds of relations, a method or framework must be established to integrate them. Consider the behavior one person (Albert) might engage in to get another person's (Jane's) attention (for the purpose of the example, assume that attention is a reinforcer). For example, Albert might say "Hello" or attempt to make eye contact with Jane. Through observation and experimentation, it might be possible to show that making eye contact is reinforced on about every other occasion. This approximates something like a random-ratio schedule where each response is associated with a .5 probability of being followed by a reinforcer. But what if Jane initiates a conversation with Albert before Albert had a chance to do anything? How should this extra attention be conceptualized? There are a few possibilities. One is that a reinforcement effect will occur merely as a result of the contiguity, or brief delay, between behavior and the subsequent attention. The other is that a reinforcement effect for eye contact would result if eye contact was correlated with an increase in the probability of receiving attention over the background probability of attention.

Skinner and others certainly recognized that there was value in examining the effects of reinforcers that were delivered for free, or independent of behavior. For example, Zeiler (1968) examined the effects of what he termed response-independent schedules of reinforcement. Zeiler exposed pigeons to fixed-time (FT) and variable-time (VT) schedules, where reinforcers were delivered according to either a fixed duration of time that did not change from reinforcer to reinforcer or a quasi-random duration that changed from reinforcer to reinforcer but whose average remained constant across sessions. Responding in the context of FT and VT schedules was evaluated after pigeons first experienced FI and VI schedules. The effect of both schedules was to produce a decrease in the rate of responding; however, the FT schedule produced accelerated patterns of responding just prior to reinforcer delivery. This increase was attributed to adventitious reinforcement or, "the strengthening of behavior because it happens to occur contiguously with or in close temporal proximity to reinforcement" (p. 412). That is, the pattern of responding established during FI was maintained during the subsequent FT condition despite the lack of a dependency between responding and reinforcer delivery. The absence of systematic patterns observed during exposure to VT was interpreted to have been caused by the strengthening of behavior other than key pecking as a result of unpredictable intervals between reinforcers.

Additional experiments followed Zeiler's (1968) examination, including Lattal and Maxey (1971). Lattal and Maxey evaluated responding during VT schedules using a multiple schedule. Multiple schedules involve the alternation between two conditions (or components) within the same session. Each component is associated with a unique stimulus or set of stimuli. In Lattal and Maxey's first experiment, both components were initially set to VI schedules (Mult VI VI). In later conditions, both components changed to VT schedules (but at different points in the experiment). Responding during the VT component persisted longer when the other component was VI. In addition, responding was higher in the component that was most recently a VI schedule, suggesting that responding during the VT schedule was partly a function of the response strength in the previous condition. In the second experiment, responding was examined following a transition from Mult VI VI to Mult VI Ext (extinction) and then Mult Ext Ext, with occasional 1-session probe evaluations of Mult VT VT. Although extinction typically produces complete suppression of behavior, responding was maintained at approximately 10 responses per minute, indicating that responding during the VT condition would likely have produced responses contiguous with reinforcer presentation. Hence, at least some of the response persistence during VT might be attributed to adventitious reinforcement.

Other researchers noted that the pattern produced by the previous response-dependent schedule could influence the likelihood of adventitious reinforcement in subsequent response-independent conditions. For example, Rescorla and Skucy (1969) suggested that relatively high rates could be obtained in FT following FI because exposure to FI schedules typically produces rates of behavior that increase prior to reinforcer delivery. Therefore, response-independent reinforcers delivered at the same frequency would likely follow similar local increases in responding. Similarly, Lattal (1972) concluded that, relative to FT, VT does not produce rates of responding as high as its response-dependent counterpart (VI) because the VT presentation of reinforcers is more likely to occur during some behavior other than lever pressing.

In an attempt to understand the relative contributions of dependency and contingency to responding, several investigators examined schedules that combined features of both. Edwards, Peek, and Wolfe (1970) compared rates of responding in FR, FT, conjoint FR FT (where reinforcers were delivered following a fixed number of responses and fixed periods of time), and extinction (where reinforcers were not delivered during the session). Edwards et al. found that adding response-independent schedules on top of existing response-dependent schedules produced relatively small decreases in behavior compared to either extinction or FT. In addition, as the rate of response-independent reinforcement was increased (that is, as the intervals of the FT schedule were decreased) while the response requirement for response-dependent reinforcers remained fixed during the conjoint FR FT condition, response rate decreased.

Lattal (1974) examined schedules in which a percentage of the reinforcers delivered according to a variable schedule was response-dependent (while the remainder were response-independent). This was accomplished by making either every 3rd, every 10th, or all reinforcer deliveries dependent on a response. When response-dependent reinforcers were available, response-independent deliveries were suspended until after the first response occurred. In addition, the proportions of response-dependent reinforcers were examined in both ascending and descending series. Results suggested that response rates decreased as the percentage of response-dependent reinforcers decreased.

Lattal and Bryan (1976, Experiment 1) examined the effects of delivering response-independent reinforcers according to a VT schedule on top of existing FI performance using a conjoint FI VT schedule. The experimenters manipulated the rate of reinforcer presentation on the VT schedule while keeping the FI schedule constant. In general, the results suggested that VT reinforcer delivery disrupted both the pattern and rate of responding established by the FI schedule. That is, the positively accelerated rates observed prior to reinforcement on the FI schedule became more linear when VT reinforcers were introduced. In addition, the overall rate of responding decreased during the session. However, the authors noted that in some cases the addition of response-independent reinforcement had either no clear effect or increased rates of responding. The authors suggested that the uncontrolled temporal contiguity of responses and reinforcers delivered according to the VT schedule may have contributed to the lack of consistent effects.

Additionally, more recent applied studies have shown that responding may persist when response-independent reinforcers are delivered on top of an existing response-dependent schedule. For example, Marcus and Vollmer (1996) evaluated whether appropriate communication behavior would persist following training if the reinforcers maintaining appropriate communication (and problem behavior) were delivered according to a fixed-time schedule. Once appropriate behavior was established and problem behavior remained low, the rate of fixed-time presentation was decreased across sessions. The results showed that appropriate communication persisted despite the fixed-time delivery of reinforcers. This effect was replicated by Goh, Iwata, and DeLeon (2000).

Considering the Occurrence and Nonoccurrence of Behavior

One feature common to schedules in which reinforcers are delivered following either responses or the passage of time is that reinforcers delivered according to the latter might still follow responses closely in time. This becomes a problem because reinforcers can have different effects depending on whether or not they follow behavior. In addition, these different effects can occur independent of whether or not the behavior actually triggered the delivery (i.e., there does not need to be a dependency between behavior and a subsequent event for the behavior to be affected by it). So, a conceptualization of reinforcement that includes those reinforcers that happen after behavior and those reinforcers that happen after some period of time (regularly or irregularly) is inadequate, because some proportion of those latter reinforcers will inevitably follow behavior. Furthermore, that proportion (of reinforcers delivered according to a time-based schedule that accidentally follow behavior) is not controlled by the experimenter but, instead, by the organism's behavior. Therefore, to study contingencies similar to those found in the natural environment, there must be control over the delivery of reinforcers following the occurrence of behavior and the delivery of reinforcers following the nonoccurrence of behavior. Fortunately, nomenclature and conceptualizations that support such a framework already exist.

Catania (1988), in his text Learning, described the fundamental processes and procedures known as reinforcement. He noted that a prototypical study on reinforcement might compare the effects of exposing the animal to two conditions: a baseline, where the animal receives no food, and a reinforcement condition, where the animal receives food after each instance of behavior. The conditions might alternate back and forth a few times so that the experimenter is convinced it is the reinforcement causing the increase in behavior and not some other, uncontrolled variable. Following such an experiment, the data might reveal that responding remained low during the initial baseline condition, increased during the reinforcement condition, then decreased back down to previous levels during the subsequent baseline condition, and so on. To some, it may seem like a clear demonstration that reinforcement was responsible for the increase in behavior, but Catania noted two changes occurring during the transition back to baseline: 1) the relationship between behavior and food and 2) the mere presence of food in the session. In light of that limitation, an alternative explanation for the obtained increase in behavior might be that the food had a general tendency to increase the activity of the animal, which produced not only an increase in the measured behavior but in other, unmeasured behavior as well. To address this, Catania described an alternative control condition where, instead of not delivering reinforcers at all, food is delivered for both the occurrence and nonoccurrence of behavior. He expressed these terms probabilistically such that, for the reinforcement condition, the probability of a reinforcer given a response was 1.0 and the probability of a reinforcer given no response was 0; in the extinction condition, both probability terms would be equal.
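The two probability terms can be turned into a simple classifier. The sketch below is a paraphrase of the framework rather than code from any of the studies discussed; the function name is invented here, and the "complex" label follows the terminology used in this dissertation (both terms greater than zero).

```python
def classify_contingency(p_sr_given_r, p_sr_given_no_r):
    """Classify a contingency from two probability terms:
    p_sr_given_r    = probability of a reinforcer given a response,
    p_sr_given_no_r = probability of a reinforcer given no response."""
    if p_sr_given_r > p_sr_given_no_r:
        sign = "positive"
    elif p_sr_given_r < p_sr_given_no_r:
        sign = "negative"
    else:
        sign = "neutral"
    # "Complex": reinforcers follow both the occurrence and the
    # nonoccurrence of behavior (both terms exceed zero)
    if sign != "neutral" and p_sr_given_r > 0 and p_sr_given_no_r > 0:
        return "complex " + sign
    return sign
```

Under this rule, the prototypical reinforcement condition (1.0 given a response, 0 given none) is "positive," a .10/.05 arrangement is "complex positive," and any pair of equal probabilities, whether 0/0 or .10/.10, is "neutral."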
In addition, Catania's (1988) conceptualization provides a heuristic for anticipating the effects of complex contingencies (for lack of a better term, "complex" is used here to describe contingencies where both the probability of a reinforcer given a response and the probability of a reinforcer given no response are greater than zero). Referring back to the above example using eye contact, the probability of receiving attention given eye contact was .5, but sometimes attention was delivered in the absence of eye contact. Catania's conceptualization allows us to evaluate the contingency if we also express the attention that is delivered in the absence of eye contact as a probability. If the probability of attention given eye contact is greater than the probability of attention given no eye contact, Catania's framework would predict that eye contact would be strengthened as a result of reinforcement. Conversely, if the probability of attention given eye contact is less than or equal to the probability of attention given no eye contact, Catania's framework would predict that eye contact would not be strengthened. The conceptualization might be helpful for improving our understanding of contingencies similar to those found outside the laboratory.

An Analogy in Respondent Conditioning

Perhaps not coincidentally, a similar conceptualization of contingencies has been useful for understanding respondent conditioning. Rescorla (1967) wrote about confounds present in common control conditions during tests of respondent conditioning. Respondent conditioning (sometimes called Pavlovian conditioning) describes conditioning in which a neutral stimulus comes to produce effects similar to those of an unconditioned stimulus (US) as a result of operations often simply (and inadequately) described as pairing. Effects of respondent conditioning are demonstrated by comparing a subject's responses to the CS (conditioned stimulus) following a test condition (in which the CS and US are paired) and a control condition. Popular control procedures pre-dating Rescorla's publication involved some variation of presenting both the CS and the US, but in a manner that was directly contrary to the test condition. That is, USs were often presented before CSs such that presentation of the CS was never predictive of an upcoming presentation of the US. Rescorla made two arguments: 1) the only difference between the effects of the test and control conditions should be the contingency necessary to produce conditioning, and 2) many of the commonly used control conditions included two changes: the removal of one contingency and the addition of another. For Rescorla, the constraints placed on the relation between the CS and the US in typical control conditions constituted a procedural difference beyond the mere absence of the contingency responsible for conditioning. Therefore, the ideal control condition was one in which presentation of the CS and the US was unconstrained.

The test condition and three of the control conditions (explicitly unpaired control, backward conditioning, and discriminative conditioning) described by Rescorla can be expressed probabilistically (for the sake of completeness, the remaining control conditions were presentation of the CS alone, presentation of a novel CS, and presentation of the US alone). In the test condition, in which CSs are always presented and removed prior to the US, the probability of a US given a CS is 1.0 and the probability of a CS given a US is 0. The explicitly unpaired, backward conditioning, and discriminative conditioning procedures effectively arranged the same contingency: USs always precede CSs, and CSs never precede USs. Hence, in these control conditions, the probability of a US given a CS is 0 and the probability of a CS given a US is 1.0. And in the ideal control condition, in which presentations of the CS and the US are unconstrained (random), the probability of a US given a CS would be equal to the probability of a CS given a US.

Lane (1960) investigated the potential effectiveness of control conditions for operant control of vocalizations in chickens. The control conditions included no reinforcement (extinction), fixed-time reinforcer delivery, fixed-ratio food tray presentation (a stimulus that was correlated with reinforcer delivery) without accompanying reinforcers, and DRO (where reinforcers were delivered given the absence of responding). Lane found decreases in each of the control conditions relative to both fixed-ratio and fixed-interval test conditions. Similar results were obtained by Thompson, Iwata, Hanley, Dozier, and Samaha (2003), who examined fixed-time, extinction, and DRO conditions. Both studies reported relatively higher rates of responding during the fixed-time condition, which was attributed to accidental contiguity between responses and reinforcers.

Thompson and Iwata (2005) noted the analogy between Rescorla's (1967) description of ideal control procedures for respondent conditioning and those used for operant conditioning. Their analysis led them to conclude that, although imperfect for reasons described below, noncontingent reinforcement (NCR) met Rescorla's definition of "a truly random control" (Thompson & Iwata, 2005, p. 261). However, the fixed-time delivery of reinforcers does not ensure that the obtained relationship between behavior and reinforcers is random. Reinforcers, by definition, have the effect of strengthening whatever preceded them. The strengthening effect does not depend on the nature of the relationship between behavior and reinforcement (i.e., whether the behavior produced the reinforcer or whether the reinforcer accidentally followed behavior). As a result of being strengthened, the rate and/or pattern of behavior may change such that the obtained contingency is no longer random. In the case of fixed-time delivery of reinforcers, responses that occur in the interval just before food delivery may be more likely to occur in the future. Such a case was reported by Vollmer, Ringdahl, Roane, and Marcus (1997), in which a child's aggression persisted during NCR. An examination of the within-session pattern of responding revealed that, as the individual gained more experience with the treatment, instances of aggression became more likely just prior to reinforcer delivery. In other words, the probability of a reinforcer given aggression was likely higher than the probability of a reinforcer given the nonoccurrence of aggression. Such a condition is more descriptive of a fixed- or variable-ratio schedule as opposed to a truly random control.

It is possible that such a problem only occurs if one uses fixed-time schedules and that NCR implemented using variable-time schedules would retain its status as the truly random control. However, VT schedules also do not ensure that the obtained relation between behavior and reinforcers remains random. Reinforcers that are delivered closely following responses may increase the overall rate of responding such that, compared to the initial rate of responding that produced a negative contingency, higher rates of responding may produce positive contingencies. In addition, many of the studies on reinforcement contingencies already discussed emphasize that response-independent reinforcers exert their influence on responding in systematic (i.e., nonrandom) ways (cf. Zeiler, 1968; Rescorla & Skucy, 1969; Edwards, Peek, & Wolfe, 1970; Lattal & Maxey, 1971; Lattal, 1972; Lattal, 1974; Lattal & Bryan, 1976).

Previous Research on Complex Contingencies of (Operant) Reinforcement

To date, two studies have experimentally manipulated contingencies of reinforcement viewed as the probability of a reinforcer given a response and the probability of a reinforcer given no response. In the first, a two-experiment study, Hammond (1980) investigated the effects of positive and negative contingencies in rats using water as the reinforcer and lever pressing as the response. Contingencies were arranged by dividing the session into a series of unsignaled 1-s cycles. At the end of each cycle, .03 ml of water was delivered (or not) according to two experimenter-programmed probabilities: the probability of a reinforcer given that at least one response occurred during the previous cycle and the probability of a reinforcer given that no responses occurred during the previous cycle.
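The cycle-based procedure can be sketched as a small simulation. One caveat: the response process below (a fixed per-cycle probability of responding) is a simplifying assumption made for illustration; real subjects generate their own response distributions, which is precisely why obtained contingencies can differ from programmed ones.

```python
import random

def run_session(p_sr_given_r, p_sr_given_no_r, p_respond,
                n_cycles=3600, seed=0):
    """Simulate a session of unsignaled 1-s cycles in the style of
    Hammond (1980): at the end of each cycle, a reinforcer is delivered
    with probability p_sr_given_r if at least one response occurred
    during the cycle, and with probability p_sr_given_no_r otherwise.
    Returns (total responses, total reinforcers)."""
    rng = random.Random(seed)
    responses = reinforcers = 0
    for _ in range(n_cycles):
        responded = rng.random() < p_respond  # assumed response process
        responses += responded
        p = p_sr_given_r if responded else p_sr_given_no_r
        if rng.random() < p:
            reinforcers += 1
    return responses, reinforcers
```

With this arrangement the experimenter controls both probability terms directly, so the programmed contingency no longer depends on when the subject happens to respond, which is the property that time-based (FT/VT) schedules lack.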
In the first experiment, rats were given a history of a positive contingency before they were exposed to a zero contingency. Hammond used the term "positive contingency" to refer to conditions where the probability of a reinforcer given a response was higher than the probability of a reinforcer given no response. The term "zero contingency" was used to refer to conditions where the probabilities of a reinforcer given a response and given no response were equal. The specific sequence of conditions and the terms used to describe them are listed in Table 1-1. Responding decreased rapidly after the introduction of the zero contingency as compared to the moderately high positive contingency.

Table 1-1. Contingencies for each condition in Hammond (1980).

Condition   P(Sr|R)   P(Sr|~R)   Term
a                                Very High Positive
b           .2        0          High Positive
c           .05       0          Moderately High Positive
d                                Zero
e           .05       0          Moderately High Positive
f                                Zero

Note. The conditions in Experiment 1 of Hammond (1980) and the terms used to describe them. The abbreviation P(Sr|R) stands for the probability of a reinforcer given a response, and P(Sr|~R) stands for the probability of a reinforcer given no response.

In the second experiment, 47 rats were given a history of a positive contingency and then were exposed to either one of two positive contingencies (.12/.00 or .12/.08), one of two zero contingencies (.12/.12 or .05/.05), or a negative contingency (.00/.05). The results showed that responding decreased as the contingencies were progressively weakened. In the discussion, the correspondingly decreased response rates were interpreted as evidence against accounts of reinforcement that are based on contiguity. Contiguity, when used with respect to operant behavior, refers to the amount of time that elapses between responses and reinforcers. According to the author, contiguity was the same in all conditions of the experiment. Therefore, the relationship between the probability of a reinforcer given a response and the probability of a reinforcer given no response must play an important role in determining reinforcement effects.
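Because the obtained relation between behavior and reinforcers can drift away from the programmed one, it is also useful to estimate the two probability terms directly from a session record. A minimal sketch, in which the per-cycle record format (pairs of booleans) is an assumption made for illustration:

```python
def obtained_contingency(cycles):
    """Estimate obtained P(Sr|R) and P(Sr|~R) from per-cycle records.
    `cycles` is a sequence of (responded, reinforced) boolean pairs,
    one pair per cycle. Returns (p_sr_given_r, p_sr_given_no_r);
    a term is None when no cycle of that kind occurred."""
    response_cycles = [sr for resp, sr in cycles if resp]
    no_response_cycles = [sr for resp, sr in cycles if not resp]
    p_r = (sum(response_cycles) / len(response_cycles)
           if response_cycles else None)
    p_no_r = (sum(no_response_cycles) / len(no_response_cycles)
              if no_response_cycles else None)
    return p_r, p_no_r
```

For example, a record in which half of the response cycles ended in food while no food ever followed a no-response cycle yields an obtained positive contingency (.5 versus 0), regardless of what was programmed.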

Borrero, Vollmer, and Wright (2002) translated the findings and procedures used by Hammond (1980) to the treatment of aggression. A functional analysis (Iwata et al., 1982/1994) was conducted in order to identify the reinforcers maintaining aggression for two participants. For both participants, aggression was maintained by social reinforcement, meaning that it occurred because of the reactions of other individuals in the environment. Specifically, one participant's aggression was maintained by escape from activities and the other's was maintained by access to preferred food items. Following the functional analyses, the participants were exposed to positive and then neutral (zero) contingencies. Cycle durations were adjusted to be approximately equal to the average duration of the responses made by the participants: 1 s for one participant and 5 s for the other. The effect of the contingencies was the same for both participants: positive contingencies produced maintenance, and neutral contingencies produced decreases in aggression. One implication of Borrero, Vollmer, and Wright is that the procedures used to arrange complex contingencies of reinforcement may represent a useful method for simulating reinforcement contingencies like those maintaining problem (or appropriate) behavior in the natural environment. Furthermore, the effects on socially relevant behavior seem to be in the direction anticipated by Catania (1988). The neutral contingencies described by Hammond (1980) and Borrero, Vollmer, and Wright (2002) might better fit an operant analog of Rescorla's (1967) truly random control. Neutral contingencies specify that the probability of a reinforcer given a response is equal to the probability of a reinforcer given no response.
If those probabilities are set to values greater than zero, then responding does not have the effect of increasing the probability of a reinforcer above that obtained if no response occurs. Therefore, the alternation between positive and neutral

contingencies by Borrero, Vollmer, and Wright (2002) constitutes the demonstration of a control condition in which the only change between baseline and reinforcement is the contingency for not responding. However, this kind of control condition has not been described or examined in relevant discussions of operant control procedures (Lane, 1960; Thompson et al., 2003; Thompson & Iwata, 2005).

Translational Research
Traditional views of science often place a division between two groups of scientists: basic and applied. Basic scientists are those who do science for the sake of understanding, and applied scientists are those who do it to meet some more immediate need of society (Baer, Wolf, & Risley, 1968). The extension of the findings of Hammond (1980) and the contingency concept of reinforcement to the treatment of problem behavior represents an example of how research in basic science may be applied to address issues that are important to society (i.e., reducing aggressive behavior displayed by children). This model of the relationship between basic and applied science is often unidirectional, with information flowing from basic to applied. Less obvious, however, is the reciprocal role in which application can (or should) guide basic science. Positive reinforcement is a concept that is clearly basic and fundamental to behavior analysis. Basic research on positive reinforcement has focused largely on if-then response-reinforcer dependencies. However, applied research has shown that events known to reinforce problem (and appropriate) behavior sometimes occur following behavior and sometimes occur when behavior has not occurred (e.g., Vollmer, Borrero, Borrero, Van Camp, & Lalli, 2001; Samaha et al., in press). Intuitively, such contingencies are frequent in human environments. Therefore, examining the necessary and sufficient conditions for reinforcement in the context of complex contingencies would seem important.

In addition, previous translational research has shown that some effects of reinforcement seem to depend not just on current contingencies, but also on previous experience. For example, Borrero, Vollmer, van Haaren, Haworth, and Samaha (in prep) used rats to examine lever pressing during fixed-time (FT) schedules, in which reinforcers are delivered according to a clock (independent of lever pressing). Fixed-time schedules might sometimes produce complex contingencies because, even though reinforcers are delivered according to a clock, they may accidentally occur just after a response or after a period of time without responding. Results indicated that maintenance during the FT condition was more likely when rats had a previous history of responding on a fixed-interval (FI) schedule with the same interval value as that used in the subsequent FT condition. For example, rats with a previous history of FI 30 s (where the first response after 30 s produced a reinforcer) continued to respond at higher rates in a subsequent FT 30-s condition (where reinforcers were presented every 30 s independent of lever pressing) as compared to an FT 15-s condition. While the results of this study do not lend themselves to an evaluation of the effects of complex contingencies (because the relationship between responding and reinforcer delivery in the FT condition was not directly arranged by the experimenter), they clearly suggest that reinforcement effects in complex contingencies may be influenced by previous experience. Therefore, a complete description of the necessary and sufficient conditions for reinforcement in complex contingencies might need to include conditional statements based on an organism's previous experience.

Goals of the Current Research
The general aim of this dissertation is to present a method to study complex contingencies of reinforcement.
The series of studies seeks to investigate some conditions for observing acquisition and maintenance under complex schedules of reinforcement. An improved

understanding of complex schedules of reinforcement has implications for how behavior might be reinforced and maintained in the natural environment. The following four experiments examined acquisition and maintenance of lever pressing in rats. In the first experiment, acquisition was examined during two positive contingencies (.10/.05 and .10/.00), along with the effects of exposure to .10/.00 on responding in a subsequent .10/.05 condition. In Experiment 2, a systematic replication of Experiment 1 was conducted by providing experience with negative contingencies (.00/.10 and .05/.10) prior to the evaluation of responding in positive contingencies. The results of Experiments 1 and 2 were somewhat different, such that acquisition and maintenance may have been weakened by the early exposure to a negative contingency. Experiment 3 was therefore designed to evaluate, within subjects, the effects of the difference between Experiments 1 and 2 (the previous exposure to positive contingencies). Finally, in Experiment 4, a method was used to systematically identify the contingency values at which responding would break down by gradually manipulating the contingency from positive to negative (.10/.00 to .00/.10).
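The gradual positive-to-negative manipulation in Experiment 4 can be pictured as a sequence of contingency pairs. The linear spacing and step count in this sketch are our own illustrative assumptions; the actual step values used are reported in the experiment itself:

```python
def contingency_ramp(steps=11, p_max=0.10):
    """Return a hypothetical sequence of (P(Sr|R), P(Sr|~R)) pairs
    moving from a fully positive (.10/.00) to a fully negative
    (.00/.10) contingency.  The linear interpolation and the number
    of steps are illustrative assumptions, not the experiment's
    actual values."""
    pairs = []
    for i in range(steps):
        frac = i / (steps - 1)
        pairs.append((round(p_max * (1 - frac), 3), round(p_max * frac, 3)))
    return pairs

# First pair is positive, the midpoint is a zero contingency,
# and the last pair is negative:
print(contingency_ramp())
```

Note that the midpoint of any such ramp is a zero contingency (.05/.05), so the procedure passes through all three contingency classes on its way from positive to negative.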

CHAPTER 2
EXPERIMENT 1

Purpose
The purpose of this experiment was to evaluate whether lever pressing could be acquired, maintained, or both under a complex positive contingency of reinforcement, in which there was some probability of a reinforcer given behavior (.10) and some probability of a reinforcer given no behavior (.05).

Method
Subjects
Three experimentally naïve male Wistar (albino) rats purchased at 8 weeks of age were housed individually in home cages. Experimentally naïve rats were selected as subjects in order to control for a history of behavior reinforced by access to food. Conclusions based on the acquisition of behavior by organisms that were not experimentally naïve would need to be tempered due to both known and unknown experiences prior to the experiment. Likewise, the conditions under which food-reinforced behavior could be acquired and maintained in experimentally naïve organisms could be tested. Prior to the experiment, rats were given ad libitum access to food and water for 7 consecutive days. After 7 days, access to food was restricted to 16 g per day. Food was made available in the home cages immediately following sessions. Water was freely available in the home cages throughout the experiment. Sessions began after the 7th day of food restriction. All procedures were approved by the University of Florida Animal Care and Use Committee. The colony room was illuminated on a 12-hour light-dark cycle with lights programmed to turn on at 8 am. Temperature and humidity were monitored and maintained at consistent levels.

Apparatus
Six Coulbourn Instruments operant chambers, each measuring 29 cm long × 30 cm wide × 25 cm high, were enclosed in sound-attenuating boxes with exhaust fans. An intelligence panel was mounted on one wall of each chamber. Mounted on the panel were two levers and a pellet hopper. The pellet hopper was mounted in the center of the intelligence panel (7.0 cm above the floor), and the levers were located on either side of the hopper (centered 7.0 cm above the floor and 5.5 cm from the center of the hopper). Also mounted on the intelligence panel were three color LEDs (light-emitting diodes) mounted horizontally 4 cm above each lever, an incandescent house light (2.0 cm from the top-center of the panel), and an incandescent hopper light. From left to right, the colors of the LEDs were red, green, and yellow. The side panels of the chamber were made of clear acrylic plastic, while the ceiling, rear wall, and intelligence panel were constructed of aluminum. The bottom of the chamber consisted of a shock floor (although no shock was ever delivered during the experiment) raised above a white plastic drop pan. A pellet feeder was attached to the back of the intelligence panel and delivered pellets into the hopper. Lever presses were defined as any force on the lever sufficient to produce a switch closure (about 0.20 N). Responses to both levers were recorded, but only responses on the left lever produced changes in the probability of reinforcer delivery. A PC running Coulbourn Instruments Graphic State Notation software recorded lever presses and controlled the apparatus. The computer also emitted white noise through a pair of attached speakers at approximately 70 dB (as measured from the center of the room).

Procedures
Three 10-min sessions were conducted each day. Each session was preceded by a 1-min blackout, and the third session was followed by a 1-min blackout before the animal was returned to its home cage.
During sessions, the house light and the lever lights above both levers were

illuminated. Throughout the experiment, the session was divided into unsignaled 1-s cycles (similar to those described by Hammond, 1980). The computer was programmed to deliver a single 45-mg sucrose pellet (Formula 5TUL, Research Diets Inc., New Brunswick, NJ) at the end of each cycle according to a pair of probabilities specific to each phase: the probability of a pellet delivery given at least one lever press in the current cycle (P(Sr|R)) and the probability of a pellet delivery given no lever presses in the current cycle (P(Sr|~R)). During a pellet delivery, the house and lever lights were turned off for 1 s. At the same time, the hopper light flashed briefly for 250 ms. The next cycle began when the house and lever lights were re-illuminated. Lever presses that occurred during the 1-s blackout did not have any programmed effect and were not included in the overall rate of responding. Other than the contingencies implemented during each phase, no lever shaping or hopper training was performed prior to or during the experiment. Contingency values (the probabilities of pellet delivery) for each condition were initially based on the values reported by Hammond (1980). Pilot work revealed that animals gained excessive weight when exposed to similar contingency values in combination with session durations of 50 min. Therefore, an attempt was made to reduce food intake by limiting the total time spent in session to 30 min per day. In addition, the session time was divided into three 10-min blocks after an examination of within-session patterns revealed reasonably consistent rates of responding.

Conditions
Condition changes were made following stability as judged by visual inspection. From this point forward, each condition is specified using two parameters: the probability of a pellet delivery given a response and the probability of a pellet delivery given no response (P(Sr|R)/P(Sr|~R)).
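The cycle-based delivery rule described above can be sketched as a simple simulation. The response-probability parameter below is our own stand-in for the rat's behavior and is not part of the actual procedure:

```python
import random

def run_session(p_sr_given_r, p_sr_given_not_r, response_prob,
                n_cycles=600, seed=0):
    """Simulate one 10-min session of 600 unsignaled 1-s cycles.

    At the end of each cycle, a pellet is delivered with probability
    p_sr_given_r if at least one lever press occurred during that
    cycle, and with probability p_sr_given_not_r otherwise.
    response_prob (the chance the simulated rat presses during a
    given cycle) is an illustrative assumption, not part of the
    original procedure.  Returns the number of pellets delivered.
    """
    rng = random.Random(seed)
    pellets = 0
    for _ in range(n_cycles):
        responded = rng.random() < response_prob
        p = p_sr_given_r if responded else p_sr_given_not_r
        if rng.random() < p:
            pellets += 1
    return pellets

# A .10/.05 session in which the rat presses on about half of the cycles:
print(run_session(0.10, 0.05, response_prob=0.5))
```

The sketch makes the contingency explicit: responding only changes which of the two probabilities governs a given cycle, so when the two probabilities are equal (a zero contingency), pressing has no effect on pellet delivery at all.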
Conditions were conducted in the following order: .00/.00 (No Pellet), .10/.05, .10/.00, and .10/.05.

Results and Discussion
Figure 2-1 shows responses per min of lever pressing for each session, subject, and condition. The following pattern of responding was observed for all three subjects. Little to no responding was obtained in the initial No Pellet condition (as expected). No subject showed acquisition during the subsequent .10/.05 condition. Responding increased for all three subjects following exposure to .10/.00. Responding then persisted at somewhat reduced levels (as compared to the previous .10/.00 condition) following the reversal back to .10/.05.

Figure 2-1. Experiment 1, all sessions. This figure shows responses per min of lever pressing for each session. Each panel shows data from a different subject.

Three conclusions can be drawn from the data. First, .10/.05 was not sufficient to produce acquisition in these subjects during the time period in which they were exposed to the condition. Second, the lack of acquisition in .10/.05 may be explained, in part, by the reinforcers that were delivered following cycles without responses, given that acquisition was obtained in .10/.00. Third, responding was maintained during the second exposure to .10/.05, a condition which previously did not produce responding. It is this third finding that is perhaps most critical. If the .10/.00 condition is viewed as an independent variable, then exposure to that variable produced a differential effect in the subsequent .10/.05 condition in comparison to the .10/.05 condition that preceded .10/.00. Although only a small range of parameter values was examined in this experiment, the results may have implications for the acquisition and maintenance of problem and appropriate behavior in humans. With respect to the first effect, it may be that occasional reinforcers presented in the absence of behavior are sufficient to prevent the acquisition of problem (or appropriate) behavior. Given the current data, this could be the case even if the probability of reinforcement given problem (or appropriate) behavior was twice as high as the probability of reinforcement given no behavior. Such reinforcers could be arranged using fixed-time schedules (e.g., noncontingent reinforcement, NCR), differential reinforcement of other behavior (DRO), or delivery following the occurrence of appropriate behavior, as a sort of inoculation against the emergence of problem behavior. On the other hand, too many free reinforcers may impede the development of important appropriate skills.
Koegel and Rincover (1977) showed similar results when, following experience with intermittent reinforcement, students' correct responses persisted (but eventually decreased) in another setting when reinforcers were presented following successive incorrect responses or

independent of behavior. When reinforcers were presented following incorrect responses, an examination of the pattern of responses revealed that the reinforcer appeared to serve as a discriminative stimulus. That is, correct responses increased after the delivery of a reinforcer and then decreased across successive trials. Indeed, other authors have observed response persistence during DRO schedules and have posited a discriminative effect of the reinforcer (cf. Thompson, Iwata, Hanley, Dozier, & Samaha, 2003). When reinforcers were presented independent of correct responses, behavior persisted for much longer. The authors attributed the enhanced persistence under response-independent reinforcement to adventitious pairing of responses and reinforcers. In the current study, response rates persisted (for several hundred sessions in two cases) under a complex positive contingency. One interpretation of the results of this study is that the occasional response-dependent reinforcer may have enhanced the discriminative properties of all the reinforcers, such that responding persisted for much longer than that observed by Koegel and Rincover (1977). The acquisition-versus-maintenance effect with .10/.05 has implications for the maintenance of appropriate behavior and the treatment of problem behavior. Once acquired, both appropriate and problem behavior may be relatively robust despite intermittent reinforcement and occasional reinforcers delivered following the absence of behavior. For problem behavior, this result suggests that those selecting treatments for eventual implementation by caregivers should do so while considering the possible effects that treatment-integrity failures will have on the contingency. For example, DRO (a common treatment) specifies that reinforcers are to be delivered following periods of time in which behavior has not occurred.
Despite even the most ideal training, it is very likely that other factors may result in reinforcers occasionally following problem behavior (e.g., as a result of intermittent care by untrained or


More information

THE SEVERAL ROLES OF STIMULI IN TOKEN REINFORCEMENT CHRISTOPHER E. BULLOCK

THE SEVERAL ROLES OF STIMULI IN TOKEN REINFORCEMENT CHRISTOPHER E. BULLOCK JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR 2015, 103, 269 287 NUMBER 2 (MARCH) THE SEVERAL ROLES OF STIMULI IN TOKEN REINFORCEMENT CHRISTOPHER E. BULLOCK UNIVERSITY OF FLORIDA AND TIMOTHY D. HACKENBERG

More information

A COMPARISON OF PROCEDURES FOR UNPAIRING CONDITIONED REFLEXIVE ESTABLISHING OPERATIONS. Dissertation

A COMPARISON OF PROCEDURES FOR UNPAIRING CONDITIONED REFLEXIVE ESTABLISHING OPERATIONS. Dissertation A COMPARISON OF PROCEDURES FOR UNPAIRING CONDITIONED REFLEXIVE ESTABLISHING OPERATIONS Dissertation Presented in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in the Graduate

More information

TEST-SPECIFIC CONTROL CONDITIONS FOR FUNCTIONAL ANALYSES TARA A. FAHMIE BRIAN A. IWATA ANGIE C. QUERIM JILL M. HARPER

TEST-SPECIFIC CONTROL CONDITIONS FOR FUNCTIONAL ANALYSES TARA A. FAHMIE BRIAN A. IWATA ANGIE C. QUERIM JILL M. HARPER JOURNAL OF APPLIED BEHAVIOR ANALYSIS 213, 46, 61 7 NUMBER 1(SPRING 213) TEST-SPECIFIC CONTROL CONDITIONS FOR FUNCTIONAL ANALYSES TARA A. FAHMIE CALIFORNIA STATE UNIVERSITY, NORTHRIDGE BRIAN A. IWATA UNIVERSITY

More information

OBSERVING RESPONSES AND SERIAL STIMULI: SEARCHING FOR THE REINFORCING PROPERTIES OF THE S2 ROGELIO ESCOBAR AND CARLOS A. BRUNER

OBSERVING RESPONSES AND SERIAL STIMULI: SEARCHING FOR THE REINFORCING PROPERTIES OF THE S2 ROGELIO ESCOBAR AND CARLOS A. BRUNER JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR 2009, 92, 215 231 NUMBER 2(SEPTEMBER) OBSERVING RESPONSES AND SERIAL STIMULI: SEARCHING FOR THE REINFORCING PROPERTIES OF THE S2 ROGELIO ESCOBAR AND CARLOS

More information

Schedules of Reinforcement 11/11/11

Schedules of Reinforcement 11/11/11 Schedules of Reinforcement 11/11/11 Reinforcement Schedules Intermittent Reinforcement: A type of reinforcement schedule by which some, but not all, correct responses are reinforced. Intermittent reinforcement

More information

ON THE EFFECTS OF EXTENDED SAMPLE-OBSERVING RESPONSE REQUIREMENTS ON ADJUSTED DELAY IN A TITRATING DELAY MATCHING-TO-SAMPLE PROCEDURE WITH PIGEONS

ON THE EFFECTS OF EXTENDED SAMPLE-OBSERVING RESPONSE REQUIREMENTS ON ADJUSTED DELAY IN A TITRATING DELAY MATCHING-TO-SAMPLE PROCEDURE WITH PIGEONS ON THE EFFECTS OF EXTENDED SAMPLE-OBSERVING RESPONSE REQUIREMENTS ON ADJUSTED DELAY IN A TITRATING DELAY MATCHING-TO-SAMPLE PROCEDURE WITH PIGEONS Brian D. Kangas, B.A. Thesis Prepared for the Degree of

More information

Some Parameters of the Second-Order Conditioning of Fear in Rats

Some Parameters of the Second-Order Conditioning of Fear in Rats University of Nebraska - Lincoln DigitalCommons@University of Nebraska - Lincoln Papers in Behavior and Biological Sciences Papers in the Biological Sciences 1969 Some Parameters of the Second-Order Conditioning

More information

RESPONSE PERSISTENCE UNDER RATIO AND INTERVAL REINFORCEMENT SCHEDULES KENNON A. LATTAL, MARK P. REILLY, AND JAMES P. KOHN

RESPONSE PERSISTENCE UNDER RATIO AND INTERVAL REINFORCEMENT SCHEDULES KENNON A. LATTAL, MARK P. REILLY, AND JAMES P. KOHN JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR 1998, 70, 165 183 NUMBER 2(SEPTEMBER) RESPONSE PERSISTENCE UNDER RATIO AND INTERVAL REINFORCEMENT SCHEDULES KENNON A. LATTAL, MARK P. REILLY, AND JAMES

More information

PREFERENCE REVERSALS WITH FOOD AND WATER REINFORCERS IN RATS LEONARD GREEN AND SARA J. ESTLE V /V (A /A )(D /D ), (1)

PREFERENCE REVERSALS WITH FOOD AND WATER REINFORCERS IN RATS LEONARD GREEN AND SARA J. ESTLE V /V (A /A )(D /D ), (1) JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR 23, 79, 233 242 NUMBER 2(MARCH) PREFERENCE REVERSALS WITH FOOD AND WATER REINFORCERS IN RATS LEONARD GREEN AND SARA J. ESTLE WASHINGTON UNIVERSITY Rats

More information

Learning. Learning is a relatively permanent change in behavior acquired through experience or practice.

Learning. Learning is a relatively permanent change in behavior acquired through experience or practice. Learning Learning is a relatively permanent change in behavior acquired through experience or practice. What is Learning? Learning is the process that allows us to adapt (be flexible) to the changing conditions

More information

Transfer of Control in Ambiguous Discriminations

Transfer of Control in Ambiguous Discriminations Journal of Experimental Psychology: Animal Behavior Processes 1991, Vol. 17, No. 3, 231-248 Copyright 1991 by the Am n Psychological Association, Inc. 0097-7403/91/53.00 Transfer of Control in Ambiguous

More information

CS DURATION' UNIVERSITY OF CHICAGO. in response suppression (Meltzer and Brahlek, with bananas. MH to S. P. Grossman. The authors wish to

CS DURATION' UNIVERSITY OF CHICAGO. in response suppression (Meltzer and Brahlek, with bananas. MH to S. P. Grossman. The authors wish to JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR 1971, 15, 243-247 NUMBER 2 (MARCH) POSITIVE CONDITIONED SUPPRESSION: EFFECTS OF CS DURATION' KLAUS A. MICZEK AND SEBASTIAN P. GROSSMAN UNIVERSITY OF CHICAGO

More information

CAROL 0. ECKERMAN UNIVERSITY OF NORTH CAROLINA. in which stimulus control developed was studied; of subjects differing in the probability value

CAROL 0. ECKERMAN UNIVERSITY OF NORTH CAROLINA. in which stimulus control developed was studied; of subjects differing in the probability value JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR 1969, 12, 551-559 NUMBER 4 (JULY) PROBABILITY OF REINFORCEMENT AND THE DEVELOPMENT OF STIMULUS CONTROL' CAROL 0. ECKERMAN UNIVERSITY OF NORTH CAROLINA Pigeons

More information

OBSERVING AND ATTENDING IN A DELAYED MATCHING-TO-SAMPLE PREPARATION IN PIGEONS. Bryan S. Lovelace, B.S. Thesis Prepared for the Degree of

OBSERVING AND ATTENDING IN A DELAYED MATCHING-TO-SAMPLE PREPARATION IN PIGEONS. Bryan S. Lovelace, B.S. Thesis Prepared for the Degree of OBSERVING AND ATTENDING IN A DELAYED MATCHING-TO-SAMPLE PREPARATION IN PIGEONS Bryan S. Lovelace, B.S. Thesis Prepared for the Degree of MASTER OF SCIENCE UNIVERSITY OF NORTH TEXAS December 2008 APPROVED:

More information

Travel Distance and Stimulus Duration on Observing Responses by Rats

Travel Distance and Stimulus Duration on Observing Responses by Rats EUROPEAN JOURNAL OF BEHAVIOR ANALYSIS 2010, 11, 79-91 NUMBER 1 (SUMMER 2010) 79 Travel Distance and Stimulus Duration on Observing Responses by Rats Rogelio Escobar National Autonomous University of Mexico

More information

The Concept of Automatic Reinforcement: Implications for Assessment and Intervention

The Concept of Automatic Reinforcement: Implications for Assessment and Intervention The Concept of Automatic Reinforcement: Implications for Assessment and Intervention Timothy R. Vollmer, Meghan Deshais, & Faris R. Kronfli University of Florida Overview Discuss what is meant by automatic

More information

SUBSTITUTION EFFECTS IN A GENERALIZED TOKEN ECONOMY WITH PIGEONS LEONARDO F. ANDRADE 1 AND TIMOTHY D. HACKENBERG 2

SUBSTITUTION EFFECTS IN A GENERALIZED TOKEN ECONOMY WITH PIGEONS LEONARDO F. ANDRADE 1 AND TIMOTHY D. HACKENBERG 2 JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR 217, 17, 123 135 NUMBER 1 (JANUARY) SUBSTITUTION EFFECTS IN A GENERALIZED TOKEN ECONOMY WITH PIGEONS LEONARDO F. ANDRADE 1 AND TIMOTHY D. HACKENBERG 2 1

More information

I. Classical Conditioning

I. Classical Conditioning Learning Chapter 8 Learning A relatively permanent change in an organism that occur because of prior experience Psychologists must study overt behavior or physical changes to study learning Learning I.

More information

VERNON L. QUINSEY DALHOUSIE UNIVERSITY. in the two conditions. If this were possible, well understood where the criterion response is

VERNON L. QUINSEY DALHOUSIE UNIVERSITY. in the two conditions. If this were possible, well understood where the criterion response is JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR LICK-SHOCK CONTINGENCIES IN THE RATT1 VERNON L. QUINSEY DALHOUSIE UNIVERSITY 1972, 17, 119-125 NUMBER I (JANUARY) Hungry rats were allowed to lick an 8%

More information

Operant Conditioning B.F. SKINNER

Operant Conditioning B.F. SKINNER Operant Conditioning B.F. SKINNER Reinforcement in Operant Conditioning Behavior Consequence Patronize Elmo s Diner It s all a matter of consequences. Rewarding Stimulus Presented Tendency to tell jokes

More information

Representations of single and compound stimuli in negative and positive patterning

Representations of single and compound stimuli in negative and positive patterning Learning & Behavior 2009, 37 (3), 230-245 doi:10.3758/lb.37.3.230 Representations of single and compound stimuli in negative and positive patterning JUSTIN A. HARRIS, SABA A GHARA EI, AND CLINTON A. MOORE

More information

REPEATED MEASUREMENTS OF REINFORCEMENT SCHEDULE EFFECTS ON GRADIENTS OF STIMULUS CONTROL' MICHAEL D. ZEILER

REPEATED MEASUREMENTS OF REINFORCEMENT SCHEDULE EFFECTS ON GRADIENTS OF STIMULUS CONTROL' MICHAEL D. ZEILER JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR REPEATED MEASUREMENTS OF REINFORCEMENT SCHEDULE EFFECTS ON GRADIENTS OF STIMULUS CONTROL' MICHAEL D. ZEILER UNIVERSITY OF IOWA 1969, 12, 451-461 NUMBER

More information

A comparison of response-contingent and noncontingent pairing in the conditioning of a reinforcer

A comparison of response-contingent and noncontingent pairing in the conditioning of a reinforcer Louisiana State University LSU Digital Commons LSU Master's Theses Graduate School 2012 A comparison of response-contingent and noncontingent pairing in the conditioning of a reinforcer Sarah Joanne Miller

More information

between successive DMTS choice phases.

between successive DMTS choice phases. JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR 1996, 66, 231 242 NUMBER 2(SEPTEMBER) SEPARATING THE EFFECTS OF TRIAL-SPECIFIC AND AVERAGE SAMPLE-STIMULUS DURATION IN DELAYED MATCHING TO SAMPLE IN PIGEONS

More information

INTERACTIONS AMONG UNIT PRICE, FIXED-RATIO VALUE, AND DOSING REGIMEN IN DETERMINING EFFECTS OF REPEATED COCAINE ADMINISTRATION

INTERACTIONS AMONG UNIT PRICE, FIXED-RATIO VALUE, AND DOSING REGIMEN IN DETERMINING EFFECTS OF REPEATED COCAINE ADMINISTRATION INTERACTIONS AMONG UNIT PRICE, FIXED-RATIO VALUE, AND DOSING REGIMEN IN DETERMINING EFFECTS OF REPEATED COCAINE ADMINISTRATION By JIN HO YOON A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY

More information

on both components of conc Fl Fl schedules, c and a were again less than 1.0. FI schedule when these were arranged concurrently.

on both components of conc Fl Fl schedules, c and a were again less than 1.0. FI schedule when these were arranged concurrently. JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR 1975, 24, 191-197 NUMBER 2 (SEPTEMBER) PERFORMANCE IN CONCURRENT INTERVAL SCHEDULES: A SYSTEMATIC REPLICATION' BRENDA LOBB AND M. C. DAVISON UNIVERSITY

More information

Associative Learning

Associative Learning Learning Learning Associative Learning Classical Conditioning Operant Conditioning Observational Learning Biological Components of Learning Cognitive Components of Learning Behavioral Therapies Associative

More information

Chapter 12 Behavioral Momentum Theory: Understanding Persistence and Improving Treatment

Chapter 12 Behavioral Momentum Theory: Understanding Persistence and Improving Treatment Chapter 12 Behavioral Momentum Theory: Understanding Persistence and Improving Treatment Christopher A. Podlesnik and Iser G. DeLeon 12.1 Introduction Translational research in behavior analysis aims both

More information

Learning. AP PSYCHOLOGY Unit 4

Learning. AP PSYCHOLOGY Unit 4 Learning AP PSYCHOLOGY Unit 4 Learning Learning is a lasting change in behavior or mental process as the result of an experience. There are two important parts: a lasting change a simple reflexive reaction

More information

BACB Fourth Edition Task List Assessment Form

BACB Fourth Edition Task List Assessment Form Supervisor: Date first assessed: Individual being Supervised: Certification being sought: Instructions: Please mark each item with either a 0,1,2 or 3 based on rating scale Rating Scale: 0 - cannot identify

More information

DIFFERENTIAL REINFORCEMENT WITH AND WITHOUT BLOCKING AS TREATMENT FOR ELOPEMENT NATHAN A. CALL

DIFFERENTIAL REINFORCEMENT WITH AND WITHOUT BLOCKING AS TREATMENT FOR ELOPEMENT NATHAN A. CALL JOURNAL OF APPLIED BEHAVIOR ANALYSIS 2011, 44, 903 907 NUMBER 4(WINTER 2011) DIFFERENTIAL REINFORCEMENT WITH AND WITHOUT BLOCKING AS TREATMENT FOR ELOPEMENT NATHAN A. CALL MARCUS AUTISM CENTER AND EMORY

More information

EFFECTS OF A LIMITED HOLD ON PIGEONS MATCH-TO-SAMPLE PERFORMANCE UNDER FIXED-RATIO SCHEDULING. Joseph Leland Cermak, B.A.

EFFECTS OF A LIMITED HOLD ON PIGEONS MATCH-TO-SAMPLE PERFORMANCE UNDER FIXED-RATIO SCHEDULING. Joseph Leland Cermak, B.A. EFFECTS OF A LIMITED HOLD ON PIGEONS MATCH-TO-SAMPLE PERFORMANCE UNDER FIXED-RATIO SCHEDULING Joseph Leland Cermak, B.A. Thesis Prepared for the Degree of MASTER OF SCIENCE UNIVERSITY OF NORTH TEXAS December

More information

STUDY GUIDE ANSWERS 6: Learning Introduction and How Do We Learn? Operant Conditioning Classical Conditioning

STUDY GUIDE ANSWERS 6: Learning Introduction and How Do We Learn? Operant Conditioning Classical Conditioning STUDY GUIDE ANSWERS 6: Learning Introduction and How Do We Learn? 1. learning 2. associate; associations; associative learning; habituates 3. classical 4. operant 5. observing Classical Conditioning 1.

More information

IVER H. IVERSEN UNIVERSITY OF NORTH FLORIDA. because no special deprivation or home-cage. of other independent variables on operant behavior.

IVER H. IVERSEN UNIVERSITY OF NORTH FLORIDA. because no special deprivation or home-cage. of other independent variables on operant behavior. JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR TECHNIQUES FOR ESTABLISHING SCHEDULES WITH WHEEL RUNNING AS REINFORCEMENT IN RATS IVER H. IVERSEN UNIVERSITY OF NORTH FLORIDA 1993, 60, 219-238 NUMBER 1

More information

PSYC 337 LEARNING. Session 6 Instrumental and Operant Conditioning Part Two

PSYC 337 LEARNING. Session 6 Instrumental and Operant Conditioning Part Two PSYC 337 LEARNING Session 6 Instrumental and Operant Conditioning Part Two Lecturer: Dr. Inusah Abdul-Nasiru Contact Information: iabdul-nasiru@ug.edu.gh College of Education School of Continuing and Distance

More information

Behavioural Processes

Behavioural Processes Behavioural Processes 89 (2012) 212 218 Contents lists available at SciVerse ScienceDirect Behavioural Processes j o ur nal homep age : www.elsevier.com/locate/behavproc Providing a reinforcement history

More information

Schedule Induced Polydipsia: Effects of Inter-Food Interval on Access to Water as a Reinforcer

Schedule Induced Polydipsia: Effects of Inter-Food Interval on Access to Water as a Reinforcer Western Michigan University ScholarWorks at WMU Master's Theses Graduate College 8-1974 Schedule Induced Polydipsia: Effects of Inter-Food Interval on Access to Water as a Reinforcer Richard H. Weiss Western

More information

Running head: UTILITY OF TWO DEMAND ASSESSMENTS 1. An Evaluation of the Relative Utility of Two Demand Assessments

Running head: UTILITY OF TWO DEMAND ASSESSMENTS 1. An Evaluation of the Relative Utility of Two Demand Assessments Running head: UTILITY OF TWO DEMAND ASSESSMENTS 1 An Evaluation of the Relative Utility of Two Demand Assessments for Identifying Negative Reinforcers A Thesis Presented By Carly Cornelius In Partial fulfillment

More information

Operant Conditioning

Operant Conditioning Operant Conditioning Classical vs. Operant Conditioning With classical conditioning you can teach a dog to salivate, but you cannot teach it to sit up or roll over. Why? Salivation is an involuntary reflex,

More information

SIDE EFFECTS OF EXTINCTION: PREVALENCE OF BURSTING AND AGGRESSION DURING THE TREATMENT OF SELF-INJURIOUS BEHAVIOR DOROTHEA C.

SIDE EFFECTS OF EXTINCTION: PREVALENCE OF BURSTING AND AGGRESSION DURING THE TREATMENT OF SELF-INJURIOUS BEHAVIOR DOROTHEA C. JOURNAL OF APPLIED BEHAVIOR ANALYSIS 1999, 32, 1 8 NUMBER 1(SPRING 1999) SIDE EFFECTS OF EXTINCTION: PREVALENCE OF BURSTING AND AGGRESSION DURING THE TREATMENT OF SELF-INJURIOUS BEHAVIOR DOROTHEA C. LERMAN

More information

Psychology, Ch. 6. Learning Part 1

Psychology, Ch. 6. Learning Part 1 Psychology, Ch. 6 Learning Part 1 Two Main Types of Learning Associative learning- learning that certain events occur together Cognitive learning- acquisition of mental information, by observing or listening

More information

PERIODIC RESPONSE-REINFORCER CONTIGUITY: TEMPORAL CONTROL BUT NOT AS WE KNOW IT! MICHAEL KEENAN University of Ulster at Coleraine

PERIODIC RESPONSE-REINFORCER CONTIGUITY: TEMPORAL CONTROL BUT NOT AS WE KNOW IT! MICHAEL KEENAN University of Ulster at Coleraine The Psychological Record, 1999, 49, 273-297 PERIODIC RESPONSE-REINFORCER CONTIGUITY: TEMPORAL CONTROL BUT NOT AS WE KNOW IT! MICHAEL KEENAN University of Ulster at Coleraine Two experiments using rats

More information

The Persistence-Strengthening Effects of DRA: An Illustration of Bidirectional Translational Research

The Persistence-Strengthening Effects of DRA: An Illustration of Bidirectional Translational Research The Behavior Analyst 2009, 32, 000 000 No. 2 (Fall) The Persistence-Strengthening Effects of DRA: An Illustration of Bidirectional Translational Research F. Charles Mace University of Southern Maine Jennifer

More information

Pigeons' memory for number of events: EVects of intertrial interval and delay interval illumination

Pigeons' memory for number of events: EVects of intertrial interval and delay interval illumination Learning and Motivation 35 (2004) 348 370 www.elsevier.com/locate/l&m Pigeons' memory for number of events: EVects of intertrial interval and delay interval illumination Chris Hope and Angelo Santi Wilfrid

More information

CHAPTER 15 SKINNER'S OPERANT ANALYSIS 4/18/2008. Operant Conditioning

CHAPTER 15 SKINNER'S OPERANT ANALYSIS 4/18/2008. Operant Conditioning CHAPTER 15 SKINNER'S OPERANT ANALYSIS Operant Conditioning Establishment of the linkage or association between a behavior and its consequences. 1 Operant Conditioning Establishment of the linkage or association

More information

PSY402 Theories of Learning. Chapter 8, Theories of Appetitive and Aversive Conditioning

PSY402 Theories of Learning. Chapter 8, Theories of Appetitive and Aversive Conditioning PSY402 Theories of Learning Chapter 8, Theories of Appetitive and Aversive Conditioning Operant Conditioning The nature of reinforcement: Premack s probability differential theory Response deprivation

More information

Magazine approach during a signal for food depends on Pavlovian, not instrumental, conditioning.

Magazine approach during a signal for food depends on Pavlovian, not instrumental, conditioning. In Journal of Experimental Psychology: Animal Behavior Processes http://www.apa.org/pubs/journals/xan/index.aspx 2013, vol. 39 (2), pp 107 116 2013 American Psychological Association DOI: 10.1037/a0031315

More information

Examining the Constant Difference Effect in a Concurrent Chains Procedure

Examining the Constant Difference Effect in a Concurrent Chains Procedure University of Wisconsin Milwaukee UWM Digital Commons Theses and Dissertations May 2015 Examining the Constant Difference Effect in a Concurrent Chains Procedure Carrie Suzanne Prentice University of Wisconsin-Milwaukee

More information

A Memory Model for Decision Processes in Pigeons

A Memory Model for Decision Processes in Pigeons From M. L. Commons, R.J. Herrnstein, & A.R. Wagner (Eds.). 1983. Quantitative Analyses of Behavior: Discrimination Processes. Cambridge, MA: Ballinger (Vol. IV, Chapter 1, pages 3-19). A Memory Model for

More information

Jennifer J. McComas and Ellie C. Hartman. Angel Jimenez

Jennifer J. McComas and Ellie C. Hartman. Angel Jimenez The Psychological Record, 28, 58, 57 528 Some Effects of Magnitude of Reinforcement on Persistence of Responding Jennifer J. McComas and Ellie C. Hartman The University of Minnesota Angel Jimenez The University

More information

Occasion Setters: Specificity to the US and the CS US Association

Occasion Setters: Specificity to the US and the CS US Association Learning and Motivation 32, 349 366 (2001) doi:10.1006/lmot.2001.1089, available online at http://www.idealibrary.com on Occasion Setters: Specificity to the US and the CS US Association Charlotte Bonardi

More information

Pigeons transfer between conditional discriminations with differential outcomes in the absence of differential-sample-responding cues

Pigeons transfer between conditional discriminations with differential outcomes in the absence of differential-sample-responding cues Animal Learning & Behavior 1995, 23 (3), 273-279 Pigeons transfer between conditional discriminations with differential outcomes in the absence of differential-sample-responding cues LOU M. SHERBURNE and

More information

REINFORCEMENT OF PROBE RESPONSES AND ACQUISITION OF STIMULUS CONTROL IN FADING PROCEDURES

REINFORCEMENT OF PROBE RESPONSES AND ACQUISITION OF STIMULUS CONTROL IN FADING PROCEDURES JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR 1985, 439 235-241 NUMBER 2 (MARCH) REINFORCEMENT OF PROBE RESPONSES AND ACQUISITION OF STIMULUS CONTROL IN FADING PROCEDURES LANNY FIELDS THE COLLEGE OF

More information

CURRENT RESEARCH ON THE INFLUENCE OF ESTABLISHING OPERATIONS ON BEHAVIOR IN APPLIED SETTINGS BRIAN A. IWATA RICHARD G. SMITH JACK MICHAEL

CURRENT RESEARCH ON THE INFLUENCE OF ESTABLISHING OPERATIONS ON BEHAVIOR IN APPLIED SETTINGS BRIAN A. IWATA RICHARD G. SMITH JACK MICHAEL JOURNAL OF APPLIED BEHAVIOR ANALYSIS 2000, 33, 411 418 NUMBER 4(WINTER 2000) CURRENT RESEARCH ON THE INFLUENCE OF ESTABLISHING OPERATIONS ON BEHAVIOR IN APPLIED SETTINGS BRIAN A. IWATA THE UNIVERSITY OF

More information

Effects of Increased Exposure to Training Trials with Children with Autism. A Thesis Presented. Melissa A. Ezold

Effects of Increased Exposure to Training Trials with Children with Autism. A Thesis Presented. Melissa A. Ezold Effects of Increased Exposure to Training Trials with Children with Autism A Thesis Presented by Melissa A. Ezold The Department of Counseling and Applied Educational Psychology In partial fulfillment

More information

NEW ENGLAND CENTER FOR CHILDREN NORTHEASTERN UNIVERSITY

NEW ENGLAND CENTER FOR CHILDREN NORTHEASTERN UNIVERSITY JOURNAL OF APPLIED BEHAVIOR ANALYSIS 2009, 42, 425 446 NUMBER 2(SUMMER 2009) RELATIVE CONTRIBUTIONS OF THREE DESCRIPTIVE METHODS: IMPLICATIONS FOR BEHAVIORAL ASSESSMENT SACHA T. PENCE, EILEEN M. ROSCOE,

More information

Expanding Functional Analysis of Automatically Reinforced Behavior Using a Three-Component Multiple-Schedule

Expanding Functional Analysis of Automatically Reinforced Behavior Using a Three-Component Multiple-Schedule EUROPEAN JOURNAL OF BEHAVIOR ANALYSIS 2010, 11, 17-27 NUMBER 1 (SUMMER 2010) 17 Expanding Functional Analysis of Automatically Reinforced Behavior Using a Three-Component Multiple-Schedule Marc J. Lanovaz

More information

Lecture 5: Learning II. Major Phenomenon of Classical Conditioning. Contents

Lecture 5: Learning II. Major Phenomenon of Classical Conditioning. Contents Lecture 5: Learning II Contents Major Phenomenon of Classical Conditioning Applied Examples of Classical Conditioning Other Types of Learning Thorndike and the Law of Effect Skinner and Operant Learning

More information