Psychology - Problem Drill 09: Learning


No. 1 of 10
Instructions: (1) Read the problem statement and answer choices carefully. (2) Work the problem on paper.

1. Which of the following is an example of operant conditioning?
(A) An infant learns to read a face.
(B) A dog salivates when it hears a bell ring.
(C) A gorilla learns to use both hands to get food after it watches another gorilla do this.
(D) A person thinks of their mother every time they hear a certain song.
(E) A seal learns to sit up and bark because it knows it will get a herring if it does so.

(A) An infant learning to read a face is an example of classical conditioning.
(B) A dog salivating when it hears a bell ring is an example of classical conditioning.
(C) A gorilla learning to use both hands to get food after watching another gorilla do this is an example of observational learning.
(D) A person thinking of their mother every time they hear a certain song is an example of classical conditioning.
(E) Correct! The seal has learned to perform an action in anticipation of a food reward. This is an example of operant conditioning.

Simpler animals can learn simple associations. More complex animals can learn more complex associations, especially those that bring favorable consequences. Seals in an aquarium will repeat behaviors, such as slapping and barking, that prompt people to toss them a herring. By linking two events that occur close together, the seal exhibits associative learning: it associates slapping and barking with receiving a herring. The animal has learned something important to its survival: to associate the past with the immediate future.

Learned associations influence people, too. During their first year, infants learn to associate different facial expressions with their accompanying behaviors and tones of voice, and thus to read a face. Adults form similar associations.

Conditioning is the process of learning associations. In classical conditioning, we learn to associate two stimuli and thus to anticipate events. We learn that a flash of lightning signals an impending crack of thunder, and so we start to brace ourselves when lightning flashes nearby. In operant conditioning, we learn to associate a response with its consequence and thus to repeat acts followed by rewards and avoid acts followed by punishment. We learn that pushing a vending machine button leads to the delivery of a soda.

To simplify, we consider these two types of associative learning separately, but they often occur together in the same situation. Conditioning is not the only form of learning, however. Through observational learning, we learn from others' experiences and examples. In all these ways, by classical and operant conditioning and by observation, we humans learn and adapt to our environments.

No. 2 of 10

2. Which of the following was NOT part of Pavlov's initial dog experiments?
(A) The dog would salivate if meat powder was placed in its mouth.
(B) The dog would get petted if it performed correctly.
(C) A musical tone was played at the same time as food was placed in the dog's mouth.
(D) The experimenter was in an adjacent room.
(E) A device was attached to the dog to divert the saliva to a measuring instrument.

(A) This was part of the initial experiments.
(B) Correct! Petting the dog after it performed correctly would be part of operant conditioning and was not part of Pavlov's experiments.
(C) This was part of the initial experiments; the tone was the neutral stimulus.
(D) To eliminate the possible influence of extraneous stimuli, the experimenter was located in an adjacent room.
(E) This was part of the initial experiments.

Pavlov's new direction came when his creative mind seized on an incidental finding. From studying salivary secretion in dogs, he knew that when he put food in a dog's mouth the animal would invariably salivate. He also noticed that when he worked with the same dog repeatedly, the dog began salivating to stimuli associated with food, such as the mere sight of the food, the food dish, the presence of the person who regularly brought the food, or even the sound of that person's approaching footsteps. Because these "psychic secretions" interfered with his experiments on digestion, Pavlov considered them an annoyance, until he realized they pointed to a simple but important form of learning. From that time on, Pavlov studied learning, which he hoped might enable him to better understand the brain's workings.

To explore the phenomenon more objectively, Pavlov and his assistants experimented. They paired various neutral stimuli, such as a musical tone, with food in the mouth to see if the dog would begin salivating to the neutral stimuli alone. To eliminate the possible influence of extraneous stimuli, they isolated the dog in a small room, secured it in a harness, and attached a device that diverted its saliva to a measuring instrument. From an adjacent room they could present food, at first by sliding in a food bowl, later by blowing meat powder into the dog's mouth at a precise moment. The questions they asked were fundamental: If a neutral stimulus, something the dog could see or hear that normally would not be associated with food, now regularly signaled the arrival of food, would the dog associate the two stimuli? If so, would it begin salivating to the neutral stimulus in anticipation of the food?

No. 3 of 10

3. Which of the following pairings is TRUE regarding the terms used in classical conditioning?
(A) Salivation when food is placed in the mouth is an example of a conditioned response (CR).
(B) The food stimulus is an unconditioned stimulus (UCS).
(C) Salivation in response to a tone is an example of an unconditioned response (UCR).
(D) The previously irrelevant tone stimulus that now triggers the salivation is an example of an unconditioned stimulus (UCS).
(E) Conditioned = unlearned; unconditioned = learned.

(A) Salivation when food is placed in the mouth is an example of an unconditioned response, not a conditioned response.
(B) Correct! The food stimulus is an example of an unconditioned stimulus (UCS).
(C) Salivation in response to a tone is an example of a conditioned response, not an unconditioned response.
(D) The previously irrelevant tone stimulus that now triggers the salivation is an example of a conditioned stimulus (CS).
(E) Conditioned = learned; unconditioned = unlearned.

Because salivation in response to food in the mouth was unlearned, Pavlov called it an unconditioned response (UCR). Food in the mouth automatically, unconditionally, triggers a dog's salivary reflex. Thus Pavlov called the food stimulus an unconditioned stimulus (UCS). Salivation in response to the tone was conditional upon the dog's learning the association between the tone and the food. One translation of Pavlov therefore called this salivation the conditional reflex; today we call this learned response the conditioned response (CR). The previously irrelevant tone stimulus that now triggered the conditional salivation we call the conditioned stimulus (CS). It's easy to distinguish these two kinds of stimuli and responses. Just remember: conditioned = learned; unconditioned = unlearned.

No. 4 of 10

4. Which of the following definitions is matched correctly to its term in regard to classical conditioning?
(A) Acquisition is the initial learning of the stimulus-response relationship.
(B) Discrimination is the diminished response that occurs when the CS no longer signals an impending UCS.
(C) Spontaneous recovery is the tendency to respond to stimuli similar to the CS.
(D) Generalization is the reappearance of a weakened CR after a rest pause.
(E) Extinction is the learned ability to distinguish between a conditioned stimulus and other irrelevant stimuli.

(A) Correct! Acquisition is the initial learning of the stimulus-response relationship.
(B) Extinction is the diminished response that occurs when the CS no longer signals an impending UCS.
(C) The tendency to respond to stimuli similar to the CS is called generalization.
(D) Spontaneous recovery is the reappearance of a weakened CR after a rest pause.
(E) Discrimination is the learned ability to distinguish between a conditioned stimulus, which predicts the UCS, and other irrelevant stimuli.

Acquisition, as a process in classical conditioning, is the initial learning of the stimulus-response relationship. After conditioning, what happens if the CS occurs repeatedly without the UCS? Will the CS continue to elicit the CR? Pavlov found that when he sounded the tone again and again without presenting food, the dogs salivated less and less. Their declining salivation illustrates extinction, the diminished response that occurs when the CS (in this case the tone) no longer signals an impending UCS (the food).

Pavlov found, however, that if he allowed several hours to elapse after extinction and then sounded the tone again, the salivation to the tone would reappear spontaneously, although weaker. This spontaneous recovery, the reappearance of a weakened CR after a rest pause, suggested to Pavlov that extinction was suppressing the CR rather than eliminating it. Both extinction and spontaneous recovery have been observed in humans as well.

Another interesting result of classical conditioning is generalization. Pavlov and his students noticed that a dog conditioned to the sound of one tone also responded somewhat to the sound of a different tone that had never been paired with food. Likewise, a dog conditioned to salivate when rubbed would also salivate somewhat when scratched or when stimulated on a different body part. This tendency to respond to stimuli similar to the CS is called generalization.

Discrimination is the learned ability to distinguish between a conditioned stimulus, which predicts the UCS, and other irrelevant stimuli. Depending on how they were trained, Pavlov's dogs learned to respond to the sound of a particular tone and not to other tones. Like generalization, discrimination has survival value.

No. 5 of 10

5. Which of the following statements is TRUE?
(A) Through operant conditioning, an organism associates different stimuli that it does not control.
(B) Through classical (Pavlovian) conditioning, the organism associates its behaviors with consequences.
(C) Behaviors followed by reinforcers increase; those followed by punishers decrease.
(D) Classical conditioning, but not operant conditioning, involves acquisition, extinction, spontaneous recovery, generalization, and discrimination.
(E) Classical conditioning involves an act that operates on the environment to produce rewarding or punishing stimuli.

(A) It is through classical (Pavlovian) conditioning that an organism associates different stimuli that it does not control.
(B) It is through operant conditioning that the organism associates its behaviors with consequences.
(C) Correct! Behaviors followed by reinforcers increase; those followed by punishers decrease.
(D) Both classical and operant conditioning involve acquisition, extinction, spontaneous recovery, generalization, and discrimination.
(E) It is operant conditioning that involves an act operating on the environment to produce rewarding or punishing stimuli.

Through classical (Pavlovian) conditioning, an organism associates different stimuli that it does not control. Through operant conditioning, the organism associates its behaviors with consequences. Behaviors followed by reinforcers increase; those followed by punishers decrease. This simple but powerful principle has many applications and also several important qualifications.

Both classical and operant conditioning involve acquisition, extinction, spontaneous recovery, generalization, and discrimination. Yet their difference is straightforward: classical conditioning forms associations between stimuli. It involves respondent behavior, behavior that occurs as an automatic response to some stimulus (such as salivating in response to meat powder and later to a tone). Operant conditioning involves operant behavior, so named because the act operates on the environment to produce rewarding or punishing stimuli. To distinguish between the two, we need only ask: is the organism learning associations between events that it doesn't control (classical conditioning), or is it learning associations between its behavior and resulting events (operant conditioning)?

No. 6 of 10

6. Which of the following statements is FALSE regarding B. F. Skinner and operant conditioning?
(A) He elaborated on the law of effect.
(B) He believed that rewarded behavior is likely to recur.
(C) He developed a behavioral technology that revealed principles of behavior control.
(D) He taught pigeons to do pigeon-like behaviors for rewards.
(E) He explored the precise conditions that foster efficient and enduring learning.

(A) It is true that B. F. Skinner elaborated on the law of effect.
(B) It is true that B. F. Skinner believed that rewarded behavior is likely to recur.
(C) It is true that B. F. Skinner developed a behavioral technology that revealed principles of behavior control.
(D) Correct! It is un-pigeon-like behaviors that B. F. Skinner taught pigeons to do for rewards.
(E) It is true that B. F. Skinner explored the precise conditions that foster efficient and enduring learning.

B. F. Skinner became modern behaviorism's most influential and controversial figure. His work elaborated a simple fact of life that psychologist Edward L. Thorndike called the law of effect: rewarded behavior is likely to recur. Using Thorndike's law of effect as a starting point, Skinner developed a behavioral technology that revealed principles of behavior control. These principles also enabled him to teach pigeons such un-pigeon-like behaviors as walking in a figure 8, playing ping-pong, and keeping a missile on course by pecking at a target on a screen.

Skinner developed an operant chamber, popularly known as the Skinner box. The box is typically soundproof, with a bar or key that an animal presses or pecks to release a reward of food or water, and a device that records these responses. Skinner and other operant researchers explored the precise conditions that foster efficient and enduring learning.

No. 7 of 10

7. All of these statements are true regarding reinforcers EXCEPT:
(A) Reinforcement is any event that increases the frequency of a preceding response.
(B) A positive reinforcer may be a tangible reward, praise or attention, or an activity.
(C) Negative reinforcement weakens a response.
(D) Reinforcers vary with circumstance.
(E) Food is a positive reinforcer for hungry animals.

(A) It is true that reinforcement is any event that increases the frequency of a preceding response.
(B) It is true that a positive reinforcer may be a tangible reward, praise or attention, or an activity.
(C) Correct! Negative reinforcement strengthens a response by reducing or removing an aversive stimulus; it does not weaken a response.
(D) It is true that reinforcers vary with circumstance.
(E) It is true that food is a positive reinforcer for hungry animals.

In Skinner's world, reinforcement is any event that increases the frequency of a preceding response. A positive reinforcer may be a tangible reward. It may be praise or attention. Or it may be an activity. Most people think of reinforcers as rewards, but anything that serves to increase behavior is a reinforcer. For example, yelling at someone, if it increases behavior, even offending behavior, is a reinforcer. Reinforcers vary with circumstance. What's reinforcing to one person may not be to another. What's reinforcing in one situation may not be in another.

There are two basic kinds of reinforcement. One type, positive reinforcement, strengthens a response by presenting a typically pleasurable stimulus after a response. Food is a positive reinforcer for hungry animals; attention, approval, and money are positive reinforcers for most people. The other type, negative reinforcement, strengthens a response by reducing or removing an aversive stimulus. Taking aspirin may relieve a headache. Dragging on a cigarette will reduce a nicotine addict's pangs. Pushing the snooze button silences the annoying alarm. All these consequences, assuming they do affect behavior, provide negative reinforcement. When someone stops nagging or whining, that, too, is a reinforcer.

Imagine a worried student who, after goofing off and getting a bad exam grade, studies harder for the next exam. The student's studying may be reinforced by reduced anxiety (negative reinforcement) and by a better grade (positive reinforcement). Whether it works by giving something desirable or by reducing something aversive, reinforcement is any consequence that strengthens behavior.

No. 8 of 10

8. Which of the following is TRUE regarding reinforcement schedules?
(A) Fixed-ratio schedules reinforce a response only after a specified number of responses.
(B) Variable-interval schedules provide reinforcers after an unpredictable number of responses.
(C) Variable-ratio schedules reinforce the first response after a fixed time period.
(D) Fixed-interval schedules reinforce the first response after varying time intervals.
(E) People checking for the mail more frequently as the delivery time approaches is an example of a variable-interval schedule.

(A) Correct! Fixed-ratio schedules reinforce a response only after a specified number of responses.
(B) It is variable-ratio schedules that provide reinforcers after an unpredictable number of responses.
(C) It is fixed-interval schedules, not variable-ratio schedules, that reinforce the first response after a fixed time period.
(D) It is variable-interval schedules, not fixed-interval schedules, that reinforce the first response after varying time intervals.
(E) People checking for the mail more frequently as the delivery time approaches is an example of a fixed-interval schedule, not a variable-interval schedule.

Fixed-ratio schedules reinforce behavior after a set number of responses. Like people paid on a piecework basis, laboratory animals may be reinforced on a fixed ratio. Rats may receive a food pellet only after the bar is pressed thirty times. Once conditioned, the animal will pause only briefly after a reinforcer and will then return to a high rate of responding.

Variable-ratio schedules provide reinforcers after an unpredictable number of responses. This is what gamblers and fishermen experience, unpredictable reinforcement, and it is what makes gambling and fishing so hard to extinguish. Like the fixed-ratio schedule, the variable-ratio schedule produces high rates of responding, because reinforcers increase as the number of responses increases.

Fixed-interval schedules reinforce the first response after a fixed time period. Like people checking more frequently for the mail as the delivery time approaches, or checking to see if the cookies are done or the Jell-O is set, pigeons on a fixed-interval schedule peck a key more frequently as the anticipated time for reward draws near, producing a choppy stop-start pattern rather than a steady rate of response.

Variable-interval schedules reinforce the first response after varying time intervals. Like the "You've got mail" that finally rewards persistence in rechecking for email, variable-interval schedules tend to produce slow, steady responding. This makes sense, because there is no knowing when the waiting will be over.

Animal behaviors differ, yet Skinner contended that these reinforcement principles of operant conditioning are universal. According to Skinner, it matters little what response, what reinforcer, or what species you use. The effect of a given reinforcement schedule is pretty much the same: behavior shows astonishingly similar properties.

No. 9 of 10

9. Which of the following is TRUE regarding observational learning?
(A) Observational learning plays a minor role in learning.
(B) Observational learning is especially prominent among higher animals, especially humans.
(C) The process of observing and imitating a specific behavior is often called copying.
(D) Mirror neurons are not involved in observational learning.
(E) The imitation of models is not involved in shaping a child's development.

(A) Observational learning plays a rather large role in learning.
(B) Correct! Observational learning is especially prominent among higher animals, especially humans.
(C) The process of observing and imitating a specific behavior is often called modeling.
(D) Mirror neurons, in a frontal lobe area adjacent to the brain's motor cortex, provide a neural basis for observational learning.
(E) The imitation of models helps to shape a child's development.

Thus far in the tutorial we have discussed associative, conditioning methods of learning, but learning occurs not only through conditioning but also from our observations of others. This is especially true among higher animals, especially humans. Observational learning, in which we observe and imitate others, plays a large role in learning. The process of observing and imitating a specific behavior is often called modeling. We learn all kinds of social behaviors by observing and imitating models.

We can glimpse the roots of observational learning in other species. For example, gorillas learn and generalize complex actions, such as using both hands to prepare plants for eating, by observing other gorillas. Imitation is all the more striking in humans. So many of our ideas, fashions, and habits travel by imitation that these transmitted cultural elements now have a name: memes.

Recently, neuroscientists have discovered mirror neurons, in a frontal lobe area adjacent to the brain's motor cortex, that provide a neural basis for observational learning. These neurons fire when a person or animal observes another person or animal.

The imitation of models helps to shape a child's development. Shortly after birth, an infant may imitate an adult who sticks out his tongue. By 9 months, infants will imitate novel play behaviors. By 14 months, they will imitate acts modeled on television.

No. 10 of 10

10. When discussing the effect of TV on observational learning, which of the following is FALSE?
(A) Wherever television exists, it becomes the source of much observational learning.
(B) During their first 18 years, most children in developed countries spend more time watching television than they spend in school.
(C) Correlation studies find no link between violence-viewing and violent behavior.
(D) The violence effect seems to stem from a combination of factors, including imitation.
(E) Watching cruelty fosters indifference.

(A) It is true that wherever television exists, it becomes the source of much observational learning.
(B) It is true that during their first 18 years, most children in developed countries spend more time watching television than they spend in school.
(C) Correct! Correlation studies do link violence-viewing with violent behavior.
(D) It is true that the violence effect seems to stem from a combination of factors, including imitation.
(E) It is true that watching cruelty fosters indifference.

Examples of observational learning come from research on media models of aggression. Wherever television exists, it becomes the source of much observational learning. During their first 18 years, most children in developed countries spend more time watching television than they spend in school. The question then becomes: does viewing televised aggression influence some people to commit aggression?

Correlation studies do link violence-viewing with violent behavior. Some of the findings include: the more hours children spend watching violent programs, the more at risk they are for aggression and crime as teens and adults; compared with those who watch less than an hour of TV daily at age 14, those who watch more than three hours at this age commit five times as many aggressive acts at age 16 or 22; and finally, in the United States and Canada, homicide rates doubled between 1957 and 1974, coinciding with the introduction and spread of television. Moreover, census regions that were late in acquiring television saw their homicide rates jump correspondingly later.

The violence effect seems to stem from a combination of factors, including imitation. One research team observed a sevenfold increase in violent play immediately after children viewed the Power Rangers; boys often precisely imitated the characters' flying karate kicks and other violent acts. Prolonged exposure to violence also desensitizes viewers; they become more indifferent to it when later viewing a brawl, whether on TV or in real life. While spending three evenings watching sexually violent movies, male viewers in one experiment became progressively less bothered by the rapes and slashings. Three days later, they also expressed less sympathy for domestic violence victims than did research participants who had not been exposed to the films, and they rated the victims' injuries as less severe. Watching cruelty fosters indifference.