Chapter 6: Learning

Learning = an enduring change in behavior, resulting from experience.
Conditioning = a process in which environmental stimuli and behavioral responses become connected.
Two types of conditioning that scientists study:
1) Classical conditioning (Pavlovian conditioning) occurs when we learn that two types of events go together: a neutral object comes to elicit a reflexive response when it is associated with a stimulus that already produces that response.
2) Operant conditioning (instrumental conditioning) occurs when we learn that a behavior leads to a particular outcome: the consequences of an action determine the likelihood that it will be performed in the future.
Unconditioned response (UR) = a response that does not have to be learned, such as a reflex.
Unconditioned stimulus (US) = a stimulus that elicits a response, such as a reflex, without any prior learning.
Conditioned stimulus (CS) = a stimulus that elicits a response only after learning has taken place.
Conditioned response (CR) = a response to a conditioned stimulus that has been learned.
Acquisition = the gradual formation of an association between the conditioned and unconditioned stimuli.
Extinction = a process in which the conditioned response is weakened when the conditioned stimulus is repeated without the unconditioned stimulus.
Spontaneous recovery = a process in which a previously extinguished response reemerges following presentation of the conditioned stimulus.
Stimulus generalization = occurs when stimuli that are similar but not identical to the conditioned stimulus produce the conditioned response.
Stimulus discrimination = a differentiation between two similar stimuli when only one of them is consistently associated with the unconditioned stimulus.
Phobia = an acquired fear that is out of proportion to the real threat of an object or situation.
Scientific Method: Watson's Little Albert Experiment
Hypothesis: Phobias can be explained with classical conditioning.
1) Little Albert was presented with neutral objects, such as a white rat and costume masks, that provoked a neutral response.
2) During conditioning trials, when Albert reached for the white rat (CS), a loud clanging sound (US) scared him (UR).
Results: Eventually, the pairing of the rat (CS) and the clanging sound (US) led to the rat producing fear (CR) on its own. The fear response generalized to other stimuli initially presented with the rat, such as the costume masks.
Conclusion: Classical conditioning can cause participants to fear neutral objects.

Rescorla-Wagner model = a cognitive model of classical conditioning; it states that the strength of the CS-US association is determined by the extent to which the unconditioned stimulus is unexpected.
Blocking effect = once a conditioned stimulus is learned, it can prevent the acquisition of a new conditioned stimulus presented along with it.
Law of effect = Thorndike's general theory of learning: any behavior that leads to a satisfying state of affairs will be more likely to occur again, and any behavior that leads to an annoying state of affairs will be less likely to occur again.
Reinforcer = a stimulus that follows a response and increases the likelihood that the response will be repeated. Primary reinforcers are those that satisfy biological needs.
Shaping = a process of operant conditioning; it involves reinforcing behaviors that are increasingly similar to the desired behavior.
Premack's theory = a reinforcer's value can be determined by the amount of time an organism engages in the associated behavior when free to do anything. It can account for differences in individuals' values.
Premack's principle = a more valued activity can be used to reinforce the performance of a less valued activity.
Positive reinforcement = the increase in the probability of a behavior's being repeated following the administration of a stimulus.
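The Rescorla-Wagner model and the blocking effect it predicts can be sketched in a few lines of code. This is a minimal illustration, not from the chapter; the parameter names (alpha for the learning rate, lam for the maximum associative strength the US supports) are conventional but chosen here for illustration.

```python
# Rescorla-Wagner sketch: learning is driven by prediction error --
# how unexpected the US is given ALL conditioned stimuli present.

def rw_trial(V, present, alpha=0.3, lam=1.0):
    """Update associative strengths V for the CSs present on one US trial."""
    surprise = lam - sum(V[cs] for cs in present)  # prediction error
    for cs in present:
        V[cs] += alpha * surprise
    return V

V = {"light": 0.0, "tone": 0.0}

# Phase 1: light alone is paired with the US -> light acquires strength.
for _ in range(50):
    rw_trial(V, ["light"])

# Phase 2: light + tone compound is paired with the same US.
for _ in range(50):
    rw_trial(V, ["light", "tone"])

print(round(V["light"], 2))  # near 1.0: fully learned
print(round(V["tone"], 2))   # near 0.0: blocked, the US was already expected
```

Because the light already predicts the US by phase 2, the prediction error is near zero, so the tone gains almost no strength: the blocking effect falls directly out of the update rule.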
Negative reinforcement = the increase in the probability of a behavior's being repeated through the removal of a stimulus.
Positive punishment = punishment that occurs with the administration of a stimulus and thus decreases the probability of a behavior's recurring.
Negative punishment = punishment that occurs with the removal of a stimulus and thus decreases the probability of a behavior's recurring.
Continuous reinforcement = a type of learning in which the desired behavior is reinforced each time it occurs.
Partial reinforcement = a type of learning in which behavior is reinforced intermittently.
Ratio schedule = a schedule in which reinforcement is based on the number of times the behavior occurs.
Interval schedule = a schedule in which reinforcement is available after a specific unit of time.
Fixed schedule = a schedule in which reinforcement is consistently provided after a specific number of occurrences or a specific amount of time.
Variable schedule = a schedule in which reinforcement is applied at different rates or at different times.
Partial-reinforcement extinction effect = the greater persistence of behavior under partial reinforcement than under continuous reinforcement.
Behavior modification = the use of operant-conditioning techniques to eliminate unwanted behaviors and replace them with desirable ones.
Cognitive map = a visual/spatial mental representation of an environment.
Latent learning = learning that takes place in the absence of reinforcement.
Insight learning = a form of problem solving in which a solution suddenly emerges after either a period of inaction or contemplation of the problem.

Scientific Method: Tolman's Study of Latent Learning
Hypothesis: Reinforcement has more impact on performance than on learning.
1) One group of rats is put through trials running through a maze with a goal box that never has any food reward as reinforcement.
2) A second group of rats is put through trials in a maze with a goal box that always has food reinforcement.
3) A third group of rats is put through trials in a maze that has food reinforcement only after the first 10 trials.
Results: Rats that were regularly reinforced for correctly running through the maze showed improved performance over time compared with rats that did not receive reinforcement. Rats that were not reinforced for the first 10 trials but were reinforced thereafter showed an immediate change in performance: between days 11 and 12, group 3's average number of errors decreased dramatically.
Conclusion: Rats may learn a path through a maze but not reveal their learning (latent learning) until it is reinforced. There is a distinction between acquisition and performance.

Meme = a unit of knowledge transferred within a culture.
Observational learning = learning that occurs when behaviors are acquired or modified following exposure to others performing the behavior.

Scientific Method: Bandura's Bobo Doll Studies
Hypothesis: Children can acquire behaviors through observation.
1) Two groups of children were shown a film of an adult playing with a large inflatable doll called Bobo.
2) One group saw the adult play quietly with the doll.
3) The other group saw the adult attack the doll.
Results: When the children were allowed to play with the doll later, those who had seen the aggressive display were more than twice as likely to act aggressively toward it.
Conclusion: Exposing children to violence may encourage them to act aggressively.

Scientific Method: Fear Response in Rhesus Monkeys
Hypothesis: Monkeys can develop phobias about snakes by observing other monkeys reacting fearfully to snakes.
1) Two sets of monkeys, one reared in the laboratory and one reared in the wild, had to reach past a clear box to get food.
2) When the clear box contained a snake, the laboratory-reared monkeys reached across the box, but the wild-reared monkeys refused.
Results: After watching the wild-reared monkeys react, the laboratory-reared monkeys no longer reached across the box.
Conclusion: Fears can be learned through observation.

Modeling = the imitation of behavior through observational learning.
Vicarious learning = learning that occurs when people learn the consequences of an action by observing others being rewarded or punished for performing it.
Mirror neurons = neurons that are activated during observation of others performing an action. They may allow us to step into the shoes of the people we observe so we can better understand their actions; they may be the neural basis for empathy.

Intracranial self-stimulation (ICSS): Olds and Milner set up an experiment to see whether rats would press a lever to self-administer electrical stimulation to pleasure centers in their brains. ICSS activates dopamine receptors; interfering with dopamine eliminates self-stimulation as well as naturally motivated behaviors, such as feeding, drinking, and copulating.

Donald Hebb proposed that learning results from alterations in synaptic connections: when one neuron excites another, some change takes place such that the synapse between the two strengthens, and one neuron's firing becomes increasingly likely to cause the other's firing. "Cells that fire together wire together."

Two types of simple learning:
1) Habituation = a decrease in behavioral response following repeated exposure to nonthreatening stimuli.
2) Sensitization = an increase in behavioral response following exposure to a threatening stimulus. It leads to heightened responsiveness to other stimuli.
Kandel's research: alterations in the functioning of the synapse lead to habituation and sensitization.
Presynaptic neurons alter their neurotransmitter release for both types of simple learning. A reduction in neurotransmitter release leads to habituation; an increase in neurotransmitter release leads to sensitization.
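Kandel's finding can be pictured schematically: if behavioral response tracks presynaptic transmitter release, then scaling release down gives habituation and scaling it up gives sensitization. The numbers below are arbitrary stand-ins, not data from the chapter.

```python
# Toy sketch: response magnitude tracks presynaptic transmitter release.
# The scaling factors (0.7 per repetition, 2.5 boost) are illustrative only.

def response(release):
    """Behavioral response scales with transmitter released at the synapse."""
    return release

release = 1.0  # baseline release

# Habituation: repeated nonthreatening stimulation reduces release.
for _ in range(5):
    release *= 0.7
habituated = response(release)

# Sensitization: a threatening stimulus boosts release above baseline.
sensitized = response(1.0 * 2.5)

print(habituated < 1.0 < sensitized)  # True: weaker and stronger than baseline
```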
Orienting response = when an animal encounters a novel stimulus, it pays attention to it.

Long-term potentiation (LTP) = the strengthening of a synaptic connection so that postsynaptic neurons are more easily activated. Originally observed in the hippocampus, although evidence indicates that fear conditioning may induce LTP in the amygdala.
1) When a presynaptic neuron is given a brief electrical pulse, there is a slight probability that the postsynaptic neuron will fire.
2) Applying intense and frequent pulses to the presynaptic neuron leads to a greater probability that the postsynaptic neuron will fire.
3) When a single brief pulse is applied subsequently, it produces the greatest probability that the postsynaptic neuron will fire.
The NMDA receptor (a type of glutamate receptor) is required for LTP and has a special property: it opens only if a nearby neuron fires at the same time, a phenomenon supporting Hebb's rule ("cells that fire together wire together").
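The coincidence requirement of the NMDA receptor is essentially a Hebbian learning rule, which can be sketched as a weight update gated on joint activity. This is a schematic assumption for illustration (the learning rate 0.1 and activity patterns are made up), not a model from the chapter.

```python
# Hebbian weight update gated like an NMDA receptor: the synapse
# strengthens only when presynaptic and postsynaptic neurons are
# active on the same step ("cells that fire together wire together").

def hebbian_update(w, pre, post, eta=0.1):
    """Strengthen weight w only on coincident pre/post activity."""
    if pre and post:
        w += eta
    return w

w = 0.1  # initial synaptic strength

# Brief, weak pulses: postsynaptic cell rarely fires -> little strengthening.
for pre, post in [(1, 0), (1, 0), (1, 1)]:
    w = hebbian_update(w, pre, post)
print(round(w, 2))  # 0.2

# Intense, frequent pulses: pre and post fire together -> LTP-like growth.
for pre, post in [(1, 1)] * 5:
    w = hebbian_update(w, pre, post)
print(round(w, 2))  # 0.7
```

After potentiation the weight is larger, so a subsequent single pulse is more likely to drive the postsynaptic neuron, matching step 3 above.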