Operant Conditioning, Biological & Cognitive Influences, Observational Learning & Quick Review


Operant Conditioning

In the late nineteenth century, psychologist Edward Thorndike proposed the law of effect. The law of effect states that any behavior that has good consequences will tend to be repeated, and any behavior that has bad consequences will tend to be avoided. In the 1930s, another psychologist, B. F. Skinner, extended this idea and began to study operant conditioning. Operant conditioning is a type of learning in which responses come to be controlled by their consequences. Unlike the reflexive responses involved in classical conditioning, operant responses are often new, voluntary behaviors.

Just as Pavlov’s fame stems from his experiments with salivating dogs, Skinner’s fame stems from his experiments with animal boxes. Skinner used a device called the Skinner box to study operant conditioning. A Skinner box is a cage set up so that an animal can automatically get a food reward if it makes a particular kind of response. The box also contains an instrument that records the number of responses an animal makes.

Psychologists use several key terms to discuss operant conditioning principles, including reinforcement and punishment.

Reinforcement

Reinforcement is the delivery of a consequence that increases the likelihood that a response will occur. Positive reinforcement is the presentation of a stimulus after a response so that the response will occur more often. Negative reinforcement is the removal of a stimulus after a response so that the response will occur more often. In this terminology, positive and negative don’t mean good and bad. Instead, positive means adding a stimulus, and negative means removing a stimulus.

Punishment

Punishment is the delivery of a consequence that decreases the likelihood that a response will occur. Positive and negative punishments are analogous to positive and negative reinforcement. Positive punishment is the presentation of a stimulus after a response so that the response will occur less often. Negative punishment is the removal of a stimulus after a response so that the response will occur less often.

Reinforcement helps to increase a behavior, while punishment helps to decrease a behavior.

Primary and Secondary Reinforcers and Punishers

Reinforcers and punishers are different types of consequences:

  • Primary reinforcers, such as food, water, and caresses, are naturally satisfying.
  • Primary punishers, such as pain and freezing temperatures, are naturally unpleasant.
  • Secondary reinforcers, such as money, fast cars, and good grades, are satisfying because they’ve become associated with primary reinforcers.
  • Secondary punishers, such as failing grades and social disapproval, are unpleasant because they’ve become associated with primary punishers.
  • Secondary reinforcers and punishers are also called conditioned reinforcers and punishers because they arise through classical conditioning.

Shaping

Shaping is a procedure in which reinforcement is used to guide a response closer and closer to a desired response.

Example: Lisa wants to teach her dog, Rover, to bring her the TV remote control. She places the remote in Rover’s mouth and then sits down in her favorite TV–watching chair. Rover doesn’t know what to do with the remote, and he just drops it on the floor. So Lisa teaches him by first praising him every time he accidentally walks toward her before dropping the remote. He likes the praise, so he starts to walk toward her with the remote more often. Then she praises him only when he brings the remote close to the chair. When he starts doing this often, she praises him only when he manages to bring the remote right up to her. Pretty soon, he brings her the remote regularly, and she has succeeded in shaping a response.

Reinforcement Schedules

A reinforcement schedule is the pattern in which reinforcement is given over time. Reinforcement schedules can be continuous or intermittent. In continuous reinforcement, someone provides reinforcement every time a particular response occurs. Suppose Rover, Lisa’s dog, pushes the remote under her chair. If she finds this amusing and pats him every time he does it, she is providing continuous reinforcement for his behavior. In intermittent or partial reinforcement, someone provides reinforcement on only some of the occasions on which the response occurs.

Biological Influences

Conditioning accounts for a great deal of learning in both humans and nonhuman species. However, biological factors can limit the capacity for conditioning. Two good examples of biological influences on conditioning are taste aversion and instinctive drift.

Taste Aversion

Psychologist John Garcia and his colleagues found that aversion to a particular taste is conditioned only by pairing the taste (a conditioned stimulus) with nausea (an unconditioned stimulus). If taste is paired with other unconditioned stimuli, conditioning doesn’t occur.

Similarly, nausea paired with most other conditioned stimuli doesn’t produce aversion to those stimuli. Pairing taste and nausea, on the other hand, produces conditioning very quickly, even with a delay of several hours between the conditioned stimulus of the taste and the unconditioned stimulus of nausea. This phenomenon is unusual, since normally classical conditioning occurs only when the unconditioned stimulus immediately follows the conditioned stimulus.

Example: Joe eats pepperoni pizza while watching a movie with his roommate, and three hours later, he becomes nauseated. He may develop an aversion to pepperoni pizza, but he won’t develop an aversion to the movie he was watching or to his roommate, even though they were also present at the same time as the pizza. Joe’s roommate and the movie won’t become conditioned stimuli, but the pizza will. If, right after eating the pizza, Joe gets a sharp pain in his elbow instead of nausea, it’s unlikely that he will develop an aversion to pizza as a result. Unlike nausea, the pain won’t act as an unconditioned stimulus.

Instinctive Drift

Instinctive drift is the tendency for conditioning to be hindered by natural instincts. Two psychologists, Keller and Marian Breland, were the first to describe instinctive drift. The Brelands found that through operant conditioning, they could teach raccoons to put a coin in a box by using food as a reinforcer. However, they couldn’t teach raccoons to put two coins in a box. If given two coins, raccoons just held on to the coins and rubbed them together. Giving the raccoons two coins brought out their instinctive food-washing behavior: raccoons instinctively rub edible things together to clean them before eating them. Once the coins became associated with food, it became impossible to train the raccoons to drop the coins into the box.

Cognitive Influences

Researchers once thought of conditioning as automatic, involving little in the way of higher mental processes. However, researchers now believe that conditioning does involve some information processing.

The psychologist Robert Rescorla showed that in classical conditioning, pairing two stimuli doesn’t always produce the same level of conditioning. Conditioning works better if the conditioned stimulus acts as a reliable signal that predicts the appearance of the unconditioned stimulus.

Example: Consider the earlier example in which Adam’s professor, Professor Smith, pulled out a revolver in class and fired it into the air, causing Adam to cringe. If Adam heard a gunshot only when Professor Smith pulled out her revolver, he would be conditioned to cringe at the sight of the revolver. Now suppose Professor Smith sometimes took out the revolver as before and fired it. Other times, she played an audio recording of a gunshot without taking out the revolver. The revolver wouldn’t predict the gunshot sound as well now, since gunshots happen both with and without the revolver. In this case, Adam wouldn’t respond as strongly to the sight of the revolver.

The fact that classical conditioning depends on the predictive power of the conditioned stimulus, rather than just association of two stimuli, means that some information processing happens during classical conditioning. Cognitive processes are also involved in operant conditioning. A response doesn’t increase just because satisfying consequences follow the response. People usually think about whether the response caused the consequence. If the response did cause the consequence, then it makes sense to keep responding the same way. Otherwise, it doesn’t.

Observational Learning

People and animals don’t learn only by conditioning; they also learn by observing others. Observational learning is the process of learning to respond in a particular way by watching others, who are called models. Observational learning is also called “vicarious conditioning” because it involves learning by watching others acquire responses through classical or operant conditioning.

Example: Brian might learn not to stand too close to a soccer goal because he saw another spectator move away after getting whacked on the head by a wayward soccer ball. The other spectator stopped standing close to the soccer goal because of operant conditioning—getting clobbered by the ball acted as positive punishment for standing too close. Brian was indirectly, or vicariously, conditioned to move away.
