Friday, July 13, 2007

SD & SΔ: Two sides of the coin

Revised on 3/29/14

Malott’s definitions of SD and SΔ are procedural (see footnote on p. 199). They’re stimuli in the presence of which a reinforcement or punishment contingency is present or absent. This is different from the way some others define these terms. As Malott points out, some other definitions put the emphasis on either (1) the likelihood of the response happening in the presence of the stimulus, or (2) the likelihood that, if the response happens in the presence of the stimulus, it will be reinforced/punished. These two events – the response happening and the response being reinforced/punished – both depend on whether the reinforcement/punishment contingency is present or absent. That seems to make the presence or absence of the contingency primary, with those two events being secondary.

An SΔ is a stimulus in the presence of which the target response will not be reinforced/punished because the relevant contingency is absent. If there are no stimuli (circumstances, settings, occasions) in the presence of which the target response would not be reinforced/punished, then by definition there’s no SΔ. This also means that the contingency is present all the time, so that there’s no particular stimulus “signaling” that the contingency is present. All of this is why Malott says that if there’s no SΔ, then there’s no SD.

The thing about coins is that they have two sides. There’s no such thing as a one-sided coin; you can’t have one without the other. And if you DON’T have one of them, then you don’t have the other either. That’s the way it is with SDs and SΔs. If you don’t have an SΔ, then you don’t have an SD either (and vice versa, of course).
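If it helps to see the two-sides-of-the-coin logic spelled out, here's a little Python sketch of the procedural definitions. The function names and the dictionary representation are mine, not Malott's; it's just an illustration of the idea that a stimulus counts as an SD only when some SΔ also exists.

```python
# Toy model: a discriminated contingency is active (present) in the
# presence of some stimuli and inactive (absent) in the presence of others.

def classify_stimuli(contingency_active):
    """contingency_active: dict mapping stimulus -> True/False
    (True means the reinforcement/punishment contingency is present).
    Returns (SDs, S_deltas)."""
    sds = {s for s, active in contingency_active.items() if active}
    s_deltas = {s for s, active in contingency_active.items() if not active}
    # Two sides of the coin: if either set is empty, the contingency is
    # present (or absent) all the time, so no stimulus "signals" anything
    # and there's neither an SD nor an S-delta.
    if not sds or not s_deltas:
        return set(), set()
    return sds, s_deltas
```

Run it on a case where the contingency is present all the time and you get no SD and no SΔ, which is exactly Malott's point.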

Wednesday, April 4, 2007

How do analogs to punishment work?

In Ch. 24 Malott explained in great detail how rule-governed analogs to avoidance of the loss of a reinforcer work. Now maybe I just missed it (always possible), but I don't think he's explained how rule-governed analogs to punishment work. In the PDF version of Ch. 26 he writes “Commit a single mortal sin and you will definitely spend eternity in hell. The statement of that rule does make noncompliance a most aversive condition (for believers). This is an effective rule-governed analog to punishment.” So what is the mechanism by which a rule like this works?

For rule-governed analogs to avoidance of the loss of a reinforcer, stating the rule establishes noncompliance (not performing the target behavior) as an aversive before condition which can be escaped or decreased by performing the target behavior. This outcome follows the target behavior immediately and the result is an increased frequency of the target behavior in similar situations.

But in a rule-governed analog to punishment, noncompliance with the rule means PERFORMING the target behavior, and noncompliance (having performed the target behavior) is an aversive AFTER condition. Depending on the particular circumstances, that aversive condition might be what we'd call guilt or, perhaps, fear of punishment. This aversive after condition follows the target behavior immediately as part of a direct-acting punishment contingency. When a rule is stated prohibiting a behavior, that behavior becomes a member of the response class of prohibited behaviors. Even if the particular target behavior has never been performed before, other prohibited behaviors have been performed in the past and have been punished. So because members of this response class have been punished in the past, resulting in a decreased frequency of performing such behaviors, the frequency of newly prohibited behaviors should also be reduced.
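Here's the proposed mechanism as a Python sketch. To be clear, this is my own toy model of the reasoning in the paragraph above, with made-up names and numbers; the key move is that stating a prohibiting rule adds a behavior to the response class of prohibited behaviors, so the punishment history of that class transfers to the newly prohibited behavior.

```python
# Hypothetical model: a response class of prohibited behaviors whose
# suppressive history generalizes to new members added by rule statement.

class ProhibitedClass:
    def __init__(self):
        self.members = set()
        self.suppression = 0.0   # built up by past punishment of members

    def punish_member(self, behavior, amount=0.5):
        # Direct-acting punishment of one member strengthens the
        # suppressive effect for the whole class.
        self.members.add(behavior)
        self.suppression = min(1.0, self.suppression + amount)

    def state_rule(self, behavior):
        # Stating a rule prohibiting a behavior makes it a member of the
        # class, even if it has never been performed (or punished) before.
        self.members.add(behavior)

    def frequency(self, behavior, baseline=1.0):
        # Class membership reduces expected frequency; non-members keep
        # their baseline frequency.
        if behavior in self.members:
            return baseline * (1 - self.suppression)
        return baseline
```

So a newly prohibited behavior shows a reduced frequency purely because other members of its class have been punished in the past.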

I think that's how rule-governed analogs to punishment work.

Sunday, March 25, 2007

Direct-acting, indirect-acting, and ineffective

All behavioral contingencies consist of three elements: the occasion for a behavior/response, the actual behavior/response, and the outcome of the behavior/response (p. 16). The contingencies we learned about first are the direct-acting contingencies, which Malott defines on p. 366 as those for which "the outcome of the response reinforces or punishes that response." The outcome (such as presentation of a reinforcer or an aversive stimulus) reinforces or punishes the target behavior because it immediately follows that behavior. In other words, the outcome directly affects the future frequency of the target behavior.

Indirect-acting contingencies consist of the same three elements, but we call them indirect-acting because the outcome (such as presentation of a reinforcer or an aversive stimulus) does NOT reinforce or punish the target behavior because it does not immediately follow that behavior but, instead, comes after some delay. This delayed outcome still affects the future frequency of the target behavior, but it affects it indirectly instead of directly. These indirect effects on the behavior's frequency are not called "reinforcement" or "punishment" because, by definition, reinforcement and punishment involve outcomes that follow the target behavior immediately.

These indirect-acting contingencies are one type of analog contingency (or what Malott calls "analogs to behavioral contingencies"). They're analogs because they resemble the direct-acting contingencies, but they're different because of their delayed outcomes. For our present purposes, indirect-acting contingencies and analog contingencies are the same thing.
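The classification above boils down to two questions: does the outcome affect the behavior's future frequency at all, and does the outcome follow the behavior immediately? Here's a minimal sketch of that decision, with an immediacy window I've picked purely for illustration:

```python
# Illustrative classifier: direct-acting vs. indirect-acting vs. ineffective.

IMMEDIACY_WINDOW_SECONDS = 60  # assumed threshold for "immediate",
                               # chosen for illustration only

def classify_contingency(delay_seconds, affects_frequency):
    """delay_seconds: time from the response to its outcome.
    affects_frequency: whether the behavior's future frequency changes."""
    if not affects_frequency:
        return "ineffective"
    if delay_seconds <= IMMEDIACY_WINDOW_SECONDS:
        # The outcome itself reinforces or punishes the response.
        return "direct-acting"
    # Delayed outcome: it still affects frequency, but only indirectly
    # (e.g., via a stated rule), so it's an analog contingency.
    return "indirect-acting"
```

The point of the sketch is just that "indirect-acting" isn't a different set of elements; it's the same three-element contingency with a delayed outcome that still changes behavior.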

In order for an indirect-acting contingency to be effective (affect the future frequency of the behavior), the contingency must be described to the behaver. A statement that describes a contingency (direct-acting or indirect-acting) is a rule. If the statement of a rule describing an indirect-acting contingency affects the frequency of the target behavior, then we can say that the behavior is "rule-governed."

When we talk about analog/indirect-acting contingencies, we need to say more. We need to say what kind of analog/indirect-acting contingency we're talking about. For instance, in Ch. 22 Malott talks about analog reinforcement contingencies and analog discriminated avoidance contingencies.

Thursday, March 15, 2007

To our visitors...

It could be that some folks who are not fellow students in your class may be visiting the ole DMT site from time to time. If so, this post is intended mainly for them.

I hope our guests will feel free to explore and to add their comments to any of the posts here. For now, at least, things are set up so that anyone can add comments without restraint. I trust that all comments, whether from students or guests, will be offered in the same spirit that motivated creating DMT in the first place. That spirit is best expressed in the words of Rudolph the Rat, who appears in the upper left of our front page. Getting a little more specific, our goal at DMT is for more and more people to learn the principles of behavior analysis and how to use them to improve our lives. And we're always open to suggestions about how we can do that better. If you'd like to communicate directly with me (PW), you can send an email to williamspsATgmail.com (replace AT with @).

Wednesday, March 14, 2007

How do you avoid something immediately?

Most of the behavioral contingencies that we deal with in Principles of Behavior have immediate consequences, that is, reinforcing or aversive consequences that follow the target behavior immediately. Starting with Ch. 22 we get deeper into analog contingencies, which often means that the consequences don't follow the target behavior immediately, or so it seems. Actually, we'll learn that even with these analogs, the consequences that directly affect the future frequency of the target behavior do, indeed, follow the target behavior immediately.

But I digress .... In the case of some avoidance contingencies, it's hard to see how this immediacy criterion applies. In other cases it's obvious. If you're a race car driver whizzing around a track surrounded by lots of other drivers in close quarters, you're going to experience something pretty aversive any second unless you're continuously performing several different behaviors. Because all kinds of nasty stuff threatens to happen to you immediately, within seconds if not sooner, whatever behaviors you perform to prevent those things from happening have the immediate consequence of avoiding/preventing aversive consequences. This is the sense in which the consequences of avoidance follow the target behavior immediately.

What that means when you're inventing avoidance CAs is that the aversive stimulus described in your before box must be something that's going to be experienced within seconds UNLESS the target behavior happens. Another way to say this is that the aversive stimulus is going to be experienced within seconds unless the next thing you do is the target behavior.

What that also means is that behaviors like taking an alternate route so you won't have to put up with the heavy traffic on your regular route, or telling them to "hold the onions" when you order bean burritos from Taco Bell so you won't gross out everyone you talk to, are not examples of avoidance. In this latter case, when you tell them to hold the onions, you haven't yet eaten them, right? So at the time you tell them to hold the onions, the aversive condition of onion breath is not going to happen within seconds. That aversive condition won't happen unless you do something else first, namely actually eating a burrito with onions on it. So telling them to hold the onions is not an avoidance behavior. In an avoidance situation, the aversive stimulus or condition is going to happen within seconds unless the next thing you do is the target behavior.
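The two-part test in that last sentence can be written out directly. This is my own illustrative encoding (the "within seconds" window is a number I picked), but it captures why the race car case passes and the hold-the-onions case fails:

```python
# Sketch of the avoidance test: the aversive event must be imminent AND
# nothing else has to happen first for it to occur.

IMMINENT_SECONDS = 5  # illustrative "within seconds" window

def is_avoidance(seconds_until_aversive, other_behavior_required_first):
    """True only if the aversive stimulus would occur within seconds
    unless the next thing you do is the target behavior."""
    return (seconds_until_aversive <= IMMINENT_SECONDS
            and not other_behavior_required_first)

# Race car driver: crash is ~1 second away, nothing else has to happen
# first -> avoidance.
# "Hold the onions": onion breath only happens if you first eat a burrito
# with onions on it -> not avoidance.
```

Plugging those two cases in gives True for the driver and False for the burrito order.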

But even though it's not avoidance, you should still tell them to hold the onions.

Saturday, February 24, 2007

Learned imitative reinforcers

This is a difficult concept for some students to get, so I'll see if I can supplement Malott's discussion a little bit.

If you do something and I do the same thing, I can tell that I'm doing the same thing you're doing because I can see that we're doing the same thing. If you say something and I say the same thing, then I can hear that I said the same thing. In other words, I know when my actions match yours because of the perceptual feedback I get from you and from myself. But it goes beyond seeing and hearing. If you raise your arm in the air and I do the same, even if my eyes are closed I'm getting perceptual (proprioceptive) feedback informing me that my arm is raised. All of these types of perceptual feedback are stimuli.

If a child imitates someone else's behavior and it's reinforced, then those reinforcers are paired with those stimuli that inform the child that their behaviors are matching the model's. When this has happened a sufficient number of times and in a variety of imitative situations, then stimuli showing us that our behavior matches a model's become learned reinforcers. From that point on, whenever we perceive that our behavior matches someone else's, that matching (imitative) behavior will be automatically reinforced by those learned imitative reinforcers.
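The pairing process described above can be sketched numerically. This is just my toy model of "a sufficient number of pairings": each time matching-feedback stimuli are paired with a reinforcer, their learned reinforcing value grows toward a ceiling. The function and numbers are mine, for illustration only.

```python
# Toy pairing model: matching-feedback stimuli acquire reinforcing value
# through repeated pairing with other reinforcers.

def paired_value(value, n_pairings, increment=0.2, cap=1.0):
    """Each pairing moves the learned reinforcing value a fraction of the
    way toward the ceiling, so value rises quickly at first and levels off."""
    for _ in range(n_pairings):
        value = min(cap, value + increment * (cap - value))
    return value
```

After enough pairings the matching stimuli carry substantial reinforcing value on their own, which is the point at which matching a model's behavior becomes automatically reinforcing.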

That's how generalized imitation happens.

Wednesday, February 21, 2007

What is a response class?

Revised on 3/29/14

In Ch. 7 (p. 128) Malott discusses the three ways to define a response class. Skinner pointed out that no one ever performs a response/behavior the same way twice. Opening the refrigerator with your right hand is basically the same behavior as opening it with your left hand, even though they're obviously different too. But because they're more similar in important ways than they are different, they're considered members of the same response class. So a response class is a collection of similar behaviors.

What Malott does for us in Ch. 7 is to explain the three ways in which the members of a response class can be similar to each other. If two or more responses are similar in one or more of these ways, then they're members of the same response class.

(1) First, behaviors can be similar on one or more response dimensions. A response dimension is a physical property of a response. So this means that responses may be members of the same response class because they're physically similar.

(2) Behaviors can also be similar because they share the effects of reinforcement or punishment. That means that if one member of a response class is reinforced or punished, and its frequency subsequently changes, the frequency of the other members of the response class will also change in the same direction, even though they haven't been directly reinforced or punished. An implication of this is that if reinforcing or punishing a behavior changes its frequency, and the frequency of another behavior also changes in the same way, that's an indication that the two behaviors are members of the same response class.

(3) Behaviors can also be similar because they serve the same function or produce the same outcome. That means that if a behavior is followed by a particular reinforcer or aversive stimulus (punisher), then other members of the same response class will also be followed by that reinforcer or punisher. So if two behaviors produce the same reinforcing or punishing consequence, that's an indication that they're members of the same response class. This doesn't prove that they're members of the same response class, but it may suggest that you should investigate further to determine if, in fact, they are.
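The three criteria above can be summarized as a single membership check. The criteria come from the post; the data representation and names are mine, and (per point 3) a shared outcome only *suggests* common membership rather than proving it:

```python
# Sketch of the three response-class membership tests.

def same_response_class(b1, b2):
    """b1, b2: dicts with keys
    'dimensions'  - set of physical properties of the response,
    'freq_change' - +1/-1/0, direction of frequency change after one
                    member of the class is reinforced or punished,
    'outcome'     - the consequence the behavior produces."""
    # (1) physically similar on one or more response dimensions
    shares_dimension = bool(b1["dimensions"] & b2["dimensions"])
    # (2) shares the effects of reinforcement/punishment: frequencies
    # change in the same (nonzero) direction
    shares_effect = b1["freq_change"] == b2["freq_change"] != 0
    # (3) serves the same function / produces the same outcome
    same_function = b1["outcome"] == b2["outcome"]
    return shares_dimension or shares_effect or same_function
```

For the refrigerator example: right-hand and left-hand opening differ physically in some ways, but they produce the same outcome (an open refrigerator), so the check suggests they belong to the same response class.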