Sunday, January 28, 2007

Rolling over the dead man

Please pay very careful attention to the section that starts on p. 92. I wasn't kidding when I said, in the About DMT part of this blog, that there are probably more dead man test violations in this course than any other kind of error. So when it happens to you, you need to know exactly how to fix it.

Important tables

The tables on p. 87 are very important and very useful. You should probably tattoo them on the inside of your eyeballs. Both tables contain the same information, just organized in different ways, so you don't have to tattoo both of them. The tree diagram on the next page also contains the same information, but in a different format that might make it easier to remember and use.

The reinforcer that's lost in penalty

It's small, so you might miss it... On p. 81 Malott tells us that the reinforcer that's lost in a penalty contingency cannot be the one that maintains the penalized behavior. Good thing to remember if you do an extra credit CA showing a penalty contingency.

Saturday, January 27, 2007

Nobody wants to hurt themselves

And nobody hurts themselves in order to get attention. This is a topic where it's easy to violate the Don't Say Rule.

As weird as it may sound to your ears, all behaviors, including self-injurious behaviors, happen because they've been reinforced. In real life we're swimming in an ocean of intertwining behavioral contingencies that constantly interact in ways that make their effects on our behavior unpredictable and surprising. That's why we begin the course, and end up spending most of it, on the basic principles of behavior, which we learn by studying pretty simple examples. You're already realizing that mastering even those basic principles is quite a challenge.

Chapter 19 is about concurrent contingencies, and there we'll be looking specifically at how simultaneous behavioral contingencies interact to control behavior. You could make an argument that it's the most important chapter in the book because it formally introduces the reality of that ocean of intertwining contingencies that I referred to, which is the real world.

Whether you like it or not

On p. 57 Malott emphasizes that "An aversive stimulus is one we tend to minimize contact with." This is one of the ways to characterize aversive stimuli without saying that they're things you don't like. The fact is that in most cases we don't like aversive stimuli (which are also often called "punishers"). But that fact is peripheral to the best sort of definition.

In BA-speak, as much as possible we define things functionally, that is, in terms of the functions they have or demonstrate. This is similar to defining things operationally, which is a notion that many of you have run into in your other psychology courses.

To put it simply, we try to define things in terms of what they do, the effects they have on other things, or how they affect behavior. So another way to characterize the aversive stimulus (or "punisher") is to say that it's a stimulus which, if it immediately follows a behavior, has the effect of decreasing the future frequency of that behavior.

It can be positively (or negatively) confusing!

In BehaviorAnalysis-speak, "positive" refers to presenting a stimulus immediately following the target behavior. You might find it useful to remember that both "positive" and "present" start with a "p." "Negative" refers to removing a stimulus immediately following the target behavior. If you think of a memory trick for that one, let us know.

The result is that there's positive reinforcement and negative reinforcement, as well as positive punishment (sounds weird, doesn't it?) and negative punishment (sounds redundant, doesn't it?).

The potential for confusion is why Malott uses different terms for these 4 basic kinds of behavioral contingencies. You should still become fluent with the 4 traditional terms, because they get used a lot and you need to know what they mean when you encounter them, even though you won't see some of them ("positive punishment"?) very much in our textbook.
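If it helps to keep the two dimensions straight, here's a minimal sketch in Python (purely illustrative; the function and argument names are my own, not standard notation from Malott or anyone else) that maps the present/remove and increase/decrease dimensions onto the 4 traditional terms:

```python
def classify_contingency(stimulus_change, effect_on_frequency):
    """Combine the two dimensions into one of the 4 traditional terms.

    stimulus_change: the consequence is to "present" or "remove" a stimulus
                     immediately following the target behavior
    effect_on_frequency: the behavior's future frequency goes on to
                         "increase" or "decrease"
    """
    sign = {"present": "positive", "remove": "negative"}[stimulus_change]
    process = {"increase": "reinforcement",
               "decrease": "punishment"}[effect_on_frequency]
    return f"{sign} {process}"

print(classify_contingency("present", "increase"))  # positive reinforcement
print(classify_contingency("remove", "increase"))   # negative reinforcement
print(classify_contingency("present", "decrease"))  # positive punishment
print(classify_contingency("remove", "decrease"))   # negative punishment
```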

Sunday, January 21, 2007

He has autistic behaviors ...

Some people get upset with Malott or others who seem to say that there's no underlying psychological condition that's responsible for autistic behaviors. I don't think he really explains his position as well as I wish he would. Or maybe it's that I don't think he explains my position.

Whether we're talking about autism or anything else that manifests in the form of abnormal or undesirable behaviors, the 1st approach should always be to try changing the behavior. Very often that will work, and if it does, then there's no need for medication, surgery, or other expensive, invasive methods. Behavioral therapies are the most natural, least expensive, and very often the most effective. For many reasons this should be the 1st approach that's tried.

Awareness not required

In the 1st column on p. 41 Malott makes a very important point that's true not just for escape contingencies, but for all behavioral contingencies. In order for a behavioral contingency to control someone's behavior, they don't have to have any awareness of the contingency at all. On p. 51 Malott summarizes a classic experiment that really gives additional force to this point.

It's not uncommon for someone to become aware of a contingency that's controlling their behavior. But when that happens, it's a byproduct of the operation of the contingency, and their behavior would have changed in the same way if they'd never become aware of the contingency.

BehaviorAnalysis-Speak

Revised on 1/4/14

In your class discussion posts, in your CAs, in whatever you do as part of your course, and whenever you're dealing with your fellow behavior analysts on professional matters (when you're hanging out at a bar it's different), you should use behavioral language. Think before you speak or write. Are you about to indulge in mentalistic language? Say or write something that invites or reveals circular reasoning? The material in the latter part of Ch. 2 (starting with "Avoid Circular Reasoning" on p. 26 & running through p. 32) just might be the most important in the book.

There's a close connection between how you think and how you speak and write; each has effects on the other. If you want to understand behavior, you have to speak and write about it properly, and if you speak and write about it properly, you're more likely to think about it properly too.

Quizzes require fluency

By now some of you have had your 1st experience with the quizzes in your course. At the beginning of every semester there are protests about how little time is allowed for quizzes, and requests that more time be allowed. Don't worry: you'll get better at completing the quizzes in the time that's allowed. These quizzes put a premium on fluency, the ability to respond not only correctly but quickly. To master something you must become fluent, just like when learning a new language: you must be able to speak and understand others not only correctly, but at a high rate of speed.

In your course there should NOT be enough time to look up answers to quiz questions. Instead, if you're studying well, you'll know the answers without looking them up, and in that situation, there's plenty of time to complete the quiz.

Saturday, January 20, 2007

Read the comments!

Readers have the option of replying to these posts by clicking on the comments link at the end of each post. This blog is not primarily designed for discussion, but through its comments capability it can be used that way. That means the comments made by other readers might contain useful information for you. So read the comments!

To help you find the comments, look at the Archive on the left side of the page. It's a list of the posts that I've made. If someone has commented on a post and you haven't read that comment yet, the post's title will be highlighted, and clicking on it will take you to that post and all of its comments.

Thursday, January 18, 2007

They really oughta wanna do it

Revised on 1/4/14

Nobody ever does anything just because they ought to or because it's the right thing to do. The only reason behaviors ever happen is because they've been reinforced. This is an important rule of thumb that you really oughta wanna keep close at hand. If you want to understand why a behavior happens, figure out what's reinforcing it or what has reinforced it in the past.

When we say we did something because it was the right thing to do, we're speaking metaphorically or poetically, and not scientifically. Language like that seems to provide an explanation, but really doesn't. It tends to put an end to our search for the true explanation. Metaphor and poetry are wonderful, but they're not science. Behavior analysis is the science of behavior.

The ABCs of behavior

Revised on 1/4/14

On p. 18 there's a definition of behavioral contingency. There's a little formula that you'll often see for this: ABC.

  • A stands for antecedent, which means something that "comes before." The occasion for a response can be thought of as the situation or circumstances that exist right before the target behavior occurs.
  • B stands for behavior, that is, the target behavior that we're interested in understanding or whose frequency we want to change.
  • C stands for consequence. Every behavior has one or more consequences or outcomes, which may be the presentation or the removal of a reinforcer or an aversive stimulus.
Every behavior happens in some kind of situation, circumstance, setting, or occasion, and every behavior has some kind of consequence or outcome. To understand a behavior, you have to understand at least these three aspects of the behavioral contingency that it's a part of.
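For those of you who like to see things written out mechanically, here's a tiny Python sketch (my own shorthand for illustration, not textbook notation) of a behavioral contingency as a simple A-B-C record, using Rod's crying from the p. 2 diagrams as the filled-in example:

```python
from dataclasses import dataclass

@dataclass
class Contingency:
    antecedent: str   # A: the occasion, i.e., the situation right before the behavior
    behavior: str     # B: the target behavior we want to understand or change
    consequence: str  # C: what immediately follows the behavior

# Rod's crying, roughly as diagrammed on p. 2:
rods_crying = Contingency(
    antecedent="Rod doesn't have Dawn's attention",
    behavior="Rod cries",
    consequence="Rod immediately gets Dawn's attention (a reinforcer)",
)
print(rods_crying)
```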

More on immediacy

Revised on 1/4/14

In the 2nd column on p. 17 and on p. 32 Malott talks about immediacy again. This is a big deal in this course. I wouldn't be surprised if there are more immediacy errors in the CAs you'll turn in than any other kind of error. One reason is that it seems so obvious that a reinforcer can follow a behavior by more than 60 seconds and still cause the behavior's frequency to increase. Yes, there are lots of situations in which it looks like that's what's going on. Many of those situations are what Malott calls analogs to reinforcement, and toward the end of the book we'll get into that topic in a big way. In the meantime, in discussions of examples of behavior in your course, and especially when you're creating CAs, remember that your consequence (e.g., a reinforcer or an aversive stimulus, event, or condition) must follow the target behavior immediately.

Asynchronicity

This blog is an example of asynchronicity (look it up) in a couple of ways. First, the online course you're taking is an example of asynchronous learning, meaning that the class never has to be assembled, all together, at the same time. You can "go to class" in the middle of the night from your bedroom in your pajamas, and you'll still see and read and participate in all aspects of the class.

Second, this blog serves two different classes during the current spring semester of 2007. Some of you are in an undergraduate class called Learning and Behavior at UH-Downtown, and some of you are in a graduate class called Learning Principles at UH-Clear Lake. The subject matter of the two courses is the same, and one of the things you have in common is the textbook, Principles of Behavior. But the UHCL class also has a 2nd textbook, which means that they're moving through PoB faster than the UHD class. My aim is to post chapter-related stuff here so that it appears while the UHCL class is on that chapter. That will make it a little early for the UHD class, but that shouldn't be a problem, and it might even be a help.

If you want to see all of the posts on a particular chapter (or any of the other topics), click on the appropriate link under Topics on the left side of the page.

Ch. 2 previews of coming attractions

Revised on 1/4/14

On p. 16, 2nd column, we get a preview of what's sometimes called "thinning the schedule." Right before this, Malott described how, in addition to grandmother reinforcing certain of grandfather's behaviors, Juke was reinforcing grandmother's behavior of reinforcing grandfather's behaviors. This gives us a peek at how complex real life is, with many intertwined and interacting behavioral contingencies in operation at every moment. Anyway, on p. 16 we see Juke reinforcing grandmother's behavior less and less frequently. This was possible because the more grandmother practiced reinforcing grandfather's behaviors, the more her own reinforcing behavior was itself reinforced by its consequences. This is what's known as a "behavior trap," which you can look up in the index if you want to read ahead. We'll learn more about "thinning the schedule" in several places, including the chapters on extinction (6), intermittent reinforcement (17 and 18), and maintenance (27).

Wednesday, January 17, 2007

Feedback on a reinforcement CA

One of your classmates has submitted a reinforcement contingency analysis (CA) for Ch. 2. I returned it with feedback for revising it, and I want to share that feedback with the rest of you. The target behavior was mowing someone's lawn and the reinforcer was the pay received for the job.

There are 2 problems with this as an example of a reinforcement contingency. The first is that mowing a lawn is a series or chain of behaviors (mow a while, take a rest, mow some more, get a drink, etc.) rather than a discrete behavior with a clear starting and ending point & no other behaviors in between. That makes it unclear what it would mean to say that a reinforcer follows this "behavior" immediately.

Second, even if we allowed mowing the lawn as an acceptable target behavior in an example of a reinforcement contingency, it's not likely that the reinforcer ($) immediately follows the "behavior." In all of your examples, we need target behaviors that are discrete behaviors (additional discussion about this in another Ch. 2 post) & that are followed by their reinforcing or punishing consequence immediately.

Class discussions

Please read the syllabus for your course carefully. You'll see that a significant portion of your grade is based on class discussion on the discussion boards for your course. You'll receive a class participation grade each week, and the 1st week ends on Saturday at midnight. Everyone gets a free point toward their 1st week's grade for their 1st post, regardless of what it says, but all other posts must meet the criteria for mastery posts, as explained in your course's syllabus, in order to receive credit.

How immediate is immediate?

Revised on 1/4/14

Besides the dead man test, the thing we'll remind ourselves of more than anything else as we go through the semester is that we should always be looking at the immediate consequences of whatever target behavior we're interested in. On p. 4 Malott reminds us that immediate means 60 seconds or less, and even that is stretching things. As you construct contingency analyses, always keep in mind that whatever you think is the consequence of the target behavior (e.g., a reinforcing or aversive stimulus, event, or condition) must follow the target behavior immediately.

Additional comment, posted on Jan. 21, 2011: Malott puts a lot of emphasis on the proposition that a behavior's frequency changes as a result of its immediate consequences, & I agree with him. This may make you wonder about the many situations in which a behavior seems to be reinforced or punished by a consequence that is not immediate. For instance, seeing an "A" on your quiz paper might seem to reinforce the studying you did last night. Understanding why situations like this don't contradict what I said above is tricky. But Malott explains it by the end of the book.

In the meantime, for purposes of learning the fundamental principles of behavior, it's important for us to stick to examples & scenarios in which it's very obvious that the target behavior's frequency is changed because of its immediate consequences. That's why I'll probably say to you, when you turn in some of your contingency analyses, that they need to be revised because the reinforcing or punishing consequence doesn't follow the target behavior immediately.
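If it helps to have the rule of thumb in front of you in the starkest possible form, here's a toy Python check (my own illustration, not anything from the book) that you could imagine running on any consequence you're about to put in a CA:

```python
def is_immediate(delay_seconds: float) -> bool:
    """Rule of thumb from p. 4: "immediate" means the consequence follows
    the target behavior within 60 seconds (and even that is stretching it)."""
    return delay_seconds <= 60

print(is_immediate(3))          # True: candy delivered right after the response
print(is_immediate(12 * 3600))  # False: an "A" seen the morning after studying
```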

Tuesday, January 16, 2007

First task always: Pinpoint the target behavior

Revised on 1/4/14

The 1st thing you must always do when trying to understand or change an organism's behavior (including your own) is to specify exactly what behavior you want to understand or change. You'll probably hear this from me over and over throughout the semester.

The term "target behavior," as we use it in this course, refers to the behavior (or "response"; the two are synonyms, as we see in Ch. 1) that's of most interest to us in the present scenario. We may be interested in a behavior because we want to understand why it happens, or we may want to change the frequency of a certain behavior. In both kinds of cases, the behavior of primary interest is the "target behavior."

In each of the 3 diagrams on p. 2, the target behavior is described in the "Behavior" box in the middle of the diagram. In each of these cases, Malott is suggesting why those behaviors happen: because they're immediately followed by the delivery of a reinforcer. In the 1st diagram, before Rod cries he doesn't have the reinforcer of Dawn's attention. Then he performs the target behavior, & immediately following that behavior, he has that reinforcer. That's why the frequency of Rod's crying in these kinds of situations is high: it's usually reinforced. Can you see the same kind of explanation in the other 2 diagrams?

BTW, helping you to identify the correct target behavior is the main purpose of the dead man test.