We often need to analyze and compare cumulative graphs. Malott discusses the cumulative graph in Ch. 17 and Cooper, Heron, and Heward discuss it in Ch. 6. Three basic ways in which you can compare graphs are in terms of their level, trend, and variability.
The level of behavior depicted on a graph refers to the average frequency of the behavior across time. Calculate it by dividing the number of responses by the amount of time in which those responses were made. For example, if a rat pressed a lever 120 times in an hour, then one way to express the level of behavior would be 2 responses per minute (120 lever presses divided by 60 minutes). When visually analyzing a graph, imagine a straight horizontal line running across the graph at the point on the vertical axis that represents the average frequency based on all the data points. Sometimes a graph actually shows this "mean level line" (or sometimes a "median level line"). The higher the mean level line sits above the horizontal axis, the greater the average frequency of the behavior across the period of time represented by the graph.
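To make the arithmetic concrete, here is a minimal sketch in Python. It's my own illustration, not anything from Malott or Cooper, Heron, and Heward, and the data are made up; it just shows the responses-divided-by-time calculation and where the mean level line would be drawn.

# Hypothetical per-minute response counts from one session
responses_per_minute = [1, 3, 2, 0, 4, 2]

total_responses = sum(responses_per_minute)   # 12 responses
minutes = len(responses_per_minute)           # 6 minutes

# Level = average frequency = responses divided by time
level = total_responses / minutes             # 2.0 responses per minute
print(f"Draw the mean level line at {level:.1f} responses/min")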
Trend refers to the overall tendency across time for the behavior to increase in frequency, decrease in frequency, or remain stable. Again, imagine a straight line, this time running through the data points in such a way that approximately half of them are above the line and half of them are below the line. If this trend line is horizontal, it tells you that, overall, the behavior did not change in frequency over the time period represented by the graph. If the line slopes upward from left to right, the frequency increased across time, and if it slopes downward, the frequency decreased. Another word for trend that you'll often see is slope.
Variability refers to how much the frequency changes from one data point to the next, that is, how far the data points tend to fall from the trend line. Imagine the trend line again. If the actual data points tend to be far from the line, then variability is high. This would indicate that the frequency of the behavior tended to change a lot from moment to moment during the session. There were periods of fast responding mixed in with periods of slow responding. We'd probably describe a line like this as very "jagged." When variability is low, the line is smoother.
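For readers who like to see the idea in numbers, here is a rough sketch in Python, again my own illustration with made-up data. For simplicity it fits an ordinary least-squares trend line rather than the half-above/half-below line described above, and it estimates variability as the average distance of the data points from that line.

# Hypothetical frequencies from successive sessions
data = [2.0, 2.5, 1.5, 3.0, 4.5, 3.5, 5.0]
n = len(data)
xs = list(range(n))                     # session numbers 0, 1, 2, ...

# Least-squares slope and intercept of the trend line
mean_x = sum(xs) / n
mean_y = sum(data) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, data))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Variability: mean absolute deviation of the points from the trend line
deviations = [abs(y - (slope * x + intercept)) for x, y in zip(xs, data)]
variability = sum(deviations) / n

print(f"Trend (slope): {slope:.2f} responses per session")
print(f"Variability around the trend line: {variability:.2f}")

A positive slope corresponds to an increasing trend, a slope near zero to a stable one, and the larger the average deviation, the more "jagged" the graph looks.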
The following graph shows a low level of behavior, a near-zero trend, and low variability (fairly stable responding).
The following graph shows a moderate level of behavior, an increasing trend, and high variability. Notice that a trend line has been added.
The following graph shows a low level of behavior, a slightly increasing trend, and moderate variability.
Saturday, December 22, 2007
Comparing graphs
Posted by Anonymous at 11:04 AM | Labels: Applied Behavior Analysis: Ch. 06, Principles of Behavior: Ch. 17 | 3 comments
Tuesday, November 27, 2007
What gets paired in a verbal pairing procedure?
Several times in the chapters on rule-governed behavior (23, 24, 25, maybe elsewhere too), Malott discusses the verbal analog to the pairing procedure (“verbal pairing procedure” for short). Remember that a neutral stimulus becomes a learned reinforcer or a learned aversive stimulus (punisher) by being paired with a stimulus that’s already a reinforcer or aversive stimulus (Ch. 11). Like this...
According to Malott’s theory of how rule-governed behavior works, in order for a rule to control behavior, there has to be an aversive stimulus/condition that’s escaped by performing the target behavior that the rule specifies. This direct-acting escape contingency is the engine at the heart of rule control. If behavior is controlled by its immediate consequences, as Malott posits, then in order to understand any behavior, including complex rule-governed behavior, we have to dig deep until we uncover whatever direct-acting contingency is actually doing the work of controlling the behavior.
So in rule-governed behavior, where does the necessary aversive stimulus/condition come from? Malott makes it clear that when a rule is stated (by someone else or by oneself) and the rule includes a deadline, the combination of noncompliance with the rule (not performing the target behavior) and the approaching deadline constitutes an aversive stimulus/condition. It’s a conditional aversive stimulus because neither of the two components (noncompliance and approaching deadline) would be aversive by itself. The aversiveness of one component is conditional upon its being combined with the other.
But what still requires a little further clarification, I think, is why that conditional stimulus is aversive. The mere combining of noncompliance and an approaching deadline isn’t necessarily aversive. For instance, consider this rule: Take your kid to the dentist before the end of the week and you’ll receive 5 cents. Most of us would not worry about losing the opportunity for that 5 cents. So noncompliance (I haven’t taken the kid to the dentist yet) plus the approaching deadline (It’s already Friday afternoon) would not constitute an aversive stimulus/condition. But if the amount were $100 instead of 5 cents, we’d probably worry and noncompliance plus approaching deadline would be aversive. So whether or not this kind of conditional stimulus is aversive depends on the consequence specified in the contingency that the rule describes. If the consequence is sizable enough and/or probable enough, then the conditional stimulus (noncompliance + approaching deadline) will be aversive.
So back to the original question about the verbal pairing procedure. Remember that in order to turn a neutral stimulus into an aversive stimulus, it has to be paired with an already-aversive stimulus. As explained above, noncompliance with a rule plus an approaching deadline constitutes a conditional stimulus which, by itself, is neutral, that is, it’s not aversive. It only becomes aversive when it’s paired with an already-aversive stimulus, such as loss of the opportunity to receive a sizable and/or probable reinforcer. Like this…
Pardon me for getting mentalistic for just a moment, but this “pairing” doesn’t take place in the outside, observable world, but “in your head.” The proper way to say that is that the neutral conditional stimulus and the already-aversive stimulus are “verbally paired.” Or to say it another way, because we’re not talking about actually physically pairing two stimuli, this is a verbal analog of the pairing procedure.
Anyway, this verbal pairing procedure makes “it’s Friday afternoon and the kid hasn’t been taken to the dentist” an aversive condition. So now it can function as the before condition in the direct-acting escape contingency that ultimately controls the target behavior, as in the diagram in the 2nd column on p. 405. That contingency and the 3rd contingency in the 1st column on that page are essentially the same, or at least we’ll treat them that way for now. I believe the before conditions described in these two contingencies are different from each other, but for present purposes they can be treated as interchangeable because under normal circumstances they would always occur together.
Posted by Anonymous at 4:36 PM | Labels: Principles of Behavior: Ch. 25 | 0 comments
Sunday, November 25, 2007
Tinkering with some contingencies in Ch. 26B
(1) In Ch. 26B on the web, Malott calls the contingency at the top of p. 7 an analog to penalty. But I think it's an analog to punishment by prevention of a reinforcer. What do you think?
Before: You will enter Heaven when you die.
Behavior: You dump a barrel of toxic waste.
After: You will not enter Heaven when you die.
(2) At the bottom of p. 12 there's a description of a rule-governed analog to punishment, and on the top of the next page it's diagrammed, but incorrectly, I think. It seems to me that the diagram should say:
Before: You won't enter Hell when you die.
Behavior: You commit one mortal sin.
After: You will enter Hell when you die.
(3) On pgs. 13-14 Malott offers the example of an analog to avoidance of the loss of the opportunity for a reinforcer (AALOR). As we've learned, if a contingency looks like an analog to reinforcement, but it includes a deadline, then it's really an AALOR. In this example, the rule is to do a good deed before the end of the day so you'll go to Heaven if you die before you wake. Malott says the deadline is the end of the day and that it functions as an SD. But I don't think so. I think the deadline is something like "before you fall asleep and never wake up." If this rule is effective in controlling someone's good deed behavior, it's because noncompliance as sleepy time approaches is highly aversive since you won't get another chance to earn entry into Heaven if you die before you wake. This deadline is not an SD because the corresponding SΔ would be something like "after you wake up, still alive." In that circumstance, the target behavior of doing a good deed would still earn the reinforcer of getting to Heaven, or at least getting closer. So I think this is another example of a deadline that functions as an opportunity to respond. So I'd change the diagram at the top of p. 14 to:
Before: You won't go to Heaven.
Opportunity to Respond/Deadline: Before you fall asleep and never wake up.
Behavior: You perform a good deed.
After: You will go to Heaven.
(4) The stranded motorist scenario is another example in which the deadline functions as an opportunity to respond rather than as an SD.
Posted by Anonymous at 5:55 PM | Labels: Principles of Behavior: Ch. 26 | 0 comments
Tuesday, November 13, 2007
Another "opportunity to respond" vs. SD
On p. 366 Malott explains how some stimulus situations that were formerly thought to function as SDs don't really fit that definition. The example is Mary having to eat her meal before the deadline (mealtime's end) in order to avoid losing the reinforcer that would be delivered the next day. If that deadline functioned as an SD, then the corresponding SΔ would be after mealtime ends. The problem with that is that after mealtime ends, it's no longer possible to carry out the target behavior of eating her meal. So instead of the deadline functioning as an SD, Malott tells us it functions as an "opportunity to respond." This is like situations in which an operandum (e.g., the lever in a Skinner box) might seem to function as an SD but, since the target behavior cannot even occur in its absence, the presence of the operandum really functions as the opportunity to respond.
OK, on to p. 380. Carefully think about the examples diagrammed there. It seems to me that after the play ends (labeled as the SΔ), the target behavior of making a good play cannot be performed. If I'm right about this, then in those two diagrams there should be neither an SΔ box nor its corresponding "after" box, and the box describing the deadline should be labeled "Opportunity to respond" instead of SD.
What do you think?
Posted by Anonymous at 3:55 PM | Labels: Principles of Behavior: Ch. 23 | 1 comment
Does feedback really function as an SD?
I don’t think so, and I think Dr. Malott might agree. It’s obvious from reading his book that he and his team are always thinking more deeply about various issues, and my guess is that further thought about this one will lead to the view that feedback functions more like a prompt than like an SD.
Here’s why. In order for there to be an SD, there also has to be an SΔ, which is a stimulus in the presence of which the target behavior is not reinforced/punished. So think about the football scenario in Ch. 23. If feedback delivered before a play functions as an SD, in the presence of which the target behavior will be reinforced, then the corresponding SΔ would be no feedback delivered before the play. But if no feedback were delivered before the play, yet the target behavior occurred anyway (that is, the play was executed correctly), it would still be reinforced. This means that the “no feedback” condition is not an SΔ. And this further means that feedback is not an SD.
Now remember the definition of a prompt: a supplemental stimulus that raises the probability of a correct response. Seems to fit, right?
Posted by Anonymous at 3:53 PM | Labels: Principles of Behavior: Ch. 23 | 1 comment
Tuesday, August 21, 2007
Motivating operations
The concept of motivating operation (MO) is defined and discussed quite differently in Ch. 16 of Applied Behavior Analysis and in Ch. 9 of Principles of Behavior. In the former, Michael defines and describes MOs as having two kinds of effects – behavior-altering (BA) effects and value-altering (VA) effects. BA effects are the temporary effects of the MO on the frequency of current behavior. For example, the MO of food deprivation temporarily increases the frequency of behaviors that have been reinforced by food in the past. VA effects are the temporary effects of the MO on the reinforcing or punishing effectiveness of a stimulus, event, object, or condition. For example, the MO of food deprivation temporarily increases the reinforcing effectiveness of food.
These two effects of an MO are usually presented as if they were two different and independent types of effects. But in my opinion this is an incorrect understanding. An alternative description, which I prefer, is that MOs have only one kind of effect: a behavior-altering effect. An MO causes a change in the frequency of behaviors that have been reinforced or punished by a stimulus, event, object, or condition in the past. The so-called value-altering effect is not a second, different effect that's independent of the BA effect. This becomes clear once we realize that the value or effectiveness of a reinforcer or punisher can only be understood in terms of whatever changes in behavioral frequency are observed. In other words, talking about an MO's value-altering effect is really just another way of talking about its behavior-altering effect.
Malott seems to be on the same track, although he doesn't say so explicitly. But he defines MO as "a procedure or condition that affects learning and performance with respect to a particular reinforcer or aversive stimulus." By "affects learning and performance" he can only mean "changes the frequency of the target behavior." So this definition focuses on the MO's BA effects and says nothing about the value or effectiveness of the relevant reinforcer or punisher (which he calls "aversive stimulus"), that is, it says nothing about the MO's VA effect.
As Michael points out in Ch. 16 of ABA, there's still a lot of work to be done before we'll fully understand MOs, especially MOs for punishment. In the meantime, I think Malott's definition is not only simpler to understand but also more conceptually accurate, because it focuses on the MO's BA effect without claiming that MOs also have a VA effect.
Posted by Anonymous at 3:04 PM | Labels: Applied Behavior Analysis: Ch. 16, Principles of Behavior: Ch. 09 | 7 comments
Saturday, August 11, 2007
Kinds of reinforcers, Part 2
Revised on 12/22/14
I suggest reading this post after you read the post called Kinds of reinforcers, Part 1.
See the definition of Reinforcer (Positive Reinforcer) on p. 3. Be sure you understand that stimulus and reinforcer are not synonyms; these two words DO NOT mean the same thing. Stimulus is the larger category and reinforcer is a subcategory of it. So every reinforcer is a stimulus, but not every stimulus is a reinforcer. Sometimes a particular stimulus functions as a reinforcer, but sometimes it has a different function.
Stimulus, like many other words, has multiple meanings. In the second column on p. 3 Malott says that a stimulus is any physical change, such as a change in sound, light, pressure, or temperature. This is a “default” definition of stimulus as the word is commonly used in everyday language. In his list of four types of stimuli, Malott refers to this as the “restricted sense” of the word. But he also says that throughout Principles of Behavior, when the word is used, it might refer to this kind of physical change, but it also might refer to an event, activity, or condition. So looking again at the definition of Reinforcer (Positive Reinforcer), we should understand that a stimulus that functions as a reinforcer might be a physical change, event, activity, or condition. Any of these kinds of stimuli might function as a reinforcer in a particular situation.
Another way to think about Malott’s list is that there are four basic kinds of reinforcers. A stimulus (in the restricted sense of the word), such as a pleasant taste or aroma, can function as a reinforcer. So can an event, like a football game or a concert. So can a condition or, more specifically, a change in condition. For instance, if it's dark and you can't see, then the behavior of flipping a light switch may change the visibility condition, and that change in condition is a reinforcer. As for activities as reinforcers, I'll expand a little on what Malott says. Rather than an activity functioning as a reinforcer, it's more often the opportunity to engage in a particular activity that functions as a reinforcer. For example, if you wash the dishes (target behavior), you'll have the opportunity to engage in the activity of playing video games for a while. That opportunity, then, functions as a reinforcer.
Posted by Anonymous at 8:05 PM | Labels: Principles of Behavior: Ch. 01 | 0 comments