Tuesday, November 13, 2007
Another "opportunity to respond" vs. SD
On p. 366 Malott explained how some stimulus situations that were formerly thought to function as SDs don't really fit the definition of an SD. The example was Mary having to eat her meal before a deadline (mealtime's end) in order to avoid losing a reinforcer that would otherwise be delivered the next day. If that deadline functioned as an SD, then the corresponding SΔ would be the period after mealtime ends. The problem is that after mealtime ends, it's no longer possible to carry out the target behavior of eating the meal. So instead of the deadline functioning as an SD, Malott tells us it functions as an "opportunity to respond." This parallels situations in which an operandum (e.g., the lever in a Skinner box) might seem to function as an SD, but since the target behavior cannot occur at all in the operandum's absence, its presence really functions as the opportunity to respond.
OK, on to p. 380. Think carefully about the examples diagrammed there. It seems to me that after the play ends (labeled as the SΔ), the target behavior of making a good play can no longer be performed. If I'm right about this, then those two diagrams should contain neither an SΔ box nor its corresponding "after" box, and the box describing the deadline should be labeled "Opportunity to respond" instead of SD.
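Here's a minimal sketch of the test I'm applying, with made-up numbers and names entirely my own (nothing from the book): a true SΔ is a condition in which the response can occur but goes unreinforced, whereas after the deadline the response can't occur at all.

```python
# Toy model: Mary must eat her meal before mealtime ends.
DEADLINE = 10  # mealtime ends at t = 10 (arbitrary units)

def response_possible(t):
    # After mealtime ends, eating the meal simply cannot happen.
    return t < DEADLINE

def reinforced(t):
    # Reinforcement is only defined for a response that actually occurs.
    if not response_possible(t):
        raise ValueError("no response occurred, so nothing to reinforce")
    return True  # before the deadline, eating the meal is reinforced

# An S-delta would require some t where the response is possible but
# unreinforced. No such t exists here, so there's no S-delta -- and
# therefore no SD; the deadline just marks the opportunity to respond.
for t in (5, 15):
    if response_possible(t):
        print(f"t={t}: response possible, reinforced={reinforced(t)}")
    else:
        print(f"t={t}: no opportunity to respond (deadline has passed)")
```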
What do you think?
Posted by Anonymous at 3:55 PM | Labels: Principles of Behavior: Ch. 23 | 1 comment
Does feedback really function as an SD?
I don’t think so, and I think Dr. Malott might agree. It’s obvious from reading his book that he and his team keep thinking more deeply about these issues, and my guess is that deeper thought about this one will lead to the view that feedback functions not as an SD but more like a prompt.
Here’s why. In order for there to be an SD, there also has to be an SΔ, which is a stimulus in the presence of which the target behavior is not reinforced/punished. So think about the football scenario in Ch. 23. If feedback delivered before a play functions as an SD, in the presence of which the target behavior will be reinforced, then the corresponding SΔ would be no feedback delivered before the play. But if no feedback were delivered before the play, yet the target behavior occurred anyway (that is, the play was executed correctly), it would still be reinforced. This means that the “no feedback” condition is not an SΔ. And this further means that feedback is not an SD.
Now remember the definition of a prompt: a supplemental stimulus that raises the probability of a correct response. Seems to fit, right?
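As a minimal sketch of both tests (the probabilities and names are my own illustration, not from Ch. 23):

```python
# S-delta test: is there a condition in which the correct response
# occurs but is NOT reinforced?
def reinforced(play_correct, feedback_given):
    # A correctly executed play is reinforced whether or not
    # feedback preceded it.
    return play_correct

print(not reinforced(play_correct=True, feedback_given=False))
# False -> "no feedback" is not an S-delta, so feedback is not an SD.

# Prompt test: does feedback raise the probability of a correct
# response? (Illustrative numbers only.)
p_correct_with_feedback = 0.9
p_correct_without_feedback = 0.6
print(p_correct_with_feedback > p_correct_without_feedback)
# True -> feedback fits the definition of a prompt.
```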
Posted by Anonymous at 3:53 PM | Labels: Principles of Behavior: Ch. 23 | 1 comment
Tuesday, August 21, 2007
Motivating operations
The concept of motivating operation (MO) is defined and discussed quite differently in Ch. 16 of Applied Behavior Analysis and in Ch. 9 of Principles of Behavior. In the former, Michael defines and describes MOs as having two kinds of effects – behavior-altering (BA) effects and value-altering (VA) effects. BA effects are the temporary effects of the MO on the frequency of current behavior. For example, the MO of food deprivation temporarily increases the frequency of behaviors that have been reinforced by food in the past. VA effects are the temporary effects of the MO on the reinforcing or punishing effectiveness of a stimulus, event, object, or condition. For example, the MO of food deprivation temporarily increases the reinforcing effectiveness of food.
These two effects of an MO are usually presented as if they were distinct and independent. But in my opinion this is an incorrect understanding. The alternative description I prefer is that MOs have only one kind of effect – a behavior-altering effect: an MO changes the frequency of behaviors that have been reinforced or punished by a particular stimulus, event, object, or condition in the past. The so-called value-altering effect is not a second effect independent of the BA effect. We see this once we realize that the value or effectiveness of a reinforcer or punisher can only be understood in terms of the changes in behavioral frequency we actually observe. In other words, talking about an MO's value-altering effect is really just another way of talking about its behavior-altering effect.
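A minimal sketch of that claim, with illustrative numbers and names entirely my own:

```python
BASELINE_RATE = 10.0  # food-reinforced responses per minute when sated

def response_rate(hours_deprived):
    # The behavior-altering effect: food deprivation (the MO) raises the
    # frequency of behaviors that food has reinforced in the past.
    return BASELINE_RATE * (1.0 + 0.5 * hours_deprived)

def reinforcer_value(hours_deprived):
    # The "value-altering effect" has no independent measurement here:
    # it can only be read off the very same change in response rate.
    return response_rate(hours_deprived) / BASELINE_RATE

print(response_rate(4))     # 30.0 responses per minute
print(reinforcer_value(4))  # 3.0 -- the rate change, renamed
```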
Malott seems to be on the same track, although he doesn't say so explicitly. He defines MO as "a procedure or condition that affects learning and performance with respect to a particular reinforcer or aversive stimulus." By "affects learning and performance" he can only mean "changes the frequency of the target behavior." So his definition focuses on the MO's BA effect and says nothing about the value or effectiveness of the relevant reinforcer or punisher (which he calls an "aversive stimulus") – that is, nothing about a VA effect.
As Michael points out in Ch. 16 of ABA, there's still a lot of work to be done before we'll fully understand MOs, especially MOs for punishment. In the meantime, I think Malott's definition is not only simpler to understand but also more conceptually accurate, because it focuses on the MO's BA effect without claiming that MOs also have a separate VA effect.
Posted by Anonymous at 3:04 PM | Labels: Applied Behavior Analysis: Ch. 16, Principles of Behavior: Ch. 09 | 7 comments
Saturday, August 11, 2007
Kinds of reinforcers, Part 2
Revised on 12/22/14
I suggest reading this post after you read the post called Kinds of reinforcers, Part 1.
See the definition of Reinforcer (Positive Reinforcer) on p. 3. Be sure you understand that stimulus and reinforcer are NOT synonyms. Stimulus is the larger category, and reinforcer is a subcategory of it. So every reinforcer is a stimulus, but not every stimulus is a reinforcer. Sometimes a particular stimulus functions as a reinforcer, and sometimes it has a different function.
Stimulus, like many other words, has multiple meanings. In the second column on p. 3 Malott says that a stimulus is any physical change, such as a change in sound, light, pressure, or temperature. This is a “default” definition of stimulus as the word is commonly used in everyday language. In his list of four types of stimuli, Malott refers to this as the “restricted sense” of the word. But he also says that throughout Principles of Behavior, when the word is used, it might refer to this kind of physical change, but it also might refer to an event, activity, or condition. So looking again at the definition of Reinforcer (Positive Reinforcer), we should understand that a stimulus that functions as a reinforcer might be a physical change, event, activity, or condition. Any of these kinds of stimuli might function as a reinforcer in a particular situation.
Another way to think about Malott’s list is that there are four basic kinds of reinforcers. A stimulus (in the restricted sense of the word), such as a pleasant taste or aroma, can function as a reinforcer. So can an event, like a football game or a concert. So can a condition or, more specifically, a change in condition. For instance, if it's dark and you can't see, then the behavior of flipping a light switch may change the visibility condition, and that change in condition is a reinforcer. As for activities as reinforcers, I'll expand a little on what Malott says. Rather than an activity functioning as a reinforcer, it's more often the opportunity to engage in a particular activity that functions as a reinforcer. For example, if you wash the dishes (target behavior), you'll have the opportunity to engage in the activity of playing video games for a while. That opportunity, then, functions as a reinforcer.
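A minimal sketch of the point that "reinforcer" names a function a stimulus serves, not a separate kind of thing (the examples and names are mine, not Malott's):

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    description: str
    kind: str  # "physical change", "event", "activity", or "condition"

def functions_as_reinforcer(s, behavior_frequency_increases):
    # Any of the four kinds of stimuli CAN be a reinforcer; whether it IS
    # one in a given situation depends on its effect on the behavior.
    return behavior_frequency_increases

aroma = Stimulus("fresh-bread aroma", "physical change")
game = Stimulus("opportunity to play video games", "activity")

# The same stimulus may reinforce in one situation and not in another:
print(functions_as_reinforcer(aroma, behavior_frequency_increases=True))   # True
print(functions_as_reinforcer(aroma, behavior_frequency_increases=False))  # False
print(functions_as_reinforcer(game, behavior_frequency_increases=True))    # True
```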
Posted by Anonymous at 8:05 PM | Labels: Principles of Behavior: Ch. 01 | 0 comments
Monday, August 6, 2007
More on SDs & SΔs
According to Malott, and just about everyone else as far as I can tell, the term discriminative stimulus is the "proper" name for the antecedent variable abbreviated SD. Its opposite, abbreviated SΔ, doesn't seem to have a proper name. Instead, we're usually told that the abbreviation stands for S-delta, which is really just a spelled-out form of SΔ that makes the pronunciation clear and accommodates keyboards that don't know Greek.
In my opinion, discriminative stimulus should be the label for the category of antecedent variables that includes both SD and SΔ. In other words, there are two kinds of discriminative stimuli – SDs and SΔs. An SD is a stimulus in the presence of which a particular response will be reinforced or punished (depending on whether we're dealing with a reinforcement or punishment contingency), and an SΔ is a stimulus in the presence of which a particular response will not be reinforced or punished.
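The taxonomy I'm proposing, sketched in code (my own naming, not standard usage):

```python
from enum import Enum

class DiscriminativeStimulus(Enum):
    # Two kinds of discriminative stimuli, under my proposed umbrella term:
    SD = "a response in its presence WILL be reinforced/punished"
    S_DELTA = "a response in its presence will NOT be reinforced/punished"
```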
Posted by Anonymous at 12:49 PM | Labels: Principles of Behavior: Ch. 12 | 0 comments
Friday, July 13, 2007
SD & SΔ: Two sides of the coin
Revised on 3/29/14
Malott’s definitions of SD and SΔ are procedural (see the footnote on p. 199): they’re stimuli in the presence of which a reinforcement or punishment contingency is present or absent. This is different from the way some others define these terms. As Malott points out, some other definitions put the emphasis on either (1) the likelihood of the response happening in the presence of the stimulus, or (2) the likelihood that, if the response happens in the presence of the stimulus, it will be reinforced/punished. These two events – the response happening and the response being reinforced/punished – both depend on whether the reinforcement/punishment contingency is present or absent. This makes the presence or absence of the contingency primary and those two events secondary.
An SΔ is a stimulus in the presence of which the target response will not be reinforced/punished because the relevant contingency is absent. If there are no stimuli (circumstances, settings, occasions) in the presence of which the target response would not be reinforced/punished, then by definition there’s no SΔ. This also means that the contingency is present all the time, so that there’s no particular stimulus “signaling” that the contingency is present. All of this is why Malott says that if there’s no SΔ, then there’s no SD.
The thing about coins is that they have two sides. There’s no such thing as a one-sided coin; you can’t have one without the other. And if you DON’T have one of them, then you don’t have the other either. That’s the way it is with SDs and SΔs. If you don’t have an SΔ, then you don’t have an SD either (and vice versa, of course).
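A minimal sketch of the coin argument, as a toy classifier of my own invention:

```python
def classify(settings):
    """settings maps each setting name to whether the reinforcement
    contingency is operative there; returns each setting's label."""
    # If the contingency is present in EVERY setting, no stimulus signals
    # anything: there's no S-delta, and therefore no SD either.
    if all(settings.values()):
        return {name: "neither (contingency always present)" for name in settings}
    return {name: "SD" if present else "S-delta"
            for name, present in settings.items()}

print(classify({"light on": True, "light off": False}))
# {'light on': 'SD', 'light off': 'S-delta'}
print(classify({"light on": True, "light off": True}))
# both 'neither ...' -- no S-delta means no SD
```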
Posted by Anonymous at 3:03 PM | Labels: Principles of Behavior: Ch. 12 | 0 comments
Wednesday, April 4, 2007
How do analogs to punishment work?
In Ch. 24 Malott explained in great detail how rule-governed analogs to avoidance of the loss of a reinforcer work. Now maybe I just missed it (always possible), but I don't think he's explained how rule-governed analogs to punishment work. In the PDF version of Ch. 26 he writes “Commit a single mortal sin and you will definitely spend eternity in hell. The statement of that rule does make noncompliance a most aversive condition (for believers). This is an effective rule-governed analog to punishment.” So what is the mechanism by which a rule like this works?
For rule-governed analogs to avoidance of the loss of a reinforcer, stating the rule establishes noncompliance (not performing the target behavior) as an aversive before condition, which can be escaped or reduced by performing the target behavior. That outcome follows the target behavior immediately, and the result is an increased frequency of the target behavior in similar situations.
But in a rule-governed analog to punishment, noncompliance with the rule means PERFORMING the target behavior, and noncompliance (having performed the target behavior) is an aversive AFTER condition. Depending on the particular circumstances, that aversive condition might be what we'd call guilt or, perhaps, fear of punishment. This aversive after condition follows the target behavior immediately as part of a direct-acting punishment contingency. When a rule is stated prohibiting a behavior, that behavior becomes a member of the response class of prohibited behaviors. Even if the particular target behavior has never been performed before, other prohibited behaviors have been performed in the past and have been punished. So because members of this response class have been punished in the past, resulting in a decreased frequency of performing such behaviors, the frequency of newly prohibited behaviors should also be reduced.
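A minimal sketch of that proposed mechanism, as a toy model entirely of my own construction:

```python
# Behaviors punished in the past, and how many punishers each received.
prohibited_class = {"lying", "stealing"}
punishment_history = {"lying": 5, "stealing": 3}

def frequency(behavior, baseline):
    # Membership in the punished response class suppresses the behavior,
    # even if THIS behavior has never been directly punished.
    if behavior in prohibited_class:
        class_punishers = sum(punishment_history.values())
        return baseline / (1 + class_punishers)
    return baseline

print(frequency("jaywalking", baseline=8.0))  # 8.0 -- not yet prohibited

# State the rule: "Do not jaywalk." The behavior joins the class.
prohibited_class.add("jaywalking")
print(frequency("jaywalking", baseline=8.0))  # ~0.9 -- suppressed via the class
```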
I think that's how rule-governed analogs to punishment work.
Posted by Anonymous at 4:27 PM | Labels: Principles of Behavior: Ch. 26 | 1 comment