Tuesday, August 21, 2007

Motivating operations

The concept of motivating operation (MO) is defined and discussed quite differently in Ch. 16 of Applied Behavior Analysis and in Ch. 9 of Principles of Behavior. In the former, Michael defines and describes MOs as having two kinds of effects – behavior-altering (BA) effects and value-altering (VA) effects. BA effects are the temporary effects of the MO on the frequency of current behavior. For example, the MO of food deprivation temporarily increases the frequency of behaviors that have been reinforced by food in the past. VA effects are the temporary effects of the MO on the reinforcing or punishing effectiveness of a stimulus, event, object, or condition. For example, the MO of food deprivation temporarily increases the reinforcing effectiveness of food.

These two effects of an MO are usually presented as if they were two different, independent types of effects that are brought about by an MO. But in my opinion this is an incorrect understanding. An alternative description of an MO's effect, which I prefer, is that MOs have only one kind of effect – a behavior-altering effect. An MO causes a change in the frequency of behaviors that have been reinforced or punished by a stimulus, event, object, or condition in the past. The so-called value-altering effect is not a second, different effect that's independent of the BA effect. This becomes clear once we realize that the value or effectiveness of a reinforcer or punisher can only be understood in terms of observed changes in behavioral frequency. In other words, when we talk about an MO's value-altering effect, it's really just another way of talking about its behavior-altering effect.

Malott seems to be on the same track, although he doesn't say so explicitly. He defines the MO as "a procedure or condition that affects learning and performance with respect to a particular reinforcer or aversive stimulus." By "affects learning and performance" he can only mean "changes the frequency of the target behavior." So his definition focuses on the MO's BA effect and says nothing about the value or effectiveness of the relevant reinforcer or punisher (which he calls an "aversive stimulus"); that is, it says nothing about the MO's VA effect.

As Michael points out in Ch. 16 of ABA, there's still a lot of work to be done before we'll fully understand MOs, especially MOs for punishment. In the meantime, I think Malott's definition is not only simpler to understand but also more conceptually accurate, because it focuses on the MO's BA effect without claiming that MOs also have a VA effect.

Saturday, August 11, 2007

Kinds of reinforcers, Part 2

Revised on 12/22/14

I suggest reading this post after you read the post called Kinds of reinforcers, Part 1.

See the definition of Reinforcer (Positive Reinforcer) on p. 3. Be sure you understand that stimulus is not a synonym of reinforcer and reinforcer is not a synonym of stimulus. These two words DO NOT mean the same thing. Stimulus is the larger category and reinforcer is a subcategory of that larger category. So every reinforcer is a stimulus, but not every stimulus is a reinforcer. Sometimes a particular stimulus functions as a reinforcer, but sometimes it has a different function.

Stimulus, like many other words, has multiple meanings. In the second column on p. 3 Malott says that a stimulus is any physical change, such as a change in sound, light, pressure, or temperature. This is a “default” definition of stimulus as the word is commonly used in everyday language. In his list of four types of stimuli, Malott refers to this as the “restricted sense” of the word. But he also says that throughout Principles of Behavior, when the word is used, it might refer to this kind of physical change, but it also might refer to an event, activity, or condition. So looking again at the definition of Reinforcer (Positive Reinforcer), we should understand that a stimulus that functions as a reinforcer might be a physical change, event, activity, or condition. Any of these kinds of stimuli might function as a reinforcer in a particular situation.

Another way to think about Malott’s list is that there are four basic kinds of reinforcers. A stimulus (in the restricted sense of the word), such as a pleasant taste or aroma, can function as a reinforcer. So can an event, like a football game or a concert. So can a condition or, more specifically, a change in condition. For instance, if it's dark and you can't see, then the behavior of flipping a light switch may change the visibility condition, and that change in condition is a reinforcer. As for activities as reinforcers, I'll expand a little on what Malott says. Rather than an activity functioning as a reinforcer, it's more often the opportunity to engage in a particular activity that functions as a reinforcer. For example, if you wash the dishes (target behavior), you'll have the opportunity to engage in the activity of playing video games for a while. That opportunity, then, functions as a reinforcer.

Monday, August 6, 2007

More on SDs & SΔs

According to Malott, and just about everyone else as far as I can tell, the term discriminative stimulus is the "proper" name for the antecedent variable whose abbreviation is SD. Its opposite, whose abbreviation is SΔ, doesn't seem to have a proper name. Instead, we're usually told that the abbreviation stands for S-delta, which is really just a way of spelling out SΔ that makes it clear how it should be pronounced and accommodates keyboards that don't know Greek.

In my opinion, discriminative stimulus should be the label for the category of antecedent variables that includes both SD and SΔ. In other words, there are two kinds of discriminative stimuli – SDs and SΔs. An SD is a stimulus in the presence of which a particular response will be reinforced or punished (depending on whether we're dealing with a reinforcement or punishment contingency), and an SΔ is a stimulus in the presence of which a particular response will not be reinforced or punished.