Wednesday, January 17, 2007

Feedback on a reinforcement CA

One of your classmates has submitted a reinforcement contingency analysis (CA) for Ch. 2. I returned it with feedback for revising it, and I want to share that feedback with the rest of you. The target behavior was mowing someone's lawn and the reinforcer was the pay received for the job.

There are 2 problems with this as an example of a reinforcement contingency. The first is that mowing a lawn is a series, or chain, of behaviors (mow a while, take a rest, mow some more, get a drink, etc.) rather than a discrete behavior with a clear starting and ending point & no other behaviors in between. That makes it unclear what it would mean to say that a reinforcer follows this "behavior" immediately.

Second, even if we allowed mowing the lawn as an acceptable target behavior in an example of a reinforcement contingency, it's not likely that the reinforcer ($) immediately follows the "behavior." In all of your examples, we need target behaviors that are discrete behaviors (additional discussion about this in another Ch. 2 post) & that are followed by their reinforcing or punishing consequence immediately.

5 comments:

Anonymous said...

Could we define the discrete behavior of pushing a running lawn mower?

grass is tall and uneven -- push lawn mower -- grass is short

Anonymous said...

Probably so.

I hesitate to do this, but here goes anyway... Take a look at p. 408 where Malott writes about reinforceable response units. Read carefully because an incomplete understanding, especially at this early point as we make our way thru the book, can lead you astray as you work on your own CAs.

What's interesting about an example like pushing a lawnmower is that there seems to be ongoing reinforcement, from moment to moment, since you're continually seeing the nice, short, even grass as you're performing the ongoing behavior of pushing. So tho there seems to be an ongoing flow of reinforcers, it seems kinda artificial to think of the ongoing behavior as a series of discrete behaviors. Where would the dividing line come between each of these discrete behaviors? This kind of situation clarifies why the notion of reinforceable response unit makes sense. But I've still always been curious about how best to understand these situations in which there seems to be an ongoing flow of reinforcers.

Specfriggintacular said...

Maybe instead of stating that mowing the grass = pay, which leaves the floor open to the many "in between" tasks or breaks that can occur, we could say the following:

The amount of time spent mowing the lawn depends on the amount paid for the mowing of the lawn.

So, time spent mowing lawn = amount earned.

Would this work instead?

Anonymous said...

Are you asking if it works as an example of a reinforcement contingency? I'm afraid not because a behavioral contingency has 3 parts (see definition on p. 20 and an earlier post on this blog). And of those 3 parts, I'm not sure which of them you intend "time spent mowing" and "amount earned" to be.

Anonymous said...

I goofed; a discrete behavior has a beginning and an end. Doesn't this fail the reinforceable response unit test? Breaks would be interruptions of greater than 60 seconds.