

Experiment: Paying for performance works (for teachers in India) - yummyfajitas
http://www.marginalrevolution.com/marginalrevolution/2009/09/teacher-performance-pay-experimental-evidence-from-india.html

======
lazyant
One of the problems is that in education, performance is equated with
students' test results, so teachers would have an incentive to become purely
"test-oriented".

~~~
barry-cotter
We find no evidence of any adverse consequences as a result of the incentive
programs. Incentive schools do significantly better on both mechanical
components of the test (designed to reflect rote learning) and conceptual
components of the test (designed to capture deeper understanding of the
material), suggesting that the gains in test scores represent an actual
increase in learning outcomes. _Students in incentive schools do significantly
better not only in math and language (for which there were incentives), but
also in science and social studies (for which there were no incentives)_ ,
suggesting positive spillover effects....

~~~
aik
It was only a 2-year study, and the first year had no positive results.

A teacher motivated by incentives receives their motivation externally. I
don't believe motivation in that form is ever sustainable. Over time, I would
expect the motivation teachers get from incentives to decrease.

~~~
yummyfajitas
Even if it works only for years 1-5 of a teacher's career, it would still be
useful. If we combine an incentive plan with a plan to keep most teachers in
this window (i.e., get rid of teachers after year 6), it could be
fantastically useful.

Though I must admit, I don't really buy your thesis that incentives only work
short term. If they did, you'd expect entrepreneurs to be unmotivated beyond
the short term, for example.

~~~
aik
Why would entrepreneurs lose their motivation? Entrepreneurs are working for
themselves, not for some chunk of change someone else will give them if they
perform as well as some random person wants them to.

If we get rid of teachers after 6 years, what would that teacher do? How would
this work exactly?

~~~
yummyfajitas
Entrepreneurs work harder because they believe it will get them more money.
But if you dislike that example, consider traders, lawyers or salespeople (all
of whom work for "some chunk of change someone else will give them"). Do
incentives fail with those professions after a few years?

As for "what will teachers do" after 6 years, they can either continue to
perform well or find a new job (just like a trader or salesman). The purpose
of school is education, not jobs for liberal arts grads.

~~~
aik
My main point is that an entrepreneur has complete control over everything
they do, and therefore complete control over how much money they make (or as
much control as you can possibly have over something like starting a business),
while a teacher simply receives a bonus when their students test well. For the
average person, I don't expect a bonus to be forever-motivating (not to say
entrepreneurs are always motivated). I agree that teachers may perform better
with bonuses, but that's obviously not how it SHOULD be.

"they can either continue to perform well or find a new job" How to measure?

------
hackoder
I think pay for performance matters only up to a certain stage. When you can
meet your
basic necessities, you look for other ways to be more content with life. You
want to have more control over things, give back to your community, do
something you are passionate about, etc.

------
yuvouv
"increasing student test scores"

Yes, paying teachers more for higher test scores is a very good way of
increasing test scores - this was found in California schools years ago. It
had the opposite effect on students' actual learning, though.

~~~
jbellis
Read TFA; they thought of that.

~~~
btilly
Their way of accounting for it was to label certain parts of the test
"conceptual" and look at performance on those parts as evidence of real
learning. However that's still part of the test. Given two students whose
understanding is the same, the one who is better prepared for the other parts
of the test will have more time for the "conceptual" section and therefore
will likely perform better there as well.

That is assuming the unlikely proposition that preparing for the test doesn't
directly prepare you for the conceptual portion of the test. I say unlikely
because test makers have been trying to make such tests (eg the SAT) for
decades, and test preparation companies (eg Kaplan) have been demonstrating
that you can prepare for them after all.

You can't escape the circularity. If you're measuring performance with a test,
then you can't really distinguish preparation for the test from actual
performance. In this study they had much stronger results in their second
year than in their first, which strongly suggests that teachers did a better
job of preparing for the test once they had seen the tests that would be used.

~~~
yummyfajitas
They also accounted for it by studying performance on science and social
studies tests. Both improved. However, it would be useful to follow up with a
differently structured test to see how much comes from test prep.

Incidentally, bringing up the SAT is a red herring. The SAT was originally
meant to measure g/intelligence/"aptitude". It did this by measuring a body of
knowledge which is not explicitly taught in school, giving questions like
_wheel : car -> pick one of [leg : horse, egg : chicken, computer : TV]_.
Obviously, you can improve performance by explicitly teaching that knowledge
(e.g., memorizing analogies).

A subject test does not suffer from this problem, since it is designed to
measure subject knowledge rather than aptitude. You can improve performance on
a multiplication test by memorizing multiplication tables, but so what?
Multiplication performance is what you are trying to measure; it's not a proxy
for something else.

------
d3vvnull
This says nothing about how this would work in the U.S. Cultural differences
cannot be discounted. Are there any studies of U.S. efforts to implement pay
for performance in schools?

~~~
ujjwalg
The title clearly mentions "teachers in India".

~~~
d3vvnull
Yes. I saw that.

------
Calamitous
Wow... do we have such an epically failed understanding of economics that
"behavior incented by money increases" merits a headline? Seriously?

Now whether they're incenting the right things (test scores) is debatable, but
why is this newsworthy?

~~~
jseliger
Actually, the question of "What incentives motivate teachers to teach well?"
is extremely debatable, and it is the real question underlying your "behavior
incented by money increases" comment.

At the moment, it's effectively impossible to fire teachers after two or three
years, no matter how poorly they do: see, for example,
[http://www.newyorker.com/reporting/2009/08/31/090831fa_fact_...](http://www.newyorker.com/reporting/2009/08/31/090831fa_fact_brill?currentPage=all)
.

~~~
Calamitous
I think I see your point in the first sentence, in that money alone (once you've
achieved a basic livelihood) is a questionable motivator of job quality. The
article, however, is not necessarily talking about "teaching well," it was
fundamentally about improving test scores.

I was mostly taking issue with the headline's (and article's) apparent
surprise that offering money for higher X caused X to increase. Not sure why
an experiment was needed to establish this.

Also, your rubber room article is horrifying, but I don't really see the
relevance... except that the reward mechanism has been completely divorced
from performance of any kind.

------
stonemetal
0.1 standard deviation isn't significant. So for the most part it doesn't
work. Personally, I think it's fairly obvious why. No one does a bad job on
purpose, so if you aren't doing well, you are simply not capable. That can be
because you're incompetent, or because you don't have the power/tools/money
necessary to get the job done. Offering a small bonus doesn't really change
either one.

Sure there may be some motivational issues, but they would have to be systemic
for this sort of ploy to work.

~~~
cwan
You may have misread the quote. It's more than a 0.1 standard deviation:

"At the end of two years of the program, students in incentive schools
performed significantly better than those in comparison schools by 0.28 and
0.16 standard deviations (SD) in math and language tests respectively...."

That's 0.28 for math and 0.16 for language. I'd say that's pretty significant,
especially for a large sample size. With a sample this large, it's presumably
about more than "power/tools/money".

~~~
stonemetal
From wikipedia: _In science, researchers commonly report the standard
deviation of experimental data, and only effects that fall far outside the
range of standard deviation are considered statistically significant—normal
random error or variation in the measurements is in this way distinguished
from causal variation._

Thus the results are not statistically significant, i.e., within normal random
error. So when did 0.3 of "not significant" become significant?

~~~
cwan
My skills in stats aren't quite current but this was from a large sample size
- covering 500 schools and more than 55,000 students.

From the wikipedia article on statistical significance
(<http://en.wikipedia.org/wiki/Statistical_significance>): "Given a
sufficiently large sample, extremely small and non-notable differences can be
found to be statistically significant, and statistical significance says
nothing about the practical significance of a difference."

This isn't even "extremely small". From the report: "The mean treatment effect
of 0.22 SD is equal to _9 percentile points_ at the median of a normal
distribution. We find a minimum average treatment effect of 0.1 SD at every
percentile of baseline test scores, suggesting broad-based gains in test
scores as a result of the incentive program."

