

Quickly separating programmers from non-programmers - BudVVeezer
http://www.eis.mdx.ac.uk/research/PhDArea/saeed/paper1.pdf

======
cheezebubba
"...predictive effect of our test has failed to live up to that early
promise."

A later paper by them backs off:
<http://www.eis.mdx.ac.uk/research/PhDArea/saeed/paper3.pdf>

Abstract: Learning to program is notoriously difficult. Substantial failure
rates plague introductory programming courses the world over, and have
increased rather than decreased over the years. Despite a great deal of
research into teaching methods and student responses, there have been to date
no strong predictors of success in learning to program. Two years ago we
appeared to have discovered an exciting and enigmatic new predictor of success
in a first programming course. We now report that after six experiments,
involving more than 500 students at six institutions in three countries, the
predictive effect of our test has failed to live up to that early promise. We
discuss the strength of the effects that have been observed and the reasons
for some apparent failures of prediction.

------
btilly
Shortest version. Francis Bacon was right, _"Truth comes out of error more
readily than out of confusion."_

Short version: they gave non-programmers a test before they had learned
anything. They found that people could be divided into those whose answers
showed a consistent mental model and those whose mental models were
inconsistent from problem to problem. A follow-up administration of the same
test partway through the course showed that the consistent/inconsistent split
was stable, but the consistent people's mental models improved. The
distribution of final exam scores for the two groups looked like normal
distributions with very different averages - most inconsistent thinkers failed
while the consistent ones did OK.
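The flavor of the test can be sketched like this (a paraphrase of the kind of question the paper describes, not the actual instrument): students predict the outcome of a short assignment sequence, and any of several rules for `=` counts as "consistent" so long as it is applied every time. In Python:

```python
# Three self-consistent mental models a novice might apply to "a = b",
# starting from a = 10, b = 20. Only the first matches what the machine does,
# but all three are internally consistent rules.

def machine_model(a, b):
    # The actual language rule: copy b's value into a.
    a = b
    return a, b

def swap_model(a, b):
    # A wrong-but-consistent rule: "a = b" exchanges the two values.
    a, b = b, a
    return a, b

def move_model(a, b):
    # Another consistent rule: the value "moves", leaving b empty (zero).
    a, b = b, 0
    return a, b

print(machine_model(10, 20))  # (20, 20)
print(swap_model(10, 20))     # (20, 10)
print(move_model(10, 20))     # (20, 0)
```

The point of the study's scoring was that a student who answers with the swap model on every question looks very different from one who swaps on one question and copies on the next.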

When analyzed in more detail, the final exam scores for the consistent
thinkers had a bimodal distribution as well. However, the test offered insight
into how to distinguish the average consistent thinkers (averaging 60% on the
final) from the high-scoring ones (averaging 85%). The sample sizes are small
enough that I would not put too much weight on that observation, though.

Amusingly, the social scientists they showed the results of the initial test
to predicted that students who formed inconsistent mental models would do
better, because those students tried to form the right model for each
question. The cynical part of me says that they would have approached the test
that way themselves, and assumed that people like them would do better.

------
niels_olson
Ground for future studies

1) increased sample size, multiple institutions, multiple age ranges, multiple
programming languages, multiple native languages, correlation with Spearman's
g

2) controlling for confounders (see "Table 1" of all medical trials)

3) randomization of instruction

4) better history of pre-exposure (surely someone in a university computer
class is taking it out of an interest that developed before enrollment)

5) parental professions, levels of education, and annual earnings

6) latency effects -- did people "come around" later, how long do the effects
last?

7) frequency effects -- does a large burst of programming exposure kick
someone over a knee in the curve (as is done in language schools)?

8) amplitude effects -- does intense instruction yield intense results?

Overall I liked it.

~~~
warfangle
I'm wondering if the double hump results exemplify your point #4: those who
had previous engagement are in the top hump, while those who had not (but
obviously have a little aptitude at least) are in the lower hump.

------
bensummers
This paper is wonderfully cynical about education in the UK:

> _"That administration failed, because the students – rightly, in our opinion
> – were incensed at the conduct of their teaching and the arrangements for
> their study, and simply refused to do anything that wasn’t directly
> beneficial to themselves."_

> _"Another group had very much lower than average A-level scores, and had
> been admitted to a maths and computing course largely to boost numbers in
> the mathematics department: we call this group the low achievers."_

Although I do begin to wonder whether the teaching of our professional skills
is somewhat to blame for the lack of professionalism and ability in our
industry.

~~~
BudVVeezer
I can agree to that somewhat, but what's interesting about this paper is that
it's starting to formalize the fact that some people can "think like a
programmer" and some can't. Now, what that says for programming paradigms as a
whole...

~~~
bensummers
Yes, the tests are the interesting part. I especially liked the way they did
some of the testing before the teaching started, and it still showed useful
results.

Maybe there's hope? (see the recent discussion on interviewing programmers)

------
enum
Based on Fig. 1 and the text describing it, the study seems to conclude that
imperative programming is unlike anything students have seen before.

"all had enough school mathematics to make the equality sign familiar."

Sure, but = in imperative code is nothing like = in math. It's hardly
surprising that it is confusing.
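A minimal illustration of the clash (mine, not from the paper): as an equation, `x = x + 1` has no solution, yet as imperative code it is a perfectly ordinary instruction.

```python
# "=" here is assignment: evaluate the right-hand side, then rebind the name.
x = 10
x = x + 1   # contradiction in math; in code it just sets x to 11

# The mathematical notion of equality is a different operator, "==".
print(x == 11)  # True
```

So a student fresh from school mathematics has to unlearn what the symbol means before the first lecture is over.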

~~~
btilly
Confusion turned out to be irrelevant. _Consistent_ confusion mattered a lot.

------
tfincannon
If "a foolish consistency is the hobgoblin of little minds", where does that
leave _us_?

~~~
AngryParsley
Where does that leave us? Where does that leave Emerson? I guess one could be
an apologist and focus on "foolish," but that turns the quote into,
"Consistency is bad, except when it isn't."

Why do people even care about random quotes from a crazy transcendentalist?
Just because something is old and sounds profound doesn't mean it's correct.

