
Students Learn as Well from Software as from Live Teachers - mhb
http://www.insidehighered.com/news/2012/05/22/report-robots-stack-human-professors-teaching-intro-stats
======
yummyfajitas
_So far that research has failed to persuade many traditional institutions to
deploy the software -- ostensibly for fear of shortchanging students and
alienating faculty with what is liable to be seen as an attempt to use
technology as a smokescreen for draconian personnel cuts... the resistance of
many professors to prefabricated courses that they did not invent and cannot
modify to their own needs and tastes._

This is a perfect illustration of why non-profit education is broken. There is
no principal, so the agents are running the show. (See
<https://en.wikipedia.org/wiki/Principal-agent_problem> if you are
unfamiliar.)

In a competitive market dominated by for-profit principals, this wouldn't be
an issue. The effective principals would order their agents to cut costs, the
agents would comply, and those principals would dominate the market.

~~~
gavinlynch
"...ostensibly for fear of [...] alienating faculty with what is liable to be
seen as an attempt to use technology as a smokescreen for draconian personnel
cuts."

If the product educates students just as well, and at a better price point,
why would the cuts be "draconian"?

I would deem it intelligent business and a better use of government resources.

~~~
patio11
Because schools are in the business of employing well-organized constituents,
and educating students is a not-unwelcome industrial byproduct.

------
zachh
As interesting as the finding is, it's not exactly surprising that _college_
students learn as well from software as professors. What I want to know is
whether K-12 students (especially the lower end of that) learn just as well.
My intuition is no, that early childhood education will always require some
amount of human educational support, but I'd love to see research confirming
or denying as much.

------
cdcox
This is like saying "People find food made by machines as tasty as human-made
food" and failing to mention that you only did the test at McDonald's.

Looking at the original study:

These were intro stats classes with 100-1,000 students. These classes are
already basically a teacher lecturing: minimal engagement, simple material, no
ability to have complex homework, and from the looks of things not in the
students' major specialization (so super 'in-depth' learning is unnecessary).

Universities have realized for years that they are not very good at these
types of classes. They've tried to move these courses online or encourage
students to take them at community colleges. But you don't (or shouldn't) go
to university for these classes; you go to the university for the more
specialized upper-division training in your department. I would be more
interested to see if a high-level, small, for-majors course could be replaced
by a similar program. It's worth noting that even the Khan Academy classes are
still 'factory' classes; he has very few 'specialized' classes in more
complicated subjects.

------
teach
First, the title should be changed. The source article is titled "Score One
for the Robo-Tutors" or "Machine Learning: Score One for the Robo-Tutors", not
"Students Learn as Well from Software as from Live Teachers".

Secondly, this is an interesting result, but not too surprising. Working
through exercises on your own with software that presumably identifies and
targets areas of weakness ought to put the onus of learning on the student.

On the other hand, many college students sitting in a lecture don't engage the
material to the same degree and instead rely on the professor to "teach" them
rather than attempting to learn it on their own.

In fact, I'm sort-of surprised the machine-learning students didn't do better
than the lecture-listening ones.

~~~
Cushman
> First, the title should be changed. The source article is titled "Score One
> for the Robo-Tutors" or "Machine Learning: Score One for the Robo-Tutors",
> not "Students Learn as Well from Software as from Live Teachers".

Where does this idea come from that the title should be the same? Granted, the
current title isn't quite perfect either -- "Study Reports Software Speeds Up
Teaching" would be more accurate -- but it's at least more informative, and
certainly no more editorial, than "Score One for the Robo-Tutors".

~~~
tokenadult
_Where does this idea come from that the title should be the same?_

It is suggested although not mandated by the Hacker News guidelines:

<http://ycombinator.com/newsguidelines.html>

"Please don't do things to make titles stand out, like using uppercase or
exclamation points, or adding a parenthetical remark saying how great an
article is. It's implicit in submitting something that you think it's
important."

. . . .

"You can make up a new title if you want, but if you put gratuitous editorial
spin on it, the editors may rewrite it."

The retitling in this case seems to have been effective for eliciting more
discussion, and I appreciate the various interesting comments here. I regret
cases where I submit an article with an original title, only to find out that
the same article was submitted hours before with a retitled submission title,
one that is missed by Hacker News participants and does not elicit discussion.
Most professionally published articles have well written headlines, and it is
often hard for HN participants to improve upon those.

~~~
Cushman
It's not even suggested. The guidelines are explicit that the original title
should be edited or replaced to bring it into line with our community
standards.

Headlines may be professionally written, but they are written with incentives
which don't favor HN readers. Most of the time, I'd prefer to hear why the
submitter thinks I should read something rather than why the author does.

------
endlessvoid94
Students need a guide, not an assistant.

~~~
gavinlynch
Software can't guide?

------
tokenadult
I read this interesting article and then shared it with friends on Facebook
who are higher education teachers. Thinking about exactly what the study in the
submitted article shows, in view of having had a child recently take an
introductory statistics course, I think that the result may be rather limited
to the course described, an introductory course in statistics for students who
will not go on to major in statistics. This result may not generalize to other
subjects (although, for reasons suggested by other commenters, it is important
to do the research and find out what teaching methods are effective for all
subjects).

Most introductory statistics courses are crap, going wrong right at the start
by hiring the wrong instructor and specifying the wrong textbook.

<http://statland.org/MyPapers/MAAFIXED.PDF>

Statistics, the "science of data," is actually an endlessly fascinating
subject, from the introductory level on up, but few statistics courses are
taught in a manner that requires anything more than a robot to present the
material, and those statistics courses are so dull that they don't encourage
student engagement. In fact, statistics gets into some deep issues about the
nature of correct inference right from the beginning,

<http://escholarship.org/uc/item/6hb3k0nz#page-1>

and it occurs to me that a really, really good statistics course might be a
lot harder to automate (although this would be a worthy challenge for
artificial intelligence researchers) than the bleak general run of
introductory statistics courses.

As a good human statistics teacher, and perhaps even a good automated
statistics teacher, would say, further research on this point is needed. One
of the best examples I have seen of application of artificial intelligence
(software) to teaching of a foundational subject is the ALEKS

<http://www.aleks.com/>

online K-12 mathematics course, used by a variety of higher education
institutions to supplement remedial courses, and used by an increasing number
of secondary and elementary schools and self-taught learners of all ages to
learn mathematics and some closely related subjects. (ALEKS has a statistics
course

[http://www.aleks.com/about_aleks/course_products?cmscache=de...](http://www.aleks.com/about_aleks/course_products?cmscache=detailed&detailed=ghighedmath11_cocostatistic#ghighedmath11_cocostatistic)

that mostly covers the dull issues rather than the cool issues in statistics.)
I like ALEKS a lot for my children learning mathematics and for myself and my
wife learning basic principles of sole proprietorship accounting. The ALEKS
learning spaces model

<http://www.aleks.com/about_aleks/research_behind>

is based on some very interesting research, and I have previously recommended
that the Khan Academy developers assimilate that research base as they
continue developing the Khan Academy courses.

Based on real-world examples, it just might be that automated presentation of
many forms of course content will be more efficient (resulting in more
learning for fewer dollars in less time) than current in-person models of
instruction. And everyone should be happy about that. There will always be a
role for in-person teachers, and they do their best work if they use their
comparative advantage of being able to do some things better than machines,
even if everyone acknowledges that some machines do some kinds of teaching
better than most human beings. Let the research continue, until we have an
abundance of good data.

------
excuse-me
My university was one of the pioneers in self directed learning.

It was during the Black Death, when there was a shortage of lecturers and many
students wanted to go away from the city to learn remotely in their own homes.

So they came up with these portable reading tablets, a sort of p-book, which
contained the lecturer's words and could be read anywhere, even in bright
sunlight. Eventually large stores of these were collected together in data
repositories, a bit like tape libraries but with human robots.

\- they also invented exams when they realised it was cheaper to examine 100
students at once and outsource the marking than have the professor viva each
student individually.

~~~
zipdog
Yes, they should be testing software-based learning against other forms of
private (self-directed) learning, as well as (face-to-face) instruction.

Remote-instructor learning through software differs markedly from the same
through books in a variety of ways, e.g. the software can provide more
compelling extrinsic motivations, but can also provide increased distractions
(both if used on a computer with internet access, and through the affordances
of screen-based tools).

So it's good not to treat this as an either/or problem (does software work as
well as face-to-face?) because there are a number of other considerations (is
the face-to-face instruction personal or in a lecture hall, is the software
highly interactive with immediate feedback, etc.).

~~~
excuse-me
Except the software is almost always a Powerpoint-ized version of the book
with "an interactive multimedia experience" consisting of either a diagram
being drawn piece by piece by an animated gif, or a video clip in the corner
with a talking head reading the lecture notes.

------
georgieporgie
My attitude toward automated teaching is the same as my attitude toward
automated cars: you won't beat the best humans (in the near future), but
you'll probably rival or beat the average humans (mediocre training, tired,
grumpy, etc.)

I had enough teachers who were barely able to grasp the material themselves,
vindictive, or emotionally unfulfilled in their personal lives, that I'd
rather have learned from a book, computer, and videos. But hey, maybe dealing
with crazy or incompetent educators is an important socialization step for
young people...

------
nerdfiles
If the AI system compiled their Facebook IDs before class (we connect school
IDs to Facebook, so why not vice versa? Why shouldn't we expect Facebook and
unis to advertise together? -- aren't bars validating age against FB IDs by
now anyway?), it could determine an upper limit on analogical connections
that might not be included in the set of students corresponding to those IDs.
In this way, an AI system could make inferences about successful analogical
connections, which might similarly function as a growth method for shared
knowledge.

As opposed to _generating_ successful anthropological-demographic predictions
of linguistic behavior? Shouldn't students be "liking" professors' "asides"
anyway?

