
Bill Gates Says There Is Something Perverse In College Ratings - pav3l
http://www.forbes.com/sites/luisakroll/2013/01/31/bill-gates-says-there-is-something-perverse-in-college-ratings/
======
Cookingboy
The most widely cited US college ranking is the one from U.S. News, which is
heavily biased toward small private institutions on the east coast. I
always find it irritating that a world-renowned research powerhouse such as
UC Berkeley can't even make the top 20, whereas schools such
as Emory and Vanderbilt are ranked higher. It ALMOST feels like an east coast
old money circle-jerk. I guess with the exception of a few schools such as
MIT and Stanford, areas such as Law/Medicine/Business/Finance/Liberal Arts are
historically valued a lot more than science and engineering in this country.

That simply has to change if we want to lure more young people into
those fields (which is what the future of this country needs). It's still
unbelievable how many high school kids (remember, most people don't live in
SV) think accountants are these powerful people in suits making six figures
straight out of school, while engineers are nerdy people who work in a
basement and get no money/respect.

~~~
davmre
Have you ever _been_ to UC Berkeley?

As a current grad student here: it's a fantastic research university and a
beautiful campus, but the undergraduate experience is terrible. Classes are
giant and there's very little faculty/student interaction, because the focus
of the faculty is, for better or for worse, not on undergraduate education.
Yes, there are some incredible undergrads and some great opportunities for the
0.1% of them who really stand out, and of course the grad programs are great
-- if you look at the US News grad program rankings, Berkeley is top 5 in
almost every discipline -- but overall the undergrad program is ranked just
about where it belongs.

~~~
JPKab
The US News rankings also create a very perverse incentive for schools.
One of the things that my alma mater, Virginia Tech, used to do (and I'm
sure still does) is lure top-tier students into their (admittedly great)
engineering program. However, they then put freshman engineering students
through a bootcamp-style, much-harder-than-necessary first-year "weed out"
program. The effect is that 50% of the incoming engineering students fail out
or are forced to transfer to the school's less prestigious programs,
particularly business. The relatively high SAT scores of these students then
allow these other schools to inflate their US News rankings. By the way, I
know for a fact that Purdue and a few other state schools with good
engineering programs do the same thing.

The first time I saw videos of top-tier freshman engineering, math, and comp
sci courses, I was shocked. The problems were manageable, the pace was
reasonable, the teachers were engaging... and when I saw the course material
and realized the tests were easier than the ones I took at my much lower
ranked school, I realized I'd been had.

The worst, most socially irresponsible aspect of this practice of "funneling"
and "trapping" your students into less desirable majors is that students who
otherwise would have been engineers end up learning less useful things.
Virginia Tech, and schools like it, are responsible for the world having
fewer engineers than it should.

~~~
seanmcdirmid
But it's usually just the door that is hard to get through; once through the
door it's usually smooth sailing. Actually, this occurs in many exclusive
programs/institutions over and over again. But in reality, they have to be
selective: the world doesn't need that many engineers, especially ones that
aren't that smart, just like we really don't need that many doctors
(especially ones that aren't that smart). Computer science is the same: sure,
you could double or triple the size of your program to accept just anyone who
wants to program, but then you would flood the market with substandard talent
and your reputation goes into the trash bin!

Now, would we prefer a hard program where students would fail out instead of
being stopped at the door? This also seems like a waste of resources, but it
could work given the right technology (e.g. online courses).

------
carbocation
I don't mean to be incredibly dense, but I'll risk looking that way here.

When you're at the elementary school level, I understand that there are easily
quantified skills that we believe all citizens should possess. Standardized
tests seem reasonable for standardized knowledge.

Once you're at the college level, what is the goal? What is the thing being
maximized, the thing that can be measured and tested and presumably improved?
Creativity? Problem solving? Social adroitness? Rote knowledge? Do I dare say
that it may be different for different people?

The notion that colleges could be measured along just a few axes correlating
with a few particular purposes confuses me in a way that the elementary school
debate does not. What am I missing?

~~~
yummyfajitas
_Once you're at the college level, what is the goal?_

Good question. The lack of a good answer certainly undermines the
justification for colleges' continued existence and pervasiveness.

Tangentially, I'd argue that any institution which can't even define its
goals should not receive any taxpayer dollars.

------
jmharvey
If the primary benefit of attending a good college is education from
professors, then yes, there is something perverse in the rankings. But I'm not
sure that's how college actually works.

At the undergrad level, a particular class at one college is likely pretty
similar to a class at another college. And aside from a few exceptional
students, you're likely to be able to find whatever classes you're looking for
at any college you care to attend. There are more competent people who can
teach, and want to teach, intro to Shakespeare, or second-semester
thermodynamics, or whatever, than there are teaching positions available.

Most of the benefit you get from choosing college A over college B comes from
your interactions with your peers. Some of this is just people working
together on class projects, but a lot of it is the pervasive culture of a
place. At some schools, people will hang out and talk about political theory.
At others, there's a culture of making art. Some places care more about
sports. And at some schools there's a culture of building things. Actually, at
most schools, all of these things happen to some extent, but you're more
likely to encounter them at some places than at others.

And if that's the benefit of college, then it absolutely makes sense to say
that the best colleges are the ones with the best students.

~~~
LarrySDonald
That's true, but it's also the problem. Which university, then, should these
best students hang out at? I think we all agree they should be at the one that
is best at teaching them, or which has a style of teaching that works well
for them. But we wouldn't know that, because most of the ratings can't (or
rather won't) discern this. Why is school A the best? It has the best
students. Why does it have the best students? Because school A is the best
school. So the ranking isn't bad for figuring out which school you belong at,
but it's a complete non-metric for the school itself - it can't tell you how
schools rate at actual education or whether they're a good choice teaching-
wise (or whether all the students would be better off congregating elsewhere -
they'd still have each other, as well as potentially better teachers).

------
lordofmoria
To avoid the needlessly vague allusions to schools going on in this thread: I
went to Princeton University for undergrad.

I'm sure Bill Gates has a more nuanced view on college ratings than this
article suggests. Welcome to media.

We all know why the college rankings are the way they are. Frankly, no one
cares who the "most improved" olympic athlete was in London. People generally
want the unambiguity of an outright set of winners. Unambiguous winners may be
ok in sports, but not in colleges, where the rankings have a big impact on
education here in the US. Gates points out this flaw and argues instead for a
"most improved student" metric.

Unfortunately, Gates' "solution" wouldn't really solve the problem either.
What is the positive feedback loop for schools that rank highly on "produces
the most improved students"? Would they receive extra government funding?
Attract better students? I see neither of these as likely. Try again Mr.
Gates.

~~~
spikels
What? "Improving", that is, educating students is the primary purpose of
colleges. It seems perfectly reasonable to want to measure how effectively
they do their job.

The quality of the graduates of a college depends on a number of factors, but
probably the two most important are the quality of incoming students and the
college's ability to improve that quality. In effect colleges perform two
functions: sorting and educating.

Sorting people based on standardized testing, high school grades, essays,
recommendations, interviews and application details is a useful service in
itself. However most of the value and almost all of the cost of colleges comes
from educating not sorting.

The current college rating "system" can't possibly separate these two sources
of quality. Clearly it would be useful to know, since within any group of
similarly ranked colleges there will be differences in the quality of the
education component.

All else being equal, prospective students would choose the school that
offered the best education. More applicants would go to schools that were
better at their primary mission, educating their students. In turn these
schools could be more selective.

Building such a system will not be easy but should be very valuable. Please
continue working on this Mr Gates.

------
sageikosa
Unless the metrics used for the ratings are based upon things like per capita
salary x-many years after graduation (broken out into specific fields of study
and private/public sectors), then it all comes down to either a popularity
contest, or a measure of how well a school's policies reflect the latest
"progressive" education practices.

In other words, unless it shows what the student "gets" out of the experience
when done, they are selling dreams.

~~~
the_watcher
When you measure by these metrics, you end up with the law school scam,
where schools employ unemployed graduates for juuust long enough to count them
as long-term employed, and only make an effort to collect the salaries of
highly paid graduates (using "reported salaries" as their denominator, not
total grads).

~~~
sageikosa
OK, I'm not advocating that those metrics _must_ be used, and must be
collected in a certain way. What I am doing is being critical of rankings that
aren't at least related in some way to the reasons most people seek higher
education.

If anything, I am most critical of a single ranking that is supposed to tell
me how good something is crafted from within the community that is being
ranked.

------
stfu
Couldn't this be just a side effect of what he advocated a few days ago in his
WSJ commentary? [1]

By putting a lot of pressure on measuring things, people use the data that is
easily accessible/comparable. Consequently they put a lot of effort into
constructing the argument for why these factors are the most important ones.
Getting unskewed data is incredibly hard, especially when there is so much to
gain from subtly manipulating it.

[1][http://online.wsj.com/article/SB1000142412788732353980457826...](http://online.wsj.com/article/SB10001424127887323539804578261780648285770.html)

------
ojbyrne
One of the absurdities of the various college ranking systems is that
reputation is a large component of the ranking. So if you have a good ranking,
you get a good ranking.

------
natural219
When I graduated high school, all I wanted was a list of the top Computer
Science schools in the world. I can't recall exactly what happened during that
period, but somehow, I ended up at the University of Nebraska.

Something is broken.

------
Irregardless
> “The report concluded that there were observable, repeatable and verifiable
> ways of measuring teacher effectiveness,” wrote Gates in the letter.
> Anonymous student surveys that asked such questions as “Does your teacher
> use class time well, get class organized quickly, help you when you are
> confused" – were proven to provide useful feedback as were reports from
> trained professionals observing teachers at work.

Students are notoriously bad at rating their professors. This was
demonstrated with almost ideal control groups at the Air Force Academy [1] and
again with groups of trained professionals/graduate students who learned
firsthand about the 'Dr. Fox Effect' [2]. Even teachers don't seem to like the
teacher evaluations done by students [3].

Anecdotally, I can say that most students in my college classes either didn't
show up on the survey days, or they walked out the door as soon as the surveys
were being handed out. There's also no incentive to provide useful feedback
from the student's perspective. If you're taking a survey about the class, it
means the class is over and you'll probably never see that professor again, so
why bother? I made an effort only because I felt an obligation to help future
students, but I'm not sure there are many kids in college who share that
feeling.

> Mary Ann Stavney, a high school “Master Teacher” profiled in the annual
> letter, spends 70% of her time observing other teachers, meeting with them
> and providing input. The problem, of course, is that this kind of measuring,
> particularly the hands-on observation in classrooms, is costly, adding about
> 2% onto payroll.

So you can have cheap and unreliable measurements, or you can have accurate
but costly ones. Who's going to pay for the latter? The rating agencies? The
schools? The students? Imagine the costs to enact such a program across all
colleges in the U.S. alone -- some of the larger state schools easily have >
1,000 teaching faculty across a myriad of disciplines, and they're teaching
increasingly diverse student bodies.

The thought of trying to implement a thorough, standardized program at that
scale is mind-boggling. And that's probably why we've been facing this dilemma
of measuring teacher effectiveness since the day the first schools opened.
Bill Gates is right that we have a serious problem, but it doesn't sound like
he's any closer to a solution.

[1] [http://voices.washingtonpost.com/college-
inc/2010/06/study_h...](http://voices.washingtonpost.com/college-
inc/2010/06/study_high-rated_professors_ar.html)

[2] <http://en.wikipedia.org/wiki/Dr._Fox_effect>

[3]
[http://en.wikipedia.org/wiki/Course_evaluation#Criticism_of_...](http://en.wikipedia.org/wiki/Course_evaluation#Criticism_of_course_evaluations_as_measures_of_teaching_effectiveness)

~~~
tokenadult
There was an earlier Hacker News submission about the Gates Foundation
research on teacher effectiveness,

<http://news.ycombinator.com/item?id=4559682>

linking to an article that reported details of the methodology.

[http://www.theatlantic.com/magazine/archive/2012/10/why-
kids...](http://www.theatlantic.com/magazine/archive/2012/10/why-kids-should-
grade-teachers/309088/)

I looked up other research on the matter for the reply I posted in that
thread. From the article submitted then, this is one way this process has been
validated:

"The responses did indeed help predict which classes would have the most test-
score improvement at the end of the year. In math, for example, the teachers
rated most highly by students delivered the equivalent of about six more
months of learning than teachers with the lowest ratings. (By comparison,
teachers who get a master’s degree—one of the few ways to earn a pay raise in
most schools —delivered about one more month of learning per year than
teachers without one.)

. . . .

"The survey did not ask Do you like your teacher? Is your teacher nice? This
wasn’t a popularity contest. The survey mostly asked questions about what
students saw, day in and day out.

"Of the 36 items included in the Gates Foundation study, the five that most
correlated with student learning were very straightforward:

1\. Students in this class treat the teacher with respect.

2\. My classmates behave the way my teacher wants them to.

3\. Our class stays busy and doesn’t waste time.

4\. In this class, we learn a lot almost every day.

5\. In this class, we learn to correct our mistakes."

Here is earlier reporting (10 December 2010) from the New York Times about the
same issue:

<http://www.nytimes.com/2010/12/11/education/11education.html>

Here is the website of Ronald Ferguson's research project at Harvard:

<http://tripodproject.wpengine.com/about/our-team/>

And here are some links about the project from the National Center for Teacher
Effectiveness:

[http://www.gse.harvard.edu/ncte/news/NCTE_Conference_Using_S...](http://www.gse.harvard.edu/ncte/news/NCTE_Conference_Using_Student_Surveys.php)

Simply put, don't assume that what the Gates Foundation was investigating was
the same kind of student opinion survey that I have filled out as a
postsecondary student. (But note that I'm not so sure those surveys are
as bad or as useless as college faculty often claim they are.) There is a
research base for the primary school pupil and secondary school student
ratings used in the Gates Foundation studies, and I have every reason to
believe those ratings would help school effectiveness--so much so that I use
the same questions to invite the clients of my mathematics program to evaluate
my teaching from that point of view.

Other comments in this thread are about the more general issue of college
rankings as they currently exist. As a parent who has occasion to follow the
college search process for four children, I really like the site College
Results

[http://www.collegeresults.org/search1b.aspx?institutionid=11...](http://www.collegeresults.org/search1b.aspx?institutionid=110635)

which aggregates data that colleges are required by law to report to the
federal government into user-friendly data look-ups that allow direct
comparisons of similar colleges along many dimensions. For me as a parent, one
of the most interesting data views is a view of "comparable colleges" for a
college of interest, sorted under the Finance and Faculty tab for a ranking of
colleges by instructional expenditures / FTE (full-time equivalent students).
That comparison often reveals that even the "scholarships" (discounts from
list price) that colleges offer still leave parents spending far more for
their children's higher education than the college itself actually spends on
educating students. That's a raw deal that more parents ought to know about.
Colleges hire expensive consultants to learn how to confuse parents on the
issue of value,

[http://www.maguireassoc.com/services-challenges/optimize-
net...](http://www.maguireassoc.com/services-challenges/optimize-net-revenue/)

and parents have to defend themselves by looking up comparable data.

~~~
ChuckMcM
And this is my hypothesis on why home schooling otherwise normal children is
so much more effective than primary school education.

1) Kids between the ages of 5 and 13 often do treat their parents with
respect.

2) Kids between the ages of 5 and 13 often do what their parents tell them to
do.

3) Home-schooled kids stay busy and don't waste time, because the parent(s)
aren't going to let the time go to 'waste'.

4) Home-schooled kids stay on subjects until they understand them, and move on
as soon as they do; this means little downtime, and no 'review' for other
students slowing them down.

5) Home-schooled kids go through and correct all their mistakes, talk about
how they made them in the first place, and work on ways to avoid them in the
future.

These all relate to the relationship the child has with their parent/teacher
(the teacher is really invested in the child's success) and the precise pacing
of subject introduction, which is tailored to the student's ability to take in
new concepts. The more you generalize (more students per teacher, more
teachers per student), the harder it is to keep these things optimized.

~~~
overgryphon
Primary school seems to be about social skills just as much as academic
skills. How does homeschooling address that?

~~~
learc83
Everyone says that, but I've seen no data to back it up. In fact, most studies
show the exact opposite, that home schooled children have _better_ social
skills. People just assume that since they went to school, everyone else
should too. Is it really in a child's best interest to spend the majority of
his time with hundreds of other poorly supervised children, learning to
emulate their behavior rather than the behavior of trusted adults?

As soon as you get out of school you realize that adult life, for most people,
is absolutely nothing like school. It's shocking just how much nicer adults
are than children. The only social environment school really prepares you for
is an institutionalized one (e.g., prison).

And don't even get me started on the distractions, by the time I was 13 or so
I spent so much time worrying about girls (and other social stuff, but mostly
girls) that _learning_ was the last thing on my mind (I still made a 4.0 in
high school, but I didn't actually _learn_ much). I also don't think that
those juvenile relationships prepared me in any way for actual adult
relationships.

~~~
brainflake
Which studies? I'm a little skeptical that home schooled children have better
social skills than kids that attend public school.

~~~
ChuckMcM
It's a common question that comes up, and as far as my wife and I could figure
out it has never actually been an issue here. One of the things people don't
recognize right away is that you are rarely the only person home schooling;
there can be lots of people in your area doing the same. One of the programs
we did was for science: each week one parent would take a science topic, and
everyone in the group's kids would go to that parent's house for some
particular expertise or investigation. These were groups of 5, 7, even 10
students of similar ages working on the same material. Similarly for Reikes,
which was 15-25 home-schooled kids meeting up once a week at a county park to
discuss the ecology, bio-diversity, flora, fauna, land management, lots of
stuff.

If your impression of home schooling is one kid sitting at home all day doing
the same things they would do in a classroom, you are not seeing what is going
on around here. Groups of kids tackle problems and learn about history,
math, communication, societies and communities, and all the material you'd
normally get in school, just in chunkier bits, with the opportunity to go
deeper into a topic if you're interested and just pick up the required bits
if you're not. Lots of reading, lots of field trips (the Sierras are fabulous
for geology field trips), museums and such. Oh, and lots and lots of
reading.

I'd love to see some more rigorous work on this.

------
ivan_ah
I am all for edumetrics, but there doesn't seem to be a way to get a good
signal on a general "teaching skills" metric. Does such a metric even make
sense? I would assume that a proper metric for a teacher would be a dict like
{"cares": 5, "energy": 4, "perceptiveness": 3, "subject_knowledge": 5,
"general_knowledge": 3}.

Furthermore, I am not sure how school boards and schools will use the metrics.
Should you fire a teacher because some data fit decided that they are a bad
teacher? No way! Anyone who is willing to put in the energy and spend time
teaching kids should continue to do it. Metrics for self-assessment, YES; but
metrics for firing teachers, NO.

Also, the whole idea of a "value added" score has been called bullshit upon
here <https://news.ycombinator.com/item?id=5059737> -->
[http://garyrubinstein.teachforus.org/2013/01/09/the-50-milli...](http://garyrubinstein.teachforus.org/2013/01/09/the-50-million-
dollar-lie/) . [quote: ... the correlation is so low that I, and many others
who have created similar graphs, concluded that this kind of measurement is
far from ready to be used for high-stakes purposes like determining salaries
and for laying off senior teachers who are below average on this metric.]

The author basically says that there is no year-to-year correlation in the
"value added" metric for a given teacher.

This lack of correlation is masked in the report "Measures of Effective
Teaching" because "they averaged the predicted and actual scores in five
percentile groups. In doing this, they mask a lot of the variability that
happens" to make it look as if "value added" is a good stable metric.

~~~
yummyfajitas
The author uses a bad graph to convince the reader that there is no
correlation, even when (by his own admission, see the comments) one is
present.

The year to year correlation is 0.3. The correlation across percentile groups
is much higher because that increases the sample size and thereby reduces
statistical noise.

The conclusion we can draw here is that measuring the performance of a single
teacher based on a single class will yield a very large confidence interval.
That's not the same as "bullshit".
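
This aggregation effect is easy to reproduce with a small simulation (a
sketch, not the report's actual methodology; the noise level is just tuned so
the teacher-level year-to-year correlation comes out near the reported 0.3):

```python
import numpy as np

# Simulated "value added": latent teacher quality plus independent per-year
# noise. The noise scale is chosen so the per-teacher correlation is ~0.3.
rng = np.random.default_rng(0)
n_teachers = 5000
quality = rng.normal(size=n_teachers)
year1 = quality + rng.normal(scale=1.5, size=n_teachers)
year2 = quality + rng.normal(scale=1.5, size=n_teachers)

# Correlation for individual teachers: noisy, around 0.3.
r_teacher = np.corrcoef(year1, year2)[0, 1]

# Average teachers into 5 percentile groups (binned by year-1 score):
# the per-teacher noise averages out and the correlation jumps.
order = np.argsort(year1)
groups = np.array_split(order, 5)
g1 = [year1[g].mean() for g in groups]
g2 = [year2[g].mean() for g in groups]
r_group = np.corrcoef(g1, g2)[0, 1]

print(f"per-teacher r = {r_teacher:.2f}, percentile-group r = {r_group:.2f}")
```

The same underlying data yields a weak correlation at the individual level and
a near-perfect one after binning, which is why the choice of presentation
matters so much in this debate.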

------
pokerfacer
US News's rankings are based upon metrics like acceptance rate, retention
rate, yield rate, charitable donations, faculty-student ratio, endowment, etc.
These rankings don't predict the "best" colleges, but rather the most
prestigious.

~~~
lordofmoria
curious, how does faculty-student ratio relate to prestige?

~~~
pokerfacer
It's not a perfect proxy for prestige, but cash-rich universities with large
endowments have more professor positions than less well-off colleges.

------
guelo
It seems like the graduate school entrance exams (GRE, MCAT, GMAT, LSAT) would
be a good indicator of undergrad performance (though not so much for
engineering and non-medical science).

Are those scores available on a per undergrad-school basis?

~~~
colincsl
I'm not sure about the MCAT/GMAT/LSAT, but the GRE is a pretty bad indicator.
For example, everyone who goes to grad school for computer science or
engineering gets at least a 750 on the quantitative section. The math only
tests high-school-level ability.

While I don't have any links, I think that there are many studies showing that
SAT/GRE type scores don't mean very much.

~~~
seanmcdirmid
I remember taking the GRE; I got 99th percentile on quantitative and logical
reasoning... and like 69th percentile on verbal. The verbal part was basically
a vocabulary test.

------
nollidge
Off-topic, but in Chrome I got the bar at the top saying "This page is in
French. Translate?"

This mis-identification of language in Chrome happens to me probably once a
day, though usually when looking at code.

------
eli_gottlieb
Well _everyone_ knew _that_. It's just that nobody really has the singular
clout to _change_ the college ranking system.

------
armored_mammal
One of those ideas that's blatantly obvious, but which most will refuse to
consider until someone 'big' mainstreams it.

I have a hard time reconciling the various useful things the Gates foundation
seems to do with the tendency towards obnoxiousness that defines Microsoft.

~~~
Isamu
Making your fortune through ruthlessness and crushing the opposition, followed
by generous philanthropy, is a time-honored tradition. One prototype is Andrew
Carnegie.

~~~
lordofmoria
LOL. This comment is awesome.

------
ececconi
Says the guy who went to Harvard

~~~
slurry
And dropped out.

------
CleanedStar
Implicit in this is the idea that all sectors of society want, and feel they
would benefit from, this better education of everyone.

We could posit a counter-point to that assumption: the hypothetical idea that
not all sectors of society think that "a rising tide lifts all boats". We
could hypothesize that there are sectors of society who would be opposed to
the working poor getting good educations.

But with such a non-mainstream, contrarian hypothesis being posited, we'd have
to think of a reason for it. Why would some sectors of society be opposed to
this? Well, perhaps they would have a desire for a "reserve army of labor" (
<http://en.wikipedia.org/wiki/Reserve_army_of_labour> ). Perhaps if they had a
company, like say Microsoft, their company would pay dividends. The part of
the money the company doesn't spend on continuing costs or reinvestment goes
not to wages but to stockholders. Of course, given the small amount of stock
options most Microsoft employees have relative to their wages (not to mention
permatemping), in game-theory terms it would be better for these workers, if
money were to go either to their wages or to the dividend, for it to go to
wages. Perhaps for large MSFT shareholders like Gates, it would be better for
the money to go to dividends, and not to wages.

How can you stop the workers from demanding higher wages? Perhaps having a
reserve army of labor, an inflated unemployment rate, etc. would help. Perhaps
a worker knowing that other people as skilled or almost as skilled as him are
lining up to try to get the wages he is getting at MSFT, and are being
rejected in interviews, keeps him happy with his wage.

Of course this is all just wild, non-mainstream, out there conjecture.
Obviously the world's richest billionaires like Bill Gates only have feelings
of benevolence, and aligned interests with the rest of us. You can see how
lauded he is for his charity and such in the press.

------
bluebearx
Not that interesting an article.

------
davidroberts
"Bill Gates, the world’s most generous and influential philanthropist." Good
thing this writer is unbiased.

~~~
jiggy2011
This is probably objectively true.

~~~
elliptic
Probably not for useful definitions of "generous", which should take into
account the actual utility (in money or time) given away. Someone who donates
half of their life savings of 50k is (I would say, at least) more generous
than someone who donates 90% of 1 billion.

~~~
ryguytilidie
I would not define someone who gave 25k as more generous than someone who gave
900million, no.

~~~
jlgreco
Would you define someone who gave away until they only had 100million
remaining as more or less generous than someone who gave away until they only
had 25k remaining?

How about if the person with 50k gave away 25k, but the billionaire gave away
double that? Who is more generous in that case?

It seems fairly plain to me that utility of what was given away is by far the
most important factor in determining generosity.

~~~
ryguytilidie
I see your point, but I simply disagree. The utility to the person is much
different, but after that, money is money, and 900m provides much more utility
to starving kids, etc. than 25k does, no matter how you slice it.

