Bill Gates Says There Is Something Perverse In College Ratings (forbes.com/sites/luisakroll)
176 points by pav3l on Feb 1, 2013 | 108 comments



The most cited US college ranking at the moment is done by U.S. News, and it is heavily biased toward small private institutions on the east coast. I always find it irritating that a world-renowned research powerhouse such as UC Berkeley can't even make it into the top 20, whereas schools such as Emory and Vanderbilt are ranked higher. It ALMOST feels like an east coast old money circle-jerk. I guess, with the exception of a few schools such as MIT and Stanford, areas such as Law/Medicine/Business/Finance/Liberal Arts have historically been valued a lot more than science and engineering in this country.

That simply has to change if we want to lure more young people into those fields (which is what the future of this country needs). It's still unbelievable how many high school kids (remember, most people don't live in SV) think accountants are these powerful people in suits making six figures straight out of school, while engineers are nerdy people who work in a basement and get no money/respect.


Have you ever been to UC Berkeley?

As a current grad student here: it's a fantastic research university and a beautiful campus, but the undergraduate experience is terrible. Classes are giant and there's very little faculty/student interaction, because the focus of the faculty is, for better or for worse, not on undergraduate education. Yes, there are some incredible undergrads and some great opportunities for the 0.1% of them who really stand out, and of course the grad programs are great -- if you look at the US News grad program rankings, Berkeley is top 5 in almost every discipline -- but overall the undergrad program is ranked just about where it belongs.


The US News rankings also create perverse incentives for schools. One of the things my alma mater, Virginia Tech, used to do (and I'm sure still does) is lure top-tier students into its (admittedly great) engineering program. It then puts the freshman engineering students through a bootcamp-style, much-harder-than-necessary first-year "weed out" program. The effect is that 50% of the incoming engineering students fail out or are forced to transfer to the school's less prestigious programs, particularly business. The relatively high SAT scores of these students then allow those other programs to inflate their US News rankings. By the way, I know for a fact that Purdue and a few other state schools with good engineering programs do the same thing.

The first time I saw videos of top-tier freshman engineering, math, and comp sci courses, I was shocked. The problems were manageable, the pace was reasonable, the teachers were engaging... and when I saw the course material and realized the tests were easier than the ones I took at my much lower ranked school, I realized I'd been had.

The worst, most socially irresponsible aspect of this practice of "funneling" and "trapping" students into less desirable majors is that students who otherwise would have become engineers end up learning less useful things. Virginia Tech, and schools like it, are responsible for the world having fewer engineers than it should.


But it's usually just the door that is hard to get through; once through the door it's usually smooth sailing. Actually, this occurs in many exclusive programs/institutions over and over again. But in reality, they have to be selective: the world doesn't need that many engineers, especially ones who aren't that smart, just as we really don't need that many doctors (especially ones who aren't that smart). Computer science is the same: sure, you could double or triple the size of your program to accept just anyone who wants to program, but then you would flood the market with substandard talent and your reputation goes into the trash bin!

Now, would we prefer a hard program where students would fail out instead of being stopped at the door? This also seems like a waste of resources, but it could work given the right technology (e.g. online courses).


Did you continue with engineering after the freshman classes? I also went to VT, and my classes got harder every year, except for maybe senior year, but then I was expected to do a senior design project on top of regular classes. Talking to my other engineering friends, though, it was very dependent on the major: for Civil Engineering, freshman year was the weed-out year; for Aerospace, it was senior year.


> The US News rankings also create perverse incentives for schools. One of the things my alma mater, Virginia Tech, used to do (and I'm sure still does) is lure top-tier students into its (admittedly great) engineering program. It then puts the freshman engineering students through a bootcamp-style, much-harder-than-necessary first-year "weed out" program. The effect is that 50% of the incoming engineering students fail out or are forced to transfer to the school's less prestigious programs, particularly business. The relatively high SAT scores of these students then allow those other programs to inflate their US News rankings. By the way, I know for a fact that Purdue and a few other state schools with good engineering programs do the same thing.

That's not a specific university. That's engineering schools in general. "We'll throw you head-first into the ocean, and see if you survive. If you do, congratulations, you've learned to swim!"


> Classes are giant

Current CS undergrad at Berkeley. This is the main problem I have with Berkeley. You can interact with the faculty plenty (at least in my experience), especially once you get to your upper division classes. I don't really do that much anyway and care more about how well professors lecture than anything else.

The problem is that it's becoming increasingly difficult to get into the upper division classes you're interested in. There are just too many people and not enough space. My smallest class is 30 people, but it's a graduate level course, while my other two upper division CS classes have 400 people and 315 people (with 75 on the waitlist), respectively. I thought my classes would get smaller as the semesters went on, but that has not been the case.


The undergraduate computer science program is fucking awful.

1. As you say, the introductory classes are enormous. For the math classes this means less feedback on your assignments. What this also means is that you have other undergrads grading your work. If that doesn't terrify you, then it should.

2. Inadequate preparation. Similar classes in other departments have more prerequisites. I'm currently attending a machine learning class where more than half the class is probability-illiterate. Take a look at how the core math curriculum (algebra, analysis) is taught.

3. Professors and graduate students are out of sync about the material and assignments, which leads to situations where nobody can answer your questions. See cs162, cs61c. I've heard similar rumors about 161.

4. Only 5-10 upper division undergraduate cs classes. Interested in type theory? Practical encryption? Computer algebra? You're out of luck. Most other universities seem to offer a lot of variety.

5. Less treatment of the fundamentals. Our systems class no longer goes into any depth about malloc. They've replaced it with a section on map reduce. Similar situation with the operating systems class and I believe the security class.

6. The people who rise to the top are the people who were probably going to be successful anyway. That's just my experience/bias though.


//1. As you say, the introductory classes are enormous. For the math classes this means less feedback on your assignments. What this also means is that you have other undergrads grading your work. If that doesn't terrify you, then it should.//

As someone who is an undergraduate TA (for computer science), this doesn't terrify me any more than overworked and underpaid graduate students grading assignments, or overworked research faculty who don't have any incentive to grade well. My anecdotal experience has been that it's generally the top 1-3% of undergrads who end up TAing or grading (in CS, at UVa; mileage elsewhere may vary), and those who do take it pretty seriously (here, at least, we're paid and it's a job).

Most of the people I've TA'd with (and myself) take at least one graduate course and actually know the material better than most graduate students (since we've taken the course and universally done very well in it). UVa also rotates graduate TAs out after a year (to research or other things), whereas undergraduate TAs can stay on the same course for two or three years, which offers much more continuity for the professor and fewer TAs who are completely lost.

I might be biased (see also: I get paid to do exactly what terrifies you), but I think that if the undergraduates aren't just unpaid volunteers, you get better results than using graduate students for the same things. Unpaid volunteers are much worse: you don't have to pay much, but paying someone for something makes it a job, which switches people's headspace from 'I can drop this' to 'This is my job, and I will do it well'.


> 4. Only 5-10 upper division undergraduate cs classes. Interested in type theory? Practical encryption? Computer algebra? You're out of luck. Most other universities seem to offer a lot of variety.

I know of almost no university, besides maybe MIT, where the teaching and research aren't biased towards certain fields and away from others. It's quite unfortunate, but you have to choose your educational institution based on what specialties you want to enter.


I went to UT Austin, which is a very similar school. All the negatives you mention are certainly valid, but U.S. News's ranking isn't just an undergrad ranking, it's an overall university ranking. And even for undergrad, career prospects for a CS undergrad from a UC school are probably SLIGHTLY better than for a liberal arts grad from Vanderbilt.


Are you under the impression that Vanderbilt is a liberal arts school without a school of engineering? That's not the case.

I don't think it even has a liberal arts major; heck, they don't even have an undergrad business degree, because they want you to actually pick something to major in.

Disclosure: I'm a Vandy Engineering School grad.


They have split ratings; schools that only offer undergraduate programs are in a separate category.


Agree 100%. I went there as an undergrad and I hated it, except for my comp sci courses, which I loved. The department was young then, and it was a completely different experience being there during the rise of BSD.


I'm currently a student at a large (largest in the UC system) public research university, and I don't see any problem with excluding schools like UC Berkeley from the top 20. While we, like Berkeley, are a "research powerhouse," spend a little time on campus as an undergraduate and you will quickly realize how little the phrase "research powerhouse" really means to you. Professors are hired for research, not teaching, which means many are apathetic towards students while others clearly shouldn't be involved in the teaching process at all. There are some outstanding professors in terms of teaching, but the important caveat here is that at a large public research school, teaching--which heavily affects the undergraduate experience--is not a priority.

Some argue that being at a large research school offers, as the name implies, a large number of research opportunities for undergraduates. That idea is a bit overly optimistic. When you have > 100 students in almost every upper division class, it is exceedingly difficult to build the kind of relationship with professors where you might find a chance to gain some research experience. At least in my experience in the EE department, the number of professors who take undergraduate research students is countable on one hand, and most of them refuse to accept anyone who is not a 3rd/4th year. I managed to claw my way into a few labs (not EE or CS related) merely because they needed someone who could write code.

The US college ranking done by U.S. News is meant to consider undergraduate teaching, and in this respect large public schools fall far behind. If you want to see the value of research taken into account, look no further than the graduate school rankings. I don't mean to imply that the rankings are accurate, just that there is a valid reason for excluding large public schools from the top of the college rankings.


I also went to UCLA and second all of this. My undergraduate experience didn't feel at all like I was at a research university.


I am an undergraduate at UCSB and I have had a much different experience. For both my friends and me, getting involved with research has been relatively easy; it just requires making the effort of interacting with the faculty outside of class and pursuing research actively instead of waiting for it to fall into your lap. I think people forget that faculty are often excited to have a very motivated undergraduate approach them and show interest in the work they are doing. One of my friends started in his lab as a second-quarter freshman, and most of my friends, including myself, started second year. (Note: I think this is really dependent on the kind of research; things like data mining have a lower barrier to entry than something that requires much more background, like abstract interpretation.)


I had a great research experience at UW in my Junior/Senior year. My work even made it into the New York Times (fuzz testing Java's bytecode verifier at the time to find security vulnerabilities). But it was something you definitely had to look for.


Rankings are broken. I looked into it as part of a startup idea and even memorized the U.S. News 2012 rankings. The 2012 rankings were super easy to memorize because the schools are almost exactly where you'd expect them ranked based simply on pop-culture guesses. This even happened when lawyers were asked to rank the best law schools. Consider this fun excerpt from the New Yorker where they ranked a law school that didn't exist:

Those lawyers put Penn State in the middle of the pack, even though every fact they thought they knew about Penn State’s law school was an illusion, because in their minds Penn State is a middle-of-the-pack brand. (Penn State does have a law school today, by the way.)

http://www.newyorker.com/reporting/2011/02/14/110214fa_fact_...


At least 20 years ago, Penn State was in the top 4 for geography. It has definitely never been in the middle of the pack when it comes to Earth Sciences.


The USNWR college ranking is for colleges, not graduate schools. The law, medicine, business, etc, schools have their own rankings. Berkeley does very well in those graduate school rankings (indeed, Berkeley and Stanford probably get the most top-3 USNWR placements of any school).

At the undergraduate level, however, the rankings measure something different. Essentially, they measure: 1) the test scores and grades of the incoming freshman class; 2) the reputation of the undergraduate program; 3) how much money (directly or indirectly) the school spends on undergraduates. As a big state school, Berkeley can't measure up in these categories.

Incidentally, I'm not sure the USNWR rankings are totally senseless in what they measure. For undergrad, I went to a big state research university with very highly ranked graduate engineering programs. For law school, I went to a private university whose undergraduate program was ranked in the top 20. While the two situations weren't directly comparable, I have to say that there is something to what USNWR measures. The experience of being a student at the big state MRU was terrible. Enormous freshman weed out classes, professors and TA's that didn't speak English, outdated facilities, etc. The experience at the private university was wonderful. Very supportive administrators, great facilities, approachable professors, etc. My law department was about the same size as my undergraduate major school, yet I got more personal attention from professors in my first semester of law school than my entire time in engineering school.

That said, attempts to game the rankings create terrible incentives for universities. Pretty much the whole ranking boils down to how much money you can spend. You can buy high SATs by spending more on scholarships, you can buy smaller class sizes by spending more on professors, etc. That works fine for well-established private schools with enormous endowments, but not so much for state MRUs. Berkeley's $3 billion endowment supports 35,000 students. Duke's $7-8 billion endowment (depending on how you count) supports 15,000 students.
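
For a rough sense of that gap, here is the back-of-the-envelope arithmetic implied by those figures (a sketch only; the quoted endowment and enrollment numbers are approximate and vary by year):

    # Endowment dollars per enrolled student, using the figures quoted above
    berkeley_per_student = 3_000_000_000 / 35_000   # roughly $86,000 per student
    duke_per_student = 7_500_000_000 / 15_000       # $500,000 per student
    print(f"Berkeley: ${berkeley_per_student:,.0f}/student")
    print(f"Duke:     ${duke_per_student:,.0f}/student")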


I agree with you that the USNWR rankings are not senseless. The fact that Berkeley has the most graduate departments in the top 5 of any University but an undergrad ranking outside the top 20 actually tells you a lot about the undergrad experience at the university. It is a research powerhouse where a top undergrad will meet exceptional professors, but you also risk getting lost in a sea of indifference.

That said, we should also keep in mind what the USNWR rankings don't measure. Washington Monthly did a ranking that had UCSD on top, and almost all large public research institutions did very well:

http://www.washingtonmonthly.com/college_guide/feature/intro...

The clever angle they took was to ask the question "what have you done for us lately?" USNWR is good for an undergrad asking "what can you do for me?" Washington Monthly figured that since all universities receive massive amounts of government funding, tax exempt endowments, and subsidy for tuition through federal loan guarantees, we should ask what they are contributing back. They placed an emphasis on social mobility (percentage of low income students), research (placing an emphasis on science and engineering), and social service.

It does get to the heart of the matter - the things that harm Berkeley in the USNWR ratings (large numbers of undergrads and relatively low tuition in spite of decreasing support from the state) also enable Berkeley to enroll more low income students than the entire Ivy League combined. Berkeley gets dinged for all the negative aspects of enrolling so many low income students relative to Harvard, but gets no credit for it.

Should it? It's all a matter of perspective - the problem is that because many other major publications don't consider rankings to be useful, USNWR is an almost unanswered voice pushing a ranking system that rewards small, wealthy, private undergraduate programs with relatively few low income students, and they're broadcasting it through a bullhorn into a quiet room.


Come on... it IS an east coast old money circle-jerk. And everyone knows it.


Why does the stench of power always smell the same?

It's all so predictable.


We all want to succeed. We all also want our friends and business partners to succeed. It's just that people who have lots of money and power are better at accomplishing that than the rest of us. We all help our friends out; only the scale is different.


That explains a lot actually. Powerful and simple explanation for the world we live in. However, pretty much the opposite of the so-called "trickle-down" mechanism... more like "trickle-up".


I don't mean to be incredibly dense, but I'll risk looking that way here.

When you're at the elementary school level, I understand that there are easily quantified skills that we believe all citizens should possess. Standardized tests seem reasonable for standardized knowledge.

Once you're at the college level, what is the goal? What is the thing being maximized, the thing that can be measured and tested and presumably improved? Creativity? Problem solving? Social adroitness? Rote knowledge? Do I dare say that it may be different for different people?

The notion that colleges could be measured along just a few axes correlating with a few particular purposes confuses me in a way that the elementary school debate does not. What am I missing?


> Once you're at the college level, what is the goal?

Good question. The lack of a good answer certainly undermines the justification for colleges' continued existence and pervasiveness.

Tangentially, I'd argue that any institution which can't even define its goals should not receive any taxpayer dollars.


The only thing you're really missing is capitalism itself, which demands that everything be put on a quantifiable measure in order to justify its existence to the ultimate quantifiable measure, money.


If the primary benefit of attending a good college is education from professors, then yes, there is something perverse in the rankings. But I'm not sure that's how college actually works.

At the undergrad level, a particular class at one college is likely pretty similar to a class at another college. And aside from a few exceptional students, you're likely to be able to find whatever classes you're looking for at any college you care to attend. There are more competent people who can teach, and want to teach, intro to Shakespeare, or second-semester thermodynamics, or whatever, than there are teaching positions available.

Most of the benefit you get from choosing college A over college B comes from your interactions with your peers. Some of this is just people working together on class projects, but a lot of it is the pervasive culture of a place. At some schools, people will hang out and talk about political theory. At others, there's a culture of making art. Some places care more about sports. And at some schools there's a culture of building things. Actually, at most schools, all of these things happen to some extent, but you're more likely to encounter them at some places than at others.

And if that's the benefit of college, then it absolutely makes sense to say that the best colleges are the ones with the best students.


That's true, but it's also the problem. At which university, then, should these best students congregate? I think we all agree they should be at the one that is best at teaching them, or whose style of teaching works well for them. But we wouldn't know which one that is, because most of the ratings can't (or rather won't) discern this. Why is school A the best? It has the best students. Why does it have the best students? Because school A is the best school. So the rankings aren't bad for figuring out where you belong, but they're a complete non-metric for the school itself - schools can't tell how they rate at actual education or whether they're a good choice teaching-wise (or whether all the students would be better off congregating elsewhere - they'd still have each other, as well as potentially better teachers).


To avoid the needlessly vague allusions to schools going on in this thread: I went to Princeton University for undergrad.

I'm sure Bill Gates has a more nuanced view on college ratings than this article suggests. Welcome to media.

We all know why the college rankings are the way they are. Frankly, no one cares who the "most improved" olympic athlete was in London. People generally want the unambiguity of an outright set of winners. Unambiguous winners may be ok in sports, but not in colleges, where the rankings have a big impact on education here in the US. Gates points out this flaw and argues instead for a "most improved student" metric.

Unfortunately, Gates' "solution" wouldn't really solve the problem either. What is the positive feedback loop for schools that rank highly on "produces the most improved students"? Would they receive extra government funding? Attract better students? I see neither of these as likely. Try again Mr. Gates.


What? "Improving", that is educating, students is the primary purpose of colleges. Seems perfectly reasonable to want to measure how effectivly they do their job.

The quality of the graduates of a college depends on a number of factors, but probably the two most important are the quality of incoming students and the college's ability to improve that quality. In effect, colleges perform two functions: sorting and educating.

Sorting people based on standardized testing, high school grades, essays, recommendations, interviews and application details is a useful service in itself. However, most of the value and almost all of the cost of colleges comes from educating, not sorting.

The current college rating "system" can't possibly separate these two sources of quality. Clearly it would be useful to know, since within any group of similarly ranked colleges there will be differences in the quality of the education component.

All else being equal, prospective students would choose the school that offered the best education. More applicants would go to schools that were better at their primary mission, educating their students. In turn these schools could be more selective.

Building such a system will not be easy, but it should be very valuable. Please continue working on this, Mr. Gates.


> What is the positive feedback loop for schools that rank highly on "produces the most improved students"? Would they receive extra government funding? Attract better students? I see neither of these as likely. Try again Mr. Gates.

If nothing else, colleges that are publicly known for effectiveness will have an easier time recruiting and retaining talented employees. They would presumably also be a magnet for results-driven philanthropists, either through direct donations or by somehow being credited for scholarships.


Wonder if that metric would cause colleges to recruit the worst possible students in order to maximize the amount of improvement.


Unless the metrics used for the ratings are based upon things like per capita salary x-many years after graduation (broken out into specific fields of study and private/public sectors), then it all comes down to either a popularity contest, or a measure of how well a school's policies reflect the latest "progressive" education practices.

In other words, unless it shows what the student "gets" out of the experience when done, they are selling dreams.


That would work for MBAs and perhaps JDs, but why tip the scales against, say, a med school that produces great GPs but not many cosmetic surgeons?

A money-based measurement is also heavily canted toward the "old money" set and their legacy admissions to their alma maters.

Money isn't everything. And often it isn't even accurate at measuring how well a school equips students to make it on their own.


"Things like" would count things that mattered per category. If you wanted to go into pure science, most likely you'd be concerned about the freshman average GPA, the % of graduates who change majors, and the graduating percentage that find employment as researchers and educators.

Medical school costs (and the insurance load to practice medicine) are already tipping the scales against GPs (low return on educational investment), and many medical students already know this. The real looming healthcare crisis will come when the margins on being a general practitioner sink below the educational investment cost, and all we are left with are specialists.

As far as investment return on the cost of college education goes, earnings would be a useful metric, especially when one has to finance the education with student loans that have fixed rates of repayment. It is the most universally applicable metric, and although it may not make everybody's world go around, it does keep things from coming to a grinding halt (apart from coercion and intimidation).
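
As a rough illustration of why earnings matter once the degree is loan-financed, here is a back-of-the-envelope sketch of a fixed-rate repayment burden against a starting salary (all numbers below are hypothetical, chosen only for illustration):

    # Standard fixed-rate amortization: payment = P*r / (1 - (1+r)^-n)
    principal = 60_000              # hypothetical total borrowed
    annual_rate = 0.068             # hypothetical fixed interest rate
    years = 10
    r, n = annual_rate / 12, years * 12
    payment = principal * r / (1 - (1 + r) ** -n)
    monthly_salary = 45_000 / 12    # hypothetical starting gross salary
    print(f"monthly payment ${payment:,.0f}, about {payment / monthly_salary:.0%} of gross pay")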

In any case, I doubt the U.S. News college ranking is the largest signal in where most incoming freshmen apply.


Maybe the problem is that, for many kinds of education that are useful and/or beneficial to society, money is a poor measuring stick and market forces are not a desirable influence.

The effect on GPs you describe is already driving a health care crisis in the US. We may be left with an overhang of specialists who will surely argue their education should not become a "stranded asset."


When you measure by these metrics, you end up getting the law school scam, where schools employ unemployed graduates for juuust long enough to count them as long-term employed, and only make an effort to collect the salaries of highly paid graduates (and use "reported salaries" as their denominator, not total grads).


OK, I'm not advocating that those metrics must be used, and must be collected in a certain way. What I am doing is being critical of rankings that aren't at least related in some way to the reasons most people seek higher education.

If anything, I am most critical of a single ranking that is supposed to tell me how good something is crafted from within the community that is being ranked.


Nothing wrong with employment.


Too simplistic. If the question is about the effectiveness of the teaching/education itself, a lot of things bias that equation - a school can produce 'success' by selecting or influencing its students rather than by actually succeeding at teaching or educating them.


Couldn't this be just a side effect of what he advocated a few days ago in his WSJ commentary? [1]

By putting a lot of pressure on measuring things, people use the data that is easily accessible/comparable. Consequently they put a lot of effort into constructing the argument for why these factors are the most important ones. Getting unskewed data is incredibly hard, especially when there is so much to gain from subtly manipulating the data.

[1]http://online.wsj.com/article/SB1000142412788732353980457826...


One of the absurdities of the various college ranking systems is that reputation is a large component of the ranking. So if you have a good ranking, you get a good ranking.


When I graduated high school, all I wanted was a list of the top Computer Science schools in the world. I can't recall exactly what happened during that period, but somehow, I ended up at the University of Nebraska.

Something is broken.


> “The report concluded that there were observable, repeatable and verifiable ways of measuring teacher effectiveness,” wrote Gates in the letter. Anonymous student surveys that asked such questions as “Does your teacher use class time well, get class organized quickly, help you when you are confused" – were proven to provide useful feedback as were reports from trained professionals observing teachers at work.

Students are notoriously bad at rating their professors. This was shown with almost ideal control groups at the Air Force Academy [1] and again with groups of trained professionals/graduate students who learned first-hand about the 'Dr. Fox Effect' [2]. Even teachers don't seem to like the teacher evaluations done by students [3].

Anecdotally, I can say that most students in my college classes either didn't show up on the survey days, or they walked out the door as soon as the surveys were being handed out. There's also no incentive to provide useful feedback from the student's perspective. If you're taking a survey about the class, it means the class is over and you'll probably never see that professor again, so why bother? I made an effort only because I felt an obligation to help future students, but I'm not sure there are many kids in college who share that feeling.

> Mary Ann Stavney, a high school “Master Teacher” profiled in the annual letter, spends 70% of her time observing other teachers, meeting with them and providing input. The problem, of course, is that this kind of measuring, particularly the hands-on observation in classrooms, is costly, adding about 2% onto payroll.

So you can have cheap and unreliable measurements, or you can have accurate but costly ones. Who's going to pay for the latter? The rating agencies? The schools? The students? Imagine the costs to enact such a program across all colleges in the U.S. alone -- some of the larger state schools easily have > 1,000 teaching faculty across a myriad of disciplines, and they're teaching increasingly diverse student bodies.

The thought of trying to implement a thorough, standardized program of that scale is mind boggling. And that's probably why we've been facing this dilemma of measuring teacher effectiveness since the day the first schools opened. Bill Gates is right that we have a serious problem, but it doesn't sound like he's any closer to a solution.

[1] http://voices.washingtonpost.com/college-inc/2010/06/study_h...

[2] http://en.wikipedia.org/wiki/Dr._Fox_effect

[3] http://en.wikipedia.org/wiki/Course_evaluation#Criticism_of_...


You should look into the details of the MET Project that is mentioned as the source of Gates' claim. As is often the case, the reporter mangled or left out many of the details.

In addition to student surveys and teacher evaluations, teacher performance was measured as "value added" in students' test scores. These three components were combined into an overall "teacher effectiveness" measure. This was all done using what look like very good experimental practices (random assignment of students to teachers, multiple schools across the US...).

Although not perfect, the "teacher effectiveness" measure was quite predictive of future "value added". So perhaps we are closer to a solution. Of course, this study was in elementary and middle schools, and it may be more difficult in colleges (students walking out on surveys).

The project's methodology and results are well documented in a number of reports[1] including a detailed research report for those literate in statistics[2].

[1] http://www.metproject.org/reports.php

[2] http://www.metproject.org/downloads/MET_Validating_Using_Ran...


"There's also no incentive to provide useful feedback from the student's perspective."

Some departments of the university I'm studying at got around this problem by bribing students with printing credits for filling out the lecture feedback surveys. Other, more general surveys on the campus entered all respondents into a prize draw for vouchers or cash.

Sadly my department did nothing like that and as such our response rates are as abysmal as your experience describes.


There was an earlier Hacker News submission about the Gates Foundation research on teacher effectiveness,

http://news.ycombinator.com/item?id=4559682

linking to an article that reported details of the methodology.

http://www.theatlantic.com/magazine/archive/2012/10/why-kids...

I looked up other research on the matter for the reply I posted in that thread. From the article submitted then, this is one way this process has been validated:

"The responses did indeed help predict which classes would have the most test-score improvement at the end of the year. In math, for example, the teachers rated most highly by students delivered the equivalent of about six more months of learning than teachers with the lowest ratings. (By comparison, teachers who get a master’s degree—one of the few ways to earn a pay raise in most schools —delivered about one more month of learning per year than teachers without one.)

. . . .

"The survey did not ask Do you like your teacher? Is your teacher nice? This wasn’t a popularity contest. The survey mostly asked questions about what students saw, day in and day out.

"Of the 36 items included in the Gates Foundation study, the five that most correlated with student learning were very straightforward:

1. Students in this class treat the teacher with respect.

2. My classmates behave the way my teacher wants them to.

3. Our class stays busy and doesn’t waste time.

4. In this class, we learn a lot almost every day.

5. In this class, we learn to correct our mistakes."

Here is earlier reporting (10 December 2010) from the New York Times about the same issue:

http://www.nytimes.com/2010/12/11/education/11education.html

Here is the website of Ronald Ferguson's research project at Harvard:

http://tripodproject.wpengine.com/about/our-team/

And here are some links about the project from the National Center for Teacher Effectiveness:

http://www.gse.harvard.edu/ncte/news/NCTE_Conference_Using_S...

Simply put, don't assume that what the Gates Foundation was investigating was the same kind of student opinion survey that I have filled out as a postsecondary student. (But note that I'm not so sure that those surveys are as bad or as useless as college faculty often claim they are.) There is a research base for the primary school pupil and secondary school student ratings used in the Gates Foundation studies, and I have every reason to believe those ratings would help school effectiveness--so much so that I use the same questions to invite the clients of my mathematics program to evaluate my teaching from that point of view.

Other comments in this thread are about the more general issue of college rankings as they currently exist. As a parent who has had occasion to go through the college search process with four children, I really like the site College Results

http://www.collegeresults.org/search1b.aspx?institutionid=11...

which aggregates data that colleges are required by law to report to the federal government into user-friendly data look-ups that allow direct comparisons of similar colleges along many dimensions. For me as a parent, one of the most interesting data views is a view of "comparable colleges" for a college of interest, sorted under the Finance and Faculty tab for a ranking of colleges by instructional expenditures / FTE (full-time equivalent students). That comparison often reveals that even the "scholarships" (discounts from list price) that colleges offer still leave parents spending far more for their children's higher education than the college itself actually spends on educating students. That's a raw deal that more parents ought to know about. Colleges hire expensive consultants to learn how to confuse parents on the issue of value,

http://www.maguireassoc.com/services-challenges/optimize-net...

and parents have to defend themselves by looking up comparable data.


And this is my hypothesis on why home schooling otherwise normal children is so much more effective than primary school education.

1) Kids between the ages of 5 and 13 often do treat their parents with respect.

2) Kids between the ages of 5 and 13 often do what their parents tell them to do.

3) Home school kids stay busy and don't waste time, because the parent(s) aren't going to let the time go to 'waste'.

4) Home schooled kids stay on subjects until they understand them and move on as soon as they do; this means little downtime, and no waiting around while material is reviewed for other students.

5) Home schooled kids go through and correct all their mistakes, talk about how they made them in the first place, and work on ways to avoid them in the future.

These all relate to the relationship the child has with their parent/teacher (the teacher is really invested in the child's success) and to the precise pacing of subject introduction, which is tailored to the student's ability to take in new concepts. The more you generalize - more students per teacher, more teachers per student - the harder it is to keep these things optimized.


Assuming an involved parent (with the accumulated educational knowledge/value associated with a proper education of their own) rather than a purely "unschooling" approach (which is increasingly popular, if risky) - this is likely true.

You are effectively providing the student (your child) with a full-time, one-on-one tutor (you), which is significantly more effective (even with a less experienced teacher) than an instructor being asked to teach 20/30/40 children at one time.

A teacher in a full classroom of 30 students simply cannot pay attention to a child's advancement as precisely as a teacher working one-on-one.


Nonsense!

Homeschooled kids are a self-selected group whose parents are extremely dedicated.

Compare them with public school kids who also come from extremely dedicated parents.


This. And consider that homeschooling in no way scales, and is not very effective for kids from troubled families (for obvious reasons).


One can never guess what is going to be projected into one's writing.

Tokenadult pointed out some things that the MET project found were highly correlated with effective teaching, quoting: "Of the 36 items included in the Gates Foundation study, the five that most correlated with student learning were very straightforward..."

My wife and I home schooled our kids through "8th" grade, after which they went to the local high school. I got to meet a number of kids being home schooled, and their parents. I recognized all five of the factors that the MET identified as being present in the home schooling situations I saw. That is all.

What did you think I was saying?


You are confusing cause and effect: the factors are already present in families who are dedicated/resourceful enough to consider homeschooling, so homeschooling itself simply has no effect in that case.

It is already known that the most important factor in whether a kid learns or not is their home situation. These studies are on the effectiveness of schools assuming the home factor can't be changed easily by society.


Hmm, that wasn't my experience. There is a huge variation in the family types who are home schooling in the Bay Area; it runs the gamut. Two of the kids in the Riekes class were homeless; their mom was homeschooling because, while she was homeless, she did not want her kids to fall behind in their education, and California has some really strict rules about letting your kids go to a public school if you don't have an address in the district. Perhaps the biggest cohort were kids whose parents had an issue[1] with the public schools but didn't have the resources or the desire to send their kids to private school. I do know that there was a group who leaned more toward the creationist side of things, but that wasn't in our circle, so I can't say much about them other than that you could find them if you wanted to.

"It is already known that the most important factor in whether a kid learns or not is their home situation."

My experience would support that hypothesis. Of all the kids I've known over my life, those who had parents supportive of their educational goals did better than those whose parents were absent or destructive. That said, it didn't change the fact that kids who were otherwise nominal (reasonable support from parents, no major mental health issues) learned a lot (and I believe more) in their home school environment than they would have had they gone to public school.

[1] Issues ranged from teachers who were 'bad', district policies (no pocket knives allowed), school choice in their district (gang issues), and 'bad influences' (drug use or sexual experimentation).


While you are right that the background of homeschooled children may be a confounding factor, you are overreaching when you (seem to) claim that this rules out the possibility that being homeschooled contributes as well.


Perhaps. Separating out the home factor of homeschooling to get some useful truth seems quite impossible.


I don't know what you mean by 'home factor' but my idea would be to take children from similar backgrounds who are homeschooled or not and see if that variable correlates with some measure of performance. It's not perfect of course because homeschooling parents are self-selecting, but a fully controlled experiment is not going to be feasible; however, this does not make the hypothesis any less likely, just less testable.


Can you say more about this? Or can you talk about your reasoning that makes 'home factor' the dominant factor?


The biggest impact on a kid's future is their socioeconomic status and/or the determination of their parents. Schooling provided by society is not even a close second, though it does make being a good parent easier (you can delegate 6 or 7 hours a day to society, whereas you'd have to make some really tough decisions if that option wasn't available). We know this is true because social mobility is not that great even in the states; your background matters.

Home schooling tends to just reinforce what is already true about the home environment, so there doesn't seem to be any new variable there. Of course, we could argue that schools can have adverse effects and that you are avoiding those by homeschooling, but since home life is so dominant, you'd expect kids to just regurgitate what they've learned at home anyway, at least until middle or high school, when teenagers begin to break free.


Primary school seems to be about social skills just as much as academic skills. How does homeschooling address that?


In my experience (and granted, the Bay Area has a fairly healthy home schooling community), there are plenty of opportunities for socializing which are not necessarily 'school'. One of the things we found for our daughters was the Riekes nature studies program [1] (can't say enough good things about that program), activities at our church (which happen for all kids, home schooled or not), and get-togethers with other kids who were part of the community.

Generally we found filling in for the 'good' part of school socializing wasn't a problem and missing out on the 'bad' part of school socializing was a benefit.

[1] http://riekes.org/nature-awareness/


Everyone says that, but I've seen no data to back it up. In fact, most studies show the exact opposite, that home schooled children have better social skills. People just assume that since they went to school, everyone else should too. Is it really in a child's best interest to spend the majority of his time with hundreds of other poorly supervised children, learning to emulate their behavior rather than the behavior of trusted adults?

As soon as you get out of school you realize that adult life, for most people, is absolutely nothing like school. It's shocking just how much nicer adults are than children. The only social environment school really prepares you for is an institutionalized one (e.g., prison).

And don't even get me started on the distractions, by the time I was 13 or so I spent so much time worrying about girls (and other social stuff, but mostly girls) that learning was the last thing on my mind (I still made a 4.0 in high school, but I didn't actually learn much). I also don't think that those juvenile relationships prepared me in any way for actual adult relationships.


Which studies? I'm a little skeptical that home schooled children have better social skills than kids that attend public school.


It's a common question that comes up, and as far as my wife and I could figure out, it has never actually been an issue here. One of the things people don't recognize right away is that you are rarely the only person home schooling; there can be lots of people in your area doing the same. One of the programs we did for science worked like this: a bunch of us picked one science topic for the week, and everyone in the group's kids would go to that parent's house for some particular expertise or investigation. These were 5, 7, even 10 student groups of similar ages working on the same material. Similarly for Riekes, which was 15-25 home schooled kids once a week meeting up at a county park to discuss the ecology, bio-diversity, flora, fauna, land management, lots of stuff.

If the impression is that home schooling is one kid sitting at home all day doing the same things they would do in a classroom, you are not seeing what is going on around here. Groups of kids tackle problems and learn about history, math, communication, societies and communities, and all the material you'd normally get in school, just in chunkier bits, with the opportunity to go deeper into a topic if you're interested and just pick up the required bits if you're not. Lots of reading, lots of field trips (the Sierras are fabulous for geology field trips), museums and such. Oh, and lots and lots of reading.

I'd love to see some more rigorous work on this.


I can't find anything in those links about their survey being tested beyond 12th grade -- have they done that yet? It would be interesting to see if the same survey is effective in a college setting, especially one where classes can exceed 300 students.


> Students are notoriously bad at rating their professors

I wonder whether this was because they weren't asked the right questions.

The Air Force survey [1] seemed to have more broad, abstract questions like "The instructor's effectiveness in facilitating my learning in the course was good/bad/etc." or "Value of questions and problems raised by instructor was good/bad/etc." or "Amount you learned in the course was lots/little/etc."

The Gates survey [2] seemed to have more fine-grained, specific questions like "My teacher knows when the class understands." and "My classmates behave the way the teacher wants them to." and "The comments I get help me know how to improve."

[1] http://www.economics.harvard.edu/faculty/staiger/files/carre...

[2] http://www.metproject.org/downloads/Asking_Students_Practiti...


I am all for edumetrics, but there doesn't seem to be a way to get a good signal on a general "teaching skills" metric. Does such a metric even make sense? I would assume that a proper metric for a teacher would be a dict like {"cares": "5*", "energy": 4, "perceptibility": 3, "subject_knowledge": 5, "general_knowledge": 3}.

Furthermore, I am not sure how school boards and schools will use the metrics. Should you fire a teacher because some fit to the data decided that you are a bad teacher? No way! Anyone who is willing to put in the energy and spend time with kids teaching them stuff should continue to do it. Metrics for self-assessment, YES; but metrics for firing teachers, NO.

Also, the whole idea of "Value added" score has been called bullshit upon here https://news.ycombinator.com/item?id=5059737 --> http://garyrubinstein.teachforus.org/2013/01/09/the-50-milli... . [ quote: ... the correlation is so low that I, and many others who have created similar graphs, concluded that this kind of measurement is far from ready to be used for high-stakes purposes like determining salaries and for laying off senior teachers who are below average on this metric. ]

The author basically says that there is no correlation of the "value added" metric that a teacher brings from year to year.

This lack of correlation is masked in the report "Measures of Effective Teaching" because "they averaged the predicted and actual scores in five percentile groups. In doing this, they mask a lot of the variability that happens" to make it look as if "value added" is a good stable metric.


The author uses a bad graph to convince the reader that there is no correlation, even when (by his own admission, see the comments) one is present.

The year to year correlation is 0.3. The correlation across percentile groups is much higher because that increases the sample size and thereby reduces statistical noise.

The conclusion we can draw here is that measuring the performance of a single teacher based on a single class will yield a very large confidence interval. That's not the same as "bullshit".
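
For intuition on why grouping into percentile bins inflates the apparent correlation, here is a minimal simulation sketch (a toy shared-effect model of my own, not the MET methodology): the teacher-level year-to-year correlation is set to about 0.3, yet the correlation between quintile-group averages comes out far higher.

    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 10_000, 0.3                      # teachers, assumed year-to-year correlation
    effect = rng.normal(size=n)             # latent teacher effect
    noise = np.sqrt(1 / r - 1)              # chosen so corr(year1, year2) is about r
    year1 = effect + rng.normal(scale=noise, size=n)
    year2 = effect + rng.normal(scale=noise, size=n)
    print(np.corrcoef(year1, year2)[0, 1])  # roughly 0.3

    # Average the year-2 scores within quintiles of the year-1 scores:
    quintile = np.digitize(year1, np.quantile(year1, [0.2, 0.4, 0.6, 0.8]))
    m1 = [year1[quintile == q].mean() for q in range(5)]
    m2 = [year2[quintile == q].mean() for q in range(5)]
    print(np.corrcoef(m1, m2)[0, 1])        # much closer to 1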


If you want to understand this issue, please take the time to look at the actual MET project report as well as other well-done research. Gary Rubinstein's analysis seems very misleading (see the comments on the prior HN story). Apparently it is very controversial to even try to measure teacher effectiveness.


The author explicitly states there is a 24% correlation, and then says there is no correlation. Can he or she not make up their mind?


US News's rankings are based upon metrics like acceptance rate, retention rate, yield rate, charitable donations, faculty-student ratio, endowment, etc. These rankings don't predict the "best" colleges, but rather the most prestigious.


curious, how does faculty-student ratio relate to prestige?


It's not a perfect proxy for prestige, but cash-rich universities with large endowments have more professor positions than less well-off colleges.


It seems like the graduate school entrance exams (GRE, MCAT, GMAT, LSAT) would be a good indicator of undergrad performance (though not so much for engineering and non-medical science).

Are those scores available on a per undergrad-school basis?


I'm not sure about the MCAT/GMAT/LSAT, but the GRE is a pretty bad indicator. For example, everyone who goes to grad school for computer science or engineering gets a minimum of 750 on the quantitative section. The math only tests high-school-level ability.

While I don't have any links, I think that there are many studies showing that SAT/GRE type scores don't mean very much.


I remember taking the GRE: I got the 99th percentile on quantitative and logical reasoning... and something like the 69th percentile on verbal. The verbal part was basically a vocabulary test.


Off-topic, but in Chrome I got the bar at the top saying "This page is in French. Translate?"

This mis-identification of language in Chrome happens to me probably once a day, though usually when looking at code.


Well everyone knew that. It's just that nobody really has the singular clout to change the college ranking system.


One of those ideas that's blatantly obvious, but which most will refuse to consider until someone 'big' mainstreams it.

I have a hard time reconciling the various useful things the Gates foundation seems to do with the tendency towards obnoxiousness that defines Microsoft.


Making your fortune through ruthlessness and crushing the opposition, followed by generous philanthropy, is a time-honored tradition. One prototype is Andrew Carnegie.


LOL. This comment is awesome.


Obnoxiousness isn't the right word.


Says the guy who went to Harvard


And dropped out.


Implicit in this is the idea that all sectors of society want, and feel they would benefit from, this better education of everyone.

We could posit a counter-point to that assumption: the hypothetical idea that not all sectors of society think that "a rising tide lifts all boats". We could hypothesize that there are sectors of society who would be opposed to the working poor getting good educations.

But with such a non-mainstream, contrarian hypothesis being posited, we'd have to think of a reason for it. Why would some sectors of society be opposed to this? Well, perhaps they would have a desire for a "reserve army of labor" ( http://en.wikipedia.org/wiki/Reserve_army_of_labour ). Perhaps if they had a company, like say Microsoft, their company would pay dividends. The part of the money the company doesn't spend on continuing costs or re-investment would go not to wages but to stockholders. Of course, given the small amount of stock options most Microsoft employees have relative to their wages (not to mention permatemping), in game-theoretic terms it would be better for these workers, if money is to go either to their wages or to the dividend, for it to go to the workers. Perhaps for large MSFT shareholders like Gates, it would be better for the money to go to the dividends, and not to wages.

How can you stop the workers from demanding higher wages? Perhaps having a reserve army of labor, an inflated unemployment rate, etc. would help. Perhaps a worker knowing that other people as skilled, or almost as skilled, as him are lining up to try to get work at MSFT for the wages he is getting, and are being rejected in interviews, keeps him happy with his wage.

Of course this is all just wild, non-mainstream, out there conjecture. Obviously the world's richest billionaires like Bill Gates only have feelings of benevolence, and aligned interests with the rest of us. You can see how lauded he is for his charity and such in the press.


Not that interesting an article...


"Bill Gates, the world’s most generous and influential philanthropist." Good thing this writer is unbiased.


This is probably objectively true.


Probably not for useful definitions of "generous," which should take into account the actual utility (in money or time) given away. Someone who donates half of their life savings of 50k is (I would say, at least) more generous than someone who donates 90% of 1 billion.


I would not define someone who gave 25k as more generous than someone who gave 900million, no.


Would you define someone who gave away until they only had 100million remaining as more or less generous than someone who gave away until they only had 25k remaining?

How about if the person with 50k gave away 25k, but the billionaire gave away double that? Who is more generous in that case?

It seems fairly plain to me that utility of what was given away is by far the most important factor in determining generosity.


I see your point, but I simply disagree. The utility to the person is much different, but after that, money is money and 900m provides much more utility to starving kids, etc than 25k does, no matter how you slice it.


I would say the person with 50k is making a larger personal sacrifice, but is not being "more generous" objectively.


I know the comment you're responding to wasn't that great, but this really isn't "objectively true." He's done a lot of good for sure, but to simply accept his contributions as "the most generous and influential" because of his fame and the money behind it, is to discredit lots of other brilliant philanthropists.

For example, wasn't there just an article on HN within the past week about Norman Borlaug, whose agricultural techniques helped prevent hundreds of millions of people from starving? That doesn't seem less significant.


Norman Borlaug is probably a greater humanitarian than Bill Gates, but I'm not aware of him doing any philanthropy. Now, you could probably come up with a metric of philanthropy where, say, Rockefeller beats him (percentage of then-existing human wealth donated?). But most of the measures that come to mind put Bill Gates on top, so while "clearly objectively true" would be going too far, "probably objectively true" is quite reasonable.


What's the difference between philanthropy and humanitarianism? Just donating wealth vs developing it? Is that really a significant thing to distinguish?


A philanthropist is someone who gives a lot of money to charitable causes, with a connotation of noble motives. A humanitarian is someone who acts on their strong desire to help their fellow humans, with a connotation of successfully helping many people. These have a broad area of overlap, but a person can be one without being the other.

One can be a philanthropist without being a humanitarian by, for example, donating large amounts of money to creating wildlife refuges. Charitable giving, but not to humans.

From the other direction, people like Norman Borlaug who devote their careers to helping other people are excellent examples of people who are humanitarians without necessarily being philanthropists.


Yeah, my point is there's a technical difference, but if the end effect is simply creating a benefit to humanity, it doesn't really seem important to distinguish between "philanthropist" and "humanitarian".

So saying Gates is the biggest philanthropist, I guess, is "objectively true" if you're being pedantic, and using philanthropist in the technical sense. My point was that it's not "objectively true" that he's the biggest benefactor of humanity, which is the more general-use term for "philanthropist."


Can you think of someone more generous and more influential in philanthropy?


From a pure cash standpoint, what about Warren Buffett?


But Warren Buffett gave his money to Bill Gates to spend, and may have been goaded into it by Bill Gates, so influence is hard to compare there.


I know this writer, and she's about as "objective" as any journalist can be.


Just curious: who do you think is more generous?


I'm not disagreeing with the statement. It's just that it's rather shoddy journalism. Most influential is a matter of opinion and needs to be justified. Most generous is plausible, but the metric for generosity needs definition.


I think that's a fair reading of it, and I'm not sure why you've been downvoted so.

An alternate reading could be that the statement is meant to be obviously opinion; one school of thought on writing is that it should be obvious which statements are fact and which are opinion by the very nature of the statement, and it is superfluous to preface them with phrases like "in my opinion".



