
It's really depressing to read articles like this, partly because it does seem like there's no way out when you're in the middle of it, and partly because significant parts of the American educational system are really not much better. I did not have abusive parents or extended family members, and I did not have to endure the suffering that this author did in a far-away boarding school, but I can certainly relate to the notion of school being a prison. The idea of teaching to tests is really damaging, and it's unfortunate that the past eight years of educational policy in the United States have focused on test scores alone.


Test scores are like democracy: they're the least bad option. There's simply not a better way to measure progress, or one more accurate to life.


I'm not sure I agree with this sentiment. Surely there must be a better way to show proficiency in something besides testing (in the conventional sense).

Besides, the US isn't a Democracy, it's a Republic, albeit a democratic one. The founders really hated the word and made sure it wasn't in any of the founding documents.

"Democracy: A government of the masses. Authority derived through mass meeting or any form of direct expression. Results in mobocracy. Attitude toward property is communistic negating property rights. Attitude toward law is that the will of the majority shall regulate, whether it is based upon deliberation or governed by passion, prejudice, and impulse, without restraint or regard for consequences. Results in demagogism, license, agitation, discontent, anarchy."

-1928 U.S. Army Training Manual


There are better ways, but when your objective is to be quick at the expense of accuracy, tests are the "generally accepted" method.

If you don't want to sacrifice accuracy, I think the answer lies in neuroscience :)


Interesting how the etymologies for "democracy" and "republic" are essentially the same. :-)


Which is why PG and RTM worked so hard to develop the admission test for YC applicants . . .


It's not that test scores are somehow the least bad option. It's that they're the cheapest least bad option. Actually assessing a student's comprehension would require a verbal examination.


By whom? Then you're getting into subjective territory. What if the examiner doesn't like the student or is having a bad day? Both of those have been shown to have subtle effects on that sort of thing. Even a teacher who considers herself fair will subconsciously lean toward grading students she dislikes worse than ones she likes. Or what if the student knows the material but isn't a very good talker?

Tests are considerably more objective, which is why they're the least bad option. I'll take an objective measure over a possibly superior but significantly more subjective measure any day if accuracy is the goal.


My high school measured performance through "Exhibitions" and "Gateways" (the difference was that an Exhibition was usually the culmination of a 2ish month long project, while a Gateway was the culmination of a 2 year Division, roughly equivalent to 2 grades in a conventional school).

The requirements for a Gateway were:

1.) A substantial portfolio - we usually needed 2-3 pieces of work that met the standards for advancement into the next Division, in each skill area (there were like 6 core skill areas).

2.) An oral presentation of one of those pieces of work in front of a panel of judges, similar to a thesis defense.

The panel would have 4-5 people and would usually consist of one's teachers in the subject area (all classes were team-taught, so we had 2 teachers), a teacher who was not one of ours, a classmate, and an outside community member.

The oldest graduates of my high school are only 4 years out of college now, so it's hard to judge how successful this is. But consider that the first two graduating classes were 32 and 48 students, respectively. Of those, I'm at Google. One engineering-ish friend is at Intuit. Two classmates are at Harvard Law, one of whom spent a couple years at Bain beforehand. There're a bunch that have gone into teaching, and several that have started blue-collar businesses (the school was in central Massachusetts, which has much less of a knowledge economy than the immediate Boston area).


I'm not sure...in your parent post, you mentioned being "accurate to life" as an important measure. While subjective, verbal examinations (and all the possible unfairness they entail) can be significantly more true to life (in the real-world, post-school sense) than the multiple-choice/short-answer tests that are common today.

"Objective" tests are best for only the very few things that can be "objectively" measured. Often, they succeed only in rewarding mediocrity and parroting.


How does a math test, or a history test, or an SAT reward mediocrity or parroting?


You parrot the practice tests.


That's not a good strategy since the questions are different.


I boosted my SAT scores 100 points with practice tests, by getting a sense of how the questions were written.


Does the phrase "plug and chug" mean anything to you?


There are better options. For example:

1. A conversation.

2. Real-world problems.


Conversations aren't scalable, and they're subject to the biases of the grader, so they're not particularly useful either. And how do you test someone's knowledge of history with real-world problems? Subjects like English, Physics, and Chemistry already put real-world problems on the test.


The problem is that we have testing all the way up the ladder. You can't implement a better option without hurting students when it comes time to take SATs or go to another school. For a better method to be developed you would have to somehow implement it across the board, which is probably not going to happen.


"Test scores are like democracy, they're the least bad option. There's simply not a better way to measure progress, or one more accurate to life."

There are so many different types of tests that your statement is essentially meaningless.


The OP was talking about teaching to tests, which generally means 12th grade proficiencies. No Child Left Behind has made that common practice.

The proficiencies are designed, more than anything, to test the quality of a given school's education and measure its progress. School districts' federal (and often state) funding, among other things, is determined by their results on those tests.

My statement was that there isn't really a better metric. Does that explain it better?


"My statement was that there isn't really a better metric."

How about other types of standardized tests that are non multiple choice?


Where did anyone say anything about multiple choice? Math tests, for instance, are often not multiple choice, yet they're still totally objective.



