Hacker News
SAT: Getting the lowest score possible (colinfahey.com)
109 points by solipsist 1847 days ago | 90 comments

Spoilers (it's very long): he got one question correct, sadly.

I feel for him given the preparation and detail he put into it. I missed a few SAT questions my senior year but helpfully, The College Board decided to make the test easier the following year and magically recalibrate my scores to perfect. It was too late for colleges to care and the expected thousand girlfriends never materialized but it did make me feel warm and fuzzy inside.

I didn't get the recalibration :-( One off perfect forever, same as my LSAT.

I honestly don't know whether to be impressed or frightened by the level of obsessive attention to detail on display here.

I visited some of the articles from his homepage. This guy is an all-around genius! He writes every page on his website in this manner, in so many different fields. Bookmarked.

I hope no one will look only at his SAT piece.

This is possibly the funniest thing I have read this year. As an aside, the amount of thought and focus this guy obviously can devote to a single topic is amazing. I really hope someone is paying him lots of money (or whatever he prefers) to use his talents for good.

It was funny, but in a somewhat disturbing, manic way. I skimmed much of it, but the reference to Weekly World News caught my eye, and I still can't figure out the point of the tangent even after going back and thoroughly reading most of the article.

Section 8.3.10 starts with possible numeric encodings for answers, but then jumps into a philosophical question about admission tests for religion and ends with the reference to WWN and the headline: "10 MORE COMMANDMENTS FOUND! YOU WON'T BELIEVE WHAT THEY SAY!"

A lot of the SAT-apologist comments here missed one of the most amazing parts of the post:

> "The correlation between [...] combined verbal and math scores and freshman GPA is .52;"

.52! And it's pulled straight from the College Board's Terms and Conditions! And later on, it goes on to explain that high school GPA's correlation is just .54! The graph he produced to visualize the scatter involved with a .52 correlation is both hilarious and horrifying.
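For intuition about how noisy a .52 correlation actually is, here's a quick simulation sketch (hypothetical data built with the standard bivariate-normal trick; this is not the College Board's dataset, just illustrative numbers):

```python
import math
import random

random.seed(1)
r = 0.52  # the reported score-vs-freshman-GPA correlation

# Generate pairs (x, y) with true correlation r:
# y = r*x + sqrt(1 - r^2)*noise, where x and noise are standard normal.
n = 100_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [r * x + math.sqrt(1 - r * r) * random.gauss(0, 1) for x in xs]

# Sample Pearson correlation, computed by hand.
mx = sum(xs) / n
my = sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
print(round(cov / (sx * sy), 2))  # ~0.52

# r^2 is the fraction of GPA variance "explained" by the score:
print(round(r * r, 2))  # 0.27 -- nearly three quarters is left unexplained
```

Plot those pairs and you get exactly the kind of hilarious/horrifying scatter the article shows: a visible trend, drowning in noise.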

Ideally, shouldn't there be weak or no correlation between score and college grades, when your sample is all SAT takers? My reasoning (not entirely non-serious) is that people who score high on the SAT get into hard schools and so get their GPA smacked down to average, whereas those with poor SAT scores get into easy schools where they can earn an average GPA.

Harder schools don't necessarily have harder grading. I taught at both Rutgers and NYU - if anything, Rutgers was tougher than NYU.

At Rutgers we failed people regularly. At NYU this would raise red flags and generate huge numbers of complaints (from students and parents).

I thought Rutgers was actually a harder/better school than NYU, at least for engineering. Rutgers is obviously better for film studies.

The problem with that line of reasoning is that GPA is a relative measure - your GPA tells people (theoretically) how you stack up against your classmates - the concept of grading classes on a "curve." Furthermore, there is a correlation between grade inflation and both the selectivity of a school and whether a school is private, with private schools exhibiting more grade inflation (http://gradeinflation.com).

I admit, I didn't read the whole thing, it was super long.

I personally think the ACT (and by association the SAT) is a better measure of intelligence than a GPA, and I would love it if companies used that more often as a filtering metric. Obviously, grades in general aren't the best metric, but they're simple and generally pretty reliable.

If I were hiring somebody, I'd look at a combination of test scores and personal achievements. If I were hiring a programmer, I'd filter by test scores, then look at examples of projects the applicant works on/is associated with. Grades aren't as important in my opinion.

If that's the premise of the post, I totally agree. If not, I'm not willing to read a forever long post that forces me to relive the horror of the testing I went through before I got into college.

You're looking for the canned tokenadult post which summarizes as follows:

[paraphrased] "Decades of research have established that the most effective indicators of whether someone will be a good hire are (on the one hand) a general intelligence test, and (on the other hand) a work sample. Outside the US, the use of intelligence tests in hiring is common; within the US, it subjects the hiring agent to legal difficulties. If you are hiring outside the US, use both. If you are hiring inside the US, use work samples."

Actually, effective hiring procedures of either kind are rare everywhere, but, yes, that is otherwise a fair summary of my FAQ post on company hiring procedures, which I too thought fits in with several of the comments in this thread. Regrettably, most companies miss out on opportunities to use the best available hiring procedures, preferring the traditional methods to methods validated by research.

> the ACT (and by association the SAT)

As someone who nearly aced both, the ACT is a noticeably better test. The SAT has a lot of dumb and predictable tricks involved. The ACT requires more actual ability.

Anything involving a test will simply measure one's skill at taking tests instead of what the test is actually about. If this wasn't true at some point in the past, it's certainly true now.

Furthermore, seeing as you didn't read the whole thing, you missed the part where the SAT itself explicitly states that it is intended to measure not intellect, but rather how well one would perform at a university.

This reminds me of playing Hearts on a PC. You can try really hard to get a good score, or you can get the worst score possible and win the round[1]. I didn't read the whole article, but it was pretty clear that to get the lowest possible score you have to know the correct response to every question. (A little less black & white for the essay section, but you get the gist.)

Colleges should have a lottery admission available to people who can get a perfect score on the SAT/ACT. Students would inadvertently study harder and learn proportionally more than if they were to study hard enough to get a perfect score.

[1] http://en.wikipedia.org/wiki/Hearts#Shooting_the_moon

> but it was pretty clear that to get the lowest possible score you have to know the correct response to every question.

Not exactly. To get the lowest possible score, you only need to know an incorrect response to every question. This is a very different thing, as it is not uncommon for a question to have some obviously incorrect answers.

Knowing an absolutely incorrect answer to every question is difficult to ensure. It leaves things up to chance since you cannot prepare for the exact questions you get. On the other hand, knowing all the correct answers is perfectly doable.

This is not true. For example, you know that the result of 5.123.443 x 9.999.999 can't be 42. But you don't immediately know the correct answer, which is harder.

Making the answer hard to deduce/calculate is only one side of the difficulty. The other side is making the question hard to decipher. I don't think the SAT has any questions that are as straightforward as your example, but I could be wrong.

The lottery idea is a terrible one. Missing just one question on the test is enough to drop your score from a 2400 to a 2370 in many cases. Achieving a "perfect" score is largely due to luck.

Whoops, my lottery was supposed to refer to people who can get the lowest possible ("perfect") score. Looks like I can't edit my comment now.

You don't really need to identify the correct answer; you only have to be able to find an answer that you're sure isn't correct. Of course, that's much easier if you know what the right answer is, but there's pretty commonly one obviously wrong option.

The "obviously wrong option" often turns out not to be as wrong as I first thought.

Isn't it considerably easier to get every question wrong than it would be to get a perfect score (every question right)? For each question, there is only one correct answer but four incorrect ones. If you're aiming for a perfect score, you have to choose the only one correct answer among the five choices. But if you're trying to get every question wrong, you can choose one of four different answers to get the outcome you want (much better margin for error).
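The margin for error described above is easy to quantify for a pure random guesser. A sketch (the section length is made up, and this ignores the SAT's real-life wrong-answer penalty and grid-in questions; it's just the 1-right-vs-4-wrong arithmetic):

```python
from fractions import Fraction

n_questions = 50            # hypothetical section length
p_right = Fraction(1, 5)    # one correct choice out of five
p_wrong = Fraction(4, 5)    # four acceptable choices if you WANT to be wrong

# Probability that a blind guesser goes perfect vs. perfectly wrong:
p_all_right = p_right ** n_questions
p_all_wrong = p_wrong ** n_questions

print(float(p_all_right))   # ~1.1e-35
print(float(p_all_wrong))   # ~1.4e-05
```

So by luck alone, all-wrong is about thirty orders of magnitude more likely than all-right, which matches the intuition: per question you have a 4-in-5 safety net instead of a 1-in-5 bullseye.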

That would be true, but you are in fact allowed to miss a couple of questions and still get a perfect score.

That depends on the section of the test. Critical reading you can get several wrong, writing depends on your essay score, and math you typically can't get any wrong.

I made one mistake in math and ended up with a 790. I was quite annoyed with myself when I walked out knowing exactly which question I messed up, too.

Did it hurt your future prospects?

Ruined my life.

Just kidding. It would be impossible to tell what, if any, effect it had. Do I feel like it actually had any effect? Not really.

I had to laugh when reading about the Fantasy "Calculators" - included as a fantasy is a "Slide Rule" - when I took my grade 12 physics provincial finals in 1987, I didn't have a calculator with me, but that was not unexpected, so students were also provided with a booklet of log tables - a predecessor, of course, to the slide rule.

The log tables, of course, were more than sufficient for whatever complex multiplication and division that had to be done.
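The trick the tables exploit is that log(a·b) = log(a) + log(b), so a hard multiplication becomes two lookups and an easy addition. A quick sketch of the method, with `math.log10` standing in for the printed table (the operands are borrowed from the big multiplication another commenter mentions in this thread):

```python
import math

a, b = 5123443, 9999999

# "Look up" the logs. With a real table you'd look up the mantissas of
# 5.123443 and 9.999999 and keep track of the decimal point separately.
log_sum = math.log10(a) + math.log10(b)

# "Anti-log" the sum to recover the product, to table precision.
approx = 10 ** log_sum
exact = a * b
print(approx)
print(exact)  # 51234424876557
```

With a printed 4-figure table you'd only recover about four significant digits, which was plenty for a physics final.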

These are still used frequently in high schools across the world. For example, Indian high schools following the CBSE system do not allow calculators, but log tables are permitted for some courses.

Yep, at my high school in New Zealand we were required to learn how to use log tables, even though we all had calculators. I don't know if it was a compulsory part of the curriculum though, and I wouldn't be surprised if they don't do it any longer.

Sad. Learning to use the tables helps you learn the concept of a logarithm. Way better than a calculator.

Mostly off topic, but are US schoolchildren taught to use terms like "number of degrees of arc", "measures in degrees", and "inverse logarithm" instead of just "degrees" and "exponentiation"/"exponential"? (And the use of the last one is the geometric mean, which is a much pithier explanation and concept.)

And, a correlation of .52 isn't the same as a coin toss: that would be a correlation of 0.

Yes, they do.

After 35 years teaching in Australia, my parents are now in NYC running seminars for teachers there.

They are constantly shocked at the amount of time spent in classes memorizing lexicon, dates, etc. without actually learning to apply any of it. The students can very confidently regurgitate what the "number of degrees of arc" means, but they have no idea how to apply that to anything. They'll also spit out the exact date of some war, but have no idea who fought, why, or what any outcome was.

I have never seen a teacher or student like you describe. The ones who don't know the (purported) reason for the Civil War don't know when it started either.

Isn't it amazing that at the time you apply to Harvard, Yale, and other top schools, admission officials brag about just how many perfect SAT scorers they have turned down, yet a few years down the line they turn around and claim to have sifted out the best and the brightest, gladly hinting at scores as the measure of brilliance? I even remember reading an article about Harvard adcoms pointing out how they do not want to turn their school into an École Normale Supérieure. Either they lie about the inputs or about the outputs. They can't have it both ways.

Can you paraphrase? I couldn't follow all your pronoun referents.

A lot of the comments here are related to the idea of whether or not the SAT can be regarded as being much like an IQ test. It can, and psychologists routinely think of the SAT that way. Despite a number of statements to the contrary in the various comments here, taking SAT scores as an informative correlate (proxy) of what psychologists call "general intelligence" is a procedure often found in the professional literature of psychology, with the warrant of studies specifically on that issue. Note that it is standard usage among psychologists to treat "general intelligence" as a term that basically equates with "scoring well on IQ tests and good proxies of IQ tests," which is the point of some of the comments here.


"Frey and Detterman (2004) showed that the SAT was correlated with measures of general intelligence .82 (.87 when corrected for nonlinearity)"


"Indeed, research suggests that SAT scores load highly on the first principal factor of a factor analysis of cognitive measures; a finding that strongly suggests that the SAT is g loaded (Frey & Detterman, 2004)."


"Furthermore, the SAT is largely a measure of general intelligence. Scores on the SAT correlate very highly with scores on standardized tests of intelligence, and like IQ scores, are stable across time and not easily increased through training, coaching or practice."


"Numeracy’s effects can be examined when controlling for other proxies of general intelligence (e.g., SAT scores; Stanovich & West, 2008)."

As I have heard the issue discussed in the local "journal club" I participate in with professors and graduate students of psychology who focus on human behavioral genetics (including the genetics of IQ), one thing that makes the SAT a very good proxy of general intelligence is that its item content is disclosed (in released previous tests that can be used as practice tests). So almost the only difference in performance between one test-taker and another is generally and consistently getting the various items correct, which certainly takes cognitive strengths.

Psychologist Keith R. Stanovich makes the interesting point that there are very strong correlations with IQ scores and SAT scores with some of what everyone regards as "smart" behavior (and which psychologists by convention call "general intelligence") while there are still other kinds of tests that plainly have indisputable right answers that high-IQ people are able to muff. Thus Stanovich distinguishes "intelligence" (essentially, IQ) from "rationality" (making correct decisions that overcome human cognitive biases) as distinct aspects of human cognition. He has a whole book on the subject, What Intelligence Tests Miss, that is quite thought-provoking and informative.


(Disclosure: I enjoy this kind of research discussion partly because I am acquainted with one large group of high-IQ young people and am interested in how such young people develop over the course of life.)

How do you reconcile the large leaps in SAT scores that people achieve through preparing for the test? For example, while I had a fairly respectable score in high school, after I spent a little time working as an SAT tutor in college, I was regularly scoring perfectly on all practice tests and recently released new tests. Surely my IQ hadn't jumped drastically. Just curious for your take on this.

Your IQ could see a similar jump if you spent a lot of time practicing for and taking IQ tests.

IQ is a psychometric measure of a construct, intelligence, so when you increase your IQ by getting better at taking the test, you are not actually increasing intelligence, just influencing the measure of it.

According to studies not performed by a company selling test prep, there is no large leap in SAT score.


Yet practicing old tests clearly does increase scores, and tokenadult's comment above did not account for that. He assumed that everyone practices and so the measurements are unbiased, or that practice is highly correlated with IQ (which may be true).

Correlations are aggregate data measures that are only meaningful on large datasets. A high score on the SAT for one person does not imply that that person is smart. It just means that if you had a large population of people and you wanted to predict their intelligence, you could use SAT scores and get pretty good results.

You don't reconcile. Tests of any kind have this problem as intelligence is not quantifiable, only knowledge and age. The GP is merely showing that the SATs are referenced as IQ tests. Since the same argument applies to IQ tests, even in this respect they are similar.

What I find interesting is that I achieved a massive leap in my SAT scores without studying. As a disclaimer, I grew up in a lower-income area with terrible college prep and neither of my parents knew the college admission process. In short, I didn't actually know that you could study for the SAT. So the first time I took it, I did so completely blind and got an 1800. I took the test, again without studying, two months later and got a 2100. Given that both times I took the exam I was neither stressed out, tired, or even the slightest bit prepared, I'm not sure how to reconcile the 300 point leap.

Just a comment, because I know you get some stick when you bring this material up.

I do appreciate it, and think it really does need to be drummed into people, particularly the material on job performance indicators (IQ and work samples, everyone). Also, you should be consulting on this if you aren't already.


West Germany and East Germany, North Korea and South Korea, China and Taiwan

These nations exhibit a strong difference in adult height as well. The reason is well known to be mass childhood malnutrition in command economies.

Ethical studies cannot reproduce the effect because malnourishing children is not ethical. Relationships between environment and heritable factors in human development are heavily dependent on context and cannot be objective by definition. When that context includes an overwhelming factor like childhood malnutrition or childhood lead exposure, the results will be extreme. The usual randomized controlled trials or twin studies or genetic marker studies cannot adequately deal with that kind of effect and are not really intended to.

Modern academic studies of IQ seem to refer to populations of well fed, healthy, well cared-for, children raised with free education according to a uniform curriculum in free liberal nations. That is very much a formula for shrinking environmental effects by shrinking environmental variance. It's the reverse of a twin study; you make the environment uniform so all the difference in outcome must be a result of heritable factors. Such studies often indicate that g is 60% heritable.

If you threw in some lead poisoning -- as was near universal in the 1950-1975 generation -- or childhood malnutrition -- very common before the twentieth century everywhere -- you would get a very different result. That's not a defect; it's built into the nature of these studies.

I am from Germany and you are the first person to tell me about malnutrition in the GDR. After a quick Google search I call it BS. Yes, the GDR was a pretty shitty state, but socialism per se does not eat babies.

Malnutrition is not a simple binary condition. While East Germany was one of the wealthiest areas of the Eastern Bloc, and economically better off than many nations today, the average diet was lacking by contemporary Western standards.

Don't forget this was the middle of the Green Revolution, but the USSR was still slightly behind the curve. http://en.wikipedia.org/wiki/Green_Revolution

Has the SAT been found to have a disparate impact on protected minorities, like other "IQ" tests have been? I know employers usually consider any sort of "IQ test" in America to be legally risky for discrimination reasons; I wonder if Universities could face similar issues.

A lot of those things, particularly the reading comprehension sections, seem like they could be heavily influenced by culture.

There have been a lot of accusations of that over the years, which was part of the reason why the analogies section was dropped. Here's an article from 2003 mentioning probably the most infamous SAT question in this regard ("runner : marathon :: regatta : oarsman" -- the issue of course being that wealthy students are more likely to know what a regatta is): http://articles.latimes.com/2003/jul/27/local/me-sat27

How did you get involved with SET? The people who ran that back when I was a student/study participant (20 years ago!) were amazing. I got a free college CS class when I was 12-13 out of the deal, which got me my first reliable dialup access to the Internet and UNIX/VMS shells for the next few years. It's hard to thank them enough.

Instead of points for correct answers, the result of the test should be defined as the probability of such a set of responses being achieved by chance alone. :-)
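That scoring rule is computable: under pure guessing, the number of correct answers is binomial, so you can report the tail probability of doing at least that well by chance. A sketch with made-up numbers (50 questions, 5 choices each; the real SAT's penalties and grid-ins are ignored):

```python
from math import comb

def p_at_least_k_correct(k, n=50, choices=5):
    """P(a blind guesser gets >= k of n questions right)."""
    p = 1 / choices
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Getting half of them right by guessing alone is already astronomically rare...
print(p_at_least_k_correct(25))   # ~2e-06
# ...and so is the anti-hero's feat of getting all 50 wrong:
print((4 / 5) ** 50)              # ~1.4e-05
```

Under this scheme both a perfect score and a perfect anti-score would report as vanishingly small p-values, which is arguably what the original comment is joking about.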

I've thought of doing this but never had the courage unfortunately. Maybe when I retire I'll have the time to go back and try this out.

My high school required students to take the ASVAB in addition to the ACT (SAT wasn't offered, so I had to go to a testing center for it).

I know a guy who honestly tried on the math section. He got the single point for signing his name, but missed all the questions. The first question is "2+2".

I have done this before on high school and college exams (not the SAT unfortunately), along with other similar experiments like writing answers backwards, using red pen to write my answers to muddle grading attempts, and writing essays on completely different subjects than the one assigned. The level of disbelief expressed by peers and teachers when you challenge their value system led to perhaps one of the most important lessons I learned in my time spent locked in the school system. Thinking outside the box means you are still stuck in a box.

>Scores on the SAT correlate very highly with scores on standardized tests of intelligence, and like IQ scores, are stable across time and not easily increased through training, coaching or practice.

That is interesting. How would you account for the fact that I increased my score by 130 points upon a retake without the help of a tutor? (Too bad my new score was useless since I already had a full ride.) These tests can be gamed. Heck, even IQ scores aren't stable over a lifetime, if anyone has bothered to read current research (Flynn effect, blah blah).

The Flynn effect compares generations, not changes in individuals.

Aren't generational changes just the average of individual changes, in the same way that the character of a nation emanates from that of its individual citizens?

Tangentially related, on one of my (national) highschool math finals, I decided to bring an abacus instead of a calculator. The proctor gave me a weird look, I explained I was using it as a calculator, and he gave me no problems.

Did you use it? Aren't abaci rather loud?

No, it was just for show.

While reading this, I slowly started to tune out and pressed the back button after realizing just how much material was on the page.

I realized that this must be what my friends go through when I explain stuff I'm interested in.


For anyone wondering what is meant by the "12700-choice" questions mentioned starting in section 5.1, they are the questions in the "student-produced response" format analyzed in section 8.

Colin Fahey has won the internet today in my book. Every single thing he wrote had amazing attention to detail and in many cases was charged with extra meaning. Genius.

tl;dr: he got one wrong, discussed in section 14.2.4.

The SAT is an important measure of a human's intelligence and performance in the real world; that's why the best companies always filter candidates and select the ones with the highest SAT scores.

It's a good thing we have tests like these and pay large sums of money to the people who maintain it. Otherwise interviewing might be totally screwed up and completely fail at its intended purpose in this country.

The SAT is good for measuring how good one is at taking the SAT, which in turn indicates how seriously the test taker prepared for the exam, which is an indirect indicator of the person's work ethic and willingness to put in effort toward a means to an end (yes, despite the pointlessness of this exam). It does a decent job of finding people who strive to excel within the limits of a (flawed) system, whether or not they realize the pointlessness of said structure. If an institution is looking for an indicator of such people, the SAT is an acceptable tool.

Of course, what it's truly good for is to measure how affluent the student's family is.

While I'm a different person at the age of 30 (Just finished my A.S. two days ago with a 3.7 GPA), I have to admit that I was extremely lazy as a teenager. As a matter of fact, I was almost certainly the laziest person in my class. Keeping that in mind, I achieved the second highest score on the PSAT in my school. I don't think that a standardized test as easy as the SATs is a good indicator of work ethic.

In my opinion, the completion of a college degree in combination with one's GPA add up to a pretty good, but not perfect, indicator of one's work ethic. Standardized tests on the other hand, could identify intelligence, but probably not in their current forms.

In particular from the post:

>"The purpose of the SAT is to predict how well a high school student would perform as a freshman at a college or university in the United States of America (USA)."

>"...Another implication of the purpose of the SAT is that taking preparation courses that focus on the exact, narrow content of the SAT, no more and no less, is not "morally wrong". If a person does better on the SAT as a result of specific training or coaching, then, in essence, the SAT has indicated that the person has a quality (such as initiative, motivation, or money) that correlates with good performance in a college or university in the United States of America (USA)."

Well said. Except I'd make the following corrections:

I'd add that sometimes the "end" may be pointless, though you might not see it as such while working toward it; years later, in hindsight you might realise it was the "means", i.e., the process of working toward it, i.e. the "hard work", that was the point.

Interestingly, there's no "hard work" in taking a test you do not prepare for, e.g., an IQ test.

Everyone I know with a high IQ did plenty of IQ test practice.

We basically do that. Harvard, Yale, etc, select based on SAT scores, then top employers select based on Harvard, Yale, etc.

Yes, it's clear (to me at any rate) that you're being sarcastic. But, speaking as someone who once was a SAT taker and now is in a position to hire people based on their SAT scores, your sarcasm is unwarranted and you're looking at the SAT in an incorrect light: the SAT, all things considered, does reflect something.

It's not a random number the College Board pulls out of their backside. At the end of the day, having an independent company run standardized tests across the nation in dozens of subjects (there's the SAT, and then there's the SAT II) is really a good thing (TM).

Interviews are good, but they're not great. You can't have a good interview without good metrics at hand to qualify what you're seeing. There are geniuses who fail at social interaction and will buckle under the pressure of an interview at Caltech, MIT, Harvard, Stanford, etc. or Google, Microsoft, Facebook, & co. but whom you'd still want; just as there are interviewees who will ace any verbal interview with flying colors (yes, including technical questions) whom you wouldn't.

At the end of the day: the SAT score, as it was intended to be used, is a datapoint. It's not the be-all end-all: it's an additional piece of information that should be used responsibly to arrive at a good conclusion whether or not to accept/hire an individual.

A good score on the SAT + a good score at an interview means that this person not only knows their stuff and can think on their feet; it also means they're hard-working and will do what it takes to do great (believe me, it takes MONTHS (or even years) of dedication to actually ace the PSAT or SAT). A good score on the SAT with poor interviewing means this person is a hard worker but not particularly great at thinking on his or her feet. A good interview w/ poor SAT scores indicates someone who couldn't be bothered to study for even a month for an important event in their high school career - can you trust him/her to research thoroughly before taking on a big project?

The problem is not with the SAT. The problem is with the interviewers at colleges and workplaces around the country that treat it as the Holy Grail. Don't blame the College Board for this one.

Anyway, what would you have the SAT replaced with? Don't tell me you'd have colleges accept students based solely on the grades they achieved - in the United States we have some of the highest educational disparity across the nation, and even across neighboring cities and suburbs. An A+ at one school may not even be a C at another. Teachers whose pay depends on the evaluation of their kids' academic performance are not necessarily the people you want giving them a number you'd bet the farm on. Just look at the recent Chicago teachers' strike: at the end of the day, pay and compensation aside, it rested on the fact that teachers don't want standardized tests to be used as metrics to determine the quality of their teaching, and they wanted to set the evaluations themselves.

The statistical balance of the scores on the PSAT/ACT/SAT/SATII is very well-studied and well-designed, just like with the USMLE medical board examinations and all other engineered standardized tests. There are easy questions, there are medium questions, there are hard questions, and there are flawed questions which can't be answered - before and after every examination, the questions are classified and then re-classified to avoid anomalous or unfair results.

Coming up with these questions, organizing the examinations, coming up with the statistical review methods, grading the exams, and getting the results to the universities is not cheap. I paid for all these exams when I was in high school myself (and my family was lower middle class at best), and while I then considered them to be pretty damn expensive (which is why I did my best to get the mark I needed the first time around and worked hard to make sure money was not being thrown away), in retrospect the pricing is very fair (although I hear the USMLEs are outrageously expensive, but then again, so is everything associated with the medical industry).

> ...teachers don't want standardized tests to be used as metrics to determine the quality of their teaching, and they wanted to set the evaluations themselves.

There's a very good reason for good teachers to push back against evaluations based on standardized testing, and that's the fact that they can be assigned bad students to force them out of their jobs. When two equally qualified teachers at the same school teach the same subjects, and one of them plays the administration's political games and the other does not, you may mysteriously find that the second teacher's class sizes are larger, and that they got all the known troublemakers.

Wait a second - the system is known to be corrupt and political. Therefore, we should reduce external accountability and give more control to the corrupt political actors?

That's a bit of a strawman. I never said external accountability was unnecessary.

Using standardized testing of students for teacher evaluation, without accounting for variation in students (both random and orchestrated) will not bring better teachers into public schools.

"(believe me, it takes MONTHS (or even years) of dedication to actually ace the PSAT or SAT)"

I'm disinclined to believe this statement. Would you mind taking the time to prove this for me? Also, what is considered an "ace" score?

In my experience the PSAT and SAT were tests that measured how well you can take tests and I do not consider my personal test scores to be the result of years of hard work and dedication.

The PSAT is (in my anecdotal opinion) much harder to "ace" than the SAT. NMSQT qualification was around ~215/240 for last year, that would definitely count as flying colors. I'd say anything over 205ish would be "acing" the PSAT.

I've met many highly-talented individuals who did really great on PSAT/ACT/etc, but none of whom could have achieved a 200+ on the PSAT without studying (for) the test extensively beforehand. In fact, I'd bet there isn't a single NMSQT scholar who didn't kill themselves in preparation for the PSAT.

I second what biscarch said. It is most certainly an over-generalization to imply that highly-talented individuals must have studied for the PSAT in order to have done well. In my experience, it is quite common for intelligent people to do very well (NMSC Finalists) without studying, especially because that was true for me and several of my friends.

I'd be interested to know about your habits in hs to see if there is any behavioral correlation.

ie: did you go to class? do homework? spend more time on projects that were non-school related? etc.

Counterexample: I got a National Merit scholarship. I didn't study for the PSAT at all.

Thank you.

I was an NMSQT semi-finalist, but not a finalist, so I'm fairly certain I don't qualify as an NMSQT Scholar; however, by your definition, I probably aced it. The extent of my studying (for any standardized test) was taking an SAT book from the common library in the house and placing it on the shelf in my room.

I've never put much stock in it, but if what you're saying is true, it has ramifications for my worldview in general. I guess what I'm trying to say is thanks for giving me some perspective, I have some thinking to do.

With the exception of the truly brilliant (the 0.01%), the other 99.99% of us do have to put in the better part of a year to nail a 2400 on the SAT. If you were able to do that without a lot of hard work and dedication, then I commend you on your inherent brilliance - life is going to be a lot easier for you than for the rest of us.

People who coast through high school effortlessly have a lot of trouble when life throws them problems that lack textbook answers, and they have no skills for solving those sorts of problems.

I would disagree.

I contend that those that put tremendous importance on high school assignments (ex: the overachievers) come out needing textbook style answers because a huge portion of the time, high school assignments and tests are of the "This is what it says in the textbook. The textbook is always right. Go find the sentence and repeat it here." type.

Furthermore, I doubt you can judge someone's ability to find solutions to non-textbook style problems via their performance in high school due to the fact that those sorts of problems are rarely brought up in a high school context.

Case in point: Multiple choice tests and essay questions based on your ability to cross reference the textbook.

Anecdotal evidence: I coasted through high school and have no trouble arriving at potential solutions (and gauging chances of success) when I meet problems with no discernible textbook-style solution.

Web Page: Making the ugliest one possible (Colin Fahey)
