I feel for him given the preparation and detail he put into it. I missed a few SAT questions my senior year, but helpfully, the College Board decided to make the test easier the following year and magically recalibrated my scores to perfect. It was too late for colleges to care, and the expected thousand girlfriends never materialized, but it did make me feel warm and fuzzy inside.
I hope no one will only look at his SAT.
Section 8.3.10 starts with possible numeric encodings for answers, but then jumps into a philosophical question about admission tests for religion and ends with the reference to WWN and the headline: "10 MORE COMMANDMENTS FOUND! YOU WON'T BELIEVE WHAT THEY SAY!"
> "The correlation between [...] combined verbal and math scores and freshman GPA is .52;"
.52! And it's pulled straight from the College Board's Terms and Conditions! Later on, it explains that high school GPA's correlation is just .54! The graph he produced to visualize the scatter involved with a .52 correlation is both hilarious and horrifying.
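To see how loose a .52 correlation actually is, here's a minimal sketch (simulated data, not actual College Board numbers) that draws score/GPA pairs from a bivariate normal with r = .52 and notes how little variance that "explains":

```python
import numpy as np

# Hypothetical illustration: draw (test score, freshman GPA) pairs
# from a bivariate normal with correlation 0.52.
rng = np.random.default_rng(0)
r = 0.52
n = 10_000

# Construct y so it shares exactly fraction r of x's signal.
x = rng.standard_normal(n)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)

emp_r = np.corrcoef(x, y)[0, 1]   # empirical correlation, close to 0.52

# r^2 is the share of GPA variance accounted for by the score:
# at r = .52 that is only about 27%.
print(round(emp_r, 2), round(r**2, 2))
```

Plotting x against y makes the "hilarious and horrifying" part obvious: at r = .52 the scatter is a broad cloud, and roughly 73% of the outcome variance is unexplained by the score.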
At Rutgers we failed people regularly. At NYU this would raise red flags and generate huge numbers of complaints (from students and parents).
I personally think the ACT (and by association the SAT) is a better measure of intelligence than a GPA, and I would love it if companies used that more often as a filtering metric. Obviously, grades in general aren't the best metric, but they're simple and generally pretty reliable.
If I were hiring somebody, I'd look at a combination of test scores and personal achievements. If I were hiring a programmer, I'd filter by test scores, then look at examples of projects the applicant works on/is associated with. Grades aren't as important in my opinion.
If that's the premise of the post, I totally agree. If not, I'm not willing to read a forever long post that forces me to relive the horror of the testing I went through before I got into college.
"Decades of research have established that the most effective indicators of whether someone will be a good hire are (on the one hand) a general intelligence test, and (on the other hand) a work sample. Outside the US, the use of intelligence tests in hiring is common; within the US, it subjects the hiring agent to legal difficulties. If you are hiring outside the US, use both. If you are hiring inside the US, use work samples."
which I, too, thought fit in with several of the comments in this thread. Regrettably, most companies miss out on opportunities to use the best available hiring procedures, preferring traditional methods to methods validated by research.
As someone who nearly aced both, the ACT is a noticeably better test. The SAT has a lot of dumb and predictable tricks involved. The ACT requires more actual ability.
Furthermore, seeing as you didn't read the whole thing, you would have missed the part where the SAT itself explicitly states that it is intended not to measure intellect, but rather to predict how well one would perform at a university.
Colleges should have a lottery admission available to people who can get a perfect score on the SAT/ACT. In the course of studying hard enough to get a perfect score, students would inadvertently learn proportionally more than they otherwise would.
Not exactly. To get the lowest possible score, you only need to know an incorrect response to every question. This is a very different thing, as it is not uncommon for a question to have some obviously incorrect answers.
Just kidding. It would be impossible to tell what, if any, effect it had. Do I feel like it actually had any effect? Not really.
The log tables, of course, were more than sufficient for whatever complex multiplication and division that had to be done.
And, a correlation of .52 isn't the same as a coin toss: that would be a correlation of 0.
After 35 years teaching in Australia, my parents are now in NYC running seminars for teachers there.
They are constantly shocked at the amount of time spent in classes memorizing lexicon, dates, etc. without actually learning to apply any of it. The students can very confidently regurgitate what the "number of degrees of arc" means, but they have no idea how to apply that to anything. They'll also spit out the exact date of some war, but have no idea who fought, why, or what any outcome was.
"Frey and Detterman (2004) showed that the SAT was correlated with measures of general intelligence .82 (.87 when corrected for nonlinearity)"
"Indeed, research suggests that SAT scores load highly on the first principal factor of a factor analysis of cognitive measures; a finding that strongly suggests that the SAT is g loaded (Frey & Detterman, 2004)."
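The "first principal factor" idea from Frey & Detterman can be sketched with a toy simulation (the loadings and noise levels below are made up for illustration): generate several cognitive measures that all share a common factor g, then check that the first principal component of their correlation matrix captures most of the variance and that every measure loads on it with the same sign.

```python
import numpy as np

# Hypothetical sketch: four cognitive measures, each of the form
# measure_i = loading_i * g + noise (loadings are invented).
rng = np.random.default_rng(1)
n = 5_000
g = rng.standard_normal(n)

loadings = np.array([0.9, 0.8, 0.7, 0.6])
X = np.outer(g, loadings) + 0.5 * rng.standard_normal((n, 4))

# PCA via eigendecomposition of the correlation matrix.
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)   # eigh returns eigenvalues ascending
first = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue
share = eigvals[-1] / eigvals.sum()    # variance explained by the first factor

same_sign = bool(np.all(np.sign(first) == np.sign(first[0])))
print(round(share, 2), same_sign)
```

When a battery of tests all tap the same underlying factor, the first component dominates and every test loads on it in the same direction; "the SAT is g loaded" is the claim that SAT scores sit high on that component alongside IQ-type measures.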
"Furthermore, the SAT is largely a measure of general intelligence. Scores on the SAT correlate very highly with scores on standardized tests of intelligence, and like IQ scores, are stable across time and not easily increased through training, coaching or practice."
"Numeracy’s effects can be examined when controlling for other proxies of general intelligence (e.g., SAT scores; Stanovich & West, 2008)."
As I have heard the issue discussed in the local "journal club" I participate in with professors and graduate students of psychology who focus on human behavioral genetics (including the genetics of IQ), one thing that makes the SAT a very good proxy of general intelligence is that its item content is disclosed (in released previous tests that can be used as practice tests). That means almost the only difference between one test-taker and another in SAT performance is generally and consistently getting the various items correct, which certainly takes cognitive strengths.
Psychologist Keith R. Stanovich makes the interesting point that IQ scores and SAT scores correlate very strongly with some of what everyone regards as "smart" behavior (which psychologists by convention call "general intelligence"), while there are still other kinds of tests that plainly have indisputable right answers that high-IQ people are able to muff. Thus Stanovich distinguishes "intelligence" (essentially, IQ) from "rationality" (making correct decisions that overcome human cognitive biases) as distinct aspects of human cognition. He has a whole book on the subject, What Intelligence Tests Miss, that is quite thought-provoking and informative.
(Disclosure: I enjoy this kind of research discussion partly because I am acquainted with one large group of high-IQ young people
and am interested in how such young people develop over the course of life.)
IQ is a psychometric measure of a construct, intelligence, so when you increase your IQ by getting better at taking the test, you are not actually increasing intelligence, just influencing the measure of it.
I do appreciate it, and think it really does need to be drummed into people, particularly the material on job performance indicators (IQ and work samples, everyone). Also, you should be consulting this if you aren't already.
These nations exhibit a strong difference in adult height also. The reason is well known to be mass childhood malnutrition in command political-economies.
Ethical studies cannot reproduce the effect because malnourishing children is not ethical. Relationships between environment and heritable factors in human development are heavily dependent on context and cannot be objective by definition. When that context includes an overwhelming factor like childhood malnutrition or childhood lead exposure, the results will be extreme. The usual randomized controlled trials or twin studies or genetic marker studies cannot adequately deal with that kind of effect and are not really intended to.
Modern academic studies of IQ seem to refer to populations of well fed, healthy, well cared-for, children raised with free education according to a uniform curriculum in free liberal nations. That is very much a formula for shrinking environmental effects by shrinking environmental variance. It's the reverse of a twin study; you make the environment uniform so all the difference in outcome must be a result of heritable factors. Such studies often indicate that g is 60% heritable.
If you threw in some lead poisoning -- as was near universal in the 1950-1975 generation -- or childhood malnutrition -- very common before the twentieth century everywhere -- you would get a very different result. That's not a defect; it's built into the nature of these studies.
Don't forget this was the middle of the Green Revolution, but the USSR was still slightly behind the curve. http://en.wikipedia.org/wiki/Green_Revolution
A lot of those things, particularly the reading comprehension sections, seem like they could be heavily influenced by culture.
I know a guy who honestly tried on the math section. He got the single point for signing his name, but missed all the questions. The first question is "2+2".
That is interesting. How would you account for the fact that I increased my score by 130 points upon a retake without the help of a tutor? (Too bad my new score was useless since I already had a full ride.) These tests can be gamed. Heck, even IQ scores aren't stable over a lifetime, if anyone has bothered to read current research (Flynn effect, blah blah).
I realized that this must be what my friends go through when I explain stuff I'm interested in.
It's a good thing we have tests like these and pay large sums of money to the people who maintain it. Otherwise interviewing might be totally screwed up and completely fail at its intended purpose in this country.
Of course, what it's truly good for is to measure how affluent the student's family is.
In my opinion, the completion of a college degree in combination with one's GPA add up to a pretty good, but not perfect, indicator of one's work ethic. Standardized tests on the other hand, could identify intelligence, but probably not in their current forms.
>"The purpose of the SAT is to predict how well a high school student would perform as a freshman at a college or university in the United States of America (USA)."
>"...Another implication of the purpose of the SAT is that taking preparation courses that focus on the exact, narrow content of the SAT, no more and no less, is not "morally wrong". If a person does better on the SAT as a result of specific training or coaching, then, in essence, the SAT has indicated that the person has a quality (such as initiative, motivation, or money) that correlates with good performance in a college or university in the United States of America (USA)."
Interestingly, there's no "hard work" in taking a test you do not prepare for, e.g., an IQ test.
It's not a random number the College Board pulls out of their backside. At the end of the day, having an independent company running standardized tests across the nation in dozens of subjects (there's SAT and then there's SAT II) is really a good thing (TM).
Interviews are good, but they're not great. You can't have a good interview without good metrics at hand to qualify what you're seeing. There are geniuses who fail at social interaction and will buckle under the pressure of an interview at CalTech, MIT, Harvard, Stanford, etc. or Google, Microsoft, Facebook, & co., but whom you'd still want; just as there are interviewees who will ace any verbal interview with flying colors (yes, including technical questions) whom you wouldn't.
At the end of the day: the SAT score, as it was intended to be used, is a datapoint. It's not the be-all end-all: it's an additional piece of information that should be used responsibly to arrive at a good conclusion whether or not to accept/hire an individual.
A good score on the SAT + a good score at an interview means that this person not only knows their stuff and can think on their feet, it also means they're hard-working and will do what it takes to do great (believe me, it takes MONTHS (or even years) of dedication to actually ace the PSAT or SAT). A good score on the SAT and poor interviewing means this person is a hard-worker but not particularly great at thinking on his or her feet. A good interview w/ poor SAT scores indicates someone that couldn't be bothered to study for even a month for an important event in their high school career - can you trust him/her to research thoroughly before taking on a big project?
The problem is not with the SAT. The problem is with the interviewers at colleges and workplaces around the country that treat it as the Holy Grail. Don't blame the College Board for this one.
Anyway, what would you have the SAT replaced with? Don't tell me you'd have colleges accept students based solely on the grades they achieved - in the United States we have some of the highest educational disparity across the nation, and even across neighboring cities and suburbs. An A+ at one school may not even be a C at another. Teachers whose pay depends on the evaluation of their kids' academic performance are not necessarily the people you want giving them a number you'd bet the farm on. Just look at the recent Chicago teachers' strike: at the end of the day, pay and compensation aside, it rested on the fact that teachers don't want standardized tests used as metrics to determine the quality of their teaching, and they wanted to set the evaluations themselves.
The statistical balance of the scores on the PSAT/ACT/SAT/SATII is very well-studied and well-designed, just like with the USMLE medical board examinations and all other engineered standardized tests. There are easy questions, there are medium questions, there are hard questions, and there are flawed questions which can't be answered - before and after every examination, the questions are classified and then re-classified to avoid anomalous or unfair results.
Coming up with these questions, organizing the examinations, devising the statistical review methods, grading the exams, and getting the results to the universities is not cheap. I paid for all these exams when I was in high school myself (and my family was lower middle class at best), and while I then considered them pretty damn expensive (which is why I did my best to get the mark I needed the first time around and worked hard to make sure money was not being thrown away), in retrospect the pricing is very fair (although I hear USMLEs are outrageously expensive, but then again, so is everything associated with the medical industry).
There's a very good reason for good teachers to push back against evaluations based on standardized testing, and that's the fact that they can be assigned bad students to force them out of their jobs. When two equally qualified teachers at the same school teach the same subjects, and one of them plays the administration's political games and the other does not, you may mysteriously find that the second teacher's class sizes are larger, and that they got all the known troublemakers.
Using standardized testing of students for teacher evaluation, without accounting for variation in students (both random and orchestrated) will not bring better teachers into public schools.
I'm disinclined to believe this statement. Would you mind taking the time to prove this for me? Also, what is considered an "ace" score?
In my experience the PSAT and SAT were tests that measured how well you can take tests and I do not consider my personal test scores to be the result of years of hard work and dedication.
I've met many highly-talented individuals who did really great on PSAT/ACT/etc, but none of whom could have achieved a 200+ on the PSAT without studying (for) the test extensively beforehand. In fact, I'd bet there isn't a single NMSQT scholar who didn't kill themselves in preparation for the PSAT.
ie: did you go to class? do homework? spend more time on projects that were non-school related? etc.
I was an NMSQT semi-finalist, but not a finalist, so I'm fairly certain I don't qualify as an NMSQT Scholar; however, by your definition, I probably aced it. The extent of my studying (for any standardized test) was taking an SAT book from the common library in the house and placing it on the shelf in my room.
I've never put much stock in it, but if what you're saying is true, it has ramifications for my worldview in general. I guess what I'm trying to say is thanks for giving me some perspective, I have some thinking to do.
I contend that those that put tremendous importance on high school assignments (ex: the overachievers) come out needing textbook style answers because a huge portion of the time, high school assignments and tests are of the "This is what it says in the textbook. The textbook is always right. Go find the sentence and repeat it here." type.
Furthermore, I doubt you can judge someone's ability to find solutions to non-textbook style problems via their performance in high school due to the fact that those sorts of problems are rarely brought up in a high school context.
Case in point: Multiple choice tests and essay questions based on your ability to cross reference the textbook.
Anecdotal evidence: I coasted through high school and have no trouble arriving at potential solutions (and gauging chances of success) when I meet problems with no discernible textbook-style solution.
Making the ugliest one possible