As the father of a five-year-old, two things puzzled me here:
1) "In the United States, students begin Grade 1 at the age of 5 or 6." Say what? In our part of Michigan, kids begin kindergarten at 5 or 6, leaning towards 6.
2) I see people here complaining about the math on the test, but what boggles me is the reading! Are beginning first graders really expected to be able to handle word problems? Maybe it's just been too long since I was there myself, but I thought we were just learning to read "See Spot run" at that point in school. For sure it seems like an insane amount of reading skill to ask of a five-year-old.
People are put off by the jargon. The jargon in the test will have been introduced in class by the teacher. For example, "number sentence" is "equation". I support that one. It gets kids understanding the equal sign, probably the most important symbol of mathematics, and making the connection between story problems (in English sentences) and their mathematical interpretation (in equations).
And the idea of a "related subtraction sentence" is good too. The related subtraction sentences for 4 + 3 = 7 are 7 - 3 = 4 and 7 - 4 = 3. That's one way to teach subtraction, as the inverse of addition. Understanding it this way helps to remember the subtraction tables.
The problem is that the test itself is rubbish. (My third-grader has been through this curriculum, in CA, but not with this test.) The worst one is question 12, which asks for a subtraction sentence, but none of the choices are subtraction. D'oh!
For what it's worth, I wasn't trying to excuse the math or the jargon, which is terrible. But at least simple addition and subtraction problems sound like something a beginning first grader might be able to handle, particularly after a month of studying the jargon. I could probably teach my son to do that in an afternoon or two.
That level of reading, though, seems completely insane to expect one month into first grade!
I'm curious, you say your child has been through this curriculum. Are you saying your child was in a school using the Pearson curriculum or that your child's school used a curriculum based on common core?
Common core curriculum (in CA, not NY, but they seem to be similar), but not Pearson tests.
Her tests and homework were reasonable.
I do have a problem with the attention her district pays to testing, but not really with the curriculum per se. The testing does give teachers less latitude with what subjects they teach, and when they teach them. They have to cover certain concepts by the time the test is taken, which in her schools is March or April. This means they have to re-order the curriculum to cover the to-be-tested subjects in time for the test.
> Common core curriculum (in CA, not NY, but they seem to be similar), but not Pearson tests.
Yeah, the whole point is that if both states are implementing common core they should be the same. If you look at the NY Common Core standard, it's a word-for-word match with what is posted on achievethecore.org and commoncore.org. Where the differences lie is in the curriculums that each state, and even each school district within a state, chooses to buy, and then in how the schools implement them. I'm seeing some schools be extremely strict and enforce a scripted curriculum on teachers (such as that provided by Success for All), while other schools allow teachers leeway to teach how they feel is best for their class each year, so long as it's consistent across the grade level (all teachers teaching the same concepts at the same times).
> I do have a problem with the attention her district pays to testing, but not really with the curriculum per se.
Has California, or specifically your school district, implemented performance based pay or bonuses? That's in a trial run here and there have already been complaints about increases in test focus at the schools in the trial. A big part of the performance evaluation is based on test scores.
edit: I should note that there's the common core standards and then common core curriculums. The curriculums should be compliant with the standards but aren't the same thing. So NY and CA should both be adhering to the same standard while they are free to use different curriculum (and almost certainly do from district to district).
Performance-based pay has been proposed and heartily resisted by teachers in LAUSD (Los Angeles). This article (http://www.huffingtonpost.com/2013/01/21/lausd-teacher-evalu...) says the union (UTLA) voted in January to allow performance-based evaluation, but I don't think this is linked to pay yet.
The thing I just learned is that LAUSD has been forced to (eventually) link evaluations to pay due to something called the Stull Act. And it looks like that is a state-wide ruling that will eventually force all CA districts to do this.
It's quite a situation. The outgoing mayor had made reform (a loaded word) of LAUSD a priority, but his candidates failed to dominate the school board. The charters and testing have the union grumpy. There were layoffs after the recession and several idealistic and highly-motivated LAUSD teachers I know were forced out under the union's last-in, first-out policy. They are now at charters and private schools.
I'm actually pretty doubtful about the precision of these test scores; I think they are noisier than most administrators seem to realize. And once you link pay to the scores, things are going to go downhill fast.
Good point, it does seem like a lot of reading. Personally I remember being able to read around that level in 1st grade, but I'm pretty sure I was a huge exception. I doubt most kids have reached a high enough level of reading ability to understand this crap. And I should point out that children do not naturally acquire the ability to read; it's a purely _learned skill_, so it's kind of like asking them to be extremely good at one thing in order to do something only tangentially related.
> Are beginning first graders really expected to be able to handle word problems?
My wife (a 2nd grade teacher) thought that the test might have been for second grade when she saw it, but you are correct in thinking the language is not appropriate for the beginning of first grade.
1) For the 2013-14 school year, NYS children need to be 5 by December. The exact date in December varies from city to city. So (for NYS at least) the kindergarten starting age is 4 or 5.
2) Yes, it is that intense. I am glad this is getting some publicity. It is especially rough for students with special needs who are often held to the same standards.
> It is especially rough for students with special needs who are often held to the same standards.
This is an issue with the law. A student in a mainstream classroom who is behind for any reason, be it disability, language barrier, etc., must be provided with grade-level instruction and materials. If they have an individualized education plan (IEP) they'll also typically get individualized instruction from supplemental sources and/or their teachers. If they are just behind, it's really up to the teacher to present the student with materials that they can handle and to find ways to help the kids close the gap. The best teachers will do this all the time, decent teachers will do it some, and crappy teachers won't do it at all. You can probably imagine that schools will have all three types. As far as grading and evaluation go, the teachers don't have a choice but to hold all the kids to the grade-level standards, again due to the legal issues.
The test becomes much simpler after realizing that it covers simple addition and subtraction and that it was written by someone who failed logic 101 and doesn't know who Edward Tufte is.
When a question doesn't make sense, visual positioning of the elements offers clues, and in the logically incorrect questions, there are simple assumptions (only two red in question 11, only pb or only jelly in question 2) that make the question a simple addition or subtraction problem.
I'm not saying the test is good for first graders, but I think complaints that problems are wrong or confusing miss the point of the test. It is not a logic test.
I'm okay with questions 2 and 11. Admittedly #2 could be made more explicit by adding the word "only" after "have", but I think in the absence of the word "some" it's a valid inference to conclude the writer meant "only". Same thing with #11: it could have been made more explicit (in two ways!) by saying "only 2 are entirely red" and asking how many are not entirely red. But again, I think in the absence of a phrase like "at least two of them are partially red", inferring "only two are entirely red" is valid. I also think the majority would have no problem making those inferences -- the person whose test was uploaded didn't have a problem (they just failed with subtraction on #11).
I think I get #5. The person wrote a 9 when they should have written a 4.
#6 is fine, but then it makes #12 not make sense. Shouldn't a subtraction sentence have a subtraction symbol in it? Or is a "related subtraction sentence" an actual thing, not just asking what the subtraction sentence is in relation to the picture?
#7 is fine.
#8 introduces the "number sentence". (What's wrong with the word "equation"?) But aren't all of the examples "subtraction sentences"? Bah! Apart from that it's fine...
"but I think in the absence of the word "some" then it's a valid inference to conclude the writer meant "only""
The first two years of critical thinking were dedicated to trying to drive that type of logic out of our heads. "Six jars have jelly" in no way implies they don't also have peanut butter; it simply makes explicit that six jars have jelly. "The rest have peanut butter" doesn't imply the rest don't have jelly; it simply makes explicit that there are at least two jars with peanut butter. That entire sentence could apply perfectly to 8 jars of peanut butter/jelly. By suggesting it does not, you are setting in place logical frameworks in a child's head that become very difficult to remove - even if it's clear they are invalid frameworks.
When you say, "I also think the majority would have no problem making those inferences" - you are just making my point: the majority of people come to conclusions that are in no way supported by the text, and, even worse, when you point out their error, they instead suggest an inference has been made in the text when it very clearly has not.
"Some of the Apples in the barrel" in no way suggests not all of the Apples are rotten. "A few people in the crowd acted out against the police" doesn't imply they didn't all act against the police, etc...
The person whose test was uploaded is a six year old who is doing their level best to come up with the "right" answer. In ten years, they are the same type of student who, on a chemistry exam, when provided with the question "What particle in an atom has the least negative charge", will answer "Electron".
I wonder how many years of probabilistic thinking it would take to drive it back into your head?
What's your prior for p1 = P(jars contain jelly and something else | you're told "six jars have jelly"), and your prior for p2 = P(jars contain only jelly | you're told "six jars have jelly")? Are they the same for you? For me, they're quite different: p1 << p2. Why is that? As a first guess, I'm very unused to seeing jars containing jelly plus other crap, even though I have seen those strange peanut butter + jelly combo jars. (http://www.americansweets.co.uk/ekmps/shops/statesidecandy/i...) Furthermore I'm unused to being told "a jar contains X" when a jar in fact contains "X and Y (and Z and ...)".
The Majority use probabilistic reasoning (even if they often suck at it) in every day life. Rational agents (which humans approximate) use probabilistic reasoning, see Cox's Theorem and the work of others. This is (part of) why calling out a logical fallacy in an argument is often a mistake -- a lot of arguments aren't small (or large) trades of deductive proofs, they're probabilistic arguments where each side is trying to update the other side with supposedly new information. Many logical fallacies can turn into probabilistic theorems. For example, "arguments from authority" can become valid in a probabilistic framework. How? While [expert said X is true] --> [X is true] is a fallacious deduction (unless you assume as a premise that whatever expert says is true, then it's just a tautology...), probabilistically our everyday experience suggests P(x is true | expert said x is true) > P(x is true | random nobody said x is true). We can find plenty of cases where experts are wrong, but in general that pattern seems to hold, and we're more willing to accept counter-intuitive truths from experts than from people on the street.
To address one of your examples: [We're informed 'a few people in the crowd acted out against the police'] --> [Not all of them acted out] is indeed an invalid logical deduction without more premises, but let's treat this probabilistically. (Fair warning: I tried my best to make sure there are no errors in my math but I've been up all night writing signal processing code...)
Let x="Not all of the crowd acted out",
let y="We're informed 'a few people in the crowd acted out against the police'",
and let c=our general background context="We know an incident occurred with the crowd and the police".
We have a prior belief about x: P(x | c) = P("Not all of the crowd acted out" | "we know an incident occurred").
We receive a new piece of information, y: "We're informed 'a few people in the crowd acted out against the police'". Now let's perform a Bayesian update:
P(x | y and c) = P(x | c) * P(y | x and c) / P(y | c)
Clearly if P(y | x and c) > P(y | c), then our updated belief about "Not all of the crowd acted out" will go up, and the inference [y] --> [x] is valid. (You would interpret the --> not as "therefore" but "increases our confidence in". [Being informed the phrase 'a few people acted out'] increases our confidence in ['not all people acted out'].)
I think it's clear that P(y | x and c) is in fact greater than P(y | c). If you don't, then continue with this next section. Using marginalization we can break down P(y | c) independent of x, then use the product rule to break it down even further: P(y | c) = P(y and x | c) + P(y and not(x) | c) = P(y | x and c) * P(x | c) + P(y | not(x) and c) * P(not(x) | c)
So, is P(y | x and c) >? P(y | x and c) * P(x | c) + P(y | not(x) and c) * P(not(x) | c), subtract and factor:
P(y | x and c) * (1 - P(x | c)) >? P(y | not(x) and c) * P(not(x) | c), by the sum rule:
P(y | x and c) * P(not(x) | c) >? P(y | not(x) and c) * P(not(x) | c), divide both sides by P(not(x) | c):
P(y | x and c) >? P(y | not(x) and c)
To remind you what all the variables are again (temporarily dropping the c for brevity):
P("told 'a few' acted out" | "not all acted out") >? P("told 'a few' acted out" | "all acted out!")
I don't know about you, but I think it's very unlikely to be told 'a few' acted out if in reality everyone acted out. QED.~
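If it helps, here's a quick numeric sanity check of that update. The exact values are made-up assumptions on my part; only the direction of the update matters:

    # Made-up (assumed) numbers; only the direction of the update matters.
    p_x = 0.90               # prior P(x|c): most incidents don't involve the whole crowd
    p_y_given_x = 0.60       # assumed: likely to be told "a few acted out" if not all did
    p_y_given_not_x = 0.05   # assumed: unlikely phrasing if everyone acted out

    p_y = p_y_given_x * p_x + p_y_given_not_x * (1 - p_x)  # marginalization over x
    p_x_given_y = p_y_given_x * p_x / p_y                  # Bayes' rule
    print(p_x_given_y)  # ~0.991 > 0.90: the report raised our confidence in x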
To put my whole comment another way: try writing an AI and sic it on this test. If it doesn't use probabilistic algorithms and instead insists, when it comes to question #2, that "I cannot know the answer other than it's between 0 and 8, the question isn't explicit enough!", it's going to perform very poorly. Human language is not crisp; treating it as if it were is a bad idea. That's why we use crisp math when we want to be crisp, and if we can do automated proofs, so much the better.
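For instance, here's a toy version of that probabilistic test-taker for question #2. The prior p_mixed is a number I invented; the point is that any small prior on combo jars makes "2" the overwhelming favorite:

    from math import comb

    p_mixed = 0.05          # assumed prior that a "jelly" jar is secretly a combo jar
    n_jelly, n_rest = 6, 2  # six jars have jelly, the rest have peanut butter

    # Posterior over the number of peanut butter jars: the 2 "rest" jars for
    # sure, plus a Binomial(6, p_mixed) count of combo jars among the jelly six.
    posterior = {n_rest + k: comb(n_jelly, k) * p_mixed**k * (1 - p_mixed)**(n_jelly - k)
                 for k in range(n_jelly + 1)}

    best = max(posterior, key=posterior.get)
    print(best, round(posterior[best], 3))  # 2 0.735 -- the intended answer dominates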
On your last remark, I find it amusing that if I take your "it's not explicit enough!" complaint seriously, I'm not sure whether you think the correct answer is supposed to be "proton" or "neutron" or "electron". That's because I'm not sure whether you mean "least negative" to mean "furthest to the right from negative epsilon (i.e. 'most positive')", implying the proton, or "smallest member of the negative reals excluding 0", implying the electron, or "smallest member of the negative reals and 0", implying the neutron. If I wrote the question I think I would have meant "proton" as the answer. Given that the answer is proton, then, the error the student most likely made is just a parse error: failing to properly change "least negative" into "most positive". Similar to answering "yes" to the question "the sky is not not green?", or asserting -3 * -3 = -9. It's a different sort of trick on the student than the one you insist question #2 could be playing. In #2, the trick would be "Aha, but we never said the jars were entirely jelly! Don't insert words that aren't there!" In this charge question, the trick is "Aha, you didn't parse 'least negative' into 'most positive'! Read carefully!"
I don't disagree with you that, if we have to make a "best guess", we need to bring in our day-to-day context - but using the one word "only" not only eliminates the guessing, it also helps to reinforce logical thinking.
Re: Electron - I was trying to use it as an example of a poorly written exam. This particular question was on a Grade 10 final exam, and I thought it was obviously "proton." The "correct" answer (for some definition of correct?) was "electron." I was using it as an example to demonstrate that "test creators" are people, just like you and me, who make mistakes. Also using it to show how the student who is always looking for the "correct" answer (as compared to the correct answer) starts to lose their ability to parse and logically communicate.
"... #12 [does] not make sense. Shouldn't a subtraction sentence have a subtraction symbol in it?"
I have a kid who went through this curriculum, and yes, a subtraction sentence needs a "-" sign in it. The idea of a "related subtraction sentence" is a standard thing in this curriculum. I think #12 is the worst question on the test, because it is flat-out wrong.
Like I wrote elsewhere, I'm ok with "number sentence" because of the parallel with regular sentences. It gets the kids familiar with the idea that the math is making assertions.
I TA'd several undergrad engineering classes (at a top-10 engineering school) and you'd be surprised how many students did not use the equal sign in their work. They would write one expression down, and then another under it, and another beside that one, etc., but without equal signs, it's just babbling.
I don't agree with your complaints about 2 or 11; assumptions are a part of everyday communication. Anyway, wouldn't the question be unanswerable if some of the jelly jars could have peanut butter? We know the question is answerable, therefore no jelly jars can have peanut butter.
Could you accept that these assumptions are frequently made incorrectly?
Let us say I went shopping, and I know that I purchased eight jars of sandwich spread. I know that six of the jars had a mixed peanut butter/jelly stuff that you can squeeze out, and I remember the other two had peanut butter in a glass jar, but I can't remember if they were mixed or not. If I say, of those eight jars, "Six of the jars had jelly, the rest had peanut butter" - because that's all I was certain of - wouldn't somebody then inferring that the six jars did NOT have peanut butter, and that the other two jars did NOT have jelly, be wrong in doing so?
It's entirely possible that the question was poorly put together, and the answers provided were not sufficient to correctly answer the question.
Test makers aren't all perfect beings - as clearly demonstrated by this test.
> Anyway, wouldn't the question be unanswerable if some of the jelly jars could have peanut butter?
No, the question is answerable but it's answerable with a range rather than a specific number.
> We know the question is answerable
How do we know that? Lots of real-world questions aren't answerable and even if the author meant this one to be, textbook authors are fallible. So I don't think that's a fair assumption.
And yeah, I nitpicked exactly like that back when I was in grade school too. :-)
> Take a look at question No. 1, which shows students five pennies, under which it says "part I know," and then a full coffee cup labeled with a "6" and, under it, the word "Whole." Students are asked to find "the missing part" from a list of four numbers.
Some probably do have surrounding context and in the case of #3 the kids were probably provided with manipulatives so they could literally use cubes to solve the problem. The test is still terrible.
This test is so, so bad. If this is how I had been exposed to math when I was in the first grade, I hate to think how my life would have been different. (I have two degrees in mathematics.)
I've heard stories of my first grader niece crying while trying to complete her daily homework. I'm starting to understand why.
I think grade school math curriculums are incredibly bad in most of the world, though. I know it's awful here in Ontario. Everything I learned about mathematics was from non-textbooks (e.g. Dover math books), lectures, papers and blog posts. School didn't teach me a thing. I think the only decent "textbook" I have is Concrete Mathematics, and that's not even close to most textbooks. It's probably been posted a million times before on here, but http://www.textbookleague.org/103feyn.htm
Wonderful article, but one verdict turns out to be wrong:
Translating from one base to another is an utterly useless thing. If you can do it, maybe it's entertaining; if you can't do it, forget it. There's no point to it.
Do not mistake "tradition" for "useful", especially in the context of a math curriculum. It is useless to learn it in school. For that fraction of a percent that goes on to not only program computers but also routinely dabble in bit-bashing, let them take the time to learn it (and they'll probably only learn special cases anyhow). Don't jam it down everyone's throat, and pile one more reason onto the already voluminous pile of reasons to hate math and think it's useless. (A conclusion that our school system does its level best to make the rational conclusion based on the evidence!)
The purpose of working in number bases other than 10 is to reinforce the meaning of place value, and to implicitly teach students that the way we write numerals is (1) very logical and sensible, but (2) an accident of biology.
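To make that concrete, here's the kind of exercise I mean (digits and base picked arbitrarily by me); the recipe is identical to base ten, which is exactly the point:

    # "345 in base 7" unpacks by place value: 3*7**2 + 4*7**1 + 5*7**0
    digits, base = [3, 4, 5], 7
    value = sum(d * base**i for i, d in enumerate(reversed(digits)))
    print(value)  # 180 -- same positional recipe, different (arbitrary) base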
What we should really do is make students add and multiply in Roman numerals. Quick, what's LXXVII x XLIV? Attempting that will make you appreciate decimal notation real quick.
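(For the curious: LXXVII x XLIV is 77 x 44 = 3388, i.e. MMMCCCLXXXVIII. A quick sketch of a converter - a hypothetical helper, not anything from the thread - shows why positional notation wins: the only sane way to do Roman arithmetic is to convert to place value first.)

    VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

    def roman_to_int(s):
        # A subtractive pair (IV, XL, ...) is a smaller value before a larger one.
        total = 0
        for ch, nxt in zip(s, s[1:] + 'I'):
            total += VALUES[ch] if VALUES[ch] >= VALUES[nxt] else -VALUES[ch]
        return total

    print(roman_to_int('LXXVII') * roman_to_int('XLIV'))  # 77 * 44 = 3388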
Dover maths books are good readin'. In Blackwells in Oxford (UK) there used to be a budget section with loads of old Dover books. I remember one about Fourier series that was translated from a Russian text.
One of my favorite stories involves an elementary-level Standards of Learning (SOL) test in Virginia... Students are taught how to estimate relatively simple sums, differences, products, and quotients in their heads, and are required to select the "correct" estimate. One of the questions accidentally (I guess) included the actual answer among the choices. My son selected the actual answer rather than the "correct" estimate, and it was marked wrong.
OK, technically he didn't follow directions... the question asked for the estimate. But... really... you penalize a kid for calculating the correct answer and selecting that over the correct estimate??
First, I should apologize for relating a personal anecdote, because it doesn't have anything to do with Common Core. It does provide something of a benchmark for the state of education 25-ish years ago, and it's troubling to me that we're continuing a downward spiral. Your story made me think immediately of this.
When I was in 2nd grade sometime toward the late 1980s, I was subjected to a science test with the following question:
"Which of the following is the closest star to Earth?"
The answers provided were "the sun," "the moon," "Venus," and "Mars."
I selected "the sun." I was marked off, and the teacher had returned the test with "the moon" circled as the correct answer. I don't recall being bothered so much by having marks taken off for providing an answer I knew with certainty was the truth.
I distinctly recall my mum heading off to the school with me the next day to dispute this "fact." The teacher in question and the school's principal both provided the bothersome answer: "Well, it's marked as 'the moon' in the answer sheet, so that's what we have to go by."
While I don't recall what became of that particular test or my final grade, it stands out as one of my earliest negative experiences in public education. I can only imagine what these poor kids are feeling when faced with questions that make no reasonable sense. It upsets me to think what memories they might harbor in the years to come, because kids certainly do NOT forget.
And to think that education has only gotten worse since. Troubling times ahead indeed!
> It's tragic that these are the attitudes of the people entrusted to educate children.
One person != everyone. Most of my teachers were good ones, willing to show me things other people considered "too advanced" for kids like me. Such people were the reason I could do things like purchase ethyl acetate as a child. Yes, that gives off poisonous fumes. It's used in entomology and was recommended by the books I was following.
I am also a nut job. The first thing people do when they learn this is ask me "oh, you're religious then?". I say no, the opposite. It would require religious faith at this point to believe that those schools could teach a kid anything at all.
I showed the test to my wife who is a 2nd grade teacher in a school district implementing common core. She was appalled both at the grading and the test itself. Then I showed her that it was from Pearson and she exclaimed "That's your problem right there."
Exactly. The concepts are appropriate (maybe not for this age though, I was unaware that NY first-graders could be only 5 in October), but the test is crap.
> I was unaware that NY first-graders could be only 5 in October
That surprised me also. Here in Nevada the cut off is in September and so any child in first grade would be at least six years old by October. A month or two doesn't make a huge difference so perhaps their cutoff is in October or November.
They're simply trying to get kids to use more abstract thought. The wording is strange, but there's nothing particularly mystifying about the answers. The abstractions are absurdly simple, but I'm not surprised that there are adults confused by this. I used to work for a factory and was constantly dismayed by how many adults are unable to read a tape measure.
Don't you think they should understand the most concrete instances of it before they're made to understand abstractions? Don't you think they should understand it with marbles and sticks before they understand it with very abstract concepts like arithmetic operators?
Actually, no. In this case, linking numbers to objects, and linking addition and subtraction to telling the "stories" of real activities, is an important thing for them to know.
Far better they realize that and see the importance, than that they memorize the one digit addition and subtraction tables but think them useless and forget them entirely during summer vacation.
Never seen one imagine a cardboard box as a rocket ship, or such? That's simply abstract thought of another kind. To be honest, I feel more like rote schoolwork beats it out of them than the opposite, given how many children I've seen that are more capable than the adults, but that's a bit of a tangent.
For the record, it's not actually that hard to teach younger kids advanced mathematics. Just turn it into a game. I managed to teach a ~10 year old about mutually tangent circles, after all. And I was certainly never bound by the imaginary limits of what kids were "supposed" to learn at a certain age. Roughly the first half of the "advanced" branches of math is giving funny names to things and learning a few tricks. Basic set theory, for example, would not be all that hard to teach to a young child, but most don't see any value in it.
Anyhow, yes, that test is fairly confusingly worded (partly just from stupidity and partly because we lack the context of the lessons). In particular, I agree that coins & coffee cups are a bridge too far, so they could certainly do better. But I guess people would rather WTF over the strange wording and downvote and such than consider whether little kids should be taught that there's more to addition than rote memorization of the addition tables? (For the record, yes, they should memorize the tables, too.)
I found it quite simple to discern that they were taught to split the problems into the part they know and the whole thing, then look for the missing piece and turn the problems into "stories" connecting the numbers to real objects.
Would everyone rather that kids learn that math is a bunch of boring number facts to memorize that have no (apparent) application to real life? Or is it the feeling that the way you were taught is plenty good and nobody should ever try to improve it, especially in a way that you don't understand? Or does this whole line of thinking simply make people uncomfortable and cause them to reject the thought entirely?
There are definitely some very advanced 5 year olds and some very talented teachers, and it's great that kids can have the opportunity to learn advanced stuff. However, a "core curriculum" has to be designed to be usable by average students and average teachers.
Perhaps, but the way things are I feel more like we should try to improve upon the average person. Young kids don't know what is and isn't "hard" yet, so they're more apt to try. Makes it easier to teach them certain things than adults, because they won't just give up straight away.
I have an 8 yo going through the common core transition now. What I would recommend parents do is teach the traditional fundamentals after school. Give your kid regular work. It's not worth letting them be an experiment.
I'd recommend you send your kids to a school with a sane curriculum. The only thing these public school morons will understand is when they start losing students (and corresponding dollars that go along with that).
The public school system here is up in arms about a quite limited voucher program, because their funding is based on the number of bodies in the classroom. So they see it as taking "their" money, rather than thinking "I wonder why those parents don't want their kids to go to our schools."
That's a great idea, but with two kids in grade school, the local private schools would cost me about $30k per year. I would rather invest 15 minutes a day with augmenting what they are doing in the classroom. If you have the resources to go private, more power to you.
It seems like they tried to dumb down certain words like "equation" into "number sentence", but only succeeded in making it confusing and ridiculous.
And on top of that these are problem solving questions introduced way too soon. What's wrong with just asking a kid some addition and subtraction questions without all the fluff? I know it's trying to tease out whether they understand the fundamental concept and all that, but it's horribly done and kids of that age shouldn't have to be self-aware of their own learning.
> And on top of that these are problem solving questions introduced way too soon.
While those questions weren't appropriate for a first grader at the beginning of the school year, introducing problem solving is a big part of common core.
I am an amateur mathematician, but these days I focus more on imperative programming languages. I tend to use the term expression for both equations and expressions, since in computing an expression returns a value of any type, not just numerical types.
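That usage isn't crazy: in most programming languages an "equation" is literally just a bool-valued expression, so it composes like any other value. A trivial illustration:

    print(4 + 3 == 7)                # True -- the equality itself returns a value
    total = 10 if 4 + 3 == 7 else 0  # so it nests inside larger expressions
    print(total)                     # 10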
Oh god. I hate math, but I'm very, very good at basic math. If I had had to learn like this, I can't see how I would have learned anything except that I didn't want to learn more.
The only thing I ever learned from math in high school was "I don't care why it's true, just give me the right answer" (the attitude of every single student other than myself).
1) "In the United States, students begin Grade 1 at the age of 5 or 6." Say what? In our part of Michigan, kids begin kindergarten at 5 or 6, leaning towards 6.
2) I see people here complaining about the math on the test, but what boggles me is the reading! Are beginning first graders really expected to be able to handle word problems? Maybe it's just been too long since I was there myself, but I thought we were just learning to read "See Spot run" at that point in school. For sure it seems like an insane amount of reading skill to ask of a five-year-old.