How Craig Barton wishes he’d taught maths (gowers.wordpress.com)
98 points by auferstehung 13 days ago | 64 comments





More precisely, in order to decide whether it is a good idea, one should assess (i) how difficult it is to give an explanation of why some procedure works and (ii) how difficult it is to learn how to apply the procedure without understanding why it works.

Well, teaching basic math at a commuter college years ago, it felt like the issue of "teaching procedure" versus "teaching understanding" was complex. The course I was teaching was close to the end of the math requirements for a significant percentage of the students. I was very attracted to teaching ideas, but this group of students essentially had the attitude that they wanted a procedure to memorize rather than an explanation, no matter how complex the procedure. It had a certain logic: mathematical explanation would have touched a world they were happy, and committed, to leaving forever soon after this. They'd suffered through this world up to this point, and thinking about it was more painful than simply acting.

Which is to say, I don't think there's any easy answer for how to teach math. The failure of American "new math" years ago is something of a lesson in the push-pull of concepts versus concreteness, as they can become ideologies in society at large.


This is a widespread tradeoff. I had a conversation with a first-tier college biology professor about a way to give a more integrated, transferable understanding of a topic. He liked it, but observed: my students will shortly be taking the MCAT (the high-stakes medical school entrance exam), our time together is limited, and the MCAT doesn't test for understanding of the topic, only for something superficial and memorizable; so I would be doing my students a disservice if I reallocated time to understanding.

Perhaps early primary school is an opportunity to escape this tension. With weaker test constraints, and more years of payoffs over which to amortize the costs of better understanding.


My understanding of learning is that it takes longer than we allow to actually learn something useful, and that we're inefficient at learning or retaining anything because we rush from one topic to another in our courses.

Wow, that vector space question is a great example. It’s the kind of thing that should be straightforward for anyone who has taken a linear algebra course, but I can also totally see students getting it wrong. This is especially the case because it’s actually very easy fundamentally (the integers do not form a field, and so a vector space cannot be defined over them).

But to my recollection, most of the popular linear algebra textbooks[1] don’t spend time showing why the integers cannot form a vector space because it’s “easy.” Instead they spend time tediously walking through examples of bizarre sets defined over R and C to show which axioms are fulfilled and which are not.

In a similar vein to the way students might overthink the elementary probability question, I could see university students trained to disprove each of A), B), C) and D) - perhaps making a mistake along the way - instead of quickly scanning the options and picking out the one which simply isn’t defined over a field.

__________________________

1. I’m thinking of Friedberg et al, Hoffman & Kunze, Axler, Strang, etc.


I got thrown, and I know a lot of math. I got thrown because the field the vector space is over is not what I focus on when thinking about the structure of the vector space. Indeed, it took me a second to even come up with a reason why the ground set has to be a field at all (so that you can undo scalings), and while it's a good reason, it still feels trivial.

Edit: in fact I learned linear algebra from Hoffman & Kunze.


You can generalize vector spaces to work over arbitrary rings (in which case the structure is called a module). I did have to do a double take on that question, because in my mind, when I see someone talking about a vector space over a ring, I just silently translate it to a module (because 90% of the time the person just misspoke; 9% of the time, even if they meant vector space, the immediate follow-up would be "no, but does your point still stand if we consider it a module?"; and maybe 1% of the time they are actually being adversarial).

As I remember, most first courses in linear algebra don't treat the subject as anything like abstract algebra and aren't targeting students who can distinguish a ring from a field. Essentially, until you get past the calculus sequence, most math in the US is centered on calculation, with only a few forays into proofs and definitions.

As a bumbling idiot who never did university level math, I don’t understand the question or the answer. (I did understand the probability one and knew the right answer to that at least.) I’m trying to catch up. Would you be kind enough to explain it?

Sure.

Operations in applied linear algebra - such as matrix multiplication and solving systems of linear equations - are formalized by the theory of vector spaces, much like calculus is formalized through the theory of analysis. Vector spaces are algebraic structures which axiomatize the linearity you need to carry out these operations. If you can establish your equations exist in a vector space, you can prove that they admit linear relations and are thus solvable as linear systems.

More precisely, a vector space V is any set S defined over a field F which is closed under both vector addition and scalar multiplication, where the elements of S are called vectors and the elements of the underlying field F are called scalars. The term "closed" means that for every pair of vectors x, y in V there exists a vector x + y in V, and for every scalar c in F and vector x in V there exists a vector cx in V. There are eight axioms in total, for things like associativity and commutativity, but those aren't germane to this particular example. What's important is that vector spaces are what allow you to form linear combinations of things, which is the scaffolding you need to prove things like linear dependence and independence; whether or not a system of linear equations has no solutions, one solution or infinitely many solutions, etc.

Fields are the algebraic structures which formalize the elementary arithmetic you're already familiar with over sets like the complex numbers, the real numbers, the rationals, etc. They are sets which are closed under "regular" addition and multiplication. Notably, integers do not comprise a field because integers do not have multiplicative inverses. Multiplicative inverses are the axiomatic way of establishing that in any field, division must be possible. So concretely, there is no multiplicative inverse 1/n for any integer n. There is in the set of rationals, but not the set of integers. Therefore integers are not closed under multiplication, and they cannot comprise a field.

Since the integers do not comprise a field, you cannot define a vector space over the integers, because the scalars used to define scalar multiplication in vector spaces are just elements of the underlying field. If you try to define a vector space over a set without multiplicative closure, the vector space cannot be closed under scalar multiplication. Among other things, linear combinations stop being invertible (or even possible in general), and linear relations don't exist.

So circling back to the specific question: it's asking which of the given sets comprises a vector space. You can make all kinds of abstract vector spaces (e.g. the set of all polynomials over a field, the set of all polynomials with degree at most n over a field, the set of all continuous functions, etc). But if you stick with the definition of a vector space, you don't need to tediously test each of the given sets for the eight axioms. You just have to remember the integers don't comprise a field, so the set of all triples of integers can't be a vector space either.
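For what it's worth, the key fact here (that Z has no multiplicative inverses for |n| > 1) can be checked by brute force. A throwaway Python sketch, not part of any formal argument:

```python
# Sketch: in a field, every nonzero element n needs an inverse m with n*m == 1.
# For integers with |n| > 1 no such integer m exists, so Z is not a field.
def has_integer_inverse(n, search_range=1000):
    return any(n * m == 1 for m in range(-search_range, search_range + 1))

assert has_integer_inverse(1) and has_integer_inverse(-1)   # the units of Z
assert not any(has_integer_inverse(n) for n in range(2, 100))
```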

Hopefully that's clear, let me know if you'd like me to clarify anything.


The integers are totally closed under multiplication. They lack an inverse under multiplication; that's why they aren't a field.

Your heavy emphasis on the idea that fields are closed under addition and multiplication is very counterproductive here, because the integers are closed under addition and multiplication.

The integers violate the field axiom that every element in the field (other than 0, the additive identity) must have a multiplicative inverse in the field. But that has nothing to do with the requirement of being closed under multiplication.


> If you try to define a vector space over a set without multiplicative closure, the vector space cannot be closed under scalar multiplication. Among other things, linear combinations stop being invertible (or even possible in general), and linear relations don't exist.

Mind clarifying this part? As someone else already pointed out, the integers are multiplicatively closed, but I suspect you're using "multiplicatively closed" to also mean "closed under multiplicative inverses". But I don't see how linear combinations stop being possible, e.g., "3x + 2y" is still a linear combination in a Z-module, or what it means for a linear combination to be invertible.

(Also not sure what exactly you mean by linear relations not existing if you have a module and not a vector space...)


Thank you, that was really helpful!

You need closure on scalar multiplication. If (2,3,4) is a valid integer triple and 1/3 is your scalar, then (2/3, 1, 4/3) throws you out of the set, so there goes your closure. Unlike the OP, you don’t really need to know about fields to solve this.
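Concretely, the failure described above can be sketched in a few lines of Python (a quick illustration, not a proof):

```python
# Scaling the integer triple (2, 3, 4) by the scalar 1/3 leaves Z^3:
# some coordinates stop being integers.
from fractions import Fraction

def scale(c, v):
    return tuple(c * x for x in v)

v = (2, 3, 4)
w = scale(Fraction(1, 3), v)                    # (2/3, 1, 4/3)
assert not all(x.denominator == 1 for x in w)   # not every entry is an integer
```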

If the triple is defined over the integers, why would you allow 1/3 as a scalar in the first place.

By this logic, should R not be a vector space, as it is not closed under scalar multiplication by i, or Q not be a vector space, as it is not closed under scalar multiplication by sqrt(2)?


Something being sidestepped in the post you responded to is that what is being talked about is a valid algebraic object with lots of structure to it. It’s called a module, which you can think of as a sort of vector space; it’s just that the scalars may not have multiplicative inverses. (I’m deliberately focusing on rings that are integral domains, for the nitpickers.) When talking about these objects you have to include the underlying scalar set.

For instance the real numbers are a vector space over the rationals. They are a different vector space over the reals. They are not a vector space over the complex numbers and are not a vector space over the integers. But they are a module over the integers. But not a module over the complex numbers.


Everything you've said is true, but circling back to the example given, we still can't choose a scalar 1/n for integral n. Yes the integers are a ring, and yes you can define a module over a ring which generalizes a vector space.

But the point being made here is that the explanation is backwards: you can't choose 1/n from Z. Therefore you can't use it as a scalar, so you'd never even break closure in the vector space. The hypothesis fails before you can engage that contradiction.


You are correct, and I wasn't trying to criticize what you wrote. gizmo686's post (the one I responded to) indicated a sense of insight into these issues. I wanted gizmo686 to feel justified in his/her thoughts. Namely, that what we call modules are natural objects, and they look and feel like vector spaces on the surface.

> why would you allow 1/3 as a scalar in the first place.

Because it's a definitional thing. A "scalar" is routinely defined as a real number, not an integer.

And you're absolutely right that it makes no sense, which is the whole point of the multiple-choice question. Four of those answers are plausible, the other requires you to make assumptions (like a redefinition of scalar) not in the question as posed.


Consider the possibility that it does make sense but that you aren’t aware of why it makes sense. A vector space has much more structure than a module and the distinction is not unimportant. Also, scalars are not defined as a real number. Scalars are elements of the base field. When talking about a vector space one must always specify the base field. This is important and is the point of the problem in question. For instance the real numbers are a vector space over the real numbers and that vector space structure is different than the vector space structure of the real numbers as a vector space over the rational numbers.

Huh? A scalar is very specifically a field element by definition. This is why it's important to specify the field you're working with when you talk about a vector space - a scalar is not going to be a real or a complex if your field isn't R or C.

If you've seen someone define a scalar as a real number, that's really only because they're informally stating their underlying field is R.


I keep seeing you people in this thread and wondering, with all respect, what planet you're coming from.

The whole purpose of this exercise is to see if there is a way to come up with a straightforward, reasonably informal, multiple-choice question that would expose a fundamental misunderstanding of basic university math concepts like "vector space", in the same way we see in primary math.

And instead all you people want to do is natter over the ways in which someone could cleverly make the "wrong" answer right. It's... beyond missing the point, it's actively working against the whole goal of the exercise.


Because you are going to have students who mark that answer as correct, and you need to be prepared to explain to them why it is wrong. In addition, your explanation of why it is wrong should be accurate, and should not suggest that other correct answers are also wrong. Returning to the original question: why is it that Z^3 is not a vector space, but Q^3 is? If you say that neither of these is a vector space, then you have a misunderstanding about what a vector space is, which the question would miss because the author forgot to include Q^3 as an option.

By itself, this is a minor complaint (you cannot include every example in your choices, although I do think an example which could not be viewed as an R-vector space would be good to include). However, when you explain why Z^3 is not a vector space, your explanation must be correct. An explanation which also excludes Q^3 is incorrect.


Except that we have vector spaces with scalars that are not the reals all the time. For instance, consider this excerpt from the article:

"Or perhaps they wouldn’t like A because the scalar field [the complex numbers] is the same as the set of vectors (unless, that is, they thought that the obvious scalars were the real numbers)."

In this case, while there is an acknowledgement that you could take the reals as your scalars, it is regarded as the secondary of the "natural" choices.

Or, in my example, there is no way to view Q as a vector space over R, but it is clearly a vector space. There is an entire field of algebra (field theory) that relies on the fact that, for example, Q(sqrt(2)) is a 2-dimensional vector space over Q.


> Because it's a definitional thing. A "scalar" is routinely defined as a real number, not an integer.

Well, this is totally untrue. A scalar is defined as a non-vector quantity: a single element, as opposed to a multidimensional list of them.


Not to undergraduates in early mathematics courses it's not. This is a term introduced in grade school, for goodness sake.

I give up on this thread. It's a bunch of people not just willfully misunderstanding the linked article, but actively campaigning against the whole idea of math education in an attempt to prove how much smarter than each other they are. This is... awful, folks.


Maybe because the article was talking about undergraduates:

> Could one devise a university-level question that would catch a significant proportion of people out in a similar way? I’m not sure, but here’s an attempt.

> Which of the following is not a vector space with the obvious notions of addition and scalar multiplication?

> ...


While that's the right idea, I'd push back against not needing to know about fields since the scalars are just field elements. If you try to define a vector space over the integers, it's more accurate to say you can't choose 1/n as a scalar, because 1/n doesn't exist in your underlying field. Your closure ends before you even get to choose the element.

For students it might not be immediately obvious why that's a problem for vector spaces, but yes it does mean scalar multiplication won't be closed in the vector space. And more practically speaking, if you tried to solve a system of equations without invertible linear combinations, you'd have no linearity whatsoever. Elementary row operations likewise cease to be invertible, so matrix reduction isn't possible...the whole thing breaks down really.


> I'd push back against not needing to know about fields since the scalars are just field elements.

The point was that you don't need to know the jargon of "field" and the full set of implications. It's enough to know that multiplying integers by non-integer scalars can give non-integers, which means that "scalar multiplication" can produce a thing that is not a "triple of integers". So it's not a well defined vector space operation.

No need for "field" or "closure" or any other jargon not in the question as posed.


I suppose. All I'm getting at is that since Z doesn't contain 1/n for integral n, you wouldn't be able to use it as a scalar in the first place. So, extrapolating from there, you have to choose a different route to show that defining the vector space doesn't work, because you can't trigger the contradiction that fails scalar multiplicative closure.

> It's enough to know that multiplying integers by non-integer scalars can give non-integers

Not quite: you also need to know that multiplying by integer scalars instead isn't an option.

The question as posed asked us to use the "obvious" choice of scalar multiplication, and to a student who hasn't yet taken the "field" part on board, it might seem obvious to achieve closure by using the integers for scalars.


This isn’t quite right. When I personally learnt these things in an undergrad math program in the US, we learnt monoids. Then we learnt semigroups. Then groups. Then abelian groups. Then vector spaces. Then on the midterm we got questions exactly like the one we are debating here: is this guy a vector space, is that guy a semigroup, is that guy abelian, etc. At that point, none of us knew what a ring was, what a field was, etc. In the US you learn things like cosets and Lagrange’s theorem way before you even get to fields. That’s why I said you don’t need fields.

If you have (2,3,4) and want to navigate to (5,6,7) who is also in your space and you have scalar mult as your tool of choice then mult with 2 gets you to (4,6,8) but then you are stuck. Soon you realize no matter what you do you can’t navigate that space without fractions.

A working definition of a space might be - you have a member in that space, you can get to every other member by just scalar mult. Addition is just freebie because you can rephrase it as bunch of scalar mults.


>If you have (2,3,4) and want to navigate to (5,6,7) who is also in your space and you have scalar mult as your tool of choice then mult with 2 gets you to (4,6,8) but then you are stuck. Soon you realize no matter what you do you can’t navigate that space without fractions.

One of us is very confused. It seems to me that I also can't get from (2,3,4) to (5,6,7) by pure scalar multiplication even if fractions are allowed. If I pick a scalar factor of 2.5 to make 2 -> 5 work, then I get (5, 7.5, 10). If I pick anything else, the result won't start with 5.

>>A working definition of a space might be - you have a member in that space, you can get to every other member by just scalar mult.

Really, no. You can only reach parallel vectors by scalar multiplication. E.g. if your vector space is R^2, given a starting vector and scalar multiplication, you can reach anything on the line in the direction of that vector, but nothing pointing in a different direction. That's more or less why it's called "scalar" multiplication: it scales the original vector but doesn't change its direction.


You can change the direction: -1 is a scalar. But yeah, you are right about the rest. Cheers!

>A working definition of a space might be - you have a member in that space, you can get to every other member by just scalar mult.

Isn't this a 1-dimensional space? E.g. consider the vector space R^2 over R.

If you have the vector (1,0), there is no way to arrive at the vector (1,1) through just scalar multiplication.
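A tiny sketch of the same point (hypothetical Python, sampling a handful of scalars):

```python
# Every scalar multiple of (1, 0) has second coordinate 0,
# so no choice of scalar c ever reaches (1, 1).
def scale(c, v):
    return tuple(c * x for x in v)

v = (1.0, 0.0)
for c in (-2.0, -1.0, 0.0, 0.5, 1.0, 3.0, 100.0):
    assert scale(c, v)[1] == 0.0
    assert scale(c, v) != (1.0, 1.0)
```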


If you don’t give me a basis, how will I span the space?

You talked about having "a member" and getting to every other member with scalar multiplication only. But even given a 2-member basis for R2, how are you planning on using 2 members, with scalar multiplication only?

I'm afraid you have badly misremembered this stuff.


you are right.

You learned cosets and Lagrange's theorem before you learned fields? Did you take a course in abstract algebra before you took analysis? If so that seems a little unconventional to me, but I don't see another explanation since fields are taught in analysis.

That would be typical if you go the algebra route. In an introductory algebra class you would typically open with group theory. The first deep theorem you cover would be Lagrange's, whose proof is normally based on cosets.

Typically, students don't start on an algebra track until after a fair amount of analysis, but there is no real reason for that to be the case. It's a shame too, since, as someone who prefers algebra myself, I (totally unfairly) blame analysis for giving math a bad image.


Yeah, I took abstract algebra before real analysis. We did hit rings and fields in algebra, but by that time it was finals week and they got minimal coverage. We used Herstein; that’s the order in that book.

You are doing this the wrong way around: your scalars are in Z, so you can't just "pick" 1/n, any more than you can pick pi.

In fact, closure under scalar multiplication is there. Pick d in Z and d(a,b,c) = (da,db,dc) is fine.

The real problem is that I need an inverse. So if that exists, we have e(da,db,dc) = (a,b,c), and e must exist in the scalar set for (a,b,c) != (0,0,0).

Now you are trying to find an e that behaves like 1/d, but you've left the set - no good.


But if you don't know about fields, you could use the integers instead of the reals for your scalars, and imagine you've made a perfectly good closed vector space that way.

A vector space is a set of elements that are closed under addition and scalar multiplication (multiplying by a real or complex number).

The precise definition of what we mean by that is covered by the vector space axioms.

http://mathworld.wolfram.com/VectorSpace.html

  '
A. The set of all complex numbers.

You can add two complex numbers, and multiply them by a real (or a complex number in this case) and still have a complex number.

So it's a vector space.

  '
B. The set of all functions from (0,1) to \mathbb R that are twice differentiable.

Two twice differentiable functions can be added together to yield a twice differentiable function

  (f(x)+g(x))''=f''(x)+g''(x)
and scaled by a scalar

  (cf(x))''=c(f''(x))

  '
C. The set of all polynomials in x with real coefficients that have x^2+x+1 as a factor.

Given two polynomials P,Q, which have x^2+x+1 as a factor they can be expressed as

  P(x) = f(x)(x^2+x+1)
  Q(x) = g(x)(x^2+x+1)
So

  P(x)+Q(x) = (f(x)+g(x))(x^2+x+1)
And of course

  c*P(x) = cf(x)(x^2+x+1)
Which is also a polynomial with real coefficients.
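This closure can also be spot-checked numerically: a real polynomial has x^2+x+1 as a factor exactly when it vanishes at a complex root ω of x^2+x+1. A quick Python sketch (the particular polynomials P and Q are made up for illustration):

```python
import cmath

# w is a primitive cube root of unity, i.e. a root of x^2 + x + 1.
w = cmath.exp(2j * cmath.pi / 3)

def horner(coeffs, z):
    """Evaluate a polynomial (coefficients from highest degree down) at z."""
    acc = 0
    for c in coeffs:
        acc = acc * z + c
    return acc

P = [1, 3, 3, 2]    # (x + 2)(x^2 + x + 1)  = x^3 + 3x^2 + 3x + 2
Q = [2, 1, 1, -1]   # (2x - 1)(x^2 + x + 1) = 2x^3 + x^2 + x - 1
S = [p + q for p, q in zip(P, Q)]   # the sum P + Q
C = [2.5 * p for p in P]            # a scalar multiple of P

for poly in (P, Q, S, C):
    assert abs(horner(poly, w)) < 1e-9   # still divisible by x^2 + x + 1
```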

  '
D. The set of all triples (a,b,c) of integers.

Almost works, except multiplying by a scalar would yield a tuple of real numbers. Kinda silly. In programming, if you attempted to write a function

  function List<Int> multiply(List<Int> list, Real c) {
    return list.map((x) => x * c);
  }
You'd get a compile error since the return type wouldn't match (or you'd get some warning about loss of precision).

  '
E. The set of all sequences (x_1,\dots,x_n)\in\mathbb R^n such that x_1+\dots+x_n=0 and x_1+2x_2+\dots+nx_n=0.

Adding two sequences

  (x_1,...x_n) + (y_1...,y_n)
gets you

  (x_1+y_1...,x_n+y_n)
which plugging into the above two equations, and rearranging, will show that you'd get

  (x1+y1) + (x2+y2) + ... + (xn+yn) = (x1+x2+...+xn) + (y1+...+yn) = 0 + 0 = 0
and the same for the second equation.

Also, scalar scaling works fine.
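A membership test makes E easy to spot-check (a quick Python sketch; the particular vectors are arbitrary solutions of the two constraints):

```python
def in_E(v, tol=1e-9):
    """Check x_1 + ... + x_n == 0 and x_1 + 2*x_2 + ... + n*x_n == 0."""
    return (abs(sum(v)) < tol
            and abs(sum((i + 1) * x for i, x in enumerate(v))) < tol)

u = (1.0, -2.0, 1.0)   # 1 - 2 + 1 = 0 and 1 - 4 + 3 = 0
v = (2.0, -4.0, 2.0)
assert in_E(u) and in_E(v)
assert in_E(tuple(a + b for a, b in zip(u, v)))   # closed under addition
assert in_E(tuple(-3.5 * a for a in u))           # closed under scaling
```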

  '
So only D has any issues.

The short explanation as to why we insist on scalars being real or complex is that a major goal of linear algebra is to provide ways of solving equations. And you really want to be able to divide to solve equations, which real and complex numbers let you do. Integers aren't closed under division, so they aren't good for solving linear equations.

For example, if your vector space is integer triplets then

  2 * x = (1,0,0)
Wouldn't have any solutions in the set of integer triplets. This is a linear equation, and the goal of linear algebra is to provide solutions, so it's better to have the framework yield the answer of x = (.5,0,0) and then say "Oh, the answer lies outside of the original set, so it wasn't a vector space to start with." Well, that's kinda the extrinsic view of it, the intrinsic view would throw an exception :).
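The non-solvability over the integers is easy to confirm by brute force (a throwaway sketch over a small search window):

```python
# No integer triple x satisfies 2 * x == (1, 0, 0): doubling an integer
# can never produce the odd number 1.
from itertools import product

target = (1, 0, 0)
candidates = product(range(-10, 11), repeat=3)
assert not any(tuple(2 * t for t in x) == target for x in candidates)
```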

The math education I experienced focused heavily on "how": how such and such operation arrives at its conclusion, and how such and such operation fulfils some "rules". Seldom does it touch on "why". Why does a certain notion, like linear algebra, or heck, maybe even negative numbers, exist in the first place? Procedures like negative times negative gives a positive number - yeah, sure, but why? What does that really mean? I think the "why" of everything is what makes one understand anything.

I'm a high school math and science teacher, and I completely agree this is a problem. Several of my students struggled greatly with simple arithmetic until I sat down and gave them reasons why a negative times a negative is a positive (which meant I also had to explain why multiplication works the way it does).

I personally think part of the problem is how elementary education is structured. At least in my state, elementary school teachers are expected to be extreme generalists, and only have to take a math class or two -- and then nothing above simple college algebra (which is fine). But they always complain about how difficult it is, and they don't understand math, often taking the state's required exam multiple times because they can't pass math. Yet these are the people we allow to teach kids math; it's a huge issue when they're being taught math by people who don't understand why it works, only the algorithms they've memorized.

Coincidentally, this is also why the 'new math' was so lambasted -- these teachers (and often, parents) don't understand how numbers work, thus they think it's useless to teach kids to subtract 20 and add 2 instead of subtracting 18. Despite the fact that one is much easier to do mentally, and allows you to get a good sense of how subtraction and addition interact.

I'm not a fan of charter schools, but if I had money, I'd start an elementary charter school where the subjects were taught by people who understood that and not generalists. And students would get like multiple hours of recess a day, especially those first few years. Just pure, unstructured play time. But that's a rant for another day.


Hi dorchadas, I really like your comment!

I'm looking for great high-school Math teachers for a project. If you wouldn't mind sparing a few moments, please get in touch with me via email (gmail with my username) so I can give you some more details.


Quotes from OA that struck me as on the button...

"A prejudice that was strongly confirmed was the value of mathematical fluency. Barton says, and I agree with him (and suggested something like it in my book Mathematics, A Very Short Introduction) that it is often a good idea to teach fluency first and understanding later."

Agree fully with Barton and OA here. Until recently I taught GCSE Maths re-take students aged 16 and over in a further education college. They were constantly tripping over really quite basic little skill issues and that prevented them from seeing how to tackle the longer and more complex problem solving questions.

"I would go for something roughly equivalent [in the solving of equations such as 4x - 8 = 2x + 2], but not quite the same, which is to stress the rule you can do the same thing to both sides of an equation (worrying about things like squaring both sides or multiplying by zero later). Then the problem of solving linear equations would be reduced to a kind of puzzle: what can we do to both sides of this equation to make the whole thing look simpler?"

The idea of just playing with the notation is one I fully intend to try but getting people to think in that abstract way is hard work.
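The "puzzle" framing can even be demonstrated mechanically. A hypothetical Python sketch, tracking the two sides of 4x - 8 = 2x + 2 as (coefficient of x, constant) pairs while we do the same thing to both:

```python
# Represent each side of 4x - 8 = 2x + 2 as (coefficient of x, constant).
lhs, rhs = (4, -8), (2, 2)

def both_sides(op, lhs, rhs):
    """Apply the same operation to both sides of the equation."""
    return op(lhs), op(rhs)

lhs, rhs = both_sides(lambda s: (s[0] - 2, s[1]), lhs, rhs)       # subtract 2x
lhs, rhs = both_sides(lambda s: (s[0], s[1] + 8), lhs, rhs)       # add 8
lhs, rhs = both_sides(lambda s: (s[0] / 2, s[1] / 2), lhs, rhs)   # divide by 2

assert lhs == (1.0, 0.0) and rhs == (0.0, 5.0)   # i.e. x = 5
```

Each step keeps the equation true while making it "look simpler", which is exactly the puzzle being described.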


> Agree fully with Barton and OA here. Until recently I taught GCSE Maths re-take students aged 16 and over in a further education college. They were constantly tripping over really quite basic little skill issues and that prevented them from seeing how to tackle the longer and more complex problem solving questions.

I also agree fully. A little while back I did some support tutoring for A-level maths students. The number of students who turned up who mysteriously "had problems with longer questions"... I wish I'd known the example of calculating the perimeter of the rectangle with fractions. That would have really helped explain why the problem wasn't really the length of the question, it was the fact that the student had never properly learned the component skills separately.

Unfortunately, the problem of building impressive-looking edifices on shaky foundations is absolutely endemic in British high-school maths teaching. Thousands of students who never quite understood fractions are "learning" calculus through being taught recipes, and the easier exam questions are formulaic enough that they get through with Cs at least, without any mathematical understanding.

The A-level statistics modules, in particular, have very impressive _sounding_ syllabuses. Students learn T-tests, Chi-squared tests, all this sophisticated statistical machinery. If all these students really understood this stuff, Britain would have a vast army of highly trained statisticians. But nothing of the sort is true, of course: students are just learning a recipe for processing numbers. I can't imagine the carnage if a statistics exam asked the students to write an essay explaining the principle by which a T-test works.

Pardon my rant, this has been on my mind for a while.


Going back around the millennium or before, when I last taught A-level maths at college, we had them in over the summer before term started for a two-week intensive algebra and basics course.

Seemed to help.

The original author (Tim Gowers, a Fields medallist and professor of mathematics at Cambridge) has a totally hilarious blog post about being asked to coach a teenager doing A level maths...

https://gowers.wordpress.com/2012/11/20/what-maths-a-level-d...


Thanks for linking that, it's a great read. I really should read more of Gowers' posts.

The phrase "memory works far better when you learn networks of facts" was a happy find - I've never been able to express that idea so concisely.

I remember discovering they'd moved "differentiation from first principles" away to a further-maths module, as if it's a peripheral, difficult little oddity for the keen kids to hear about. It was the surest, saddest sign that the powers that be had given up on genuinely educating the average A-level maths student.


> memory works far better when you learn networks of facts

One challenge with teaching a more rough-quantitative, Fermi-question-ish introduction to the sciences is that it's more sensitive to the integration and correctness of understanding. With a Trivial-Pursuit, memorize-and-regurgitate style of "understanding", the damage from misconceptions and fragmented knowledge is local. Rough-quantitative reasoning, by contrast, benefits from being able to... slide around the knowledge space, and jagged misconceptions and fragmented knowledge seriously impede the sliding. I imagine memory is similar. Nice phrase.


> which is to stress the rule you can do the same thing to both sides of an equation (worrying about things like squaring both sides or multiplying by zero later).

It's a pity that they're being so ambiguous here, because explaining why and when "you can do the same thing" to both sides of an equation is not actually hard! There are three cases:

1. Apply a function that is both injective and defined everywhere on the relevant domain, and the equation is preserved exactly, because (a = b) is equivalent to (f(a) = f(b)) for such an f.

2. Apply a function that is defined everywhere but not injective, and you may introduce extraneous solutions but will not "miss" any, because (a = b) still implies (f(a) = f(b)) whenever f is always defined.

3. Apply an injective function whose domain is more limited than that of the original equation, and you will not introduce extraneous solutions but may "miss" some, because (f(a) = f(b)) implies (a = b) wherever f is defined, while (a = b) need not yield (f(a) = f(b)) at points where f is undefined.

Of course, if these functions are defined in terms of x, then you get to worry about whether the function is injective or well-defined for each value of x. For instance, multiplication by x is not injective if (x = 0), but it is otherwise.
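As a concrete sketch of the non-injective case (my own illustration, not from the thread): squaring both sides of sqrt(x) = x - 2 cannot lose solutions, but it can manufacture spurious ones, so each candidate has to be checked against the original equation.

```python
import math

# Squaring both sides of sqrt(x) = x - 2 (squaring is not injective) gives
# x = (x - 2)^2, i.e. x^2 - 5x + 4 = 0, whose roots are x = 1 and x = 4.
# Squaring cannot "miss" solutions, but it can introduce extraneous ones,
# so every candidate must be checked against the original equation.
candidates = [1, 4]
solutions = [x for x in candidates if math.isclose(math.sqrt(x), x - 2)]
print(solutions)  # [4] -- x = 1 is extraneous, since sqrt(1) = 1 != 1 - 2
```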


Yes, the first one looks like an important quote :). When I studied math, I usually had trouble understanding or memorizing a rule unless I had at least a rough idea of why it held. In that case I'd suspect that just remembering rules and then using them without understanding would often cause inconveniences or later errors, when a rule is remembered incorrectly. So maybe it's subjective: what should be taught first?

As philosopher Daniel Dennett puts it: competence comes before comprehension. It's totally possible to do something well, as animals do, without understanding what one is doing. But for comprehension one needs to have something in place to reason about and make connections.

And this isn't the full picture. Motivation comes before competence. One needs reasons to acquire skills: they have to address problems in one's mind if the mind is to fully engage. Which is why coercive education with its curricula, exams, etc, largely fails.


Personally I find understanding, or trying to understand, a very good mnemonic for remembering things.

Math is taught very, very wrong; I don't think most teachers know enough math (sorry for that dubious and bold claim).

As a computer guy who hates state machines and was always obsessed with math, I feel that just about everything about maths is taught wrong from the get-go.

Just the other day I learned about something an inductive function got me curious about: using a linear ordering of structures as a proof of termination. Turns out it's been studied in math for a long time: it's called a well-order. Fine... thing is, we're taught about linear recursion in HS, but we have no pragmatic notion of induction beyond ~~ P(n-1) => P(n) ~~. It's so cryptically compressed that I suspect no student besides aspies and other prodigies can have the slightest clue about it. Yet it's so important (and so obvious when shown).
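The well-order/termination connection is easy to show concretely. Here is a minimal sketch of my own (not from the comment): Euclid's gcd terminates precisely because a measure of the arguments strictly decreases in a well-ordered set.

```python
def gcd(a, b):
    # Terminates because b strictly decreases on each recursive call,
    # and the non-negative integers are well-ordered: there is no
    # infinite strictly decreasing sequence of them.
    if b == 0:
        return a
    return gcd(b, a % b)  # a % b < b, so the measure (b) goes down

print(gcd(48, 18))  # 6
```

The same argument generalizes: any recursion whose arguments strictly decrease under some well-order must bottom out, which is the "linear ordering of structures as proof of termination" idea.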


Induction is baked into the most “common” way of defining the naturals (the Peano axioms). IIRC, it’s the definition I got for the “proper naturals” when I was in HS (but my maths teacher was a mathematician, and was the one who got me interested in them).
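A toy sketch of that point (my own illustration, not from the comment): with a Peano-style encoding, every natural is either Zero or Succ(n), so defining a function by cases on that shape is literally the base case and inductive step of a proof by induction.

```python
from dataclasses import dataclass

class Nat:
    """A Peano natural: either Zero or Succ(pred)."""

@dataclass
class Zero(Nat):
    pass

@dataclass
class Succ(Nat):
    pred: Nat

def add(m: Nat, n: Nat) -> Nat:
    # Structural recursion on m: the two branches correspond exactly to
    # the base case and the inductive step P(m.pred) => P(m).
    if isinstance(m, Zero):
        return n
    return Succ(add(m.pred, n))

def to_int(n: Nat) -> int:
    return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)

two = Succ(Succ(Zero()))
three = Succ(Succ(Succ(Zero())))
print(to_int(add(two, three)))  # 5
```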

I skipped two years of school which also happened to be when a lot of the basics of Algebra were introduced. When I returned to school my scores were good enough to not repeat but the gaps didn't become obvious until it was too late to fix.

Reading that article I now realise it was that I lacked fluency. I didn't instinctively "know" how to do simultaneous equations because unlike my peers I hadn't spent two years doing them, so I had to remember how to solve them every single time.

All I can say now is thankfully there is the Khan Academy which rapidly improved my mathematical understanding when I needed it.


I consider myself pretty strong at math (in university right now) and I was stumped by the vector space question. I never considered, actually, what domain scalars should be drawn from.

Wikipedia says "the scalars can be taken from any field, including the rational, algebraic, real, and complex numbers, as well as finite fields."


Does Wikipedia actually say that? That's pretty misleading. For any vector space V defined over a field F, V is only closed under scalar multiplication using the scalars of F. You can't choose scalars from arbitrary fields for any given vector space. The scalars have to be chosen from the underlying field of the particular vector space.
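A minimal sketch of that closure point (my own illustration): vectors over GF(2) = {0, 1} are closed under scalars from GF(2), but "multiplying by 0.5" simply has no meaning there. The scalars must come from the vector space's own field.

```python
# Scalar multiplication for vectors over GF(2), the field {0, 1} with
# arithmetic mod 2. Only scalars belonging to GF(2) are admissible:
# scaling by an element of some other field is undefined, which is why
# "scalars can be taken from any field" is misleading for a fixed space.

def scale_gf2(scalar, vec):
    if scalar not in (0, 1):
        raise ValueError("scalar must belong to GF(2)")
    return [(scalar * x) % 2 for x in vec]

print(scale_gf2(1, [1, 0, 1]))  # [1, 0, 1]
print(scale_gf2(0, [1, 1, 0]))  # [0, 0, 0]
# scale_gf2(0.5, [1, 0, 1]) would raise ValueError: 0.5 is not in GF(2)
```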

Without context it isn't a well-posed question; the important context of that question is the teaching material itself, which would have introduced scalars as reals in this case (I assume), thus establishing what is meant by the "obvious" or "usual" object.

> One question I had in the back of my mind when reading the book was whether any of it applied to teaching at university level. I’m still not sure what I think about that. There is a reason to think not, because the focus of the book is very much on school-level teaching, and many of the challenges that arise do not have obvious analogues at university level. [...] I think at Cambridge almost everyone would get this question right (though I’d love to do the experiment). But Cambridge mathematics undergraduates have been selected specifically to study mathematics. Perhaps at a US university, before people have chosen their majors, [...] More generally, I feel that there are certain kinds of mistakes that are commonly made at school level that are much less common at university level simply because those who survive long enough to reach that stage have been trained not to make them.

Note the "I think [...] almost everyone would get this question right (though I’d love to do the experiment)". This is a familiar state. Widespread. Call it, teachers who have not yet had their "oh shit!" moment.

One of the blog comments points at Eric Mazur's (Harvard, physics) oft-repeated talk "Confessions of a Converted Lecturer". Who describes the first time he gave students a Force Concept Inventory. Worried about wasting their time with such easy questions. :) Unaware physics education research was about to become a focus of his career.

Many have been surprised by "Minds of Our Own" (1997) https://www.learner.org/resources/series26.html The short (3 min) introductory video shows MIT and Harvard students struggling to light a bulb with a battery and a wire. Full episodes are below (by clicking on "VoD" buttons).

Harvard Center for Astrophysics has both first-tier astronomy and astronomy education programs. When meeting a new CfA graduate student, I've a little drill, prompting for the color of the Sun, and then of sunlight. They almost always get the first wrong, and then get a conflict, often with a nice "oh, wait, that doesn't make sense does it" moment. The collision of two bits of non-integrated and flawed understanding. Of the few who get it right, halfish (but small N) learned it from CfA instruction on common misconceptions in astronomy education, rather than from their own astronomy education.

But perhaps mathematics is doing better at robust integrated understanding than are astronomy, physics, chemistry, biology and medical school. It seems possible at least.

It's not just people who have had, or not had, their "oh shit!" moment. Professions too. Medicine realizing that medical errors were a major cause of mortality. Realizing even cheap easy universally-approved interventions (aspirin for ER chest pain) weren't consistently being executed. Realizing other industries had decades of experience on how to pursue quality, to which medicine had been oblivious. When the New York Times babbles about "Truth" and "The Journalism You Deserve", I shake my head and think, there's a field that has no clue how badly it's doing, how much work on process quality it's unaware of; a field that has not yet had its "oh shit!" moment.


Soon someone from HN is going to come and remove the word ‘how’ from the headline. Why? Because clickbait!

Except, of course, the book really is about how he feels he should’ve taught math.



