New straightforward approach to teaching quantum mechanics (scottaaronson.com)
155 points by georgecmu on July 31, 2012 | 55 comments



I took 2 courses on Quantum Mechanics during my undergrad and I'm taking the Coursera course now as well for fun.

My biggest quibble with how QM is presented is that it is, paradoxically, never tied into physics. Quantum Physics is really Linear Algebra + Statistics, extended to complex numbers. In my experience it is just another math course and there is no Physics anywhere.

There's always talk of measuring states, applying Hadamard gates, and writing down their decomposition in the eigenbasis, but it's abstract and meaningless. What does a particle with some particular wave function look like? As it evolves in time it "smears" out according to the time evolution equations, but how fast is the process? Does it occur on the scale of nanoseconds? Seconds? Hours? How is it suspended, or operated on? What exactly is an example of a measurement? There are so many tangible questions that would help with intuition, but they are never addressed.

This would be my approach to teaching quantum mechanics. Connect it to a concrete physical system, explore it in detail going back and forth between experiment and math. And best of all, maybe even simulate the system somehow. Some time ago I made an effort to simulate what I learned and this was the result (as an example): http://www.youtube.com/watch?v=a88GlrUmI9Y&feature=plcp It's ugly and it's probably wrong, but it's tangible, and it was the best I could do, because finding this kind of Quantum Mechanics, as opposed to a lot of talk about measuring things, is very hard.
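
(For anyone who wants to try the same thing: below is a minimal split-step Fourier sketch in Python/numpy for evolving a 1D wave packet. The grid, the initial packet, and the harmonic potential are my own placeholder choices, not what the video uses.)

    import numpy as np

    # Units with hbar = m = 1. Grid and a moving Gaussian packet.
    N, L = 1024, 100.0
    x = np.linspace(-L/2, L/2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)       # angular wavenumbers
    psi = np.exp(-x**2 / 4 + 2j * x)
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L/N))

    V = 0.005 * x**2                               # placeholder harmonic potential
    dt = 0.05

    for _ in range(2000):
        # Split-step: half potential kick, full kinetic step, half kick
        psi *= np.exp(-0.5j * V * dt)
        psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))
        psi *= np.exp(-0.5j * V * dt)

    prob = np.abs(psi)**2                          # |psi|^2, what you'd plot each frame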


Cool demo. Interesting questions. There are Nobel prizes waiting for the answers to some of them.

Warning: the following is not precisely targeted at your post.

The core of some of your questions is the kind answerable only with mu. Essentially, you are asking for physical intuitions that relate the quantum world up to the world as we see it. But the thing is that there is very little in our macroscopic reality that relates to the quantum world. All analogies are broken. Here's the key thing to realize: quantum mechanics is hard not because it is complex =). Far from it. It's hard because we have no mental basis with which to represent its concepts. The opposite should hold too: a quantum intuition would find our world bizarre, very hard to understand and - unlike how we feel about QM - justifiably complex. But if one takes a multicultural-appreciation approach to how systems evolve, QM becomes a bit less offensive to one's sensibilities.

The only path to approximate intuition is to drill the math and think about the concepts. Simulations are another great option. They allow one to build a rope bridge from the math to something that feels a bit more concrete. Still not intuitive, but better than nothing. The idea is to get to the point where you can use the math as a map. You won't ever be able to feel as comfortable with it as with Galilean relativity, but you can form questions, think about reality, and use the map to guide your mind. I am the opposite of you - I think a lot of the physical incidentals of experiments are useless baggage for building that map.

What does a particle with some particular wave function look like?

At this point it is not useful to think about how things look. Focusing on what things "look" like would again just be a rough analogy, possibly misleading (similar in spirit to focusing too much on tangent lines for derivatives), and might create a false crutch by waylaying the brain from becoming more comfortable with the abstract surroundings. And then there is the question of: what is the wave function? Does it make sense to think of it as something physical? (I don't think so.)

Does it occur on scale of nanoseconds? Seconds? Hours?

While kinda opposite in direction to your questions, work on decoherence is answering some questions of timing. But nothing will shed light on entanglement, coherence and aspects of measurement better than quantum computers. Let's hope they get invented soon or, less preferably, proven not to be possible. Either way we would learn a lot.


There are advantages to the original formulations of QM, and they are precisely these. It's true that in Quantum Field Theory you don't see these Bell inequality ideas, and I've seen people working in Quantum Information theory who struggle to prove that multiplying a wavefunction by an arbitrary phase factor is an unobservable change. QFT has real current statistics and Lagrangian densities and Feynman diagrams, which give you a much more tangible feel for what physics you're describing.

In my experience it is just another math course and there is no Physics anywhere.

Heisenberg's equations of motion are a good place to start. The original way we stumbled upon quantum mechanics was due to Heisenberg, who noticed that a lot of the wavy stuff people wanted to explain could be accounted for if Hamilton's equations of motion df/dt = {f, H} + ∂f/∂t were generalized by treating x(t) and p(t) as matrices and insisting that they do not commute, leaving instead [x, p] = i ħ as a matrix version of an "uncertainty principle."

The corresponding quantum equation for an observable Â is dÂ/dt = (i/ħ) [Ĥ, Â], which allows you to start (most famously) with a harmonic oscillator Ĥ and derive the Hamilton equations dx/dt = p/m, dp/dt = -k x, precisely due to the failure of x and p to commute.
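
For concreteness, here is that derivation spelled out (my own sketch, taking Ĥ = p̂²/2m + (k/2)x̂² and using nothing but [x̂, p̂] = iħ):

    dx̂/dt = (i/ħ)[Ĥ, x̂] = (i/2mħ)(p̂[p̂, x̂] + [p̂, x̂]p̂) = p̂/m
    dp̂/dt = (i/ħ)[Ĥ, p̂] = (ik/2ħ)(x̂[x̂, p̂] + [x̂, p̂]x̂) = -k x̂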

So you get this direct connection between known physics equations and the quantum theory, and often the same thing which is responsible for driving the uncertainty relation also drives all of classical physics.

What does a particle with some particular wave function look like?

|Psi|^2 in the appropriate basis, I should say.

As it evolves in time it "smears" out according to the time evolution equations, but how fast is the process? Does it occur on scale of nanoseconds? Seconds? Hours?

In quantum mechanics there is a discrete energy level spacing ΔE, and the answer is that no observable can change much faster than ħ / ΔE. For the harmonic oscillator, for example, nothing can change much faster than 1/Ω by this criterion.
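
To put a rough number on it (my own back-of-the-envelope, not tied to any particular experiment): an atomic transition with level spacing ΔE ≈ 1 eV gives

    t ~ ħ/ΔE ≈ (1.05 × 10⁻³⁴ J·s) / (1.6 × 10⁻¹⁹ J) ≈ 7 × 10⁻¹⁶ s

so atomic-scale dynamics plays out on femtosecond scales, not seconds or hours.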

What exactly is an example of a measurement?

This can get a little touchy but a good example is a photon hitting a photomultiplier tube and generating a click -- especially if you start to play with the polarization of the photon to generate entanglement and so forth.

Edit: With all that said, I highly recommend watching Feynman's New Zealand lectures, which explain, among other things, why CDs show rainbow patterns (recorded, sadly, in a time before people had CDs, so Feynman kind of just says "I wish I could have brought you an example."): http://vega.org.uk/video/subseries/8


You're right. And it's well explained, despite the subtle jabs at Quantum Information theory =P. There are definite advantages to the physics-focused approach. As a physicist (which I am not), in your practice of QM you get familiar-ish concepts like spin, momentum, oscillators and Lagrangians.

But despite the wild successes of QFT, when it comes to explaining things it is extremely hard to do without either using broken analogies or talking about Feynman diagrams, Lagrangians and Hamiltonians. To realistically depict what is going on, you are replacing one branch of math with another, more complex branch, with the main advantage being that someone who has learned the math of classical mechanics can have a slightly stronger physical intuition.

The advantage of QIT is that because it is relatively simple already, a slightly simplified form is still easier to understand and more representative of the real thing than a highly simplified explanation of QFT. The other advantage of this is that the simplicity allows the raw structure to be exposed and tackled much more readily.

If I had to bet, I would say answers to questions like what the wave function exactly is, how the macroscopic universe arises from the cloudy quantum picture, what is really going on in measurement, etc. will come from QIT. I get the impression that many physicists don't have much respect for foundations, but these questions are still worth answering and would have a practical effect on our world right away.

QIT is young yet, and there are advantages to being able to take multiple viewpoints of the same thing. The correct viewpoint can vastly simplify a problem. To quote Egan: "Everything becomes clearer, once you express it in the proper language."


Oh, I don't mean to demean the field of Quantum Information. In particular, I find it really useful to run through the double-slit experiment by labelling one slit as |0> and one slit as |1>, so that going through both slits comes out as sqrt(1/2)[ |0> + |1> ] = |+>, which has certain "off-diagonal terms" in its "density matrix."

If all that formalism is built up, you can have fun working through when these off-diagonal terms exist and when they do not, especially in cases where you take a new qubit as |0> and then entangle it into the system with a CNOT gate, giving sqrt(1/2)[ |00> + |11> ].
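
As a toy illustration of those off-diagonal terms (my own numpy sketch, not from the essay linked below): build |+>, look at its density matrix, then entangle a fresh |0> with a CNOT and trace it back out.

    import numpy as np

    ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    plus = (ket0 + ket1) / np.sqrt(2)              # sqrt(1/2)[ |0> + |1> ]

    print(np.outer(plus, plus.conj()))             # off-diagonals = 1/2: coherence

    # CNOT on |+>|0> gives sqrt(1/2)[ |00> + |11> ]
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    psi = CNOT @ np.kron(plus, ket0)

    # Reduced density matrix of the first qubit: trace out the ancilla
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    print(np.trace(rho, axis1=1, axis2=3))         # off-diagonals = 0: decoherence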

If folks are confused by the above, I have begun trying to explain it here: https://github.com/drostie/essay-seeds/blob/master/physics/d...

It's really kind of rough (in particular I'd like to use proper HTML subscripts rather than Unicode subscripts eventually) but it should be intelligible to a bright student who wants to know the basic ideas of QM.


I suppose that most of the time you are drawing the probability density (|phi|^2), but in a few cases you show Re(phi) and Im(phi).

It looks a little too wavy. For example, if the initial state of a free particle is a (real) Gaussian distribution, after a time it should still be a Gaussian distribution, just wider (with the wave function phi modulated by a complex phase). I think that this is due to a problem in the numerical method.
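
For reference, the textbook result (a standard formula, not something from the video): a free Gaussian packet of initial width σ₀ stays Gaussian, with

    σ(t) = σ₀ · sqrt(1 + (ħt / 2mσ₀²)²)

so |phi|² should spread smoothly without picking up extra ripples.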

In spite of this, the general behavior of the simulation looks right. For example, in the harmonic oscillator, if the initial configuration is a well-localized state, after 1/2 period the configuration should be a well-localized distribution on the other side of the parabola, and this is exactly what the simulation shows.


yeah, but this is more general than qm - it's amusing to ask (other) physicists how fast electrons move in typical wires (a purely classical number, but most won't have a clue).


It's not exactly clear what you mean by "velocity" in that question. Drift velocity? Fermi velocity? (and others) These two differ by multiple orders of magnitude, so it's easy to wrongly conclude that another physicist doesn't have a clue.

Intuitive explanation: drift velocity is the overall streaming velocity of the moving electrons; Fermi velocity is the velocity of a single electron. The reason these are different is that electrons move back and forth heavily, so the Fermi velocity is high. But under normal voltages they move just slightly more in one direction than the other on average, and that difference is the drift velocity. Of course this is highly simplified, and the real story is quantum mechanical.


maybe i'm being over-sensitive here, but it really feels like you're making excuses and trying to hide behind details. surely it's obvious i mean drift velocity from the context (how can you call fermi velocity classical?).

i don't want to drop names, or pull rank, or argue from authority, but this comment is based on memories of a happy afternoon chatting with other students. none of them said "oh, i don't understand, do you mean drift or fermi or one of the many other velocities i can think of?" instead, to a man or woman they said some random large number, then stared, then did the maths, and then burst out laughing.

they were smart people. and i admit i was one of the dumbest (and i didn't come up with the question - i can't remember who did). and none of them felt the need to make excuses or smokescreens about learning something new.


I didn't read the "(a purely classical number, but most won't have a clue)" as being part of the question, but rather as commentary about people answering the question. Also, "how fast do electrons move" does really suggest the velocity of an electron and not the net average velocity... anyway, calculating drift velocity is easy. From the current and the charge of an electron, calculate how many electrons are passing through the wire per second. Then calculate the number of electrons in the wire from the material properties. The answer is the ratio between these two, multiplied by the length of the wire. Since this contains many numbers that most people (even physicists) don't know off the top of their head (density and atomic weight of copper, Avogadro's number, electron charge), it is a bit hard to guesstimate within an order of magnitude without looking things up, but it comes out to a tiny fraction of a millimeter per second.
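
To make that concrete, here's the arithmetic for one illustrative case (the recipe above boils down to v = I / (n·q·A); the specific numbers - 1 A through a 1.5 mm² copper wire - are my own choices):

    # Drift velocity v = I / (n * q * A)
    I = 1.0                              # current, A
    A = 1.5e-6                           # cross-section, m^2
    q = 1.602e-19                        # electron charge, C
    # one free electron per Cu atom: density / molar mass * Avogadro
    n = 8.96e3 / 63.5e-3 * 6.022e23      # ~8.5e28 electrons per m^3
    print(I / (n * q * A))               # ~5e-5 m/s, i.e. roughly 18 cm per hour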


This sounds a lot like what I started reading at LessWrong: http://lesswrong.com/lw/r5/the_quantum_physics_sequence/

One line that particularly stuck with me is: "Dragging a modern-day student through all this may be a historically realistic approach to the subject matter, but it also ensures the historically realistic outcome of total bewilderment. Talking to aspiring young physicists about 'wave/particle duality' is like starting chemistry students on the Four Elements."


In physics, sometimes an approximate theory/model is useful even though it is incorrect.

The problem is that the Schrödinger / Heisenberg quantum states of a particle are a lie. If you have an electron, it doesn't follow the Schrödinger / Heisenberg equation. It can emit a photon and reabsorb it a little time later. This is not part of that equation, and it has a very easy-to-measure effect, the Lamb shift: http://en.wikipedia.org/wiki/Lamb_shift#Lamb_shift_in_the_hy... . And the electron can do even crazier things, but they are luckily more difficult to measure. This is the reason to use Quantum Field Theory.

* If you are only going to buy a lens to put in front of your photodetector, then probably the wave/particle duality is a good enough approximation (even though it doesn't make sense).

* If you are doing quantum chemistry, then use the Schrödinger equation (even though it doesn't work for strong electric fields and is not compatible with special relativity).

* If you work near a big particle accelerator, you should try to at least use the Standard Model (even though renormalization doesn't make sense).

* And I hope that you never have to use string theory to explain an experiment.


It's nothing like the quantum physics sequence. Yudkowsky starts out making up numbers from nowhere and asserting things for purely philosophical reasons, whereas Aaronson actually has proofs (albeit left to the reader).


Sure, the numbers in "Quantum Explanations" are made up, but (1) the experiments are real, and (2) everything besides the numbers is accurate (as far as I know). Plus, the goal of the sequence was never to actually explain quantum physics. It was to explain why a realist perspective (the wave function is all there is, and the math says it doesn't collapse, so it really doesn't) is by far the most probably correct one, despite the fact that it makes no new predictions compared to previous interpretations. That, plus answering some philosophical questions with physics.

Aaronson's explanation definitely is a step in the right direction. I still have a quibble, however: he keeps mentioning "probability" as an analogy to amplitude. That confuses his explanation, in my opinion. I'd rather have a straight explanation of QM math first, then an explanation of its similarities with probability theory. And the mixed-state paragraph seems to conflate subjective probability and actual distribution of amplitude. Ick.


"Today we look at quantum physics, which inherently takes place inside a configuration space, and cannot be taken out."

Why couldn't I read this 5 years ago?


When Yudkowsky says configuration space in that sequence, he doesn't mean a separable Hilbert space, because he doesn't believe (for philosophical reasons) that that's the correct setting for QM.


But… Of course it's not the correct setting for QM. Before even talking about Turing computability and infinite set atheism (which rule out a continuous, infinite configuration space), configuration space is folded on itself around the identity axis.

Unless you think (a,b) is not the same configuration as (b,a), even though their amplitudes would add up before we have access to their square at the experimental level? Evidence towards "it's the same configuration" looks quite overwhelming.

Or, could a "permutable" space, where (here with 2 dimensions) (x,y)=(y,x) for all x and y, be a Hilbert space as well?

Overall, I'm not sure what you're talking about. Can you be more explicit, or provide some links?


I don't know how you expect me to respond to this.

> Before even talking about Turing computability and infinite set atheism (which rule out a continuous, infinite configuration space), configuration space is folded on itself around the identity axis.

Infinite set atheism is basically Yudkowsky's reason for denying Hilbert space, so I don't know why we should talk before it.

Read the comments on "The Quantum Arena" -- Yudkowsky didn't even know whether the thing he was railing against as an uncountably infinite set was indeed infinite! (Presumably he has updated by now.)

> Unless you think (a,b) is not the same configuration as (b,a)

Well, it depends on the situation. I assume you're talking about the configuration space of the position of two indistinguishable particles, in which case of course I think they're the same configuration (that's what 'indistinguishable' means) and you're just beating down a straw man. If wavefunctions in general are members of a Hilbert space, then so are symmetric wavefunctions.

> Overall, I'm not sure what you're talking about. Can you be more explicit, or provide some links?

http://galileo.phys.virginia.edu/classes/252/symmetry/Symmet...

All of the wavefunctions for two particles described within are elements of L^2(R^2); the subset of physically realizable wavefunctions forms a subspace which is also a Hilbert space (answering your question about "permutable" spaces).

TL;DR: Don't try to learn QM from EY.


Thanks for the link.

> Don't try to learn QM from EY.

Well… I agree. But then again, I don't think he really was trying to teach it. The way I see it, he just lifted the confusions you would have if you started to really learn QM.


> Two other perfect examples of "obvious-in-retrospect" theories are evolution and special relativity. Admittedly, I don't know if the ancient Greeks, sitting around in their togas, could have figured out that these theories were true. But certainly -- certainly! -- they could've figured out that they were possibly true: that they're powerful principles that would've at least been on God's whiteboard when She was brainstorming the world.

Actually, people studying the Pre-Socratics like to point out that Anaxagoras (IIRC) theorized something eerily close to evolution: that at the beginning there were all sorts of random creatures, and only the ones which did well survived and created more creatures like them. I don't know why no one followed up on it (in contrast to something like heliocentrism, where we know why the Greeks abandoned it, for what were excellent and unobjectionable reasons at the time).


I find the references to "God" (e.g., "...why did God choose to do it that way and not some other way?") to be both distracting and unhelpful when working through the text itself. It signals to my brain (possibly incorrectly) that either (1) the author is deviously trying to inject an unwarranted religious idea into an otherwise potentially helpful explanation, or (2) the author has sadly mis-chosen a loaded term that adds nothing helpful to the explanation and instead detracts from working through it, because it creates the nagging question in my head of, "Is he really suggesting that God (whatever that may be in the reader's mind) chose to do this?"

A better option (for just the chosen example) would be: "... why does the Universe do it that way and not some other way?".


This is exceedingly common language in the mathematical community, particularly for the generation of Scott Aaronson and the one before him.


Oh, okay. Thank you so much for pointing that out to me. I somewhat assumed as much, and I honestly wasn't intending anything remotely ad hominem in my critique. Since it's not common in anything I typically read, it just stuck out strangely (I am only just digging into understanding QM/QP).

Again, sincere thanks.


Are they referring to a God in the religious sense, or using it as a shorthand for "nature" or "the universe"?


Some of them are no doubt serious; I seem to remember a study showing that mathematicians are more likely to be theist than atheist.

In my experience, "Why did God pick X?" and the like is shorthand for, "Is there a classification/uniqueness theorem that says X is the only possible solution to the problem?", which is how it is used here.


Again, thanks for drawing out the shorthand. I personally prefer reading the "is there a classification/uniqueness theorem that says..." over "why did God choose to...". It requires zero cycles to process the intent of the first version, while the second makes me stop each time and wonder which one the author means.


I actually find it very illustrative. It is a way of saying that something is just what it is, and we can't really say why. It is both succinct and rooted in culture, and thus rather easily understandable. Using such a metaphor is similar to writing poetry: you can compress a pack of thoughts and emotions into just a few words. And you can replace "God" with whatever you want if you're that biased.


This is actually quite a projection on your end. Saying "God" != "something is just what it is, and we can't really say why." It is typically, from an investigatory and cultural-historical standpoint, a curiosity-stopper, not an explanation (or even a signifier of an explanation). It is equivalent to stating "magic" (ignoring for just this moment the generational and historical bit offered as explanation for the term's usage).

Including references to "God" in an attempted explanation of quantum mechanics is not "easily understandable". Moreover, its rootedness in culture brings with it myriad histories that affect the way a reader understands what a writer intends. Explaining quantum mechanics by anthropomorphising a God-construct who "chose" to make something one way versus some other way is distracting to readers who do not inject a God-construct into their own explanations, specifically to avoid making understanding more difficult for the reader.

I can accept that this is a practice from the author's field and generation and still find it distracting and unhelpful without that equating to me being "that much biased." God does not elicit a mentally neutral concept that compresses "a pack of thoughts and emotions in just a few words" when discussing quantum mechanics. It would still be better for the author to not use "God", insofar as it eliminates mysterious language from his explanation, thus reducing the overhead of his reader needing to substitute a non-loaded term (especially when dealing with the sciences, where injecting the term "God" has a history rooted in Western culture of not being too helpful and enlightening).

I can accept it being a practice among the community. That, however, does not imply it is an easily understood, necessary, or sufficient shorthand.


I'm sold, although I'd still start by sitting them down with a copy of Feynman's QED. When doing math about a thing, it helps tremendously if you already have some sort of intuition about what the math is describing.

Also, the historical stuff is fun and interesting. Physics students should probably be taught this stuff regardless, but maybe not as part of a class on theory.


On the other hand, Feynman's most important lesson is thoroughly modern: the surest way to understand something "intuitively" is to create it.


Cool idea. Screw learning QM - let's rediscover it on our own!


> the surest way to understand something "intuitively" is to create it.

And there's no reason the best way to create something has to be the way it was created the first time.


I find historical narrative to be an effective way to learn, but just reading names, dates, and important events is boring. You have to pretend like you have a time machine.

An effective historical lesson takes you back in time and puts you right at the scene of the development, almost allowing you to have a conversation with the characters and follow along with their thought process.

That being said, I appreciate this straightforward approach just as much, and find that having a good understanding of the core concepts first makes adjusting the settings on the "time machine" much smoother.


IMO, the simple approach is dangerous. Saying "here is the theory, have at it" is fine if you want to teach someone how to design transistors, but the point is that QM is not correct, just the best approximation yet, and in no way the 'holy word' on how the universe operates. The single most important thing to teach science students is to discard existing theories when reality does not support them, and to require any new theory to also be supported by the body of existing experimental evidence.


> the simple approach is dangerous

Definitely. However, it also happens to be the approach used since kindergarten up to high school, and in most universities. Whatever the field, as far as I know, students learn real science only when they start their PhD. Sometimes even later.

I would love to see a real science course, where students come up with theories and discard them, again and again, until they come up with something probably correct (or not). That would teach them to accept being wrong (I hope). Today, it's hard to do for anything but open questions, which tend to be pretty difficult.


I recently did one of my Wikipedia curiosity dives along a similar route: starting with cathode ray tubes in the 1800s, through the discovery of electrons and photons, and then into the discovery of quantum effects, trying to understand what questions were raised by each experiment and how they led to the next set of experiments and theories. It was very enlightening, though Wikipedia's dry style and inconsistent depth on each subject make it hard to follow the narrative sometimes.


Yes, someone really needs to do a fun Bill and Ted style video series (or even better - a full blown video game franchise) on a wide variety of topics. I would pay good money for entertaining, interactive, and highly educational trips such as: Life at Bell Labs with Dennis Ritchie, Finding Mario with Shigeru Miyamoto, Building and Selling Your First Computer with Steve Wozniak and Steve Jobs.

So many great stories, and so much untapped storytelling potential... Wikipedia, History Channel, and random documentaries on Netflix aren't good enough!


> allowing you to have a conversation with the characters and follow along with their thought process.

Which were muddled, confused, and, pretty much by definition, partially incorrect.

Historical development necessarily involves recapitulating the confusions of the people who finally got confused enough to throw over their original ideas.

Even if you trim the history down to only the path that actually lead to modern theory, you still have a lot of confusion to get through before you can actually introduce the main idea of the topic in all its glory. And that's in addition to the confusion students have naturally.


Very true. Though there is still some value in the journey through history, aside from learning the main idea. It can serve as inspiration for your own quest, and also challenge you to question modern theory as those who came before did.

Being equipped with some fundamentals first makes exploring the history less confusing. I think Charles Petzold's book "Code" is a great example of blending historical narrative with core concepts - I highly recommend it.


Perhaps this is a good approach to Quantum Mechanics for a course about quantum computers. (Maybe it has too much emphasis on the probability part.) Quantum Mechanics has a small set of clear rules that are easy to follow and get the correct result. So if your plan is to have some little black quantum boxes and combine them to get quantum computations, perhaps it is a good idea to jump directly to the rules and forget about the history part.

The problem arises when you really want to understand how the little black quantum boxes work, or how this scales to bigger systems and to the real world. The correspondence principle says that if the system is big enough the quantum effects should almost disappear, and the system should be correctly approximated by Newtonian physics. (There are some macroscopic effects that depend on the quantum nature of the world; the clearest is superfluidity.)

The biggest problem is the correspondence: how to make the connection between the real world that you see and quantum physics. The pseudo-historic path gives a more complex model at each step, but the jumps are smaller. (For example, Schrödinger's model is usually explained before Heisenberg's model because it is more intuitive, even though Heisenberg's work came a little before Schrödinger's.) In this approach, there is no reference to the method that you should use to transform a classical model into a quantum model. The main examples are the harmonic oscillator and the hydrogen-like atom, which are used as the base to explain the properties of matter.


By completely random occurrence, I was watching this video yesterday:

http://www.youtube.com/watch?v=_Cv5ldhxpLA&feature=g-his...

It's by the fellow that does Minute Physics on YouTube. Towards the end of his talk, he specifically talks about this topic.

He makes a comparison between mathematics and physics: in mathematics you are taught the math without a ton of history. No scaffolding. In physics, you are taught the entire history of physics, i.e. the scaffolding, from the ground up.

He also advocates directly teaching modern theory and leaving most of the history out. Make learning physics more like learning math.


As a PhD student in mathematics, I have to say that history is valuable in mathematics because it explains why people are interested in certain problems.

When I was a master level student, I always got annoyed by advanced courses in algebra (things like homological algebra) where it was often unclear why one should be interested in the problems stated there. Prompting the profs to give some short historical overview can be very enlightening.

I do believe you can leave the vast majority of the history out. Advances in notation etc. were made for a reason. But a little bit of history can be quite important for context.


History may be valuable in mathematics, but it is demonstrably skipped. Few know much about it.

When I was in grad school in math, I made the interesting discovery that if p and q are polynomials over a commutative ring, any polynomial that is symmetric in the roots of p and q is actually a polynomial, over the original ring, in the coefficients of p and q. (The construction works whether or not the ring can be embedded in a field where said roots actually exist.) Using this observation it is trivial, for instance, to write down in fully factored form a polynomial that has sqrt(2) + cube_root(3) as a root.
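
(If you want to see the punchline of that kind of computation without grinding it out by hand, sympy will produce such a polynomial; this snippet is just my illustration, not the construction itself:)

    from sympy import minimal_polynomial, sqrt, cbrt, symbols

    x = symbols('x')
    # A degree-6 integer polynomial with sqrt(2) + cbrt(3) as a root
    print(minimal_polynomial(sqrt(2) + cbrt(3), x))
    # -> x**6 - 6*x**4 - 6*x**3 + 12*x**2 - 36*x + 1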

This construction was news to various mathematicians that I talked to, including a combinatorics prof who studied symmetric polynomials and a number theorist who worked on stuff related to the algebraic integers. Then finally I talked to a very old mathematician with an interest in history. He told me that I had rediscovered an old way to do things. At his encouragement I went to the library, and picked up an algebra book from the 1800s. My construction was taught, and there was a whole chapter full of problems where students were expected to use it to come up with polynomials with specific roots.

Furthermore, as I looked into it, both of the professors that I mentioned before worked in areas whose history dated back to the observation that I mentioned. It was used in the original proof that the algebraic integers form a ring, and that construction was the original reason that people were interested in symmetric polynomials.

For another demonstration of how little of their own history mathematicians know, ask anyone why the notation for the second derivative is d^2y/dx^2. Then ask them where the f' notation comes from. Then ask them what Cauchy was trying to do that led to Cauchy sequences. Most will draw a blank on all three.

Don't read on until you're satisfied that you don't know the answers.

In the original infinitesimal notation, d was an operator. It could be defined by d(y) = y(x + dx) - y(x). And you'd calculate a slope as dy/dx (dropping any infinitesimal bits). Well, when you work out d(dy/dx)/dx it turns out that you get d(d(y))/(dx * dx), which is more compactly written d^2y/dx^2.

The f' notation was introduced by Lagrange in an attempt to get rid of infinitesimals by defining differentiation as a formal algebraic operation on polynomials and power series. This fell apart when Fourier demonstrated that apparently well-behaved trigonometric series could be used to construct pathological things like step functions.

Cauchy came up with Cauchy sequences while attempting to define infinitesimals rigorously. His approach fell apart on the seemingly trivial example of how you rigorously prove the chain rule when the derivative of the inner function is 0. (He was trying to avoid 0/0, but in that special case you get 0/0 all over the place.)


> I have to say that history is valuable in mathematics because it explains why people are interested in certain problems.

Motivation is essential, but there are motivations other than the ones that historically lead to the creation of the field.

In number theory, one of my favorite examples, the motivations that lead people to be interested in it now come from modern cryptography, which flatly didn't exist when the field was founded but provides more interesting and relevant examples than the obsessions of century-old mathematicians.


That's not true for a lot of subjects.

For example, every math program I've looked at teaches integration of complex functions the way it was arrived at historically, which involves increasingly complex shapes in the plane that eventually lead to the general case - some programs spend half a semester going through this construction.

However, if you're familiar with vector calculus, it is an immediate result of Green's theorem in the plane - it doesn't even take 5 minutes of work. Engineering programs often go that route.
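
(For the curious, that five-minute argument, sketched: write f = u + iv and dz = dx + i·dy, then

    ∮ f dz = ∮ (u dx - v dy) + i ∮ (v dx + u dy)
           = ∬ (-∂v/∂x - ∂u/∂y) dA + i ∬ (∂u/∂x - ∂v/∂y) dA     [Green's theorem]

and both integrands vanish by the Cauchy-Riemann equations ∂u/∂x = ∂v/∂y and ∂u/∂y = -∂v/∂x, giving Cauchy's integral theorem.)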

And if you structured the material in the "right" way, it would be even simpler - Green's theorem in the plane is itself a special case of Stokes' theorem. However, "simpler" is relative - Stokes' theorem is much more abstract. So it's "simpler" in the sense that you have to prove much, much less; it's more complex in the sense that you have to grok all those higher-level abstractions without a mental picture.

Sort of like software engineering - LISP is a better fundamental system. But it is too abstract for most people in the field. And so are APL/J/K, although in a different way.


No, no, no, no. Instinctively I recoil at that idea. I don't quite know how to explain why, but I'll give it a go...

I'm good with computers because I was there in the early days, when it was pretty simple. Everything since has built upon those simple blocks. So I can always work it back, as it were. It kinda means I can simplify current complexities down to simple fundamentals. So it leaves me with an ability to be presented with something new or different and very quickly understand it.

Does that make any sense? I assume many people here are exactly like that, or recognise what I'm trying to say.

Pathetic, I know, but if anyone can put it better, please do!!!!

Anyway, all I know is that I am so much better off knowing how it all came together than people who don't. I don't understand how anyone can claim to understand any subject without understanding its journey, as it were.

I make a distinction here between knowing how to use something and understanding it. If all one wants to do is "use", then understanding isn't absolutely necessary. You know, plenty of people drive cars without understanding anything about how they work.


No one is saying that knowledge of the history isn't useful. But it's not the most efficient way to learn the concepts. For example, it would be crazy to start learning basic programming by first studying the physics behind computers.

Learn the history behind your subject, but separate it from the pedagogy. Otherwise you'll have a much harder time learning knowledge that you can actually apply.


> For example, it would be crazy to start learning basic programming by first studying the physics behind computers.

Technically, a truly historical development in terms of teaching programming would involve plugboards and specialized hardware design freshman year, punch cards sophomore year, teletypes junior year, and then 'intelligent' terminals senior year. We simply wouldn't have time to teach anything similar to modern networking, which only really came about in the 1980s, let alone GUI design or Web programming.


> I was there in the early days when it was pretty simple.

I think you have an interesting idea of 'simple': A lot of software becomes orders of magnitude simpler to design and reason about once you have enough RAM to organize it in the obvious way.

For example, for many parsing tasks, a recursive-descent parser is the obvious way, but you can only design your software that way once you have a language that can directly express recursion and a computer that can grow a big enough call stack (on the stack or in the heap) to allow the parser to juggle nontrivial sentences. Otherwise, you're left with the laborious, non-obvious, but ultimately pointless task of turning your design into a program your tools can implement.

> So, it leaves me with an ability to be presented something new, or different and very quickly I can understand it.

Honestly, I think that says a lot more about your intelligence than your history. That's actually a pretty fair definition of intelligence, in fact.

How about this: Teaching networking now is teaching TCP/IP and everything that TCP/IP rests upon, such as Ethernet and Wi-Fi. Theory naturally flows from practice, and that is the practice.

The OSI Model? Gone. Out. Forget it. Nobody actually implements all seven layers; at most, we have four.

NCP? What? If you know what NCP stands for, congratulations, you know something entirely useless. "Jeopardy!" would love you.

So why teach the stuff we know is useless? It actively discourages students. It makes them think they're just wasting their time, primarily because they are.

The history of computer networking is a valid topic. It should be taught in its own course, it deserves its own course. It does not deserve to be shoved into a course on how networking actually works.


In chemistry class (where we learned particle physics in high school) I really hated having to unlearn all the crap we were quizzed on in the last section to learn some new crap that I'd be tested on and then expected NOT to use later because it was obsolete! /rant

Anyway, if you really want to teach the history of a subject like math, you should probably do it after teaching the modern understanding of the subject. In math class our teacher explained some of the controversy over the invention of calculus, and the origin of the different notations. It gave us some appreciation for the unintuitive and tricky nature of the subject, which looks very simple now.


> the controversy over the invention of calculus

... which led to the less intuitive epsilon-delta proof framework replacing the much more intuitive infinitesimal framework calculus was originally founded on, until the 1960s, when infinitesimals were reformulated as part of nonstandard analysis.

(Unless the controversy you're talking about is the utterly uninteresting one about Leibniz vs Newton.)

Another example where a historical development would lead students through a completely pointless diversion (epsilon-delta proofs) simply because Robinson was born in 1918 instead of 1618.


I was trying to play devil's advocate but I honestly couldn't think of a good reason to teach Calculus using epsilon-delta vs. Nonstandard Analysis. The way Calculus is taught today is already unrigorous until you get to Analysis, so there's no strong reason to teach students using epsilon-delta first.


Right, because students struggling with "for all P there exists Q" won't have any problem with higher-order logic?

It's always seemed to me that a more sensible road to both simpler and more rigorous calculus courses would somehow involve reducing the scope of functions under consideration, since most of the exercises involve analytic functions anyway. Then, when students are ready for analysis, it can be more about "how to reduce nasty cases to problems you know how to solve, and how to recognize the truly pathological specimens where you can't" rather than "everything you thought you knew is wrong."


You don't need higher-order logic to present a nonstandard analysis approach at the same level of rigour as a standard limits-based calculus course. Keisler wrote a great infinitesimal-based Calc book (http://www.math.wisc.edu/~keisler/calc.html).
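
For flavor, here is the kind of computation that book runs on (my own illustration, with ε an infinitesimal and st(·) the standard-part operation):

    f(x) = x²
    f'(x) = st( ((x + ε)² - x²) / ε ) = st( 2x + ε ) = 2x

No limits and no alternating quantifiers; the infinitesimal just drops out at the end.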

In terms of pedagogy, I've found that there's a huge leap that students make between Math focused on computations and anything involving proofs. The reason it's difficult to make Calculus rigorous is not because you have to address a ton of cases - it's because understanding proofs is really hard. This is why students who take Algebra and then Analysis, or vice versa, tend to do much better in their second course - because they're already used to proofs. So I don't really think it's possible to make a first-year Calculus course more rigorous by sticking to analytic functions, because you still have to get over the proof barrier.


I did mean Leibniz vs. Newton, but I remember infinitesimals being glossed over and wishing we would get into them more.



