Hacker News
Why is Maxwell's theory so hard to understand? (2007) [pdf] (cam.ac.uk)
222 points by fanf2 on Jan 6, 2019 | 125 comments

It is unfortunate that Dyson neglects the role of Oliver Heaviside, again. This is in a long tradition of English neglect, ultimately traceable to Heaviside's status as a commoner. Heaviside invented the mathematical tools we still use to understand and teach Maxwell, and most of the important consequences of the theory, but Pupin, Hertz, Marconi, and deForest used his methods and took (edit: got) the credit.

Today Heaviside's method is taught as Laplace transforms, with Heaviside's name scrubbed off. We only hear of him in an alternative name for the step function (the integral of the Dirac impulse function), and in the "Heaviside layer", the ionospheric layer that makes transcontinental radio possible; we would have waited decades longer for it without him.

An excellent reference for the importance of Heaviside in the ultimate success of application of Maxwell's theory is Paul J. Nahin, "Oliver Heaviside: The Life, Work, and Times of an Electrical Genius of the Victorian Age", https://www.amazon.com/Oliver-Heaviside-Electrical-Genius-Vi...

> This is in a long tradition of English neglect, ultimately traceable to Heaviside's status as a commoner

This is unfair on the English. Glancing at their Wikipedia entries, Heaviside doesn't seem any more common than [Dirac][1] or [Faraday][2]. If I had to guess at a sociological reason why Heaviside got ignored, it's that he got caught in the transition when people started learning engineering through universities rather than apprenticeships.

[1]: https://en.wikipedia.org/wiki/Paul_Dirac#Early_years [2]: https://en.wikipedia.org/wiki/Michael_Faraday#Early_life

Heaviside's uncle Charles Wheatstone was a tradie who never went to university, but got knighted for services to electrical engineering. Faraday was another tradie who joined the Royal Society eight years before he got an (honorary) university degree. In a later age, Dirac became a world-famous scientist, but would probably have gone nowhere if he hadn't got a scholarship to study electrical engineering at a university.

We can't wag our finger at Victorian English society for the sin of credentialism when our own society does it much more vigorously. It's especially true of our own profession, as programming is much more a craft than a thing you can learn at university.


I'll second that book recommendation.

It's incredible that you can ask a graduating class of EEs if they know of Heaviside and only get some mumbling about a step function, given that Heaviside developed the majority of the fundamental ideas still used in EE: complex impedance, vector calculus, Maxwell's equations, the telegrapher's equations, precursors of s-domain methods, inductive loading, many of the names for AC circuit concepts.

I think Hertz acknowledged Heaviside's contributions, though, and didn't intentionally take credit for his work. I could be confused about that.

It was deForest, particularly, who pretended to originality. I should have written "got".

Exactly. One of the main reasons why Maxwell is so hard to understand is that everything is expressed using quaternions, unlike Heaviside, who expressed the equations in the vector notation we see them in today. In reality 'Maxwell's' equations are in fact Heaviside's.

Arguably the reason that generations of STEM students have been horribly confused about 3-dimensional vectors and rotations (including electric/magnetic fields), etc. is that they were reframed in the confused and non-generalizable Gibbs/Heaviside language, instead of in Grassmann/Clifford’s formalism in which vectors and bivectors can be properly described as separate types of objects.

It can be so much nicer. http://geocalc.clas.asu.edu/pdf/OerstedMedalLecture.pdf

Geometric algebra obscures the point that what really matters is the algebraic structure. There are many ways to construct things that behave the same way (subgroups of matrices, vectors and the cross product, and so on), and while it is nice to invent one construction that gives you everything, you risk losing sight of the fact that each construction is arbitrary and what really matters is its algebraic structure. That's why it's good to expose students to many partial, fragmented devices, so that they will realize the deeper point that underlies them all. With spin matrices, it is obvious that they are arbitrary manifestations of a group, but if a student were to spend their entire education manipulating blades they might start getting the idea that in some sense the universe was "made out of them."
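For concreteness, here is one such alternative construction: the quaternion units realized as 2x2 complex matrices. The particular matrices below are one standard choice among many, which is exactly the point; a sketch, not taken from any of the linked material:

```python
# Quaternion units I, J, K built as 2x2 complex matrices. They satisfy the
# same relations I^2 = J^2 = K^2 = IJK = -identity that other constructions
# (cross products, blades, spin matrices) also encode.
def mmul(a, b):
    # Plain 2x2 matrix multiplication over nested lists.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1j, 0], [0, -1j]]
J = [[0, 1], [-1, 0]]
K = [[0, 1j], [1j, 0]]
neg_id = [[-1, 0], [0, -1]]

assert mmul(I, I) == neg_id and mmul(J, J) == neg_id and mmul(K, K) == neg_id
assert mmul(mmul(I, J), K) == neg_id   # IJK = -1
print("quaternion relations hold for the matrix construction")
```

The same relations hold for unit bivectors in geometric algebra; which construction you pick is arbitrary, the group structure is what is real.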

Reminds me, I've been meaning to read this [0] blog post for a while now. (See also the HN discussion [1].) My clueless intuition tells me its points may be analogous to your linked document. (It mentions Heaviside, at least.)

[0] https://www.gamedev.net/articles/programming/math-and-physic...

[1] https://news.ycombinator.com/item?id=18365433

Perhaps start with https://www.shapeoperator.com/2016/12/12/sunset-geometry/ for a concrete example.

Looks good, thanks.

Wow, thanks for that... fascinating.

It was my understanding that the original Maxwell equations used separate letters for each component rather than indices or any modern method of writing down the components as a unified thing (so the four expand to at least 12 separate equations (20, once you factor in constitutive and continuity equations)). Later on, he did use quaternions to organize like quantities, as we would use vectors, but still generally worked component-by-component. But yes, the most common modern versions are effectively due to Heaviside.

We should not neglect Josiah Willard Gibbs's notational contribution, either.

We otherwise hear of Gibbs mainly when we mention Gibbs free energy.

My realization about Gibbs's contributions came from reading an article about Fourier analysis in electronics. The article said that when you approximate a square wave by a series of sine waves, there will be overshoots at the edges, and this problem is called the "Gibbs phenomenon".
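The overshoot is easy to reproduce numerically. A sketch, using the standard Fourier series of a unit square wave (the grid and term count are arbitrary choices for illustration):

```python
# Gibbs phenomenon: the N-term Fourier partial sum of a square wave that
# jumps from -1 to +1 overshoots the top value 1 by roughly 9% of the jump,
# and the overshoot does not shrink as N grows.
import math

def square_partial_sum(x, n_terms):
    # Fourier series of a square wave: +1 on (0, pi), -1 on (-pi, 0).
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms))

N = 200
# Scan near the jump at x = 0, where the first (largest) ripple sits.
peak = max(square_partial_sum(i * 1e-4, N) for i in range(1, 1000))
print(peak)   # about 1.18 even though the wave never exceeds 1
```

Increasing N squeezes the ripple closer to the jump but its height stays put, which is what Wilbraham and later Gibbs observed.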

What, "Gibbs"? Wasn't he the physicist working on the physics behind chemical reactions? So he did some applied mathematics, too? It couldn't be the same person, could it? Then I read his Wikipedia article...

And ironically, the Gibbs phenomenon was initially discovered by a little-known mathematician, Henry Wilbraham, in 1848, not by Gibbs, but his paper was ignored and forgotten, like many other important ones in the history of science...

Gibbs free energy and the entire body of literature on classical statistical thermodynamics. That alone would have been an enviable legacy to leave behind.

the physics equivalent to vi/emacs flamewars doesn't need any more fuel please...


Also curious: which of Maxwell's texts are you referencing?

“He took these books home and tried to find out. He succeeded after some trouble, but found some of the properties of vectors professedly proved were wholly incomprehensible. How could the square of a vector be negative? And Hamilton was so positive about it. After the deepest research, the youth gave it up [and] died.

My own introduction to quaternions took place in quite a different manner. Maxwell exhibited his main results in quaternionic form in his treatise. I went to Prof. Tait’s treatise to get information, and to learn how to work them. I had the same difficulties as the deceased youth, but by skipping them, was able to see that quaternions could be explored consistently in vectorial form. But on proceeding to apply quaternionics to the development of electrical theory, I found it very inconvenient. ... So I dropped out the quaternions altogether, and kept to pure scalars and vectors....” —Heaviside
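The puzzle that killed Heaviside's hypothetical youth, that the square of a vector can be negative, is easy to check concretely: under Hamilton's rules a pure vector quaternion squares to minus its squared length. A minimal sketch (the `qmul` helper is written here for illustration):

```python
# Quaternion multiplication on (w, x, y, z) tuples, following Hamilton's
# rules i^2 = j^2 = k^2 = ijk = -1.
def qmul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

v = (0.0, 1.0, 2.0, 3.0)   # a "pure vector" quaternion, xi + yj + zk
print(qmul(v, v))           # -> (-14.0, 0.0, 0.0, 0.0): the square is the
                            #    scalar -(1 + 4 + 9), i.e. minus the length squared
```

Heaviside's fix was to keep the scalar and vector parts as separate products (dot and cross), which is the notation we use now.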

One of my maths professors was a humorous guy and in his lectures would, as an ongoing joke, only explicitly credit Heaviside and Hungarians.

In university, in our engineering math, diff. eq., electromagnetism, and signals and systems courses, we had a "Heaviside method" (with this exact name) for deriving the numerator coefficients in the partial fraction expansion used to recover time-domain equivalents of s-domain functions. I specifically remember my professor saying that this method is ascribed to "Heaviside", so much so that I later googled him and read his Wikipedia entry.
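That is the "cover-up" rule: for a rational function with simple poles, the coefficient at each pole is found by covering up that pole's factor and evaluating the rest at the pole. A sketch, with an example transfer function assumed purely for illustration:

```python
# Heaviside cover-up method for F(s) = (s+3)/((s+1)(s+2)):
# the residue at pole p is F(s)*(s-p) evaluated at s = p.
def coeff_at_pole(numer, other_factors, pole):
    # "Cover up" the factor at this pole and evaluate the rest at s = pole.
    rest = 1.0
    for f in other_factors:
        rest *= f(pole)
    return numer(pole) / rest

numer = lambda s: s + 3
A = coeff_at_pole(numer, [lambda s: s + 2], -1.0)   # covers up (s+1)
B = coeff_at_pole(numer, [lambda s: s + 1], -2.0)   # covers up (s+2)
print(A, B)   # 2.0 -1.0, i.e. F(s) = 2/(s+1) - 1/(s+2)
```

Reading off the inverse Laplace transform then gives f(t) = 2e^(-t) - e^(-2t), which is exactly the shortcut taught under Heaviside's name.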

My thoughts exactly. Heaviside gets a chapter in the book, Strange Brains and Genius, which reveals some of the odder things about him: https://www.amazon.com/Strange-Brains-Genius-Eccentric-Scien...

E&M is far from "simple". It contains special relativity, for starters. Also it is incompatible with thermodynamics: solving this problem is why Planck invented quantum mechanics. Also point charges have infinite energy: this problem leads to renormalization theory. Also it introduces gauge invariance, an essential but complex part of all modern theories. And lastly, the mathematics of E&M is a big step up from Newtonian theory.

>"E&M is far from "simple". It contains special relativity, for starters."

A theory (which is just a set of assumed first principles and rules of logic) can be simple but allow you to deduce vast complexity from it. In fact, it is ideal for a theory to be as simple as possible.

The "game of life" is not really a theory, but it demonstrates that simple rules can lead to surprising complexity: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
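For instance, the game's two rules (a dead cell with exactly 3 live neighbours is born; a live cell survives with 2 or 3) are enough to produce the famous self-propelling glider. A minimal sketch:

```python
# Game of Life on a set of live cells. The glider pattern reappears
# translated one cell diagonally every 4 generations.
from collections import Counter

def step(live):
    # Count each dead/live cell's live neighbours in one pass.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})   # True
```

Two one-line rules, and the pattern walks across the grid forever: simple rules, surprising behavior.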


I can't begin to imagine why this would trigger a downvote. Are there people out there who prefer complex theories that make things more difficult to understand than necessary? Perhaps because it makes them feel smart or something, I don't know.

But that is basically saying you dislike science, because the point of science is to synthesize information into a small set of simple "laws" that allow us to deduce accurate and precise predictions.

You are being downvoted because of the HN echo chamber; this kind of downvoting is a symptom of that problem. If the voting/karma system were improved we wouldn't have problems with this kind of downvoting.

I reflexively downvote the game of life. I have never heard of a single useful application or analogy of "finite automata." Yes, there are speculations about extremely small Planck lengths, but no new physics has panned out. Finally I am sad that one of the greatest computer scientists and polymaths, Ed Fredkin, got sucked into this. We all have weaknesses.

GoL wasn't used here because of any value or analogy... it's used as an example of how a simple set of rules can generate complex behavior. GoL can display complex behavior, but it would be a stretch to say that the rules governing it are complex. If you think GoL is too simplistic, you are agreeing with the original comment.

Surely you mean cellular automata, not finite automata.

Yes, you are right. My ignorance shows, but I am still hanging on to my opinion of cellular automata.

It’s remarkable how many silent downvotes you get when you tweak something that people think is cool. C’mon guys. Refute what I said.

E&M is simple in the sense that the equations that govern the relationship between the charge distribution and the electromagnetic field are quite simple. And the equations governing how charges move in an electromagnetic field are quite simple. I think this is what Dyson has in mind. Understanding the implications and limitations of those equations is not simple at all.
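For reference, the "quite simple" equations in question, in Heaviside's vector notation and SI units, together with the Lorentz force law governing how a charge moves:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t},
\qquad
\mathbf{F} = q\,(\mathbf{E} + \mathbf{v} \times \mathbf{B})
```

Four lines plus a force law; the hard part, as the comments below explore, is what happens when you try to close the loop between them.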

An even more blatant difficulty with E&M, related to your third point, is that it is schizophrenic. Given the trajectories of charges it will tell you what the electromagnetic field will be, and given the electromagnetic field it will tell you how charges will move. Unfortunately, these two parts of the theory seem to be incompatible, and the theory will not tell you how fields + charges will evolve in time.

> Given the trajectories of charges it will tell you what the electromagnetic field will be, and given the electromagnetic field it will tell you how charges will move. Unfortunately, these two parts of the theory seem to be incompatible, and the theory will not tell you how fields + charges will evolve in time.

The problem is not coupling charged matter fields to the EM field. You get a well-defined set of coupled and (now) non-linear PDEs. The problems arise when you try to model point charges. Then the theory is plagued by infinities that originate in the infinite charge and current densities.

The infinities in QED are actually far less problematic than those in the classical theory. They just seem to be more problematic because you can't (approximately) ignore the backreaction of EM and matter fields.

What you are expounding upon is one of the indicators that we don't completely understand physics at the "point charge" scale. Very possibly there is no such thing as a point charge; rather, there is a centroid of field intensity/probability, i.e. a wave function. Point charges are likely an overly simplified view, an artifactual convenience of extrapolation.

The problems in the classical theory are easily understood. The charge and current densities of point particles are not smooth functions but distributions (think of the Dirac δ-“function”). If they act as the sources of the EM field the EM field itself becomes singular. Now if you try to solve the full Maxwell equations including the backreaction of matter & radiation fields you would have to multiply distributions which is ill defined.
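A quick numerical way to see why "multiplying distributions" breaks: approximate δ by a box of width 2ε and height 1/(2ε). The integral of the approximant stays 1, but the integral of its square diverges as ε → 0, so there is no sensible limit object playing the role of "δ²". A sketch:

```python
# Box approximation to the Dirac delta: height 1/(2*eps) on [-eps, eps].
# The delta approximant is fine; its square is not.
for eps in (0.1, 0.01, 0.001):
    height = 1 / (2 * eps)
    integral = height * 2 * eps                 # stays ~1: valid delta limit
    integral_of_square = height**2 * 2 * eps    # = 1/(2*eps): diverges
    print(eps, integral, integral_of_square)
```

Whatever mollifier you pick, the squared integral grows without bound, which is exactly the ill-definedness that shows up when a singular field acts back on its own singular source.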

There are similar problems in the quantum theory, but the divergences are less severe and can be dealt with in a systematic way. Most physicists believe they will totally disappear in some more fundamental underlying theory. From a mathematician's point of view there is the hope that at least some QFTs are finite and the divergences are just an artifact of the construction and perturbation theory.

By "schizophrenic" do you mean a system of coupled differential equations? In that case I wouldn't agree with "schizophrenic".

Or are you referring to the fact that, mathematically speaking, the system of differential equations is indeterministic due to mathematically perfectly valid advanced potentials? In that case I agree that there may be much more to learn from the Maxwell equations, and I have no qualms with calling this feature (highly) schizophrenic.

I mean that the theory splits into two parts: the part that tells you the fields given the charges, and the part that tells you the charges given the fields. For instance, it doesn't tell you what two charges with given initial velocities will do. (not even if you also give an initial electromagnetic field)

Well, the Lorentz force does though. And that follows from energy conservation and the Maxwell equations, e.g.:


So that's not really terribly deep or even true.

So what are the equations for those two charges?

For any number of particles, the equation for each of them is this one here:
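(The equation linked here appears to have been lost; presumably it was the Lorentz force law, which combined with Newton's second law gives, for each particle i,)

```latex
m_i \frac{d\mathbf{v}_i}{dt} = q_i \left( \mathbf{E}(\mathbf{x}_i, t) + \mathbf{v}_i \times \mathbf{B}(\mathbf{x}_i, t) \right)
```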


Edit: Coupled differential equations.

If I have an equation for x in terms of y, and one for y in terms of x, then in total I have a set of equations for x and y. For example, given


  dx/dt = y
  dy/dt = -x
then the solution is

  x = C e^(i t)
  y = i C e^(i t)
with C a constant determined by the initial conditions. Nothing mysterious.
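The same point can be made numerically: integrating the coupled pair with a standard RK4 step reproduces the closed-form solution, here written in its real form x = cos t, y = -sin t for x(0) = 1, y(0) = 0. A sketch:

```python
# Solve dx/dt = y, dy/dt = -x numerically and compare with x = cos(t).
import math

def rk4_step(x, y, h):
    def f(x, y):        # right-hand side of the coupled system
        return y, -x
    k1 = f(x, y)
    k2 = f(x + h/2*k1[0], y + h/2*k1[1])
    k3 = f(x + h/2*k2[0], y + h/2*k2[1])
    k4 = f(x + h*k3[0], y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x, y, t, h = 1.0, 0.0, 0.0, 0.001
while t < math.pi:
    x, y = rk4_step(x, y, h)
    t += h
print(x, math.cos(t))   # both close to -1: numerics match the exact solution
```

Coupling by itself is harmless; the system above is just a circle traversed at unit speed.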

I understand differential equations ;-)

That page describes Maxwell's equations and the Lorentz force law. That a naive approach cannot work can be seen by considering the following example. Take one charge initially at rest with huge mass, and another light charge orbiting around it. According to Lorentz law it will orbit in a circle for the right initial conditions. However, then according to Maxwell's equations it will radiate electromagnetic waves, violating energy conservation. The field of the accelerated charge will affect the charge itself, but this is not easy to take into account, because that field is infinite at the location of the charge.
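The scale of the problem is set by the classic back-of-envelope estimate: feeding a circular Bohr-radius orbit into the Larmor radiated-power formula gives an infall time t = a0^3 / (4 r0^2 c), on the order of 10^-11 seconds, i.e. classical atoms would collapse almost instantly. A sketch with textbook constants:

```python
# Classical infall time of an electron spiraling into a proton, using the
# standard closed form t = a0^3 / (4 * r0^2 * c) derived from the Larmor formula.
c = 2.998e8      # speed of light, m/s
a0 = 5.29e-11    # Bohr radius, m (starting orbit)
r0 = 2.82e-15    # classical electron radius, m
t_fall = a0**3 / (4 * r0**2 * c)
print(t_fall)    # roughly 1.6e-11 s: the orbit decays essentially instantly
```

This is exactly the contradiction with stable atoms that classical E&M cannot escape and that quantum mechanics was invented to fix.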

Then stop playing with words and say what you mean.

Point charges make no sense without qualification in EM, that much is true.

But that doesn't mean that there is something strange or fishy going on here. You can formulate Lorentz forces using mass distributions and you're fine (it's right there in the next subsection of the wiki). The pathologies don't appear. Of course the resulting equations are non-linear, and that is the origin of the failure of point sources to make sense.

If you then try to recover point sources by taking a limit, the details of the assumptions you make matter a great deal [1]. For a very easy example, your point source might have a dipole.

This problem is kind of well known and studied in the General Relativity literature, where the non-linearity of the field equations forces you to confront similar types of problems already when trying to derive geodesic motion [2].

I suspect all this is familiar to you. I would phrase the conclusion as such: The Newtonian idea of focusing exclusively on the motion of the centre of mass, which reflects the influence of distributed forces on a rigid body, breaks down in any relativistic theory. This is simply because the notion of a rigid body breaks down. As such the point mass, which was a conceptual cornerstone of Newtonian mechanics, along with the notion of a force acting on it, are relegated to a technical tool useful only when non-linearities are neglected.

[1] https://arxiv.org/abs/0905.2391

When googling for this paper I found a nice set of slides by Wald about this whole business: http://www.math.utk.edu/~fernando/barrett/bwald1.pdf

[2] https://arxiv.org/abs/gr-qc/0309074

I'm not trying to play with words... By EM I mean Maxwell's equations and the Lorentz force law. I think that's the conventional meaning. The point is that these two are taught in an EM class as if they form a single coherent theory that tells you what point charges do.

Mass distributions don't solve the issue in a satisfactory way, in my opinion. If you replace a particle with a finite size sphere you've solved the infinity but lost relativistic invariance. You could perhaps come up with a way to hold the charge distribution together in a relativistically invariant way, but that can hardly be considered part of EM, and might involve arbitrary choices. That a dipole behaves differently than a monopole is clear. That's already the case even if you ignore the self interaction.

The question "What happens if I put an electron in a uniform magnetic field?" or "What happens if I have two electrons?" seems like it should be answered by EM. One can hardly ask a simpler question. I'm pretty sure that most physics students who've had an EM course are under the impression that they should be able to answer this question. When I was in such a class it was never explained that this was even an issue, and when I asked about it the answer I got was "just wait for QED".

If you don't like the word "schizophrenic" for this issue, that's cool. I think it's descriptive, but YMMV. Wald's slides say:

> Classical Electrodynamics as Taught in Courses

> At least 95% of what is taught in electrodynamics courses at all levels focuses on the following two separate problems: (i) Given a distribution of charges and/or currents, find the electric and magnetic fields (i.e., solve Maxwell’s equations with given source terms). (ii) Given the electric and magnetic fields, find the motion of a point charge (possibly with an electric and/or magnetic dipole moment) by solving the Lorentz force equation (possibly with additional dipole force terms).

That's all I meant by it.

Wald's slides are interesting, thanks :)

I would like to respond to your reply at https://news.ycombinator.com/item?id=18846249 and I was going to use among other things the example of an electron in a uniform magnetic field. So I was totally surprised when I read this comment already mentioning

> ... "What happens if I put an electron in a uniform magnetic field?" ...

Either 1) this is pure coincidence (and you are contrasting the difficulty of the 2 electrons compared to the "simpler" electron in a magnetic field), or 2) you are referencing a certain 'issue' or puzzle about the electron in a uniform magnetic field?

Could you clarify if it is 1) or 2) or something else? And if 2), clarify the puzzling issue regarding the "electron in a uniform magnetic field"?

Then I will feel more comfortable answering the other comment you made, so I can clarify my earlier reply to you :)

It's fundamentally the same issue, and the same difficulty.

The reason I mentioned 2 charges orbiting around each other is to avoid getting into a discussion about the uniform magnetic field, and that maybe the extra radiation energy is just coming from the uniform magnetic field, and that energy is still conserved because the total energy was infinite to begin with.

If you take a non-relativistic model of your matter content, then the theory becomes non-relativistic. That's trivial. So take a relativistic model for your matter and you have no problem [1]. EM gives you a theory of EM Fields and their interaction with matter. It shouldn't be surprising that EM doesn't give you a theory of matter.

I maintain there is no conceptual problem with EM, the problem is with your electron model which is unphysical. It might seem reasonable to you, but that's because of your intuition to build up matter from point particles, which is only justified by QFT considerations that came almost a century after EM.

[1] A simple matter model often used in GR is dust. I'm sure this would work for EM even better.

I don't think making a relativistic theory of charged matter that approximates anything in the real world is as easy as you think. Charged dust will behave in very complicated ways, so I'd have to see a differential equation that models it.

I'm not saying that the point particle model is reasonable. I'm saying that it seems reasonable given what is said in a standard EM course.

Let me phrase it in a different way. In classical mechanics you have lots of problems of the form "the state of the system at time 0 is X, what is the state at time t?".

The problem with EM is that it doesn't have a relativistically invariant answer to such questions when point charges are involved. And, as far as I am aware, there also isn't a standard relativistically invariant answer involving a charge distribution, or at the very least it's not commonly taught.

Maybe you think that I shouldn't find this surprising, but given how EM is taught, I'd say that my surprise is fully justified.

The teaching will vary greatly depending on teacher. But I remember that I was told that rigid bodies, and hence centre of mass thinking did not work in relativistic mechanics. It's very possible that as a student I never put that together with the inadmissibility of point charges in EM.

I must insist though that EM has no problem with initial value formulations. It simply doesn't provide you with a theory of matter. It turns out that that theory of matter really requires QM, hence in EM we never bother with non-QM models of relativistic matter. That's why you have to look in the GR literature.

As a pedagogical point I can agree that the limits of the conceptual foundations of our theories are never really explored enough. EM turns out to be fine, the field tensors are completely measurable, but that's a really cool paper that isn't taught either:


As for charged dust, it works even if you switch on GR as well:


Define radiate? If you calculate the Poynting vector you see energy is not really leaving the system in this case; although there certainly is electromagnetic oscillation/rotation/circulation, there is no energy violation.

1) If we consider the ground state of a system, this behaves as expected quantum mechanically: there is motion in the ground state but no energy leaves the system! Why hold the Maxwell equations to a higher, and perhaps false, standard than we hold quantum mechanics?

2) But if a system is not in its ground state, we do expect (by the observation that excited molecules emit light) this oscillation/rotation of the electromagnetic field to radiate away energy that leaves the system. (Actually the simplest, most widely taught formulations of quantum mechanics also don't predict decay of an excited energy state: the eigenstates of energy are valid solutions, and the solutions rotate in the complex plane indefinitely. So here both deterministic solutions of the Maxwell equations and introductory QM incorrectly predict no decay, with the important difference that introductory QM can already calculate the energy spectrum, and hence the absorption and emission spectrum, glossing over the fact that the transition itself is not yet predicted.)
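The "no decay in introductory QM" point in concrete form: an energy eigenstate only picks up a phase e^(-iEt/ħ), so every probability is constant in time. A toy sketch (arbitrary numbers, units with ħ = 1):

```python
# A stationary state psi(t) = exp(-i*E*t) * psi(0) rotates in the complex
# plane but its probability |psi|^2 never changes, so nothing ever decays.
import cmath

E = 2.5               # some energy eigenvalue (arbitrary toy value)
psi0 = 0.6 + 0.8j     # normalized amplitude, |psi0|^2 = 1
for t in (0.0, 1.0, 10.0):
    psi_t = cmath.exp(-1j * E * t) * psi0
    print(t, abs(psi_t)**2)   # stays 1.0 (up to rounding): no decay
```

Decay only appears once you couple the state to the radiation field, which is exactly the step both introductory QM and deterministic Maxwell solutions gloss over.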

3) As I said, I am willing to agree to calling Maxwell's equations schizophrenic, but not because of their coupled-equations nature, but because of their indeterministic nature (again a similarity with quantum mechanics!). If we subdivide the worldline of each charge's motion into individual infinitesimal but continuous motion events, then we can spend a total Liénard-Wiechert potential weight of one (say 1.0 retarded + 0.0 advanced, the other way around, 0.7 retarded and 0.3 advanced, or even -10.4 retarded and +11.4 advanced!) on each such event individually, i.e. we can have this division that sums to 1 depend on both particle and time, as long as the total applied forces in the past and future are compatible with the trajectory of the particle... this is quite schizophrenic indeed!

I have long suspected Maxwell's equations to contain quantum mechanical behaviour that has not been explored yet.

EDIT: There are, roughly speaking, 2 kinds of physicists/students: 1) those who either deny, ignore, or gloss over this indeterminacy problem, or point at a pseudo-contradiction, ignore that they are holding the Maxwell equations to a higher standard than introductory QM, and "just move on to QM, btw shut up and calculate!"; 2) those who recognize explicitly or implicitly this schizophrenia (of the second, indeterminacy kind, not the coupledness kind) and have attempted various approaches or formulations to rigorously enumerate/solve for all mathematically valid (self-consistent) histories/trajectories/solutions of Maxwell's equations. Einstein was famously trying to resolve this issue (among others) for the rest of his life, Feynman worried about it (you can read this between the lines in his absorber theory), and so on. I believe many of those approaches and attempts will turn out to be different but equivalent perspectives, once the problem of solving for all self-consistent histories of the ME's with given initial conditions has been solved...

I obviously belong to group 2), and given your dissatisfaction, I presume you too belong to group 2). However I believe that the issue is not the Maxwell equations themselves, but our lack of mathematics to solve for all self-consistent histories... Those who do not express dissatisfaction at the lack of a complete enumeration of solutions to this insidious schizophrenia/indeterminacy of the ME, or who just move on to QM, belong to camp 1). There is no shame in that as long as they don't ridicule/demotivate those of us trying to bridge (semi-)classical physics with quantum mechanics... and make the analogies Dyson points out between the Maxwell equations and QM (the first and second layer, etc.) more complete.

Perhaps this problem has already been resolved between Maxwell's publication of the ME's, and today. Perhaps the solution is present in the literature, but with very low uptake because most people are in camp 1). Perhaps the person who solved the issue was too modest in describing the significance of what he has concluded, and we will read about it in 70 years...

jules is pointing out that, as EM + distributed matter is a non-linear set of equations, you can't easily give meaning to point masses. This is true, but the original implication, that the backreaction on matter is ill-defined or "schizophrenic", does not follow. This is a general feature of relativistic theories, where you cannot have rigid bodies.

> if you calculate the poynting vector you see energy is not really leaving the system in this case

Isn't there? That wasn't what I learned. If you set the light charge in motion around the heavy charge, then space will fill with radiation that has energy. No?

> As I said, I am willing to agree to calling Maxwell's equatins schizophrenic, but not because of their coupled equations nature, but because of its indeterministic nature

Well, if you don't set a boundary condition then the solution isn't determined. So in some sense that's obviously demanding too much. Math can't tell you what a differential equation will do if you don't give it boundary conditions. The retarded/advanced potentials are Green's functions, and Green's functions depend on boundary conditions.

The problem I'm talking about is in addition to this problem. If you somehow decided that you're only using the purely retarded potential, then you could calculate the EM field given the trajectories of the charges, but that still doesn't tell you how charges move. If you decide that a charge is affected by the entire EM field including its own field then you run into infinities, and if you decide that a charge is affected only by the combined field from all other charges in the universe then you run into the issue that I mentioned above. Simple ways to address the problem fail, e.g. if you replace the charges by a sphere with smeared out charge, then the solution is no longer relativistically invariant because rigid spheres aren't.

> I since long suspect Maxwells Equations to contain quantum mechanical behaviour that has not been explored yet.

I don't know about unexplored, but if I recall correctly, Schrodinger was highly influenced by Maxwell's equations.

Basically, he noted the correspondence between ray optics (Fermat's least time principle) and classical mechanics (least action principle). Maxwell's equations are the wave version of ray optics, so he asked himself whether there is a wave version of mechanics. In other words, ray optics is to classical mechanics as Maxwell's equations are to what? So there's not just an analogy between Maxwell's equations and QM, but that's actually partially how QM came to be in the first place! Maybe you could even claim that Maxwell's equations should be classified as quantum physics not classical physics. After all, if the paradoxical nature of the double slit experiment is called quantum physics then surely Maxwell's equations are quantum physics because they correctly predict the result, as opposed to ray optics which does not. Maxwell's equations are a quantum theory of a single photon, ray optics are the classical theory of a single photon.

As a depressed MIT freshman I modeled my mood as a pair of coupled equations. I got growing oscillations!

Dyson's statement was about Maxwell's theory, not all of E&M and its interactions with every other physical theory. There are no point charges in Maxwell's theory, for example.

I think the point is that electromagnetism really is a very complex and knotty phenomenon, and we moderns are spoiled by having

(1) Heaviside's deceptively elegant formulas for the theory (which we now call the Maxwell equations), and

(2) A well-oiled pedagogy about how those equations capture the specific phenomena named after the likes of Ampere and Faraday, and

(3) Powerful formalisms for studying circuits, optics, transmission lines etc that can mostly be used without going back to the fundamentals.

Most of that was not available to Maxwell when he started out, so it's not surprising that he presented his theory in a way that reflected the real complexity of it all.

Special relativity is also simple; once again it's the differential equations that make the explanation difficult.

I am struck by the clarity and simplicity of Dyson's writing.

As to the substance: yes, hiding your work under a bushel does nobody any favor, but it's understandable when it feels like the alternative is the strident trumpeting of trivial "advancements".

But the constraints imposed by the wrong metaphors are also cautionary. We often see this today in linear predictions of the impact of some new idea. It's easy to mock Ballmer for laughing at the iPhone back in 2007 (back when this essay was written), but his comment made sense for his customer base. Neither he nor Apple could really see that the game was changing.

Dyson points out that Maxwell himself had the same problem.

Why would you say that Apple, whose main business, backed by an extensive track record, was creating, promoting, and selling next-generation technology, was blindly speculating with its major investment in a new field?

In contrast to Microsoft whose business was providing stable solutions and extracting monopoly rents.

Not sure I understand your point. Mine was that in science or business we are often held back from understanding by the limitations of our models, even when we ourselves are the inventors.

I thought I used an example well known to HN to make the point. Apple’s stated hope was to eventually have 1% of the market (and they didn’t originally plan for an App Store). Like Maxwell, they didn’t know they had a blockbuster.

Microsoft listened very closely to their customers, who were and are mainly corporate IT, and something like the iPhone wasn't something they said they wanted. In fact, it honestly had no affordances that IT wanted.

Maxwell understood the math but his own mechanical metaphor distracted him, and other physicists too, so that they couldn’t really see the larger implications.

an example well known to HN

I think the issue is that the example doesn't seem to be true. Jobs said at the time of launch that it was a revolutionary and magical product, five years ahead of everyone else, that would change everything. And the hope wasn't to have 1% eventually, but by the end of 2008.

This is (1) False, or a venomous mid-80s position. (2) Irrelevant to the theory of electromagnetism.

Interesting. And makes me wonder what else we are missing and misunderstanding, as we waste so much time trying to express the complexity of the world in spoken language rather than math. Not just physics but in sociology, psychology, economics etc.

Faraday, purely from experiments 30-40 years before Maxwell, intuitively understood electromagnetism. Nobody serious believed it because he didn't have the training to express it mathematically. Dyson is saying it took physicists another 20-30 years post-Maxwell to get it. So basically ~70 years wasted.

Spoken language has a lot going for it. I note the featured article and its discussion are all words, not maths.

By the way, looking at Maxwell's paper, it seems awfully complicated: https://royalsocietypublishing.org/doi/pdf/10.1098/rstl.1865...

compared to the modern form of the equations https://ethw.org/w/images/d/d6/Maxwell_image_02.jpg

>compared to the modern form of the equations

Are the Geometric Algebra...

    ∇F = μ₀cJ

...and Differential Form...

    dF = 0
    d⋆F = J

...equivalents post-modern?


Heaviside thought that too, which is why he invented the modern form. He didn't just use a different set of math to describe the problem; he invented the math too.

The modern form of the equations was worked out by commoners, so had to be filtered through the work of others before it could be taught.

You're really caught up on this commoner/non-commoner dynamic (from your other comments). Was it really that important? Many of the well-known scientists of that time were 'commoners'.

I'm not, but they were. But it was not discussed openly in print at the time, and so must be inferred.

I just read his Wikipedia article: he was sponsored by his uncle, Sir Charles Wheatstone of Wheatstone bridge fame, and ended up a Fellow of the Royal Society, so he was not so badly situated.

Wasted implies the solution is somehow obvious, but that's hindsight bias. Looking at lotto winners, it feels easy. But the reality is that a lot of effort went into things both promising and wacky to get those steps forward. And other, less major progress is being glossed over.

Maxwell unified electromagnetism into a propagating wave. That was the big leap. Before that, things were disparate.

Hertz proved it experimentally, and Heaviside turned it into an engineering discipline.

This is true of the mathematics, but it's worth noting that Faraday had already proposed EM waves ~30 years earlier based on his intuitive understanding.

And had rejected the aether, on the basis that it was unimaginable that any medium could propagate transverse waves in all directions but not allow longitudinal waves.

> Pupin went first to Cambridge and enrolled as a student, hoping to learn the theory from Maxwell himself. He did not know that Maxwell had died four years earlier. After learning that Maxwell was dead, he stayed on in Cambridge and was assigned to a college tutor.

It wasn't that people spent all those years actually working on this problem. There wasn't an internet, nor modern travel, to share/disseminate/argue about these ideas.

> So basically ~70 years wasted.

To call those years wasted implies that we shouldn't have had to wait for them. It suggests that we should look for a way to bypass them.

I’d rephrase it as: we would probably do ourselves good by coming up with better mathematical intuition and better tools for understanding math.

They were wasted if people toiled fruitlessly on the problem. Not necessarily wasted if they were making progress on other problems.

In truth, it generally takes significant time to arrive at fundamental theories. Most people don't realize this because in a physics class, at best, you might get taught when so-and-so published an equation.


Lord? Faraday was no lord. He was also dead by the time Maxwell gave this presentation.

It's not until now, over 20 years later, that I realize what a magician my E&M prof was. After months of careful development of basic E&M, Maxwell's equations simply fell out. As I recall, it was capacitance that was the impetus. It was as though we had struggled in a dark tunnel for months and suddenly came around a turn and there was light. Not a little, but sheets of daylight and all the equations just fell together, one from the next. All the equations revealed themselves from the middle of one lecture to the middle of the next lecture.

And then, in E&M 2, with a different professor, I spent the entire semester coming to grips with the fact that the implications of these formulas would challenge my grasp on reality for the rest of my life.

In what ways were they that important later in life?

Because my undergrad is in physics and I'm still in science, E&M opportunities are evident to me, but I know I would struggle to really go after them myself. So I get help, but I sure wish I could do it on my own.

Some people hold the view that a better way of teaching electromagnetism is through geometric algebra. See: https://arxiv.org/abs/1010.4947

Also, the approach based on differential forms is very appealing; for a list of various formulations of electromagnetism, see https://en.m.wikipedia.org/wiki/Mathematical_descriptions_of...

The differential forms approach also has the advantage over geometric algebra of being widely used by mathematicians and theoretical physicists, and perhaps increasingly used by physicists in general. If you're going to put in the effort to learn geometric algebra, learning differential forms instead would be much much more useful in opening up the literature to you. A 'lightweight' intro is [1].

[1] https://www.amazon.com/Electricity-Magnetism-Mathematicians-...

Aren't the two very closely related, if not equivalent? Differential forms and Geometric Algebra I mean.

Yes, they operate on the same objects. The main operations on differential forms are the exterior derivative d and the wedge product f /\ g. These are independent of the metric. If you add a metric you get one extra operation called the Hodge star ⋆, which is a prefix operator, so it operates as ⋆f.

Alternatively, if you have a metric on your space you can define the geometric algebra derivative D and the geometric product. These are closely related to the differential forms operations, e.g. Df = df + ⋆d⋆f. Using it, Maxwell's equations become one equation, namely DF = J where F is the electromagnetic field tensor and J is the four current.

So in my opinion it's indeed best to not put this as differential forms vs geometric algebra, but rather as a single theory in which some operations don't depend on the metric (d, /\) and some do (⋆, D, geometric product).

Unfortunately, geometric algebra has been overhyped by its proponents, and the papers they write are less than rigorous, so some people are under the impression that it's crackpottery; it's safer to call it Clifford algebra :)
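A tiny concrete illustration of the metric-free part: the identity d(dω) = 0, which is what makes dF = 0 automatic once you write F = dA, shows up in ordinary 3D vector language as div(curl A) = 0 and curl(grad φ) = 0. The specific field A and potential φ below are arbitrary choices of mine; sympy checks the identities symbolically:

```python
from sympy import sin, cos, exp, simplify
from sympy.vector import CoordSys3D, Vector, curl, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# An arbitrary smooth vector field A (any choice works here).
A = sin(x * y) * N.i + exp(y * z) * N.j + cos(z * x) * N.k

# d(d(anything)) = 0, in vector-calculus clothing:
print(simplify(divergence(curl(A))))  # 0

# Likewise curl(grad phi) = 0 for an arbitrary scalar potential phi.
phi = sin(x) * exp(y) * z
print(curl(gradient(phi)) == Vector.zero)  # True
```

Both identities are consequences of the equality of mixed partial derivatives, and neither needed a metric; the Hodge star only enters when you want the other two Maxwell equations.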

Is it possible to work with geometric algebra effectively without a metric using some more complicated construction?

You can put an arbitrary metric on your space, but usually you wouldn't need metric dependent operations if you don't have a metric.

Yes, but the formulation in that paper is really not the best one. Electromagnetism is best formulated in geometric algebra of 4-space (3 space dimensions + 1 time dimension), not 3-space.

People didn't really get Maxwell because Maxwell's equations don't need a medium, aka an "aether".

Waves propagating without a medium was a shocking message in that time frame.

Maxwell published his equations in 1861/1862 in first form.

In 1887, the Michelson–Morley experiment gave the first experimental disproof of the "aether".

It also doesn't help that you have things like "displacement current" and have to find strange integration surfaces to make Maxwell's equations work out for things like capacitors or motors. (The Heaviside-Hertz pedagogy that everybody is STILL taught is particularly problematic to use for motors.)

> In 1887, the Michelson–Morley experiment gave the first experimental disproof of the "aether".

In what way is LIGO not a bigger, and successful, version of the Michelson-Morley experiment?

I'm honestly asking, because from my limited understanding, it seems to be.

From Wiki:

> This result is generally considered to be the first strong evidence against the then-prevalent aether theory, and initiated a line of research that eventually led to special relativity, which rules out a stationary aether.

It seems that LIGO merely confirmed properties of an existent aether, not that we no longer believe in one:

> Aether theories (also known as ether theories) in physics propose the existence of a medium, the aether (also spelled ether, from the Greek word (αἰθήρ), meaning "upper air" or "pure, fresh air"[1]), a space-filling substance or field, thought to be necessary as a transmission medium for the propagation of electromagnetic or gravitational forces.

Naively, spacetime and quantum fields are both forms of aether theories.

If LIGO confirmed "aether", then the measurements in the perpendicular arms would vary depending upon time (orientation of Earth in rotation, orientation of Earth around Sun, etc.).

As far as I know, they very much do NOT vary--to an absolutely amazing precision.

As far as I can tell, if the "aether" existed and we could detect it, we basically would have no hope of detecting gravitational waves.

> Naively, spacetime and quantum fields are both forms of aether theories.

Not ... really.

The existence of "aether" implies a preferential frame of reference. And every experiment we have done to attempt to establish such has failed.

But the arms are predicted to vary in those cases by the same theory that LIGO is confirming: our rotation causes frame dragging, and there are some weird Sun-Earth orbital relativistic effects as well, right?

I know the Sun-Jupiter orbit produces a large portion of the estimated 5,000 watts of gravitational emissions given off by our solar system. My mental model of this is that it perturbs the aether ("spacetime") through which it travels, radiating waves which carry energy out of the system. Is there a different one?

It's just that the power output in those waves is incredibly, incredibly low -- LIGO can only hear much larger events, such as large astral bodies colliding.

So I'm not sure I understand -- there seems to be at least a gravitational aether.

By aether, do you mean an underlying medium that allows forces to be propagated? I guess that using that definition, there is an 'aether'.

The difference with the rejected concept of aether is that this "aether" is deformable by gravity, whereas the rejected one is not.

> underlying medium that allows forces to be propagated

General Relativity is a local theory concerned with the mechanisms that generate the metric, the geodesics implied by the metric, and the coupling of objects to those geodesics.

The relevant forces are those which accelerate objects into non-geodesic motion (or boost them from one geodesic to another). Those are local[1] as well: electromagnetism and the nuclear interactions. There's nothing like a luminiferous aether or underlying medium in the Standard Model, even if you look funnily at the gauge bosons -- they obey Lorentz covariance.

I made a couple of sibling comments to yours, one of which deals directly with your comment's parent's idea of a gravitational aether.

- --

[1] RM Wald, _Quantum Field Theory on Curved Spacetime and Black Hole Thermodynamics_ (University of Chicago Press, 1994). Cf. the top of page 6 of Hollands & Wald 2014, https://arxiv.org/abs/1401.2026

Isn't every "field" in QFT basically an aether?

Please tell me what you mean by aether, as rigorously as you can, and then I can give you an answer to the question which seems to be bedevilling you.

Meanwhile I can guess at what you're asking:

The Standard Model (a QFT) has interacting quantum fields obeying purely local dynamics. The behaviour "here-and-now" depends on the field-values "here". It does not depend on field-values "now" but far from "here". Moreover, the Standard Model is Lorentz-invariant, meaning its laws hold in any inertial frame of reference, thus the scare quotes in the previous sentence.

The luminiferous aether that Michelson & Morley were looking to measure was motivated by finding a single special (and universal, or at least covering the whole solar system) inertial frame of reference in which Maxwell's equations hold exactly. No such frame exists; there is a huge democracy of inertial frames in which relativistic electrodynamics is exact in the classical limit. ( https://en.wikipedia.org/wiki/Preferred_frame )

> By aether, do you mean an underlying medium that allows forces to be propagated? I guess that using that definition, there is an 'aether'.

That's what an 'aether' is -- that background medium which the interaction is happening within.

My conception of it is that Michelson-Morley disproved an infinitely rigid aether, which was the conception at the time and squared with Newtonian physics because it's equivalent to an infinite speed of information.

Einstein then corrected the aether models to account for the finite speed of information through the aether, and the impacts that aether waves have on causality.

My impression has always been that dropping 'aether' was merely a branding move, rather than anything technical about the term. (Not that I fault scientists for this, it's politically savvy -- but it's weird to see their marketing used as arguments decades later.)

I just noticed this:

> our rotation causes frame dragging

Not exactly; it's the choice of accelerated coordinates and pretending that the coordinates are freely-falling that manifests seemingly odd coordinate-dependent physical effects. One can resolve these by switching to freely-falling coordinates, or by not pretending that the accelerated coordinates are freely-falling. In practice this means doing Special Relativity calculations only in the Special background of the theory, and using the full covariant physics otherwise.

One can treat the Lense-Thirring effect as a distortion of a Special Relativistic background (or a different static background, like Schwarzschild), or one can determine the actual background from the distribution of stress-energy.

The first choice leads one to conclusions like the precession of test objects in polar orbits around axisymmetric bodies with nonzero angular momentum, and applying that to Earth and a satellite.

The second choice is more work, because angular momentum and axisymmetry are only two of the departures from Schwarzschild in the system. Crucially the contributions from surface bumps, masscons, the distribution of matter in the satellite, and so on are all small enough that it's fair to ignore them in the second case.

The difference is that the second case simply calculates out, correctly, the geodesic the satellite occupies in the real spacetime, while the first calculates out an orbit that is simply wrong for the real configuration of the system and then applies corrections by bringing in pseudo-forces.

In the weak field limit, one can get completely correct coordinate-dependent results taking the former approach. Additionally, analogies arise in this approach that can lead to interesting intuitions. However, one should always check to make sure that the intuitions can be explained in terms of coordinate-independent formulations of physics.

> weird Sun-Earth orbital relativistic effects

If you calculate out the geodesics, no; all the parts of Earth down to its individual molecules and below "want" to travel on geodesics sourced by the system and do so unless the stronger three forces interfere with that "want" (and in bulk Beiglböck and Dixon show that the Earth has a coordinate-independent centre-of-mass that does travel on a timelike geodesic, and you arrive at it by considering the vector position of each particle).

This is a lot of work.

So instead, you can use a background like Kerr (or Schwarzschild or Minkowski) and correct the failure of the parts of the Earth (or the planet in the large) to travel on the geodesics of these backgrounds. The correction is typically done by introducing pseudoforces, pseudofields, and dynamics and potentials in these. They're pseudo because they vanish entirely in at least one frame of reference. (see https://en.wikipedia.org/wiki/Fictitious_force which is pretty decent).

> gravitational aether

One could do perturbative General Relativity by choosing Minkowski (flat) spacetime as the background \eta [0], and then recovering all the real motions of objects in a perturbation field h. That field, h, is a pseudofield with its own potentials and dynamics. It's just a set of first-order corrections to the background metric. The real metric g will be an expansion: g = \eta + h + O(h^2) + O(h^3) + ... where O(h^x) are higher-order correcting terms, and those are negligible for systems where stresses are weak and speeds are slow compared to c.

It is reasonable to think of h as having the properties of an aether in some cases: it can walk and quack like a real field, like the electromagnetic one in Maxwell's theory. However it can also be made to vanish entirely without removing matter and the gravitational potentials they source, and it is a pseudofield in the more fundamental theory of General Relativity. The Maxwell electromagnetic field is always present (you can only get rid of it by removing all charges and potentials), and obviously related fields are still there in the more fundamental theories of QED and the Standard Model (a QFT).

The specific theory of the luminiferous aether theory that Michelson-Morley surprised themselves by falsifying was that the aether was stationary and the Earth moved through it. There were comparable theories of a gravitational aether (https://en.wikipedia.org/wiki/Mechanical_explanations_of_gra...) that have never recovered many fairly easily observed behaviours of gravitating masses, so there was never really a surprisingly failed direct test of them.

- --

[0] More generally, as someone raised this with me before, one can write g^{(0)}_{\mu\nu} for an arbitrary choice of background instead of abusing the notation \eta_{\mu\nu} which (out of context) generally means only the Minkowski metric. I leave out the greek-lettered indices above.

Your answer is "not even wrong" -- it's overly pedantic terms which miss the point of my question.

Such as here:

> If you calculate out the geodesics, no; all the parts of Earth down to its individual molecules and below "want" to travel on geodesics sourced by the system and do so unless the stronger three forces interfere with that "want" (and in bulk Beiglböck and Dixon show that the Earth has a coordinate-independent centre-of-mass that does travel on a timelike geodesic, and you arrive at it by considering the vector position of each particle).

This is a very dressed up non-response, because my initial comment was about geodesics braiding around that central one, and the resulting complexity of the paths, emissions caused by that finer level of path resolution, etc.

Responding that you can calculate paths entirely misses the point.

Sorry. Nowhere in any of your previous HN comments is either the word geodesic or anything that makes it obviously clear that you know how the geodesic equation works.

I'm confused though: if you know enough about general relativity to understand vorticity, why are you going on about a gravitational aether (especially without specifying exactly what you mean)?

> My mental model of this is that it perturbs the aether ("spacetime") through which it travels, radiating waves which carry energy out of the system. Is there a different one?

Yes, and I'll get to that in my third last paragraph below.

Two isolated compact objects in mutual orbit generate a spacetime-filling "tumbling barbell" dynamical metric (as opposed to static or stationary; the two weights on the ends of the notional bar eventually collide), and gravitational radiation is simply a piece of the dynamical spacetime.

Let's slice the spacetime into spacelike hypervolumes indexed by time, and call our sources B and b, and our observer O.

Let's take two times, t_| and t_-. Pipes or hyphens are the not-really-there bar of the barbell connecting the weighted ends; in the schematic below - and | have essentially identical spatial length.


  B---b                         O

    |                           O
In a full 3+1 formalism of General Relativity we'd calculate the geodesics generated by B and b. More on that later. However, since O is weakly stressed and moving very slowly compared to c, and B and b are moving slowly with respect to c, we are firmly in the weak field limit and can linearize [1].

In effect we can think pretty Newtonian: what attractive force does O feel? If O is a planet with its equator on the extended line of the barbell at t_-, then observers in the upper hemisphere will measure a gravitational attraction pointing equator-wards. Conversely, at t_| the same observers will measure a gravitational attraction pointing away from the equator.

Since Bb is tumbling (suppose it's clockwise about the middle hyphen or vertical bar in this case), the attraction sweeps equatorwards and anti-equatorwards, and we'll also see the times when the positions of B and b are essentially reversed in the two time-slices above.

The frequency of the changes from anti-equatorwards to equatorwards (in our schematic, that's the maximum stretch/squeeze) is the detection frequency, the periodic change in amplitude depends on how circular or elliptic the tumbling is, the amplitude grows as the orbit decays, and the amplitude itself depends on the masses and the distances among the objects.

Feynman's "sticky bead" [2] visualization is probably helpful here. If we replace our polar observers on O, and O itself, with a long pole (perpendicular to O's equator, which means we are perpendicular to the "bar" at t_-) with a couple of beads able to slide north-and-south, then as we approach t_- we expect the north bead to slide south, and the south bead to slide north. As we approach t_| we expect the north bead to slide north, and the south bead to slide south. The beads oscillate north-and-south matching the Bb orbit. If there is friction between the pole and the beads, heat is generated.

If we make the pole vanish and let the beads sit in space at the same positions they would be on the pole, we can see that their behaviour is simply geodesic; over short timescales the deviation from purely timelike geodesic motion is negligible (over long timescales they may crash into each other or get drawn into the Bb system, since tumbling barbell metrics are collapsing spacetimes). The geodesics are curved in a patch of local Cartesian coordinates with the origin at the centre of the pole at some moment in time.

In the full GR picture the heat from the "sticky beads" on the pole is because the pole pulls the beads off their geodesics. Moving objects out of free-fall (i.e., off geodesics) requires work. This is a local phenomenon, General Relativity being a local theory.

However, the full GR picture is a bear to work with, and as there are several reasonable choices for slicing the 4-spacetime into 3+1 and as at astronomical distances post-Newtonian corrections are small, linearized gravity is the formalism of choice, and that choice drives vocabulary (and intuitions, especially in experiment design) somewhat. In effect, our BbO schematic system is similar to a https://en.wikipedia.org/wiki/Cavendish_experiment apparatus.

Finally, "... carry energy out of the system ..." depends on the system. In the full GR theory in our toy model above, the system is the entire spacetime. We could measure an energy at spacelike or lightlike infinity and see that it's constant [3]. However, if we measure in a region of spacetime encircling Bb but not O (or our sticky beads, or freely floating beads) then energy is clearly leaving that region. But what's special about that region, physically? Nothing. General Relativity does its thing, with moving masses generating curvature (which you can represent as perturbations of a static background), and curvature determining geodesic motion versus accelerated motion (which you can represent as moving sources dumping momentum into spacetime, and spacetime dumping momentum into matter, and that's about as close to an aether as you can get, but it is UNLIKE the luminiferous aether in crucial ways [4]).

- --

[1] https://en.wikipedia.org/wiki/Linearized_gravity which is where perturbations of a non-dynamical background enter formally

[2] https://en.wikipedia.org/wiki/Sticky_bead_argument

[3] We can rely on the https://en.wikipedia.org/wiki/Peeling_theorem to let us use Bondi mass M_B; cf. http://www2.phys.canterbury.ac.nz/ACGRG5/talks/Scholtz~Marti...

[4] http://www-history.mcs.st-andrews.ac.uk/Extras/Einstein_ethe... - pay close attention to the clause after the comma in the second-last sentence there. Einstein's main point is that flat spacetime is simply a special case of a general curved spacetime where the curvature is dominated by the sources (matter) or the absence thereof.

Also, it's important to remember that nobody's splitting up of 4-spacetime into 3+1 space+time is "most right"; in our toy model above we have chosen a particular foliation and have not introduced any pesky relativistic observers. That we can use a background (especially flat spacetime) and perturb against that does not mean the background is physically privileged.

Ether is an independent absolute entity, and other objects move relative to it. Per GR, spacetime is an embodiment of relations between objects, not an independent absolute entity.

There might be some philosophical or creative writing overlap in the way the two are described, but the actual things that the theories are saying are completely different, lacking even a conceptual similarity.

Interesting article by Wilczek on the subject of aether: http://ctpweb.lns.mit.edu/physics_today/phystoday/Ether.pdf

Einstein also gave a talk in the late 20s (?) in which he discussed how spacetime in GR could be considered a kind of aether, though stripped of most of its physical properties.

Could you expand on the problems you have in mind with capacitors and motors? What do you mean by needing to find strange surfaces to "make Maxwell's equations work?" Do you mean find strange surfaces to get useful results, or that the results are wrong if you pick the "wrong" surfaces?

Maxwell’s works all assume an ether, so it’s not really true to state that the equations don’t need one.

The aether is not required, assumed by or contained in the equations. Its existence was a physical hypothesis which was discarded after Einstein's work on special relativity and a multitude of related experiments.

Funny that Maxwell has an entire chapter named after him in the history of the ether [1]

[1] https://archive.org/details/historyoftheorie00whitrich/page/...

Maxwell did not use vector calculus, which actually makes it easier to understand because it bypasses having to convert the vectors to coordinates. I find shorthand notation more confusing than writing it all out, even if the latter takes more room. I did not understand general relativity at all until I saw an example where everything was written out, and then it made much more sense.

This is something of a myth. Maxwell actually uses both to ease the mathematical pain for his readers - he writes the equations out coordinate by coordinate, and he also compresses them using Hamilton's (quaternion) model of vectors. In fact Maxwell was the one who introduced the gradient, curl, and divergence (actually he used convergence) operators, and he addressed precisely your concern in his book by giving both formulations!
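For what it's worth, the two styles are easy to compare side by side. Below, the compact curl operator and the same operation written out coordinate by coordinate give identical results; the field E is just an arbitrary example of mine, checked with sympy:

```python
from sympy import sin, exp, diff
from sympy.vector import CoordSys3D, Vector, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# An arbitrary example field and its components.
Ex, Ey, Ez = sin(y * z), exp(x * z), x * y * z
E = Ex * N.i + Ey * N.j + Ez * N.k

# Shorthand operator form: one symbol for the whole operation.
compact = curl(E)

# Written out coordinate by coordinate, as Maxwell also presented it.
written_out = ((diff(Ez, y) - diff(Ey, z)) * N.i +
               (diff(Ex, z) - diff(Ez, x)) * N.j +
               (diff(Ey, x) - diff(Ex, y)) * N.k)

print((compact - written_out) == Vector.zero)  # True
```

Whether the one-symbol form or the three-equation form is clearer is exactly the pedagogical disagreement in this subthread.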

Quaternions are not a "model of vectors."

Maxwell used per-coordinate equations and quaternion equations, but never wrote out what are now known as "Maxwell's equations," which were first formulated by Heaviside using Gibbs-Heaviside vector calculus.

Gibbs-Heaviside vector calculus formulations proved far more valuable to engineers and scientists than the equivalent quaternion formulations - despite the criticisms from Hamilton and Tait.

Maxwell explicitly uses the imaginary part of a quaternion as a model of vectors, just as Hamilton did:

> A german letter denotes a Hamilton vector, and indicates not only its magnitude but its direction. The constituents of a vector are denoted with roman or Greek letters. [1]

Gibbs and Heaviside of course use a different algebraic model of vectors, which is the one used in introductory courses now. But the more significant change made to get to "Maxwell's equations" as we know them now is not a slight change in the mathematical formalism.

The big difference is a change in the underlying physical model: Maxwell's original treatment is largely in terms of potentials, which Heaviside modified to focus on the E-M force fields directly. This change is what gives the familiar four equations we know today, not fiddling with your parameterization of the rotation group, or similar games. Nowadays physicists frequently use a tensor or forms model of vectors in addition to the Gibbs model.

[1] https://archive.org/details/ATreatiseOnElectricityMagnetism-...

> Quaternions are not a "model of vectors."

I don't think I agree. Yes, they are not the Gibbsian vector, but they are a pretty damn good model of what we want to model as vectors. It's already a little closer to geometric algebra / calculus than Gibbsian vectors. It's a pity that Grassmann's and Clifford's work gained attention much later.

Once JavaScript is out of the bag, one can't do much about it. That's how I feel about Gibbsian vector calculus and geometric algebra. Not that JavaScript or Gibbsian vectors are bad, quite the contrary, they are astonishing in their own right and scope, but one still feels they could have been so much better.

I think geometric algebra is a very cool and elegant approach as well, but it has not yet proved useful or valuable to working engineers.

If we want to compare to programming languages, perhaps Haskell vs. C. One is beautiful and elegant, the other is down to earth and conceptually simple for "normal" people. One is primarily used by academics and researchers, the other is used to build the world. Both are admirable.

> has not yet proved useful or valuable

They would have, had they been the "first mover". Hence my JavaScript analogy.

Had a better language been released at that time it would have been just as useful and valuable. In fact much more so. But as is usual, a "worse_is_better" first mover accrues so much head start in the mind share and in the tooling / literature that it becomes impractical to switch, especially when both can express the same set of things (just that one does it more concisely than the other). Another shallower example: imperial vs metric units.

I don't think it's like Haskell::C at all. Haskell is way more mathy and has a steeper learning curve than C. One who can master curl, divergence, and the vector Stokes' theorem can master geometric algebra with less effort.

> not yet proved useful or valuable to working engineers

The people who value it are those who must deal concretely with the details of the model on a daily basis: computer programmers working in graphics, computer vision, robotics, physical simulation, ...

It doesn’t do all that much for many pure mathematicians, who are happy to hand-wave away the fine details, content with the knowledge that the languages can be proven equivalent by someone else, just as it doesn’t do much for the kind of engineers who just use someone else’s software for their work.

Of course, the current mess is a major point of pain/confusion for the generations of undergraduates studying vector calculus. But they should go through the same hazing ritual the rest of us did, right? It’s only fair.

I have a copy of Maxwell's two volumes on electromagnetism and what makes it difficult to read for me is that his "equations" are actually a long list of disparate equations that describe a unified theory. It's not a simple "answer" such as E=mc^2, it is a series of such descriptor answers.

Mathematically, it is easier for me to focus on one aspect and study that aspect's equations (at a time).

"The phrase “Another theory of electricity which I prefer” seems deliberately intended to obscure the fact that this was his own theory."

To me, this just sounds like typical British understatement, which I'm sure the audience at the time would have understood.

>We still have passionate arguments between believers in various interpretations of quantum mechanics, the Copenhagen interpretation, the many-worlds interpretation, the decoherence interpretation, the hidden-variables interpretation, and many others.

Luckily there has been slight progress since 2007: https://www.quantamagazine.org/frauchiger-renner-paradox-cla...

It seems that this arguably new paradox has some issues: https://www.scottaaronson.com/blog/?p=3975

The theory is very easy to understand. It's the differential notation and the manipulation of said notation that is challenging for people who don't math often.

Say what? Start with the integral representations of the equations a la Halliday and Resnick. You can do tons of useful problems. Once you understand vector calculus, you will understand the equations as written in differential form. Then pick up a copy of Purcell to see how magnetism comes out of the Lorentz transformation of electrostatics. If you are mainly interested in applied problems, go through Corson and Lorrain.

Finally:

1. Realize that the treatment of ferromagnetism is much too limited in most E&M texts.

2. Optics is also worthless in those books. It is a field unto itself, with theoretical, applied, and quantum parts all handled by different books.

3. You don't use Feynman's Lectures to learn anything the first time. You use them to see if you understand physics as well as Feynman (you don't). All the problems have to have been done BEFORE this step.
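To make Purcell's punchline concrete, here is a small sketch (mine, with the standard formulas stated from memory, so double-check before relying on it): for a point charge in uniform motion, the magnetic field is exactly B = (v x E) / c^2, i.e. magnetism falls straight out of the boosted electric field.

```python
import math

c = 299_792_458.0    # speed of light, m/s
k = 8.9875517923e9   # Coulomb constant, N m^2 / C^2

def fields_of_moving_charge(q, v, r):
    """E and B at displacement r = (x, y, z) from a charge moving with
    velocity v = (vx, 0, 0), at the instant it passes the origin."""
    vx = v[0]
    gamma = 1.0 / math.sqrt(1.0 - (vx / c) ** 2)
    x, y, z = r
    # exact field of a uniformly moving point charge
    denom = (gamma**2 * x**2 + y**2 + z**2) ** 1.5
    E = tuple(k * q * gamma * comp / denom for comp in (x, y, z))
    # B = (v x E) / c^2 with v along x
    B = (0.0, -vx * E[2] / c**2, vx * E[1] / c**2)
    return E, B

# charge of 1 microcoulomb at 0.6c, observed 1 m to the side
E, B = fields_of_moving_charge(1e-6, (0.6 * c, 0.0, 0.0), (0.0, 1.0, 0.0))
print(B[2] / E[1])   # ratio is v / c^2: the whole B field comes from E
```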

You seem to be responding to the title, not the essay.

Dyson describes the theory as "simple and intelligible" once you accept the concept of fields, which was very much foreign in Maxwell's time.

You’re right. I’m fairly embarrassed.
