Why is Maxwell's theory so hard to understand? (2007) [pdf] (cam.ac.uk)



My proudest moment in high school was getting a 5/5 on the calculus-based AP Physics C exams at 15, with no calculus and only rudimentary algebra knowledge at the time. That experience permanently colored my thinking and made me much more open to practicing thorough visual imagination as a way to solve problems. I found that practice useful all the way through my EE degree's vector fields courses a decade later.

I think that's the modern fundamental difficulty in Maxwell's reworked equations - the 4 we all know and love, not the 20 or so he originally published. To even begin to get a true intuition for them, you have to get really really good at visualizing idealized objects with flows running through surfaces, and (if you're lucky) symmetries that cancel each other out. You can't be afraid of imagining the infinitely small and the continuous to really get the most out of it, even if you "know" on some deeper level that the continuity of spacetime is a convenient approximation.

14 years later I am still grappling with the beauty of saying "yeah yeah, this area of interest is technically discrete, but let's pretend it's continuous and see what kinds of stuff falls out." If you have examples of things like this in other areas like mathematical finance, I'd love to hear about them.


The originally published equations were "20 or so" because one equation was written for each scalar component.

Rewriting the equations in vector form reduces their number to the modern four.

Moreover, the original equations are the complete system.

The variant with 4 equations is the simplified variant for vacuum, which is mostly useless except for studying the propagation of electromagnetic radiation in vacuum.

The complete system has around six or seven equations, or even more, depending on whether one chooses distinct notations for the various physical quantities (electric polarization, magnetization, the current of free charge carriers, etc.) or not.

The variants with fewer equations are simplifications that are valid only in linear media, because only there do you have proportionality relationships between quantities like the electric field and the electric polarization.

Instead of learning a large number of simplified variants of the Maxwell equations with limited applicability, it would be much better if textbooks presented from the beginning the one complete variant that is always true, which must be in integral form, as initially published by Maxwell.

The many simplified variants (for media without discontinuities, where the differential forms are valid; for stationary media; for linear media; for vacuum; and so on) can easily be derived from the general form, while the reverse is not true.


> The originally published equations were "20 or so" because one equation was written for each scalar component.

> Rewriting the equations in vector form reduces their number to the modern four.

And if you use differential forms or 4D tensor notation, they reduce to a single equation. Of course, for a lot of practical problems this is not very useful, and it's better to work with the 3D vector form.
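
For concreteness, a sketch of the compact forms (conventions vary; this is the common SI presentation, with F the field-strength tensor/2-form):

    \partial_\mu F^{\mu\nu} = \mu_0 J^\nu, \qquad \partial_{[\alpha} F_{\beta\gamma]} = 0
    % equivalently, in differential forms:
    \mathrm{d}F = 0, \qquad \mathrm{d}{\star}F = \mu_0 {\star}J

(Geometric algebra goes one step further and fuses even these two into a single equation, roughly \nabla F = J, units aside.)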

> The variant with 4 equations is the simplified variant for vacuum, which is mostly useless except for studying the propagation of electromagnetic radiation in vacuum.

> Instead of learning a large number of simplified variants of the Maxwell equations with limited applicability, it would be much better if textbooks presented from the beginning the one complete variant that is always true, which must be in integral form, as initially published by Maxwell.

Here I have to strongly disagree. The version of Maxwell's equations that is fundamental and exactly correct [1] is the vacuum version. The ones with magnetization and displacement vectors are only approximations in which you assume continuous materials that respond to fields in a simple way. In truth, materials are made of atoms and are mostly vacuum: there is no actual displacement vector if you look closely enough.

Also, the vacuum Maxwell equations are useful in many scenarios. For instance, that's how you compute the energy levels of the hydrogen atom, and how you derive QED. And you have to start from them to derive the macroscopic versions with magnetization and displacement that you seem to like.

[1] Well, up to nonlinear quantum-mechanical effects.


Even the vacuum version is incomplete without adding an equation for force or energy, because no meaning can be assigned to the electromagnetic field or potential other than through its relationship to force or energy.

Even today, there exists no consensus about which is the correct expression for the electromagnetic force. Most people are happy to use approximate expressions that are known to be valid only in restricted circumstances (like when the forces are caused by interactions with closed currents, or the forces are between stationary charges).

Moreover, when the vacuum equations are written in the simplified form present in most manuals, it is impossible to deduce how they should be applied to systems in motion without adding extra assumptions, which usually are not listed together with the simple form of the equations (e.g. the curl and the divergence are written as depending on a system of coordinates, so it is not obvious how these coordinates can be defined, i.e. to which bodies they are attached).

While the vacuum equations are fundamental, they may be used as such in only a few applications, like quantum mechanics, where much more is needed beyond them.

In all practical applications of the Maxwell equations you must use the approximation of continuous media that can be characterized by averaged physical quantities that describe the free and bound carriers of electric charge. The useful form of the Maxwell equations is that complete with electric polarization, magnetization, electric current of the free carriers and electric charge of the free carriers. It is trivial to set all those quantities to zero, to retrieve the vacuum form of the equations.
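
For reference, a sketch of that complete macroscopic system in SI units (differential form shown for brevity; the integral form follows via the divergence and Stokes theorems wherever the fields are smooth):

    \nabla \cdot \mathbf{D} = \rho_f, \qquad \nabla \times \mathbf{H} = \mathbf{J}_f + \partial_t \mathbf{D},
    \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\partial_t \mathbf{B},
    \mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P}, \qquad \mathbf{H} = \mathbf{B}/\mu_0 - \mathbf{M}.

Setting P = M = 0 and reading rho_f, J_f as the total charge and current recovers the vacuum form.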


I agree that to fully specify electromagnetism you also need to include how the fields affect charged matter. So EM = Maxwell's equations + the Lorentz force equation (I am not sure why you say there is no consensus about what this is; that is new to me).
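
For reference, the textbook statement of that force law, for a point charge q moving with velocity v:

    \mathbf{F} = q \left( \mathbf{E} + \mathbf{v} \times \mathbf{B} \right)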

This is just a matter of taste, but OTOH I would not include descriptions of how some materials respond to the fields in the continuous limit as part of a definition of EM.

It is true that for most terrestrial applications you do need those to do anything useful with EM. But if you want to study plasmas you need to add Navier-Stokes to EM; that doesn't mean hydrodynamics is part of EM. To study charged black holes you need EM + GR, but it still makes sense to treat them as mostly separate theories.


You also need to include how charged matter affects the source terms in Maxwell's equations (i.e. moving charges contributing a current density).

I actually basically agree with your viewpoint, I studied Plasma Physics in graduate school in a regime where we did _not_ use Navier-Stokes or constitutive relations and everything was in fact just little smeared-out packets of charge moving according to the Lorentz Force Law and radiating.


The fact that you can write it in one equation shows that the theory is very simple because it is an expression of symmetry. E and B are not these two different things related by an inscrutable cross product but just two aspects of the same thing.


You could write all of physics as a single simple equation, deltaW = 0, where deltaW is the deviation of the universe from the relevant math.

Writing Maxwell's equations as 1 equation, or 4, or more, is just an aesthetic choice where you decide what to accentuate.

20 might be too many, because the three dimensions are not really different from each other, so notation that maps over them wholesale is probably a good idea.

4 equations seem perfect if you want to differentiate between the classical effects of the electric field and the relativistic effects (magnetism).

I don't know whether the single-equation form really shows that they have the same source and that relativity is involved, or whether it is just a matrix mashup of the 4 separate equations that doesn't really provide any insight.


It's true that you can always define notation to combine all equations you want into one. This means that, by itself, the observation that you can write Maxwell's equations as a single equation doesn't say anything very meaningful.

However, the notation that lets you do this in this specific case is very natural and not specific to Maxwell's equations. Differential forms are very natural objects in differential geometry; mathematicians would likely have introduced and studied them even without inspiration from physics. The fact that Maxwell's equations are very simple in this natural geometrical language does say something meaningful about their nature and elegance, I think.


> continuity of spacetime is a convenient approximation

I disagree, and there's no evidence for this. This is computer science leaking out; physics has no formulation of spacetime in discrete terms, and indeed, all of physics presumes continuity.

In QM, the space of wavefunctions is infinite-dimensional and continuous, and if it weren't, QM wouldn't be linear.

Cognition is discrete, but the world is continuous.


If it really were continuous, so that physical quantities were real numbers as defined in mathematics, that would contradict maximal information density, because almost all real numbers contain an infinite amount of information.

The full argument is elaborated in "Indeterminism in Physics, Classical Chaos and Bohmian Mechanics. Are Real Numbers Really Real?" by Nicolas Gisin: https://arxiv.org/abs/1803.06824


All the people who use words like "information" in this context are conflating the thermodynamic, logical, probabilistic (and many other) senses, and equivocating between them.

"Information" is not a physical quantity, and there cant be a "volume" of it. Nor does this have anything to do with real numbers.

It is impossible for any system extended in space and time to "zoom infinitely" into a continuous range and hence record an infinite amount of information. No one claims this, and the formulation of physics (entirely on real numbers) does not require it.

Rather, to say, e.g., that space is continuous is to say it is unbroken. There is no physical quantity which is becoming infinite.


I suspect that Gisin has a very clear idea of what he means by "information" in this context, having worked for over 40 years at the forefront of theoretical physics with a specialisation in quantum information theory.


I can link to people who’ve worked their whole life on various fields of Physics who still talk about perpetual motion. I am not saying he is wrong in this specific case, but an appeal to authority is not very convincing.


I've given your comment quite a lot of thought over the last few days. Maybe too much.

At first I was inclined to agree with you that this is an appeal to authority, with the caveat that such appeals do not always constitute a fallacy. For example, if we both agreed that Gisin is an expert whose opinion on this topic can be trusted, then his statements are valid evidence one way or the other.

But then I realised that the very claim being challenged is whether Gisin knows what he's talking about. Floating his credentials and experience feels like a valid contribution. For what it's worth, back in my PhD days I read several of his papers and saw a couple of his talks at conferences, and can confirm he's one of the leading researchers in the field and is particularly thoughtful and careful in his work.


I don't have much issue with Gisin's solution; the idea that the reals are random is one resolution of the problem of how to deal with them that I like (since my meta-issue is whether reality is computable, I say it isn't, and randomness is not computable).

There's a problem for people who think reality can be modelled by computable functions of finite inputs: this makes classical physics non-deterministic, because chaos requires infinite precision for determinism.

So either you go for "reality is deterministic and continuous, and not computable" or "reality is non-deterministic and discrete, and not computable".

Either option in this fork includes properties that offend the minds of the people who want everything to be discrete.

I lean towards a preference for determinism and continuity (via, in QM, superdeterminism), since that's trivial to justify on our best physics.


Well, I don't think Gisin's mind is offended by non-determinism:

> I argue that there is another theory, similar but different from classical mechanics, with precisely the same set of predictions, though this alternative theory is indeterministic

and in the footnote he describes indeterminism to mean:

> given the present and the laws of nature, there is more than one possible future

Out of curiosity, why do you lean towards superdeterminism and not other deterministic interpretations of QM such as Many-Worlds or Bohmian mechanics?


Some people would disagree with dismissing information as non-physical. For instance: https://scottaaronson.blog/?p=3327 The argument there would be that stuffing an extra bit of information into an information-saturated volume would make it collapse into a black hole.
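
The bound alluded to there is usually stated as the Bekenstein bound for a system of energy E confined to a sphere of radius R:

    S \le \frac{2 \pi k_B E R}{\hbar c}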


It's not entirely clear "Energy" is a physical property either. By physical I mean a causal property of a system which is a basic constituent of reality.

For example, in E = 1/2 mv^2, a particle has kinetic energy in virtue of being matter in motion; it is motion and matter which are basic. Energy is just a system of accounting which tracks motion in the aggregate over time (with kinetic/potential just being the future/past in the accounting), hence why energy conservation is just a temporal invariance.

When making arguments about the physical properties reality has (e.g., whether aspects are continuous), you need to be exceptionally clear about what your terms mean, and terminology in physics isn't designed for this.

There are no "information saturated volumes", this is a series of abstractions piled on top of each other.

All the words in this area have quite complex formal definitions with semantics that are quite difficult to unpack; you cannot just go around saying "saturated volumes". It is this sort of language which breeds cranks, and pop sci does it with abandon.

This entire discussion is a matter of several PhDs, and is only done well by people with PhDs in the matter (philosophy of physics) or equivalent research. It's not possible to scrape fragments of what comp-sci bloggers say and derive much that's likely to be actually correct.


Nothing in physics is more basic than anything else, as there are equivalent formulations in other quantities. Energy and momentum are as real as matter and motion. Matter is bundles of energy exhibiting inertia, and motion is just some transformational relationship between phenomena in different regions of space-time. There is nothing "real" about any of this, only what animals like humans have evolved to model directly in their brains.


Aaronson is one of the world's top quantum computing scientists; he's a professor at UT Austin, I believe.

He's also written papers that are basically philosophy of physics. It would be interesting to go over what he has actually said on this topic.


Yes, the thermodynamic properties of information are well established.

Various Hawking-Bekenstein results about black holes relate to information density, especially, shockingly, that information is proportional to surface area, not volume. This makes perfect sense because a black hole has all its incoming matter and energy sprawled, flattened and red-shifted on its horizon (to a distant observer). It can never export its internal state to the outside world, so you might never expect a volume's-worth of states to be exposed.
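
Concretely, the Bekenstein-Hawking entropy of a black hole with horizon area A (note the area A, not a volume):

    S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar}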

The idea was generalized by 't Hooft to the Holographic Principle, for 2D screens encoding the state of 3D volumes on the other side.

However, the full AdS/CFT Correspondence only applies to a certain type of AdS space, not our actual dS space. At the moment, it seems half of theoretical physics doctoral students are trying to extend AdS/CFT to dS space (obviously - strings :) and half of observational astrophysics doctoral students are desperately hoping to show we live in AdS space - LOL


> If it really were continuous, so that physical quantities were real numbers as defined in mathematics, that would contradict maximal information density, because almost all real numbers contain an infinite amount of information.

Yeah. As they said, it’s computer science leaking out.

It can be misleading to reason about entropy, which is the relevant physical concept, as if it were strictly equivalent to information as computer scientists understand it. Entropy works perfectly fine with continuous densities of states and real numbers.


All observable quantities are eigenvalues of some operator, which are real numbers but discrete. How can they contain an infinite amount of information?


There are operators with continuous spectra. The previous commenter was accidentally half-right, in that the usual intro QM picture where everything lives in L2 really isn't fully rigorous, but this is fairly easy to resolve.

The correct setting is a rigged Hilbert space: given an algebra of operators A on a Hilbert space H, let S be the maximal subspace of H such that |sa| is finite for any s in S, a in A. These are your states. Operators in A don't necessarily have eigenvectors in H, but they do have eigenvectors in the space S* of all continuous linear functionals on S. So <x|, for instance, is just the map `psi -> delta_x(psi)`.


I take minor issue with the phrase "correct" here. That's one way you can do things, but it also works completely fine not to do that. Another way of setting these things up has your states be honest elements of L2, and says observables are just POVMs (i.e. maps from a space of measurable sets to positive operators which obey some natural restrictions like additivity). Then, given a measurable subset A of the spectrum of some operator, the Born-rule probability is just given by an inner product like <phi | P phi>, where P is the projector you get if you integrate the spectral measure of the operator over A.

This has the advantage of not having any funky "rigged" states suddenly appearing in your calculations and is also exactly how we deal with non-projective measurements in finite dimensional quantum mechanics.

See here, for example

https://en.wikipedia.org/wiki/POVM
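
To make that concrete, here is a minimal numerical sketch (my own toy discretization, not anything from the link): on a grid, the position operator is diagonal, the spectral projector for a measurable set A just zeroes amplitudes outside A, and the Born probability is the inner product described above.

    import numpy as np

    # Discretized L^2([0,1]) on N grid points; the position operator is
    # diagonal, and the spectral projector for a set A is the diagonal
    # indicator matrix chi_A(X).
    N = 1000
    x = (np.arange(N) + 0.5) / N
    dx = 1.0 / N

    psi = np.sqrt(2) * np.sin(np.pi * x)     # normalized: 2 * integral of sin^2 = 1
    A = (x >= 0.25) & (x < 0.75)             # a measurable subset of [0,1]
    P = np.diag(A.astype(float))             # projector from the spectral measure

    prob = (psi.conj() @ P @ psi).real * dx  # Born rule <psi|P|psi>
    print(prob)                              # ~0.818, i.e. 1/2 + 1/pi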


Sure, I should have been clearer: a rigged Hilbert space is the right setting for bras and kets. You can also get rid of them entirely. In my experience QM classes unfortunately tend to split the difference by slinging around suggestive nonsense like \int_{x} |x><x|.


Consider the Hilbert space L^2([0,1]) associated with a physical particle whose position is somewhere between 0 and 1, and the corresponding multiplication operator X which takes a wavefunction f and maps it to Xf, where (Xf)(x) = x f(x). Then X is a bounded self-adjoint operator. It doesn't have any eigenvalues or eigenvectors, but its spectrum is exactly the set [0,1], as you'd expect (perfect measurements of position return real numbers in [0,1]).

The spectral theorem, rather than decomposing X in terms of a sum of eigenvectors and eigenvalues, instead decomposes it as an integral over the spectrum with respect to the (spectral) projection-valued measure.

Now it is fair to question whether this "observable" is really observable, but it certainly works out mathematically consistently in the normal way we do things in quantum mechanics.
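
In symbols, with P the projection-valued spectral measure of X:

    X = \int_0^1 x \, \mathrm{d}P(x), \qquad
    \Pr(x \in A \mid \psi) = \langle \psi, P(A)\,\psi \rangle = \int_A |\psi(x)|^2 \, \mathrm{d}x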


As the sibling hinted at, Gisin's statement is quite sloppy and – at the very least – confuses "definite" "information"¹ (a given real number) with "uncertain" "information" (entropy), at least if you follow the definition of entropy by the book: The probability distribution for an observable that takes on (exactly) the value of a given real number with probability 1 has entropy 0.

That being said, Gisin's approach is still interesting and his results can still be valid. But he starts with the assumption that real (irrational) numbers are unphysical, i.e. that – in a sense – our observable from above can actually only take on certain (rational) values, and then he derives certain predictions from that.

¹) Putting "information" in quotation marks here because no one really knows what it is.


I am very sympathetic to Gisin and his cause, but he does not propose any sensible resolution. By the way, not a fault, and no blame for him. Pointing out logical deficiencies always comes before a satisfying solution, and he is to be praised for his insight.

There are many interesting ways to probe this problem.... here's one:

Say I tell you to imagine a circle, an ideal Platonic circle in a Cartesian coordinate system (real coordinates, first uneasiness). Let's ignore translation, so it is centered at (0,0). I tell you the radius. Can you imagine the circle with Plato? Model the circle? Reproduce the circle? Do you need pi? Does the circle include or encode pi? But pi has infinite information.

Perhaps all you need is the square root function? But that's also an infinite Taylor series expansion. You can plot and recreate the circle to any precision if you have a square root function. The series will only need to run to the required precision. The circle will always be granular, depending on the number of terms you use in pi, or the square root function. Yeah, right, obvious, so why is that a problem?

What if I tell you the circle is the physical manifestation of equipotentials of a stationary charge (say, a nucleus) or mass (say, Earth), with an inverse square law: basically a geometric fall-off with range determined by spatial (circular, spherical) considerations. What is the force at some distant point? Do you need pi? Do you need the square root function? Or reciprocals? How does the other charge or mass feel the 56,323rd decimal place of the force due to the potential?

Maybe it doesn't, because by the time it has felt the second decimal place, time has moved on, the charges/masses have moved on, and the nuance of what would've/should've been felt in a never-changing universe is never experienced. There is a modified differential equation that relates various time derivatives to the precision of experienced forces (this almost sounds like relativity :)

The discrete explanation with photons goes like this: the force is produced by radiating photons. They automatically encode the geometric expansion as inverse square law, because of their pathways, no need for pi, or sqrt functions. But that is statistical, the accuracy is only as good as the number of photons that can arrive from the source. The circular/spherical nature of the force only emerges over time, as photons arrive and act. The accuracy of smooth circularity and inverse square only establishes itself over time...

Elapsed time affects experienced precision - hmmm, interesting.

How would you quantify such a thing, where time changes the precision of what you feel? Well, the other obvious example is the Heisenberg Uncertainty Principle. This is just a simple and obvious example of Fourier Analysis for any theory based on a linear wave equation. It almost doesn't need stating, and if it must have a name, it is certainly Fourier, not Heisenberg. Anyway, any math/physics/engineering student knows Fourier to their core, and it gives a nice solution to the information problem: coordinates may be real-valued degrees of freedom, but there is no way to mathematically or physically resolve all coordinates and their derivatives to infinite precision. It's just not possible, even if the underlying equations/reality maintain the fiction of real-valuedness.

Fourier combines time, waves, amplitude, velocity (momentum, etc.) with a specific expression for possible information. A picture is worth a thousand words at this point, just look at a wave-packet, it's obvious. Fourier is a masterwork, and vastly underappreciated as a fundamental limit on knowledge, in a real world sitting on smooth continuous waves.

So Fourier sets limits on knowledge, even in the wavy world of the smooth continuum. Of course, I do not believe in the smooth continuum anyway, but Fourier is my wingman to fight the real-infinitists on their own smooth turf.
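
A toy numerical check of that Fourier limit (my own numpy sketch, nothing from the thread): a Gaussian wave packet saturates the time-bandwidth bound sigma_t * sigma_f >= 1/(4 pi).

    import numpy as np

    # Gaussian wave packet and its power spectrum; the product of their
    # second-moment widths should come out near 1/(4*pi) ~ 0.0796.
    t = np.linspace(-100, 100, 2**14)
    dt = t[1] - t[0]
    sigma = 2.0
    psi = np.exp(-t**2 / (4 * sigma**2))

    def width(x, p):
        p = p / p.sum()                      # treat p as a density
        mean = (x * p).sum()
        return np.sqrt((((x - mean)**2) * p).sum())

    f = np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))
    spec = np.abs(np.fft.fftshift(np.fft.fft(psi)))**2

    st = width(t, np.abs(psi)**2)            # ~ sigma = 2.0
    sf = width(f, spec)                      # ~ 1/(4*pi*sigma)
    print(st * sf, 1 / (4 * np.pi))          # both ~ 0.0796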


> But pi has infinite information

It does not, according to any sane way of defining its information content. For example, the Kolmogorov complexity of pi is clearly finite: I can write down a program for a Turing machine which will run (forever) and keep writing down digits of pi as it does so.
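
For instance, a sketch of such a program (Gibbons' unbounded spigot algorithm; the cryptic state variables are the ones from his paper):

    from itertools import islice

    # A finite program that streams the decimal digits of pi forever,
    # which is exactly why the Kolmogorov complexity of pi is finite.
    def pi_digits():
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n
                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            else:
                q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                    (q * (7 * k + 2) + r * l) // (t * l), l + 2)

    print(list(islice(pi_digits(), 10)))     # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]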


Not true either. The original Kolmogorov complexity is for finite strings. Plus, the program would need to store that for whichever special strings you choose you will use this function but for other ones not. That will be a giant table that is part of your program.

That's also a different point than the parent's. Seems they're saying if you were to specify pi as the limit of some expansion that describes the physical process of photons arriving in some area, then that specification's information increases with more terms added. Pi, being almost random by every statistical measure, has as much information as a random string, in fact, in any normal conception of information. You cannot wave that away by machine manipulation tricks or by defining a new constant, and this is borne out also by the parent's physical argument that in reality there are no low-complexity universal constants, but that there may be limits to information density (in space and time).

Continuous physics can be a manipulation of limiting quantities without being literal.


> The original Kolmogorov complexity is for finite strings.

I wrote "Kolmogorov complexity" not "original Kolmogorov complexity" so this isn't particularly relevant. The application of the concept to the infinite string which represents pi is essentially trivial.

> Plus, the program would need to store that for whichever special strings you choose you will use this function but for other ones not. That will be a giant table that is part of your program.

I honestly can't parse this.

> Pi, being almost random by every statistical measure, has as much information as a random string

This is wrong. You can consider something like a simple communication task. Alice and Bob share a phone line and she is attempting to tell him a number. Every second the line allows her to send a bit to Bob. For a truly random number she has to use the line infinitely many times to tell him the number. To send pi she can send a finite number of bits which amount to a program to compute pi and he can do the computation on his end.


I urge you to go back and look at how Kolmogorov complexity is defined. It includes the notion that a program needs to decide whether to output the string directly or to generate it from some program.

You're assuming Alice and Bob have already pre-synchronized what kind of computing machine is going to be used, one in which pi is the output of a relatively short program, as opposed to another type of machine where some other random-looking number has that property (random to you, pseudo-randomly generated via some machinery for all you know). You are assuming many things away.

Also it is absolutely not trivial to extend Kolmogorov complexity to infinite strings. There are multiple formulations and they are a lot more difficult than for finite strings. Not the computation part but the complexity assignment part.


I agree there is a bunch of complexity in generalising Kolmogorov complexity to general infinite strings. However, I'm not really trying to do that here; all I want is enough to back up the statement I made before, that the complexity of pi is finite. Doing that is much more trivial than what you're talking about.

There's a bunch of fine detail in getting it down to defining an actual number measuring complexity, which I don't care about at all; all I care about (in the context of this discussion) is that the number is finite.


But we never know any quantity to full precision, so it's not like we get infinite bits about any given quantity.


That depends on the unit of measure. We know that the charge of the electron is exactly 1, with the appropriately chosen unit.


> physics has no formulation of spacetime in discrete terms

There are some attempts of working with discrete spacetime (e.g. causal set theory), but yeah, all our best descriptions so far very much assume smooth spacetime.


> Cognition is discrete

There's little evidence even of this, except in the trivial sense that language (minus prosody) is composed of discrete units.


You're technically correct.

That said, our Turing Machine model of computation is discrete, and the Church-Turing thesis implies human thought is Turing Complete.

It's not empirical evidence, but it's something. (I really doubt an empirical test is possible at all, so it seems philosophizing is all we have, unfortunately.) I'm not aware of any (communicable) model of thought that actually can't be reduced to the Turing model (in fact, that is AFAIK precisely the reason he proposed the model).

Analog signals can be approximated to arbitrary precision, so while we conventionally think of them as continuous, it doesn't imply our cognition really has infinite-precision floats internally...

I think it's really unfair to focus on only half of the picture (saying there's no evidence for "cognition is discrete") when in fact we have no evidence at all about whether anything is fundamentally continuous or merely approximated as such with high precision.

Traditionally the math in physics is continuous, and the math in computing is mostly discrete. If people point to Hilbert space as some kind of justification for believing physics is continuous, then it seems equally valid (or invalid) to use the Turing model as justification to believe cognition is discrete. I think both approaches are misguided, but as I said, it's really unfair to point out only the convenient half of these invalid arguments.


It is discrete insofar as we're talking about sequences of thoughts, i.e., reasoning.

What offends the minds of some people is the world might not be like their mind at all. They want always to analogise everything to Reason.

Everything should be countable, everything should be knowable, etc.


And everything should be... "quantizable"?

https://www.energy.gov/science/doe-explainsquantum-mechanics


That's the "trivial sense" I'm talking about. If we restrict "cognition" to the stuff we know is discrete then trivially it's discrete. But cognition is a hell of a lot more than that.


I don't see how what we know is discrete.

A word doesn't even have a discrete meaning, except locally in relation to other words.

Saying A = B + C looks discrete, just by hiding any potential non-discreteness inside B and C.


I entirely agree. I wasn't making a statement that "what we know is discrete". I was referring to a particular subset of cognition as "what [i.e. the things that] we know are discrete".

There are aspects of cognition that are discrete: a language contains a finite set of phonemes and words, a human mind is capable of (painfully slowly) carrying out purely symbolic algorithms like those a computer performs, etc. My point was that these things are a small subset of cognition, and most of cognition we have no particular reason to think depends on discreteness, which I think is the same point you're making.

Personally I strongly suspect that the "discrete" aspects of cognition are things that have evolved on top of / within a system that is fundamentally continuous (analogue) in nature.


> My point was that these things are a small subset of cognition, and most of cognition we have no particular reason to think depends on discreteness, which I think is the same point you're making.

How do you convince yourself that you have thoughts that cannot be accurately written down no matter how many words you use?

> Personally I strongly suspect that the "discrete" aspects of cognition are things that have evolved on top of / within a system that is fundamentally continuous (analogue) in nature.

How do you tell whether things are really fundamentally continuous, or a really high definition pixel art?


I can't. That's why I said "I personally strongly suspect" rather than presenting my partially-informed intuitions as facts.


Discrete here means "non-continuous" - i.e. there is no smooth transition function between ideas/thoughts/rationalizations.

For example, a formal proof is a discrete process: it follows step-wise rules that you can assign natural numbers to (this is the first step, this is the second step, this is the third step). A non-discrete process, a continuous one, would have a smooth transition between these steps, which is hard to even imagine.

While I am not convinced it is correct to say that "human reasoning is discrete", human language is definitely discrete. Words don't blend smoothly into each other. If you don't believe me, try to define a function f:[0,1] -> Words, such that f(0) = "red" and f(1) = "blue" and tell me what is f(sqrt(2)/2), or what is df/dx.


what's a sequence of thoughts?


What are thoughts, anyway?


This seems very unlikely.

If you're from Copenhagen, every measurement is a lossy discontinuity that resets the wavefunction.

This is not an abstraction, it's directly observable.

As for discrete formulations:

https://en.wikipedia.org/wiki/Causal_dynamical_triangulation


Interpretations of QM and measurement have very little to do with whether space-time is discrete or continuous. The simple fact is that no common QM formulation uses discrete mathematics for space-time, and it's unclear if any that does would even work.

Also, your link is not a formulation of QM; it is a different theory which makes different predictions (it is a quantum gravity theory). And, by the sound of the Wikipedia article at least, it has not actually been proven equivalent in the regimes where it needs to be ("There is evidence [1] that, at large scales, CDT approximates the familiar 4-dimensional spacetime"; in other words, it is not fully worked out whether this is the case).


Is wave function collapse continuous? Is photon absorption and emission continuous?

No one knows what the universe is truly made of.

Reality is measured up to certain error tolerances.

Don't confuse the map (math and physics) with the territory (reality).

Also, Stephen Wolfram would like a word with you.


Collapse is not continuous, by definition. Copenhagen is discontinuous.

If you want continuity, then shun collapse, and believe in Many Worlds. You know you should.

Wolfram is another conversation, and one that does not fit in this margin.


> the world is continuous

Is it though? Does it matter one way or the other? Do we think reality is the math in some way, or is the math a really darn good model of the reality?


Imagine there was a grid for space. For simplicity, consider a regular grid of size 1 unit in one direction and 1 unit in a perpendicular direction. If such a grid existed, using one unit of ?something? would move you 1 unit along the axes of the grid, but you'd need 2 units of ?something? to move sqrt(2) units at 45 degrees to the grid. Any discrete grid of any shape or size or pattern would have something like this, some sort of preferred alignment, but as far as we can tell there is no such preference. Physics in free space is rotationally invariant, and thus not on a grid, thus continuous.
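
A toy illustration of the argument (my own sketch): if motion is restricted to unit hops along the axes of a square lattice, the cost of covering the same Euclidean distance depends on direction, so a lattice picks out preferred axes.

    import numpy as np

    # Cost, in unit hops along lattice axes, of covering Euclidean
    # distance d in a given direction (the Manhattan metric).
    def hops(d, angle):
        return abs(d * np.cos(angle)) + abs(d * np.sin(angle))

    for deg in (0, 30, 45):
        print(deg, hops(1.0, np.radians(deg)))
    # 0 deg -> 1.00 hop, 45 deg -> ~1.41 hops: direction-dependent,
    # so lattice kinematics is not rotationally invariant.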


You don't have to imagine an ordered grid. If the grid unit is small enough (say the Planck length, 1.6 x 10^-35 m) and the grid is chaotic, then at the distances of ~10^-16 m that we can measure, everything will look the same in all directions.

This happens the same way in which steel shows isotropic behavior even though its microscopic structure is anisotropic.

So there is no easy way to prove or disprove continuity of space.


The "underlying issue" often at stake in the debate is whether reality is a computer, since it would need to be discrete if so, and often whether a computer can be made to simulate it.

However, what's missed here is that discrete is a necessary but not sufficient condition.

Once you give any sort of plausible account of how reality could be discrete, as you've done here, you end up with non-computable aspects (e.g., typically randomness). So the metagame is lost regardless: reality isn't a computer (/ no complete physical theories of reality are computable).

Though the meta-meta-game around "simulation" is probably internally incoherent in itself: whether reality is a computer or not would really have nothing to do with whether any properties had by it (e.g., mass) are simulated.

Since either you take reality to have this property, and hence "simulation" doesn't make sense, or you take it to be faked. If it's faked, being computable or not is irrelevant. There's an infinite number of conceivable ways that, globally, all properties could be faked (e.g., by a demon that is dreaming).


I don't see how randomness can make anything non-computable. Sure, you may not know the exact random numbers, but you get a similar enough universe with any other sequence of random numbers.

Also, continuous doesn't mean uncomputable either, because in many cases the infinite amount of computation for the continuum does not add anything interesting, and a finite approximation works well enough.

> So the metagame is lost regardless: reality isnt a computer (/ no complete physical theories of reality are computable).

I don't see any evidence for this. For now we do not have a proof one way or the other. If, for instance, it turns out that quantum computers really can run Shor's algorithm to factor very large numbers, it would be good evidence for the continuum, but we are not there yet.

But even that would not be evidence for reality not being a computer, since it would still allow the possibility of reality being a computer that can perform operations on real numbers.


Why is randomness non-computable? In computer science, the theorem is that the set of all Deterministic Finite Automata is equivalent to the set of all Nondeterministic Finite Automata. It is a non-obvious theorem with a one-page proof, taught in every junior-level theory of computation course. This theorem is what lets deterministic and nondeterministic Turing machines be used interchangeably in many subsequent proof sketches in these classes.
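
For reference, a minimal sketch of the subset construction behind that equivalence (illustrative toy code; my own NFA encoding, no epsilon-transitions):

    from itertools import chain

    # Subset construction: turn an NFA (state -> {symbol: set of states})
    # into an equivalent DFA whose states are frozensets of NFA states.
    def nfa_to_dfa(nfa, start, accepting):
        alphabet = {sym for trans in nfa.values() for sym in trans}
        dfa, todo = {}, [frozenset([start])]
        while todo:
            state = todo.pop()
            if state in dfa:
                continue
            dfa[state] = {}
            for sym in alphabet:
                nxt = frozenset(chain.from_iterable(
                    nfa.get(q, {}).get(sym, ()) for q in state))
                dfa[state][sym] = nxt
                todo.append(nxt)
        return dfa, frozenset([start]), {s for s in dfa if s & accepting}

    # NFA accepting binary strings whose second-to-last symbol is 1
    nfa = {'a': {'0': {'a'}, '1': {'a', 'b'}},
           'b': {'0': {'c'}, '1': {'c'}}}
    dfa, q0, acc = nfa_to_dfa(nfa, 'a', {'c'})
    print(len(dfa))   # 4 deterministic states (exponential in the worst case)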


> Nondeterministic Finite Automata.

The "Nondeterministic" in NFA means its transition function goes from states to sets of states, instead of from states to states. Informally, it can explore multiple paths in parallel for the cost of one. They're not probabilistic.


The computational semantics of the NFA simply requires that the next state be one of the allowable next states in the transition function d: Q × Σ --> PowerSet(Q).

Thus, this semantics implicitly encodes the notion that the machine is nondeterministically choosing the next state in each execution.

The decision problem of whether an NFA accepts a string w is what allows for the informal parallel interpretation: it accepts iff you imagine the computation forking off a new thread at each nondeterministic branch. But to say that this is not nondeterministic or not probabilistic is like saying the Many-Worlds Interpretation means there is no real superposition, or something like that. It's like saying a throw of a die does not really involve probability because of a symmetry argument that a die has six equal sides. Mainly, I don't understand that, because I see probability as a way to implement nondeterminism: a system is probabilistic only because it is making nondeterministic choices according to some probability distribution. And checking Sipser 2nd ed. p.368: "A probabilistic Turing machine is a type of nondeterministic Turing machine in which each step is a coin-flip step".

Anyhow, my main issue was that the original commenter casually claimed that probability makes things (physics) uncomputable. But Turing computability has nothing to do with probability: the closest concept is the nondeterministic Turing machine (NDTM), and it is a basic exercise to show that NFAs and DFAs, as well as NDTMs and DTMs, are computationally equivalent; there are theorems for that.

Meaning either they are using an idiosyncratic definition of computability, or they are unaware of the introductory theory-of-computation material that explains formally what Turing's and Church's theories were about when clarifying the concept of computability. Or maybe they have a deeper philosophical disagreement with computability and complexity theorists (maybe they reject Sipser's definition above), but these are standard undergraduate curricula in CS by now, and it could be argued that it is the non-CS experts who haven't thought deeply enough about what computability really is and would benefit from learning from these subdisciplines. I don't know, as they did not reply.


A chaotic grid would be macroscopically observable, because random + random != 2 x random; it's equal to a bell curve. Everything would be smeared as a function of distance, which we don't see.

This characteristic is observable for metals as well. Steel becomes less flexible as it's worked because its grains become smaller and more chaotic: a microscopic property with a macroscopic effect.


If we are talking about a grid with a very small spacing, say around the Planck length, I don't see how we would be able to macroscopically observe it.

Everything we can see move on the grid is at least 20 orders of magnitude bigger than the grid spacing. Any macroscopic objects we can experiment with are more like 30+ orders of magnitude bigger than the grid spacing and consist of numerous atoms that will all be moving within the object due to thermal jiggling over distances orders of magnitude bigger than the grid spacing.


In physics you never have measurements differentiating between distance 2 and say 2+10^-20, and that gives enough space to hide any 'bell curve' you want.


There are many times in physics where people have thought they've had to choose between either one thing or its opposite, where both choices had clear deficiencies. The ultimate solution ended up being a new hybrid that nobody thought of for a long time.

I kind of suspect "is the universe continuous versus discrete" will come down to that. I don't know what a hybrid of such things looks like. With our current conceptions it seems impossible. But it always does, before the breakthrough comes and then in hindsight all the people of the future will get to look back at us going "How could they not see this obvious thing?", to which my only defense is that you, dear future reader, only think it's obvious because it was handed to you on a silver platter and you'd be as confused as we are if you were back here with us.


My guess is every physical value can be written as a sum of rational sines (i.e. sin(tau * a/b)).


> Is it though? Does it matter one way or the other?

These are two very different questions. As for the former, I don't believe there is consensus at this point, with good arguments for and against. As for the latter, if we could reliably prove that reality is not analog but digital, it would have consequences at various levels, and we might make better choices when using math to describe it and make approximations.


Given that no one, or at least no human, can experience reality in its whole, and as far as we honestly recognize the effective scope of our knowledge, we will probably never know in absolute terms.

What matters is a subjective topic. What we all have in common is logistical constraints. So if some people set a goal that requires settling whether reality is more easily handled when modeled in a continuous or a discrete manner, then within that scope it matters. But whatever you settle on, the human brain is built such that it can always assume the perfectly fitting model is only valid within its scope, sitting on top of another, more subtle level of reality that is in turn better modeled with the antithetic approach.

Now, as a very personal out-of-the-blue opinion, I fail to see how any causal series might happen without an underlying continuous flow of events. I mean, supposing causal discontinuity is to my mind as plausible as supposing that the universe as it is right now just appeared, without anything we can think about it being relevant, and that in the next instant it could be completely different or nonexistent, since the universe would not be bound in any remote way to what we might expect from our delusional, just-created sense of causality.


Continuity just hides the ball.

You say you can't comprehend how something can move from 1 to 2 discretely. But the paradoxical notion of infinite continuous change has been known since antiquity. It's faith either way.

Discrete doesn't mean state changes are wholly, globally arbitrary. Imagine a graph with nodes and edges, a state machine as computer scientists call it. I think it's easy to agree that the universe could be parsed by a regex ;-) Heck, imagine an integer on the number line that can go up or down.

Wolfram has written a ton about this. Despite all his issues, his math is solid. (Which is not to say his physics is true.)


> You say you can't comprehend how something can move from 1 to 2 discretely.

I didn't mean that; sorry if my words induced you to believe so.

What I want to point out is that, to my mind, if I assume a discrete foundation of the universe, on a meta-cognitive level I must recognize that it implies everything I experience through my current attention might possibly be a just-made-up state without any compelling ontological relation to anything I can recall or think of. So, as far as I'm concerned, believing in continuity is just a lazy way to relax on a metaphysical Gordian knot.

> It's faith either way.

Yes and no. It's probably easier to change scientific perspective to whatever model applies best for some purpose than to adhere to some philosophy about Nature.


Zeno's """Paradox""" was nonsense even in it's own time. Easier now that we understand Newton's laws of motion but his contemporaries were able to sufficiently dispute his idea even without them.


Not nonsense. The argument goes that if time and space are both discrete, then to move from A to B in finite time means that you have to perform infinitely many actions in finite time.

Zeno didn't believe that the latter was possible. But he wasn't stupid, he obviously knew that motion was happening all the time in real life. His paradox really only makes sense in the context of Eleatic philosophy which assumes that reality is an illusion because change is fundamentally impossible (how can something come from nothing?).

If you want to reframe it in more modern terms, Zeno's paradox shows a contradiction in axioms. If you want to get rid of the contradiction, you have to change some of the axioms.

In real analysis, loosely speaking, we remove the axiom that an infinite process cannot result in a finite outcome - this way we are allowed to sum (some) infinite series, for example. But we don't "know" if reality behaves that way.

The atomists found a different solution: they argued that reality was fundamentally discrete. This way, Zeno's paradox also doesn't arise.


> if time and space are both discrete

my mistake - that should have read "are both continuous".


But isn't the whole point of QM that this assumption doesn't hold at some scale? I mean, it's literally in the name. Care to explain? :)


No, position and time are typically continuous variables in quantum mechanics. You can have formulations in which they are discrete but they are not required and are relatively exotic. QM certainly doesn't say they must be discrete.


I feel that a lot of people here are confusing the math and "reality".

You're definitely correct about the math, i.e. the systems that we humans have invented to model reality. But I guess most of us don't really care about what mathematical model scientists like to use (especially not whether they're "exotic" or not), but rather what reality could be like.

And the quantum properties of QM do seem to suggest that there's some sort of fundamental discreteness in reality. And it seems to run contrary to the resolute claims that reality must be continuous, as if that were a proven fact. What I understand is that the math most commonly used by scientists is definitely continuous, but whatever we can measure seems to have some kind of Planck limitation.

So are we talking about empirical science or science-flavored theology here? Have we actually found empirical evidence or proven the continuousness of space/time?


I responded to a couple of people who claimed with great certainty that QM meant spacetime had to be discrete, when it says nothing of the sort. I haven't claimed we have proof that it is continuous and I doubt we ever will as that seems akin to proving a negative existential.

Your penultimate paragraph suggests some confusion about ideas like Planck scale and quantisation.

Firstly, there is nothing special about the Planck length itself. It's just a unit of length. Around that sort of scale, though, our current theories of physics happen to break down because both quantum and gravitational effects become significant. That doesn't imply spacetime is discrete (or preclude it being discrete) at that scale. It's just a realm that our current theories don't work in.

Secondly, while describing aspects of nature that are quantised was a large part of why quantum mechanics was developed (and the source of its name), it in no sense says anything like "there's some sort of fundamental discreteness in reality". Quantum mechanics deals with both discrete and continuous observables in a single framework: functional analysis, essentially. The set of possible values for an observable is modelled as the spectrum of an operator, which can be either continuous or discrete. Which sort of observable is appropriate for a given physical theory is a choice made in constructing that theory. For things like charge and spin we use discrete (quantised) values because we have evidence that those things are quantised. For things like position we use continuous values and have no evidence that using discrete observables would better match nature.

Space could in reality be either discrete or continuous, or not even exist in any form we'd recognise as "space" on those scales. Quantum mechanics doesn't give us any hints one way or another.


Quantum mechanics does not mean everything is quantized. It got its name because the first predictions of quantum mechanics were quantized energy levels in some example systems, but that does not even mean that all energies are quantized in quantum mechanics. There are many systems you can study where energies are continuous, and many examples where other quantities are continuous in quantum mechanics.


Wasn't it more the observation the theory was designed to explain than the first prediction?


I think it's a linguistic difference only. At least where I studied, it was quite normal to call phenomena you can derive from a physical theory "predictions" even if they had been observed before. I agree the photoelectric effect strongly suggested some quantization before quantum mechanics was formalised.


A butterfly also literally has butter in the name. The point of QM is that certain energy levels are quantized. Or more generally that lots of operators/observables on continuous Hilbert spaces have discrete spectra.


maybe we should take another look at analogue computing


It's 99 years since Einstein published the paper on the photoelectric effect, which had far-reaching consequences. [1]

And 93 years since the first Solvay Conference. [2]

[1] https://en.wikipedia.org/wiki/History_of_quantum_mechanics [2] https://en.wikipedia.org/wiki/Solvay_Conference


What's your point? Everything they did, including Einstein's (and everyone else's at the time) quantum mechanics work, was based on continuous space and time variables.


It is true that there is no experimental evidence, but I think there are some convincing arguments that something must happen at the Planck scale (for very short distances) in a full quantum-gravity theory.

Here are some quotes from "Covariant Loop Quantum Gravity" by Rovelli and Vidotto (lightly edited). I suggest the whole of chapter 1, in particular 1.2, to get an idea of why spacetime may be fundamentally discrete.

"In general relativity, any form of energy E acts as a gravitational mass and distorts spacetime around itself. The distortion increases when energy is concentrated, to the point that a black hole forms when a mass M is concentrated in a sphere of radius R ∼ GM/c^2, where G is the Newton constant. If we take L arbitrary small, to get a sharper localization, the concentrated energy will grow to the point where R becomes larger than L. But in this case the region of size L that we wanted to mark will be hidden beyond a black hole horizon, and we lose localization. Therefore we can decrease L only up to a minimum value, which clearly is reached when the horizon radius reaches L, that is when R = L. Combining the relations above, [..] we find that it is not possible to localize anything with a precision better than the Planck length (~10^-35 m). Well above this length scale, we can treat spacetime as a smooth space. Below, it makes no sense to talk about distance. What happens at this scale is that the quantum fluctuations of the gravitational field, namely the metric, become wide, and spacetime can no longer be viewed as a smooth manifold: anything smaller than the Planck length is “hidden inside its own mini-black hole”."

"The existence of a minimal length scale gives quantum gravity universal character, analogous to special relativity and quantum mechanics: Special relativity can be seen as the discovery of the existence of a maximal local physical velocity, the speed of light c. Quantum mechanics can be interpreted as the discovery [..] that a compact region of phase space contains only a finite number of distinguishable quantum states, and therefore there is a minimal amount of information in the state of a system. Quantum gravity yields the discovery that there is a minimal length lo at the Planck scale. This leads to a fundamental finiteness and discreteness of the world."


You're talking about minimum lengths, not discrete spacetime.

It may be the case that there's a minimum length beyond which "no meaningful laws of physics apply", but it really says nothing about whether real numbers are indispensable in the formulation of physics, or about whether spacetime is continuous.

There being a minimum length doesn't mean that everything is a discrete multiple of this length, or that space is broken into units of it, or that objects have to be aligned on grid boundaries defined by it.

Whenever people try to do philosophy of physics the inevitable place everyone lands at is a series of false equivocations, often caused by the language of physics being ambiguous and polysemous. But "minimum length" here does not mean a sort of grid length.


I agree with all your claims here in the literal sense, but suppose there is a minimum length; then it would seem at least theoretically possible to use discrete mathematics to formulate an approximation to the "real number formulation of physics"?

The fact that we don't already have a full system using discrete maths doesn't mean it is impossible, because our current system is based on a long tradition of belief in real numbers, and on assuming physical space is continuous.

I'd argue (admittedly unhelpfully) that unless we have actually tried to formulate physics using discrete mathematics and found a barrier that we can prove unequivocally impossible to overcome, we can't claim that physics must be formulated using real numbers/continuous math. There's a difference between "we don't know how to do this" and "we know we can't do this".


I don't understand your point; I never said that "everything is a discrete multiple of this length, or that space is broken into units of it, or that objects have to be aligned on grid boundaries defined by it". I just wanted to mention that "continuity of spacetime is a convenient approximation" may be a correct sentence in the context of quantum gravity.

Also, for what it's worth, in QM the space of wavefunctions can also be finite-dimensional (for instance the Hilbert space of a spin-1/2 particle).


minimum lengths aren't relevant to whether things are continuous or not. these aren't related.


That's literally the definition of continuity.

You have an object at position p, and the behaviors of the system are discretely different between p and p + h, with no intermediary at p + h/2.


And that's not what a "minimum length" in this case means. We're not talking about space having a minimum unit. We're talking, at best, about (presumably massive) objects having a minimum extension in space.

Even with a "minimum length" (in this specific sense), you have an object at position p, and can (move/observe) it at any p+dx continuously.

importantly, the question is whether the best theories of physics in a world with a minimum extension-in-space require continuous mathematics, and there's nothing about this Planck length to suggest they wouldn't


Discreteness would mean that there exists some base distance p such that the distance between any two objects is Np, with N being a natural number (and any surface is some Mp^2 and any volume is Qp^3 and so on). Continuity is simply the opposite of that. It could be that objects can be at arbitrary real-valued distance d from each other, but that d > p is a precondition for any other law of physics.

By contrast, discreteness has various unintuitive mathematical properties that make it hard to fit into some other theories (particularly those relying on differential equations).


> computer science leaking out

Planck constant would like to have a word with you. But it is true that CS shines a light on the matter of mapping the infinite into bounded spaces.

This matter of ‘cognition’ is the entire matter (npi) of contention. What is the actual relationship between number and perceived phenomena? What is the deeper meaning of the concordance of mathematics and physics? Where do these magical constants come from and what does it all mean?

It seems we bring the ‘world’ into being by partitioning. See Genesis 1 for details.

> the world is continuous

Reality is actually a unified, undivided whole without form, timeless & eternal - that is all we can say with certainty. The “world” is our perception of this reality. Our cognitive machinery is discrete, and a mapping of this reality into the metric & temporal spaces of the mind.


> Planck constant would like to have a word with you.

Do you want to flesh this out? Are you suggesting that because phase space is quantized, position space must be quantized as well?


As someone who was once an "I've watched a lot of YouTube videos about physics" person, I think it's very interesting how confident people are in their understanding of what is essentially the edge of physics, something only seen in a master's or doctorate degree. Specifically the whole spacetime and QM thing.

There are so many videos on it that you begin to feel like you really understand it after half a dozen or so repeat the same words at you. But the thing you don't realize is that those are literally the only words they could possibly communicate, and that's an infinitesimal fraction of the nuance of the real thing. Plus, there's a selection bias in that the videos that make you feel good get more views, so you're more likely to stumble upon videos that make you _think_ you understand it than videos that actually make you understand, partly because the only videos that could make you understand are a graduate degree's worth of 80-100 hour lecture courses that you'd have to take notes on.

It makes me wonder if this is true for literally every subject. I fancy myself at least politically versed in modern events, but is my entire understanding based on an entertainment-first version of an actual education? How much do I walk around using jargon that I don't really understand to make points whose true depth I'm completely unaware of?


My favorite phenomenon of which you speak are the guys who haven’t thought about math since 11th grade or physics ever aside from seeing some Joe Rogan clips with Eric Weinstein but will pound their keyboards with fury that “STRING THEORY IS A BIG LIE!!!”


Well, unfortunately some otherwise great physics educators intentionally stoke that fire, portraying the current particle physics agenda as if it's some conspiracy to waste funding rather than the consensus of thousands of the best minds in physics. Selling people a superficial sense of contrarian insight ends up being a very successful marketing tactic.


Most nerds who "understand quantum mechanics" mistakenly believe (usually due to incorrect explanations) that the Planck units are somehow the fundamental units of reality.


A discrete unit of measure exists in dynamics, and its relationship to position space is informed by Heisenberg's uncertainty principle.

(My actual point is that Reality is neither continuous nor discrete - it is an infinitesimal point, and it is our mind — which likes to name and number things and relies on duality to make ‘distinctions’ — that creates the universe, the subjective reality we perceive ourselves as inhabiting.)


What is an infinitesimal point?


The tao that can be told is not the eternal Tao. The name that can be named is not the eternal Name.

The unnamable is the eternally real. Naming is the origin of all particular things. Free from desire, you realize the mystery. Caught in desire, you see only the manifestations.

Yet mystery and manifestations arise from the same source. This source is called darkness.

Darkness within darkness. The gateway to all understanding.


> If you have examples of things like this in other areas like mathematical finance, I'd love to hear about them

There are tons of examples in mathematical finance, but the obvious one is the Black-Scholes [1] paper, where one of the key assumptions they make (which they know to be untrue but helpful) is that you can replicate a portfolio in continuous time. This allows them to use a constructed portfolio of a risk-free interest-bearing instrument and the underlying to replicate the price of an option, with the price process modeled as a Brownian motion. Everyone knows that actual trading (and thus price processes) is discrete in real markets, but continuous time is much easier to model. Much later on, people like Heston and Matytsyn(?sp) came up with stochastic vol models with jumps to replicate discrete price discontinuities, but they're a lot harder to work with in many ways.

[1] https://www.cs.princeton.edu/courses/archive/fall09/cos323/p...
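
As a minimal illustration of what the continuous-time assumption buys you, here is the closed-form price that falls out of the replication argument (the standard Black-Scholes call formula; the numbers at the bottom are made up for illustration):

    from math import log, sqrt, exp
    from statistics import NormalDist

    def black_scholes_call(S, K, T, r, sigma):
        """Price of a European call under Black-Scholes assumptions.

        S: spot, K: strike, T: years to expiry,
        r: risk-free rate, sigma: volatility (annualized).
        """
        N = NormalDist().cdf
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * N(d1) - K * exp(-r * T) * N(d2)

    # Spot 100, strike 105, one year out, 3% rate, 20% vol.
    print(black_scholes_call(100.0, 105.0, 1.0, 0.03, 0.20))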


In quantum electrodynamics there is this problem: if you imagine an electron is a little sphere like the ball on a Van de Graaff generator, there is a certain amount of energy in the electric field around it. As the radius gets tiny, the field in the space immediately around it gets stronger, so if you integrate it, the energy of the EM field becomes infinite as the radius goes to zero… We’ve got no evidence that the electron is more than a point, however.
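
To spell out the divergence (a standard sketch, for a spherical shell of charge e and radius a):

    E(r) = e / (4πε₀r²)  for r > a
    U = ∫ (ε₀/2) E² dV = (e²/8πε₀) ∫ from a to ∞ of dr/r² = e² / (8πε₀a)  →  ∞  as a → 0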

We use a trick called renormalization which, in this case, is recognizing that the mass of the electron has a term from the EM field. We’d assume that the EM theory is not completely true but that below some distance the theory breaks down. Working in momentum space there is a certain momentum that corresponds to the cutoff distance so we just don’t integrate beyond that. You can vary the cutoff and also vary the other parameters of the theory (such as the bare mass of the electron) so the theory gives the same answers at macroscopic distances so it doesn’t matter where you put the cutoff.

Thus it does not matter much what the “true” theory is whether space is discrete or the electron really is a little ball or the EM field merges with the other forces at high energy to make some different force that (slowly) eats protons or quantum gravity or whatever.

Discretization is problematic in a relativistic world because it breaks Lorentz invariance. That is, if I am moving quickly I would see the gap between the “pixels” get smaller. Now maybe the pixels can be non-Lorentz-invariant but can “fake it” at low energies and large sizes, but when the energy gets large you’d expect to see some evidence of the grain. Even if the gap was the Planck length you’d probably see things get weird at much lower energies, such as those of the highest-energy cosmic rays. There has been a lot of research on that and there is no clear evidence of relativity being broken, but it is still highly mysterious

https://en.wikipedia.org/wiki/Greisen%E2%80%93Zatsepin%E2%80...

for instance, Lorentz violation might allow particles to bypass that GZK limit.


Mathematical finance is basically all people saying look I know in the real world markets are driven by things like idiots buying Dogecoin because number go up but let's assume it's made of well informed participants who price everything correctly. Assuming this we can show...


I'm jealous. Unfortunately, people like me with aphantasia have no visual imagination. The hardest part in physics was converting the problems into equations. Once it is an equation, I could solve it (depending on the problem, with some difficulty or not, I guess).


You may have taken a slightly wrong lesson from that exam. AP exams are extremely curved (scoring about 70% or even less can be a 5) and half multiple-choice (so a few clues can get you the answer), and it's partly a conceptual test that doesn't rely on math.

But of course you're absolutely correct that continuous and discrete systems are approximately equal.


AP Physics C is also one of the more heavily curved exams, where a 55-60% is a 5.


I agree; however, as a commenter pointed out in a similar thread a few years ago, mental visualization only works for relatively small problems with limited variables. Most reasoning after that is done via equations, once the problem complexity surpasses the n-th dimension (where n is maybe 3 or 4).


Couldn't agree more: mental visualization is such an asset for any kind of understanding, as it relies on compressing information and forces one to digest the material.


Prior to computer-generated 3D animation, I can imagine it was very difficult to float and spin vector-arrows in mid-air with enough accuracy to show what goes on without having to resort to reams of explanatory paragraphs.

Eugene Khutoryansky is something of a lesser-known 3b1b that's more focused on physics than math. I found his animations very helpful for building intuition around Maxwell's equations:

https://www.youtube.com/watch?v=9Tm2c6NJH4Y


I wish most explanations wouldn't skip over the fact that field lines aren't real, just a tool to graphically depict what is going on. Statements like the following get the causality entirely backwards.

>the strength of an electric field depends on the number of electric field lines.


It’s like saying that rain falls where there are blue regions on the weather map :)


I think the question of whether field lines are real is more of a philosophy-of-physics question, so it usually falls outside the scope of introductory material on E&M. However, some texts like Purcell and Morin do kinda take a stance on whether fields are real: "since it works, it doesn’t make any difference."


Very much this. The (standard model's) "answer" is that the four vector potential probably is the "most real" and we're all just excitons along for the ride.

At some point the definitions become almost circular, and opinions about what is fundamental have shifted a bit over the centuries. The cgs system of units -- which differs profoundly from SI in the treatment of electromagnetism -- was associated with those who viewed D and H rather than E and B as the most fundamental. I'm quite happy with the level of theory used being appropriate to solve the problem at hand. There's always a bit of wiggle room around exactly what that problem is, however ;-)


So, a bit like how the conventional depiction of electric flow is in the opposite direction of the actual electron travel?

It doesn't matter in terms of the math (in the vast majority of situations), so while the conventional idea of electric flow is incorrect, we keep it anyway.


I think it is closer to the conventional view of current as the travel of electrons down a wire.

Current moves far faster than electrons. It is more similar to a wave in the ocean, with the electrons being the water molecules.

As a result, and counterintuitively for most, the speed of electrons will give you a completely wrong answer for when a light will turn on after you flip a switch.
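
To put numbers on that (a rough sketch; the copper parameters are standard textbook values):

    # Drift speed v_d = I / (n * A * q) for a 1 A current in a 1 mm^2 copper wire.
    I = 1.0        # current, amperes
    A = 1e-6       # cross-section, m^2
    n = 8.5e28     # free-electron density of copper, electrons/m^3
    q = 1.602e-19  # elementary charge, coulombs
    v_d = I / (n * A * q)
    print(v_d)     # ~7e-5 m/s: the electrons crawl, yet the light turns on almost instantly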


> Current moves far faster than electrons.

Current is the movement of charges. It cannot “move” faster than said charges. (Or, perhaps, you meant the electromotive force that makes the electrons move along the wire; then sure, that thing spreads pretty quickly.)


yes, the EM force, field, or whatever it is called. I still struggle, mostly because I was taught a fundamentally flawed model of how electrical power is transmitted.


Current is about the throughput of electrical charge at a specific point of an electrical circuit, while what you're describing seems to be about the latency or the speed of electricity.


it works for a physics test, but is also part of the problem, as it misleads and prevents conceptualization, even for simple problems.


isn't that a special case of Plato's argument that "triangles aren't real. show me a perfect triangle...you can't, you can only show representations of a perfect triangle"

Your comment could be reduced to "lines aren't real. show me a perfect line"


we can fix that.

the number of electric field lines depends on the strength of the electric field.


Wow this video is actually great at imparting the concepts with animation.

It's a little distracting how it looks like an ad for an adult themed video game, but it's very well thought out.


> It's a little distracting how it looks like an ad for an adult themed video game

Well-put: Does that demon-squid in the intro look like a Chihuly glass installation to anybody else?

Also I swear I've heard that song long ago on OCRemix.


I suspect something similar happens with manifolds for GR. Riemannian manifolds aren't a big deal when you contrast them with what happens inside of a DNN, but physical analogs for these structures start to break down.

e.g.

Imagine a 4-dimensional hyperbolic surface defined by the lightcone of a particular point in space-time, now imagine that this surface is stretched/compressed by the distortions of gravity. Now let's talk about equations which are only loosely tied to this surface.

vs.

Consider the metric tensor defined by this 4x4 matrix g_μν. Distances are computed as a^μ b^ν g_μν; now consider all possible walks from point a to c to b. Now let's show the relation between these walks and a quantity we'll call the stress-energy tensor, which represents the energy/momentum density at any particular point in space, and its flux toward any other direction in space.

The latter is a very algebraic description, which does not rely on the audience having to visualize the constructs involved. Practically, even if you get a feel for what a Riemannian manifold looks like in 4 dimensions, you'll struggle to visualize the Riemann tensor or the Christoffel symbols.


This video is worth watching for the soundtrack alone. I came across Eugene’s channel before but somehow I missed this gem!


> … for the soundtrack alone

Did I sense mild sarcasm here?


I enjoyed reading that. Maxwell's equations in differential form are the most elegant equations I have seen in my life. I remember in my freshman year I had 8 questions in my EM final, and the last question was to name four equations that you could use to solve all the other 7 questions. It was obviously a giveaway question for a free grade. But I was puzzled that I could not actually derive all of what I had used. I submitted the exam, returned to my seat, and spent a couple of hours deriving all of them. I left the room a physicist, and have been one from that moment until now.


William Burke published a little book on understanding differential geometry on an intuitive level using visualisations.

One chapter deals with Maxwell. After that it was easy to understand:

https://www.cambridge.org/core/books/applied-differential-ge...


I'm just a bit surprised that this post says nothing about Heaviside who rewrote Maxwell's equations in the form commonly used today.

According to wikipedia [1], Heaviside significantly shaped the way Maxwell's equations are understood and applied in the decades following Maxwell's death.

[1] https://en.wikipedia.org/wiki/Oliver_Heaviside


Which was not very useful.

The integral equations of Maxwell, which few know today, are much more generally applicable and actually easier to understand.

The differential equations of Heaviside are valid only when certain restrictions about continuity are true. Moreover, the meanings of curl and divergence are hard to understand otherwise than by deriving them from the integrals over curves and surfaces used in the original equations of Maxwell, which are also necessary to determine how to handle discontinuities.

The differential form of the equations looks prettier on paper due to a simpler notation, but it is less helpful for understanding and for solving practical problems than the integral form.

In my opinion, it is a serious mistake that almost all manuals show the equations of Maxwell in the Heaviside form, instead of showing them in their original form. This is one of the main reasons why they are hard to understand for many.


IEEE floating point is far more practical for computation than axiomatic arithmetic, but the fundamental axioms are a more intuitively enlightening description of what arithmetic is. Same with Maxwell and Heaviside. Understanding how it all fits together in fewer words makes the rules make sense. Heaviside gives meaning to Maxwell's equations.


Excellent essay. The thing I do not agree with is the claim that one has to abandon natural language to understand quantum mechanics. The only reason we made the jump from Newton to Einstein through Maxwell's theory is Einstein's intuition for the physics behind electrodynamics.


> The only reason why we made the jump from Newton to Einstein through Maxwell's theory is Einstein's intuition of the physics behind electrodynamics

I'm not sure that's true. What kind of model of causality are you using here?

Other people were also on the cusp of doing (most of) what Einstein did. Ultimately, without Einstein progress would most likely have been delayed by a few years, and the laurels would be spread amongst more people. (E.g., Lorentz's work was on the way to formulating special relativity.)

So it's hard to say that there was any single 'only reason' for any discovery here.


I would argue that Einstein made big leaps in knowledge to formulate SR, and even bigger ones to formulate GR. He was able to imagine the motion of a body through a curved spacetime as the source of gravity, something that imo most people would not be able to do.


Unlike special relativity, which was just a new interpretation of facts and formulae already established by others (like Lorentz and Poincaré) in electrodynamics, so it can hardly be called a "big leap", and which was fully developed only by yet others, like Minkowski, general relativity, developed more than a decade later, was Einstein's most original work: a completely new mathematical model of gravity and inertia. Nevertheless, if Einstein had not existed, it is likely that Hilbert would have become the author of general relativity.

However, general relativity has no relationship with the equations of Maxwell discussed here.

While general relativity is Einstein's most original and best-known work, his second most original work is the one that had the greatest impact on practical technology: the discovery that, for computing the properties of electromagnetic radiation, one must also take into account stimulated emission (as in lasers), not only spontaneous emission and absorption.

For me, Einstein's paper on stimulated emission is the most important of his works. Even though more than a century has passed, it is not yet clear whether Einstein's mathematical model of gravity and inertia is the best one, or whether there exists another, more comprehensive model that could relegate Einstein's to the status of a mere approximation.


I agree that what Einstein did was beyond most people. But not beyond all of the great physicists of the time, especially if given more time.


Everyone seems to think they would have figured out Einstein’s theory but they didn’t until Einstein did.


"Don't be snarky."

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

https://news.ycombinator.com/newsguidelines.html


Huh? Where did I say I would have figured it out?

Albert Einstein was smart, and he definitely was smarter than me. But his achievements weren't beyond all the other smart people. Especially individually and with more time.

See also how Newton and Leibniz came up with calculus at the same time. Or how public key cryptography was invented independently multiple times.


I don't think you're the "they" here. I think "they" means "one of Einstein's contemporaries".


Natural language can conjure the visual imagination of intuition, but it is less specific than images themselves.

Einstein reportedly ran thought experiments in his imagination.

His intuition was visual and dynamic.

That is images that change - animations.

A natural language description of relativity is one step removed from the moving images of Einstein's intuition.

Fields are not grounded in everyday experience, but most modern movie-goers are happy with rapid and grand changes of scale, and thanks to wizard and superhero movies there exists a sophisticated visual grammar of rapidly propagating fields, strange paradoxes and simultaneous weakly interacting realities.

Many of the concepts of quantum field theory can be grasped by a wide audience when presented as animations.


You are probably right that natural language is not synonymous with physical intuition. However, I believe that the way quantum theories are taught, or even understood, is much less intuitive than GR, something that is evident from the number of different interpretations of quantum mechanics.


Years ago I completed a post-graduate degree in physics, and although I had studied Maxwell's equations, I didn't have a good "feel" for them.

I recently read "A Student's Guide to Maxwell's Equations", and it was perfect for me - it explained enough of the maths to understand the equations, without having to first learn differential geometry. https://www.cambridge.org/highereducation/books/a-students-g...


If you do want to learn the geometric formulation, part 1 of Gauge Fields, Knots and Gravity is a good resource (and has exercises!).


Related: Maxwell on the Electromagnetic Field: A Guided Study by Thomas Simpson.


Maxwell didn't have the nice differential geometric notations that we use today, which allow us to write his equations in a very concise and easy to understand form. His original paper is way more convoluted, so at the time it must have been really difficult to understand for everyone except the subject matter experts. And he was of course building on the work of Faraday, Ampere and others.

But like with other theories, people find ways to simplify the notation and formalism and explain it better. Quantum mechanics, special and general relativity are similar in that regard.


True, but he did use quaternions (by 1873), which allow the field properties to be written as a single equation. It's kind of sad that more physicists don't use or teach quaternions, while math and CS have fully adopted them.

I really liked Kathy Joseph's historical reviews of vector physics and the people who developed it, which explain some of the reasons for how it's taught. Most texts don't even develop electrodynamics from relativistic electrostatics as a demonstration.

https://youtu.be/CdwxpSInhvU

I think I fell down the rabbit hole from Freya Holmer's "why you can't multiply vectors".

https://youtu.be/htYh-Tq7ZBI

The key being that all of the Hamiltonian fields can be found in a single quaternion equation, which is just what happens when you start multiplying vectors together.
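
The vector-multiplication point is easy to check directly (a sketch using the Hamilton product: the product of two "pure" quaternions has scalar part -a·b and vector part a×b, i.e. the dot and cross products fall out of one multiplication):

    def qmul(p, q):
        # Hamilton product of quaternions represented as (w, x, y, z).
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return (pw*qw - px*qx - py*qy - pz*qz,
                pw*qx + px*qw + py*qz - pz*qy,
                pw*qy - px*qz + py*qw + pz*qx,
                pw*qz + px*qy - py*qx + pz*qw)

    a = (0.0, 1.0, 0.0, 0.0)  # vector (1, 0, 0) as a pure quaternion
    b = (0.0, 0.0, 1.0, 0.0)  # vector (0, 1, 0)
    print(qmul(a, b))         # (0.0, 0.0, 0.0, 1.0): -a.b = 0, a x b = (0, 0, 1)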


Unfortunately he didn't use quaternions in his initial formulation (that was all split into xyz coordinates), and in his later revision he took apart the quaternions into scalar and vector parts. It could have been so much prettier... but luckily we have geometric algebra for that today.

On the other hand, he did derive the electric and magnetic fields from a scalar and a vector potential. In that sense Heaviside made a step backwards.


Faraday didn't even know trigonometry, allegedly (he never studied mathematics). It's interesting that his student (Maxwell) who did have the mathematical background would extend his theories and figure out the math to explain it all


Maxwell was deeply influenced by Faraday's work but he was not his student: he first noticed Faraday’s work while he was at Cambridge in the 1850s. He found a way to translate Faraday’s experimental findings (expressed as the interaction of "lines of force") into a mathematical model (with fields), also taking into account Coulomb's and Ampère's laws. Maxwell then extended that model to take into account the conservation of charge / the continuity of currents.

While Maxwell’s work was inspired by Faraday's, it was also built upon the contributions of many other scientists (Coulomb, Ampère, Thomson, Neumann, Lenz, ...).


> It is better for the progress of science if people who make great discoveries are not too modest to blow their own trumpets.

This is a nice remark, but very difficult to implement in practice. In reality, many non-modest people will overrate their contributions, while the few modest people will have a hard time acting non-modestly in certain situations. We are in a world where modesty is even rarer than 100 years ago. I am sure many important discoveries are hidden in the myriad peer-reviewed publications that were published merely to inflate publication counts.


Really enjoyed the interview with Freeman Dyson here on his life story - “someone left some Real Analysis textbooks in the school library but they were in French. It was probably (G. H.) Hardy.”

https://m.youtube.com/playlist?list=PLVV0r6CmEsFzDA6mtmKQEgW...


OK, I do not understand Prof. Dyson's argument at all.

"This does not mean that an electric field-strength can be measured with the square-root of a calorimeter. It means that an electric field-strength is an abstract quantity, incommensurable with any quantities that we can measure directly."

Electric field-strength is measurable no less directly than energy, it is a force experienced by a unit charge placed within the electric field.


You can make the electric field disappear by choosing the right gauge. Same goes for the magnetic field (can't make both disappear together though). The vector potential, in that sense, can be regarded as a more fundamental description of the electromagnetic field. It can't be observed directly though, but electric and magnetic field strengths are manifestations of the vector potential, they are not fundamental in that sense. Not sure if it's that what he's getting at, though.


> You can make the electric field disappear by choosing the right gauge.

That, had it been true, would have made the gauge observable.


>You can make the electric field disappear by choosing the right gauge. Same goes for the magnetic field (can't make both disappear together though).

What? No you can't. The fields are invariant under gauge transformations.


You're right, sorry I was thinking of a Lorentz transformation that would make either the magnetic or electric field disappear under certain conditions.


"transform into each other" would be more appropriate. The gauge choice you mentioned is not totally wrong. The gauge freedom can be used to set the electric field to zero, but only once at a single point.


Sorry, but gauge transformations do not (by construction) affect the physical fields at all. You cannot set E to 0, even at a point, with a gauge transformation.


You need to read the paragraph prior to the quote. He is talking about which of field and mechanical stress is more "fundamental" or less "direct". If one measures the force exerted by an electric field using a unit charge, one measures the field by measuring the mechanical stress first.

Of course, the context matters. Often if one compares potential and field, field would be the one directly measured. It is just semantics really.


I have read the prior paragraph. It does not clarify how measurement of energy is more direct than measuring the field.


Just today I was watching a cool video by Angela Collier[1] about how Faraday's experimental work really laid the groundwork for Maxwell by proving that light's polarization could be affected by an electromagnetic field.

[1] https://www.youtube.com/watch?v=Fbi-_8zOuR8


The two-layer separation of our world, between imperceptible objects that define perceivable objects, seems quite similar to how philosophy is essentially split between metaphysics and philosophy that takes things from that "first layer" for granted.

> The reason for these arguments is that the various interpreters are trying to describe the quantum world in the words of everyday language, and the language is inappropriate for the purpose. Everyday language describes the world as human beings encounter it. Our experience of the world is entirely concerned with macroscopic objects which behave according to the rules of classical physics. All the concepts that appear in our language are classical. [...] The battles between the rival interpretations [of quantum dynamics] continue unabated and no end is in sight.

Replace 'quantum dynamics' with metaphysics (or post-Kantian metaphysics) and the statement seems true as well.


As a lay person, this was beautifully written, and I feel like I understand the issues to a degree better than I have before from casual reading.

Very important to people like me, because I really struggle with advanced math. I dropped an EE degree because while I could do the math, it was incredibly hard and in no way intuitive to me.


Only tangentially related, but I highly recommend watching Veritasium's YouTube video on electricity if you're curious as to how Maxwell's fields create the current / amp abstractions in EE [1].

It's a common misconception that electrons or current transfer energy. In reality it's the electric field that exists between the wires that is doing the heavy lifting, the electrons in the wires are just controlling the field.

This has always confused me, and I was very irritated when I first learned electromagnetics by how rote all the initial lessons are. I wish more work went into relating everything back to Maxwell's equations early on, to make it make sense.

[1] https://www.youtube.com/watch?v=bHIhgxav9LY


I always explain it to people like waves in the sea. The bobbing up and down isn't the water molecules travelling; they are simply going up and down as the wave moves through the water. People seem to accept this analogy since, even if the water thing is new information to them, it's easier to visualise.


That's not what the PP and Derek "Veritasium" Muller are harping on.

It's not about the misconception about "AC is vibrating so how can electrons be delivering their energy from the power plant to the light bulb far away?"

They are talking about how the electric field is outside the wires almost entirely.

Their argument is that in the water wave analogy, the wave wouldn't be in the water at all, because it's "actually" transmitted via an invisible field in the space above the water, which pushes back on the water farther away.

Most respected electricity/physics YouTubers disagree with Veritasium's emphasis on this perspective, by the way. They think he conflated the first misconception I mentioned with the second idea, which is about how you model electric circuits.


It's a practical simplification, not a misconception.

It's the same argument as "Einstein corrected the misconception that Newtonian mechanics is how bodies interact, and it's irritating how rote mechanical engineering of a car is."


> Instead of thinking of mechanical objects as primary and electromagnetic stresses as secondary consequences, you must think of the electromagnetic field as primary and mechanical forces as secondary.

Feynman explained this nicely. He said, essentially: you ask me to explain what electromagnetism is. Is it like two hands pushing on each other? Well, if it is, then what is "pushing"? Pushing is just the result of electromagnetism in your hands! It is impossible to explain electromagnetism to you in terms of anything simpler that you already understand. It is a fundamental force.


The Maxwell equations are conventionally taught and written as 4 equations for two 3-dimensional vectors instead of a single equation for a single antisymmetric 4-dimensional tensor. Also, the tensor equation is explicitly relativistically covariant, while in the vector formulation this fact is well hidden and requires quite a long proof to see.
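
For reference, the covariant form being alluded to (standard notation, SI units; the antisymmetric tensor F^{μν} packs E and B together):

    ∂_μ F^{μν} = μ₀ J^ν                      (Gauss + Ampère–Maxwell)
    ∂_α F_βγ + ∂_β F_γα + ∂_γ F_αβ = 0       (no monopoles + Faraday)

The second line holds automatically once F_{μν} = ∂_μ A_ν − ∂_ν A_μ, which is the sense in which a single equation suffices.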


“And then god said let there be light”

https://i.etsystatic.com/16048150/c/500/397/254/275/il/4c656...


I taught this subject at the graduate level.

With an emphasis on basis functions and computing we could do it in a single semester.

The more interesting thing was that most students had practically no useful prior knowledge despite formally having had some years of education on the subject.

My insight was that the biggest issue was the education system failing the students, not anything specific to Maxwell’s equations.


It seems strange and unlikely that human beings should have access to an abstract language that “nature speaks”, as Dyson puts it.

“Here you are, a speck of thinking matter. Oh and by the way you just so happen to have a special capacity for mathematics, the secret language of the universe.”

Convenient, isn’t it?


In the article Dyson retells the story from Pupin's autobiography. This 1923 Pulitzer-winning book is now out of copyright and freely available: https://www.gutenberg.org/ebooks/66886


All that was a long time ago. The Maxwell magnetic component is a result of special relativity, and I am wondering what would result from using general relativity instead of special relativity to get approximate equations from QED at the same scale as those very Maxwell equations.


i wonder if further progress in physics will require abandoning the maxwellian paradigm in the same way that maxwell had to abandon the newtonian paradigm? presumably for something even further removed from everyday experience. dyson must have thought about that possibility, but evidently at least at the time of this paper he rejected it. i'd like to read his reasoning

unrelatedly, i feel like i kind of understand maxwell's theory in terms of vector analysis, but the clifford algebra formulation is still beyond me. it sure looks a lot simpler


Keep in mind that for all the talk of 'abandoning' the Newtonian paradigm, its predictions are still valid for a large space of conditions. E.g., NASA uses Newtonian mechanics to fly all their spacecraft.

Special relativity includes classical mechanics when speeds are low. In the same way, any replacement for Maxwell's paradigm must reproduce the predictions of Maxwell's equations under the large swathe of conditions where they agree with reality.


yes, i thought that was too obvious to be worth saying, but i'm glad you've said it so the knuckle-dragging contingent don't think i'm endorsing their untutored 'theories'

for that matter i use the aristotelian paradigm of physics when i expect my bed to stop moving when i stop pushing it across the floor; i don't bother with calculating the deceleration due to the friction coefficient with the floor


Sorry, I think the first part of your original comment was a bit confusing. Upon rereading: the second part already made it clear that this is what you meant.


i don't think there's a non-confusing way to discuss questions like this, so plausibly this is not the right forum for it


Maxwell's Equations have already been superseded by Quantum Electrodynamics, in the same way that Newtonian gravity has been superseded by General Relativity.

Both Newtonian gravity and Maxwell's Equations are still very good approximations in their regimes of validity.


we are commenting on an article which describes in depth how quantum mechanics, including qed, falls squarely within the paradigm maxwell pioneered. that's why my comment specifically talks about the 'maxwellian paradigm' and not 'maxwell's equations', which is, by the way, not a brand name

i thought it was too obvious to be worth saying that classical physics is still an excellent approximation to reality, but hopefully you've enlightened someone reading this thread


> quantum mechanics, including qed, falls squarely within the paradigm maxwell pioneered

Does it, though? Sure, QED is a field theory, but it's perturbative where classical EM is exact, its fields are operator-valued distributions where classical fields are number-valued functions, its interactions are transition probabilities rather than forces - it's not clear to me that these are smaller jumps than introducing fields in the first place.


many people have argued that qm is a larger jump than maxwell's approach, and i agree that that's a very reasonable position, which is why i found it interesting to read dyson arguing the opposite


> 'maxwell's equations', which is, by the way, not a brand name

Are you commenting on the capitalization of "Maxwell"? It is a proper name, and it should be capitalized.


admittedly so, but no, on the capitalization of 'equations' (and 'quantum' and 'electrodynamics'), which are not


It is common to capitalize the names of famous equations or theories.

Maxwell wrote tons of equations during his life, but there's only one set of "Maxwell's Equations." Maxwell himself never even wrote down Maxwell's Equations in the form we now know them.


coherent and reasonable, but as a guide to current usage, wrong


Capitalization of "Equations" is clearly style-dependent, and thus neither right nor wrong, unlike, say, capitalizing the first letter in a sentence.


[dead]


> but hopefully you've enlightened some small child reading this thread

There's really no need for this kind of language here.


that's true, people who are one of today's lucky ten thousand might feel insulted, and that's unnecessary


The moral of this story absolutely is not that "modesty is not always a virtue", lol


Maxwell's theory is not hard to understand--once you have the proper tools.

The problem is that because of trying to cram a degree into 4 years, you wind up having a class on electromagnetics without any understanding of vector fields.

Electrical engineering is particularly bad about this. You never get exposed to the Hamiltonian formulations of classical mechanics, and you never get exposed to vector analysis. Consequently, you are stuck with the Heaviside-Hertz pedagogy with silly things like "displacement current" and stupid, weird-ass integration contours (which don't work in motors, LOL)--and the attendant difficulty in understanding Maxwell's theory.

However, if you have vector analysis and fields, then you can understand formulations like Carver Mead's "Collective Electrodynamics": https://www.amazon.com/Collective-Electrodynamics-Quantum-Fo...

Suddenly, emag is a whole lot more straightforward to understand. It's not EASY as it's very math heavy, but it has a lot fewer weird things that are just "we say it works."


I think the essay is not about Maxwell's theory being hard for college students to understand, but rather, for other 19th century physicists to understand. And that's mainly because Maxwell himself didn't do a very good job of communicating his theory at the time, so it took other talented physicists to rework, explain, and popularize his ideas.


People forget that Maxwell's Theory contradicts the prevailing belief of the time that waves ALWAYS require a medium for propagation--the so-called "aether".

That's a really large conceptual jump and physicists did not make that jump lightly or easily.

Maxwell's paper was 1865. The Michelson–Morley experiment was 1887. Michelson himself couldn't make the shift. Quoting Wikipedia (https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley_exper...): "The negative result led Michelson to the conclusion that there is no measurable aether drift.[1] However, he never accepted this on a personal level, and the negative result haunted him for the rest of his life (Source; The Mechanical Universe, episode 41[8])."

In an attempt to preserve "aether", Lorentz contraction then enters the picture as an ad hoc explanation for the Michelson-Morley result. It turns out Lorentz contraction is correct, but not because of the existence of "aether" but because of the constancy of the speed of light--c (Einstein Special Relativity--1905).

Once you finally give up on "aether" after 40 years of trying otherwise, you can finally just roll with the mathematical implications of Maxwell's equations.


Maxwell saying that his theory "attributes electric action to tensions and pressures in an all-pervading medium... the medium being identical with that in which light is supposed to be propagated" suggests that Maxwell himself did not view his theory to be contradicting the existence of an aether (or was being coy about it). Which is especially interesting because Maxwell based his theory on Faraday's, and Faraday didn't believe in the aether.[1]

[1] Michael Faraday's Thoughts on Ray Vibrations, 1846, cited by Maxwell in his paper. Faraday says: "The view which I am so bold to put forth considers, therefore, radiation as a kind of species of vibration in the lines of force which are known to connect particles and also masses of matter together. It endeavors to dismiss the aether, but not the vibration." (https://pwg.gsfc.nasa.gov/Education/wfarad1846.html)


I mean, language is hard, and it evolves. What they called mediums, we now call (quantized) fields. QED would say the photon field _exists_ at all points in space and that it is kinda fair to say that it is a "medium" for electromagnetic forces and waves to propagate (with the additional point that special relativity is required to understand how to correctly transform it into a different reference frame, rather than imagining it as a strictly Newtonian medium).


Admittedly, the tools to make the mathematics more compact and clearer hadn't been invented yet (except for quaternions).

We did get stuck with Gibbs' vector calculus formulation as the canonical view unfortunately.


Which is interesting considering Freeman Dyson was the guy that made the connection between Schwinger's and Feynman's QED.


Are there any textbooks you would recommend for learning vector analysis / vector fields before studying EM?


EM’s vector fields formulation is fairly straightforward: it’s all curls and divergences. Any 3rd(-ish) semester undergrad multivariate calculus course is likely to cover it in sufficient depth. “Mathematical Methods for Physicists” covers it in sufficient depth, for example, provided you already have a thorough understanding of the prerequisite material. Most undergrad physics degree curricula have E&M courses whose texts (e.g., Griffiths’ Introduction to Electrodynamics) cover enough of the details. If you want to pursue it further than an undergraduate-level study, you’ll also want a good text on differential equations that has or is supplemented by material covering, e.g., spherical harmonics and Bessel functions (among other things). I wish I could remember what I used, but it was… more years ago than I care to say, when I was a grad student.


I do not. None of the books I know of are very good because they are mostly targeted at Mathematics majors rather than physicists or engineers.

Gerard ’t Hooft used to have a humongous list of textbooks for aspiring theoretical physicists. I'd sure look there for starters.


I was recommended Nathan Ida's Engineering Electromagnetics as being comprehensive, in that all the necessary mathematics is introduced in place as needed. Look up the reviews for this book on the web.

Perhaps somebody who has read this book can comment in more detail.


I've always thought the Heaviside notation is a bit bizarre... is there any advantage to them at all?


Heaviside and his proponents avoided quaternions like the plague, and like a classic false messiah he somehow convinced people not to use them. If we want to easily and completely model EM waves in their entirety, including polarization, we need to embrace quaternions; there are no two ways about it. An intuitive understanding of EM can only be developed using quaternions, and I am personally waiting for someone to write a quaternion version of Pozar's book.


Vector fields are just as incorrect, and ought to be relegated to history books as a mere stepping stone on the way to understanding.

Geometric algebra can reduce Maxwell’s equations into a single, hilariously terse equation:

    ∇F = J
Ref: https://en.wikipedia.org/wiki/Mathematical_descriptions_of_t...
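
For what it's worth, in the spacetime-algebra convention (natural units), grade-separating that one equation hands back the familiar pairs:

    ∇F = ∇·F + ∇∧F      (vector part + trivector part)
    ∇·F = J             (Gauss's law + Ampère–Maxwell)
    ∇∧F = 0             (no magnetic monopoles + Faraday's law)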


Thanks for the pointer on Geometric Algebra. This looks to be a promising path to understanding relativity/QM/EM, and goes some way to explaining my unease with cross products and imaginary numbers.

Disclaimer: maths degree, so my unease was not a plain lack of understanding.


> maths degree

Then you want differential forms for EM, differential geometry more broadly for GR, and a bit of functional analysis for QM.

The hype around geometric algebras (Clifford algebras over R) just comes from the fact that it's not the plug'n'chug explicit numbers and coordinates approach, which is all most people ever see. They do not do a good job of tracking the physical structure of electromagnetism, and in fact end up baking in a lot of assumptions about the setting that fail to generalize.


> baking in a lot of assumptions about the setting that fail to generalize

That’s the point! That’s the entire point!

Mathematicians want the most general, most abstract approach. They want to generalise to a wide range of problems and not be painted into any one specific example.

Physics theories have an opposite goal to this: the ideal theory ought to take no parameters, and produce “reality” as the one and only possible outcome. The ideal theory ought not generalise to un-physical models.

For example, the mathematics of general relativity have excess degrees of freedom that must be constrained through additional restrictions. Similar issues turn up almost anywhere matrices are used: they have too many degrees of freedom.

Geometric Algebra is typically a better fit for what actually goes on in physics.

For example, rotation matrices have precision issues, gimbal lock, and can’t be robustly interpolated. Rotations implemented using GA have none of these issues.
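
As a small, self-contained illustration of the rotor approach (a sketch: a rotation is the sandwich product q v q⁻¹ with q a unit quaternion, so there are no Euler angles and hence no gimbal lock):

    import math

    def qmul(p, q):
        # Hamilton product of quaternions (w, x, y, z).
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return (pw*qw - px*qx - py*qy - pz*qz,
                pw*qx + px*qw + py*qz - pz*qy,
                pw*qy - px*qz + py*qw + pz*qx,
                pw*qz + px*qy - py*qx + pz*qw)

    def rotate(v, axis, angle):
        # Build the rotor q = cos(a/2) + sin(a/2)*axis, then compute q v q*.
        s = math.sin(angle / 2)
        q = (math.cos(angle / 2), s * axis[0], s * axis[1], s * axis[2])
        qc = (q[0], -q[1], -q[2], -q[3])  # conjugate = inverse for a unit quaternion
        return qmul(qmul(q, (0.0,) + tuple(v)), qc)[1:]

    # Rotate the x-axis 90 degrees about z: expect approximately (0, 1, 0).
    print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))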


> The ideal theory ought not generalise to un-physical models.

Sure - but a theory that fails to generalize to physical models is a bad one. A good classical theory should be a straightforward deformation of the corresponding quantum and/or relativistic theory. In this respect the best versions of classical mechanics are the standard Lagrangian and Hamiltonian approaches.

> For example, rotation matrices have precision issues, gimbal lock, and can’t be robustly interpolated.

No physicist was ever under the impression that Euler angles were any realer than any other way of parameterizing SO(3), that matrices were the linear transformations they represent, that manifolds are their charts, or any other trivial map-territory confusion. Paying careful attention to the distinction between real physical objects on the one hand and their representations on the other is the central theme of the last century of physics: that's basically all a gauge theory is!

This is exactly what I'm talking about with geometric algebra advocates only ever comparing it to the worst sort of high school coordinate-bashing imaginable. The argument always goes

- Look at this horrible vector algebra with explicit charts and numbers all over the place

- Now look at this nice coordinate-free geometric algebra construction

- Therefore geometric algebra is the right setting for physics

but it's a complete non sequitur, because the coordinate-freeness is doing all the heavy lifting. But everyone already works without coordinates wherever it's practical to do so! The question isn't whether you should work in terms of abstract objects or explicit coordinates, it's which abstract objects you should work with.


Thanks for your comment.

> They do not do a good job of tracking the physical structure of electromagnetism

What do you mean by tracking the physical structure?

By this, do you mean the typical EM formulation of Maxwell's laws produces Gauss's law and Faraday's law (which are instructive), whereas the Geometric Algebra formulation produces ∇F = J (less instructive)?

> and in fact end up baking in a lot of assumptions about the setting that fail to generalize.

Can you explain a bit more what you mean here please?


> By this, do you mean the typical EM formulation of Maxwell's laws produces Gauss's law and Faraday's law (which are instructive) where as the Geometric Algebra formula produces ∇F = J (less instructive?).

> Can you explain a bit more what you mean here please?

It's all downstream of geometric algebra leaving the metric implicit in its operations, basically

- The metric is an extremely important physical quantity: it doesn't necessarily look that way when everything is classical and flat, but you have to start caring about it in curved spacetimes.

- Even when the metric can be safely neglected, doing so makes it very easy to confuse degree (n-k) elements of your vector space V with degree k elements of its dual V*. V and V* are isomorphic but not canonically isomorphic - you have to choose a basis. And keeping track of where you introduce a choice of basis matters, because all physical quantities are basis-independent. Nature has no preferred coordinate system.

- Geometric algebra doesn't make sense on a general manifold: you have to embed it in a sufficiently large geometric algebra and inherit structure from the embedding. This is better than working in a particular coordinate chart but worse than doing differential geometry in a purely geometric way.

- Geometric algebra is not invariant under diffeomorphism, which means GR is dead in the water. You can build mostly equivalent theories by enforcing the equivalence principle in your dynamics instead, but it's more complicated and ironically far less geometric.


> The moral of this story is that modesty is not always a virtue. Maxwell and Mendel were both excessively modest. Mendel's modesty set back the progress of biology by fifty years. Maxwell's modesty set back the progress of physics by twenty years. It is better for the progress of science if people who make great discoveries are not too modest to blow their own trumpets. If Maxwell had had an ego like Galileo or Newton, he would have made sure that his work was not ignored. Maxwell was as great a scientist as Newton and a far more agreeable character. But it was unfortunate that he did not begin the presidential address in Liverpool with words like those that Newton used to introduce the third volume of his Principia Mathematica, “It remains that, from the same principles, I now demonstrate the frame of the system of the world”. Newton did not refer to his law of universal gravitation as “another theory of gravitation which I prefer”

It's a good thing Maxwell is not alive and that he did not follow Dyson's advice, lest Hacker News accuses him of attempting to abuse the attention economy and promote his research like a salesman.

https://news.ycombinator.com/item?id=33043945 https://news.ycombinator.com/item?id=22297855 https://news.ycombinator.com/item?id=39144845


Newton was the first to act on the revolutionary idea that "physics should be formulated using math". Nowadays a theory of gravity that doesn't use math is basically not a theory. Maybe we can forgive him at least some of the extravagance.


galileo and kepler used quite a bit of math to formulate their physics theories too, but in that they were following in the footsteps of ptolemy and archimedes


One thing I would like to understand is why, in our universe, we can have two things that combine to nothing, but we can't have three things that combine to nothing. Can someone smarter give an explanation?


In Quantum Chromodynamics we have 3 things that combine to nothing, though.


What are you thinking about here? If it is "matter/antimatter" or "particles/antiparticles", then this is not true. There are still conserved quantities in the annihilation of particles and antiparticles, which makes other particles and/or energy come out of these annihilations.


Which are the two things that combine to nothing?


Particles and antiparticles.

But I think he's trying to make a slightly more general point: why are "parities" (2-fold symmetries) so common in nature and mathematics? Why not more 3-fold symmetries?


Maybe sound is an illustrative example. Air pressure is a positive scalar quantity. We are interested in the deviations from the mean, which can be both positive and negative. A positive and a negative deviation cancel (in absolute terms: a larger-than-normal pressure and a lesser-than-normal pressure will average to something more typical).


This is a good explanation for e.g. electric charge (which is scalar, just like pressure), but color in QCD really is a bit more multidimensional in behavior, where you can get things like red + green = -blue and so on.


As the other reply said, in QCD you have "3 things that combine to nothing." If I had to guess why they are not so common, I would say that the more variables you add to a theory, the more complicated you make it. So by Occam's razor we try to go for the simpler models/structures.


One line of thought here is that maybe there do exist larger, more complex symmetry groups that can be broken to create more complex "charge" particles (e.g. electric charge < color < something else < ...), but the masses of the particles involved are so large, their lifetimes so short, etc., that we'd never actually observe them (or at least, we can't yet observe them), and the more complex the group, the less relevant the physics actually becomes.

(Obviously I can't say whether that is true or not, but it might be a possible explanation of current observations).


Occam's razor is a heuristic, not a law, so this isn't really a satisfactory explanation.


I'm still waiting for Weber to win out.


I think you mean 10^−8 Wb ;-)


[flagged]


> Maxwell's equations are wrong and built on a lack of understanding.

I'm not going to bother responding to a wall of text which ends with an obviously wrong conclusion.


This is really good copy pasta, thanks.


I have tried to transform the PDF into a presentation with AI; it may help you read faster.


I don't know what's more concerning, the fact that you find a six page paper written by one of the greatest communicators in science hard to digest or that you think an automatically chopped up version with colorful shapes is equivalent to the original.


Reformatting makes a document more digestible, especially when not reading on a printed sheet of paper, which the OP is exclusively designed for.

What's concerning (aside from your callous disregard for people who have small screens) is that the PP created a new document, didn't show it, and suggested that we might find it more digestible.


I see hundreds of students consuming PDF files on smartphones everyday for their classes. I read this paper on my phone just now. I am callously disregarding people who cannot bring themselves to read six pages and have to make it small and cute first.



