I am not well versed in the physics counterpart of Noether's work, but the mathematics side is quick to explain: the isomorphism theorems in group theory (originally formulated in ring theory) were one of the main impacts of her work.
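For concreteness, here is the first of those theorems in the group-theoretic form most courses state today (a standard statement, offered only as a sketch and not necessarily her original phrasing):

    \[
      \varphi : G \to H \text{ a group homomorphism} \;\Longrightarrow\; G/\ker\varphi \,\cong\, \operatorname{im}\varphi .
    \]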
A quick skim of Wikipedia tells me that the isomorphism theorems came about five years later than the theorem in physics [1] [2]. Looking at it quickly, these are not the same thing, and although similar in spirit, I am not sure what the exact historical progression was.
I wonder if standard abstract algebra modules could teach this history in a way that tells more about the story behind the isomorphism theorems. In my abstract algebra class, Noether was not really mentioned, and usually the focus is (for example) on the progress from Greek geometry (e.g. squaring the circle) to modern algebra. I don't think anyone is at fault for this, but I would personally like to have a more accessible introduction to Noether's legacy via pure mathematics.
Ring theory in general was founded in large part on her work. I always find it funny that everyone makes such a fuss about the one physics conservation-law/symmetry correspondence she observed (for physical theories that happen to be phrased in terms of Lagrangians/Hamiltonians), while the immense mathematical achievement of her extensive work on ring theory goes largely undiscussed in these kinds of conversations.
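For anyone who only knows that correspondence by name, a minimal textbook-style sketch of it in Lagrangian form, in the simplest case with no boundary term (standard notation, nothing specific to this thread): if a continuous transformation $q \mapsto q + \epsilon K(q)$ leaves the Lagrangian $L(q, \dot q, t)$ invariant, then

    \[
      \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot q}\, K(q)\right) = 0,
    \]

i.e. the quantity in parentheses is conserved; time-translation symmetry gives energy conservation, spatial translation gives momentum, rotation gives angular momentum.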
Noether's theorem is kind of a grand closure of natural science. It's one of those theorems that gives the sensation of transcending its axiomatic setting and revealing a deep philosophical truth.
Compare Arrow's impossibility theorem, which is set in the much more familiar realm of order theory, but still appears to say something about the real world.
The scope of this theorem seems so broad and general, apparently covering any possible physical symmetry, whether or not conceived of yet, that I wonder if it could reasonably be called a metaphysical one.
Modern physics assumes symmetry and then searches for models that fit observations under the constraint of symmetry. However, we are starting to wonder whether the symmetries truly exist. Time, charge, parity, position, momentum: we hope that the associated symmetries hold.
That doesn't sound like a fair description. Physics doesn't simply assume symmetries exist. We've observed symmetries that hold up under all observational and experimental scenarios available to us. From that, it's quite reasonable to consider the possibility that those symmetries are truly universal, and to study models that treat those symmetries as universal. Those models have great predictive power for ordinary scenarios of the sort where we know from experience that the symmetries do hold up, and those models tend to be simpler than ones that break symmetry in exotic scenarios. When we actually find a broken symmetry (e.g. space, time and momentum behaving oddly at speeds close to c), it's time to update those models, which then usually reveals a more subtle symmetry and conservation law.
But there's no big trend in physics that casts doubt on all the important symmetries in physics. Even under as-yet undiscovered theories, the symmetries we're familiar with will always remain true in the limit as the conditions approach familiar everyday circumstances. Nobody is worried that concepts like time, charge, momentum, etc. will be discovered to be silly, unfounded ideas that the universe casually disregards. We're just not adding extra terms to the equations until there's a need for them.
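A concrete illustration of that "true in the limit" point, offered only as a standard textbook example: relativistic momentum

    \[
      p = \frac{m v}{\sqrt{1 - v^2/c^2}} \;\longrightarrow\; m v \quad \text{as } v \ll c,
    \]

so the Newtonian conservation law isn't discarded when Galilean symmetry breaks down at high speeds; it reappears as the low-speed limit of the more general one.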
I second this. Conserved quantities (and, by extension, symmetries) do not compute anything on their own: car engines and refrigerators are both heat engines, yet the same conserved quantities are engineered (programmed, if you like) to do different things.
> Nobody is worried that concepts like time, charge, momentum, etc. will be discovered to be a silly, unfounded idea
While not overly worried, I have to confess that I'm at least somewhat worried. And somewhat convinced that our present conceptions of these things will one "day" be turned on their head, in a way that might also seriously affect our everyday conceptions of them.
I would support adding a couple of properties to most if not all of our established models, notably (1) known unknowns and (2) unknown unknowns. It would seem a healthy antidote to the hubris that runs rampant among scientists, at least before they discover the ubiquity of these two properties (they usually do around the 50-year mark).
Perhaps "truly" also in the sense of to what precision. To the precision of measurement in experiments is usually fine :) When the theory is not measurable anymore, than it's questionable whether it's Physics or already Meta-Physics...
While I understand how what BoiledCabbage is saying can sound similar to whackjob theories, BoiledCabbage is actually referencing real physics discussions that have occurred in the past. Specifically, there is the problem of measuring the universe when you don't have anything outside the universe itself to do the measuring with. What if, for instance, the three dimensions aren't all the same size as they seem, but actually x is five times larger than y or z, and every time things rotate, including all of your measurement apparatus, they actually grow and shrink? How would you detect such a thing? What if, as you look out into the universe, all the constants actually shift in a way that makes time run at a different speed, so that instead of an expanding universe and light taking a long time to get to us, there is some sense in which the external universe "really is" younger, and the reason we witness ourselves in a bubble of old universe is simply the anthropic principle?
These were all very serious questions around the time relativity was being nailed down, especially since one of the implications of relativity is precisely that you don't have any absolute rulers anymore. While overall the scientific consensus is the scientific consensus, the problem still remains to a lesser degree even today. There's a recent paper questioning whether the universe's expansion is actually accelerating, because it posits a systematic error in our understanding of the standard-candle supernovae used to measure it. What if the "standard candle" turns out to be more dependent on the chemical makeup of the stars than was previously understood? As the universe gets older, stars contain more non-hydrogen elements, so as you look toward the younger parts of the universe you systematically misjudge how far away they are. Your ruler was bent in a way you didn't realize, so the distances don't work, so it looks like the expansion has been accelerating and there must be this "dark energy" that ends up making up the majority of the universe, when it could just be a chimera of our inability to reference absolute measurements.
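To make the "bent ruler" worry concrete (a standard relation, not taken from the paper alluded to above): supernova distances come from the distance modulus

    \[
      m - M = 5 \log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right),
    \]

where m is the apparent magnitude you measure and M is the absolute magnitude you assume the standard candle has; any systematic error in M feeds directly into the inferred distance d, and hence into the expansion history built on top of it.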
[1] https://en.wikipedia.org/wiki/Isomorphism_theorems [2] https://en.wikipedia.org/wiki/Noether%27s_theorem