Why the foundations of physics have not progressed for 40 years (iai.tv)
283 points by ash 10 days ago | 312 comments





I strongly disagree that the foundations of physics have not progressed for 40 years.

We have been looking really hard for answers to persistent questions. While we have not found affirmative answers, physicists have systematically ruled out option after option after option. We have not yet discovered a unified-theory-of-everything, but we know a whole lot more about what that theory is not.

Furthermore, the past 40 years have seen the emergence of precision cosmology (and the dark-matter/energy paradigm that it entails), the observation and confirmation of neutrino oscillation, the detection of gravitational waves (and the nuclear physics revolution that has begun with GW170817), SN1987A, and so much more.

The coming decades are poised to teach us so much more, a lot of it from the stars. Gaia, LISA, upgraded terrestrial GW detectors, LSST/Rubin, TMT, SKA, and more will tell us much more about things we don't understand. Particle physics will move forward too, though it is uncertain how quickly. The right breakthrough in wakefield accelerators, though, could be transformative.

Thirty spokes share the wheel's hub;

It is the center hole that makes it useful.

Shape clay into a vessel;

It is the space within that makes it useful.

Cut doors and windows for a room;

It is the holes which make it useful.

Therefore profit comes from what is there;

Usefulness from what is not there.

Tao Te Ching - Lao Tzu - chapter 11


> I strongly disagree that the foundations of physics have not progressed for 40 years.

I think we are getting a peek at the politics of the practice of science when you look at the article and the comments, like this one's parent.

A bunch of this is about the unobserved: the theoretical rather than the experimentally observed (think scientific method).

Whose theories get the funding to be looked into? Whose ideas are published and talked about in the popular places?

A post of Sabine's talked about how she thinks the crisis in physics isn't about physics [1]. Is it instead about the politics of the money and popularity? Is it about the psychology of being wrong? I mean, different physicists contradict each other and the truth isn't what's popular it is what's observed, right? If people contradict each other they can't all be right.

What's most interesting, to me, is how this isn't about what's been observed but all those human system qualities that people bring to the table.

[1] https://backreaction.blogspot.com/2019/10/the-crisis-in-phys...


I think what people miss about the industry of science is that, logically, if a paper says "A=Foo" then we'd expect it to be true. And if another comes out that says "A=Bar", then it's confusing: are both wrong, or are both right?

But instead, the way it works is a consensus is built. Researcher Bob - "We see that A=Foo." Researcher Sally -"Well, I see that A=Bar." Researcher Timmy - "Well, I see that A=Far." Researcher Kimmy - "Well, I see it as A=Fbar." Community over time - "Now the community has seen that A=Fbar is consistently correct and can be relied upon and used." 20 years pass... Researcher Jeff - "Well, I see that Ab=Fbar actually." Community - "That's bullshit." Researcher Betty - "Well, actually I see that too. But I see Ab=FbarC". etc etc

Since we don't know what we don't know, it isn't really a "We're done!" situation for someone to get 100% correct, it's a process of evolution in knowledge.


> We see that A=Foo.

I think part of this is that scientists don't "see" as in observe these things. Instead they "think that" something is the case. And, they have math and ideas to back that up.

If we had repeatable observation of the things it would be much harder to make disagreeing arguments.


You're putting a lot of unstated assumptions into what "observation" is.

If you see a shimmer in a desert, does that mean there's an oasis?


> A post of Sabine's talked about how she thinks the crisis in physics isn't about physics [1]. Is it instead about the politics of the money and popularity? Is it about the psychology of being wrong? I mean, different physicists contradict each other and the truth isn't what's popular it is what's observed, right? If people contradict each other they can't all be right.

A good book (or two) for this is Isabelle Stengers's Cosmopolitics I and II.

She goes into a lot of physics debates from the 20th century, along with a lot of the political debates.


Love the Tao Te Ching quote!

To add to your point, scientists have been chasing inconsistencies exactly like Sabine said. They’re chasing dark matter, dark energy, quantum gravity, early universe cosmology, naturalness (arguable; more of a theoretical inconsistency), and quantum foundations (smaller effort).

So I don’t see the point of vague allusions to the philosophy of science. If anyone has a concrete proposal---inspired from philosophy or whatever else, doesn’t matter---the proposal will typically be treated on its merits. Modulo caveats about humans being humans and all that.


The things you mention may be useful to science and even help in evolving foundations of physics, but that hasn't happened yet. Ms. Hossenfelder talks about things such as internal consistency of quantum methodologies, consistency of QT and GRT and particle physics. There is no breakthrough in these questions for decades.

What if there aren't any breakthroughs left to be made in those areas?

That is possible, time will tell. It is quite possible (and at this point, even desirable) that breakthrough will be made elsewhere in foundations of physics.

While I mostly agree with you, this is not really a counterpoint to Hossenfelder's position. There is no denying that experimental physics has produced great results in the last 40 years, but the theories underlying these results have remained unchanged during that time.

Lee Smolin deals with this question in Einstein's Unfinished Revolution: The Search for What Lies Beyond the Quantum. He relies on the same facts, makes many of the same points you do, and he comes to the exact opposite conclusion (assuming, of course, that I actually understand your position fully, and his as well). That's interesting to me because I think it might point to a philosophical disagreement between your views and his, especially in epistemology.

Part of the difference may stem from the fact that I am an experimentalist, and he is a theorist. Theoretical progress has been slow, in large part because Nature has been unwilling to show any new experimental deviations from the Standard Model.

Experimentally, people continue to hammer away at the Standard Model (and gravity, my specialty), and the paradigm continues to hold. With few exceptions, most ideas from 40 years ago have been put to significant tests and turned out not to be how Nature operates.

From a theory standpoint, without new guidance from us, theorists are forced to attempt to out-think Nature, which is extremely hard to do. Smolin may be bummed, but theorists can take solace in the fact that the problem is extremely difficult.

In the particular case of quantum-mechanics, the day that theorists divine a compelling way that one interpretation of QM makes a prediction that differs from that of another interpretation is the day that an experimentalist starts building a test to find out which one is true.


The original article, as I understood it, is making a case for more rigorous selection of experiments based on how much information they can yield. With the increasing costs of experimentation, it makes sense to prioritize for impact.

In a way it is meta-science, because you need to find a methodology to evaluate which theory, when proven or disproven, has the most impact.

I would imagine that the current working method would be to look at how often theoretical physicists cite a specific theoretical result/paper and hence "popularity" is the prioritization mechanism.


Honestly why is popularity not a good mechanism to determine how we allocate experimental investment?

If a large cadre of highly educated people find a particular theoretical mechanism sufficiently compelling to dedicate their careers and time to it, then surely that's a good basis on which to develop experiments to try and confirm whether it's right? Because the benefit of proving it wrong means they'll reallocate their resources.


Well, the author argues it is not optimal due to human failings such as group-think and institutional incentives.

Presumably the economic cost of the experiments is higher at this point than the resource allocation of that cadre of highly educated people.

It's worth thinking about, but I'm not sure there is a viable alternative methodology to be found.


「三十輻,共一轂,當其無,有車之用。埏埴以為器,當其無,有器之用。鑿戶牖以為室,當其無,有室之用。故有之以為利,無之以為用。」

Google Translate on this fails almost completely but somehow turns it into a critique on the Internet Of Things:

Thirty spokes, there is a hub. When it is free, it is used for cars. I think it is a device, when it is free, it is used for devices. Profit, useless.


I get something like this from google translate: Thirty spokes, a total of one hub, when it is not available, there is a car. I think that when there is no device, there is a use for it. The chisel owner thinks that when there is no room, there is room. Therefore, it is good to use it, but not to use it. "

Interesting that we get different results. Google's AI-based translation can get a little crazy sometimes[1] but I would have still expected everyone to get the same result for the same thing.

Proof of mine: https://i.imgur.com/hb428NI.png

[1] DECEARING EGG comes to mind. https://www.youtube.com/watch?v=3-rfBsWmo0M

Edit: As someone else has now mentioned, it's the difference between leaving the quotes(「 and 」) on the ends or not. So it is just the translate AI being weird as usual.


I've got your translation variant for the text without the quotes (「...」). Does the translation have a peculiar rhythm to it, or am I imagining things?

Sounds like this quote (posted elsewhere in this thread):

Thirty spokes share the wheel's hub;

It is the center hole that makes it useful.

Shape clay into a vessel;

It is the space within that makes it useful.

Cut doors and windows for a room;

It is the holes which make it useful.

Therefore profit comes from what is there;

Usefulness from what is not there.

Tao Te Ching - Lao Tzu - chapter 11


It's that same quote translated by Google instead of by a human.

I got this one:

"Thirty spokes, there is a hub. When it is free, it is used for cars. I think it is a device, when it is free, it is used for devices. Profit, useless. "


Thanks, I was wondering what the original quote was, which is often tricky to get from an unsourced translation (edit: I missed that the grandparent sourced chapter 11. Btw, I love the parallelism used in Classical Chinese writing).

> We have been looking really hard for answers to persistent questions. While we have not found affirmative answers, physicists have systematically ruled out option after option after option. We have not yet discovered a unified-theory-of-everything, but we know a whole lot more about what that theory is not.

I feel this answer is a bit of a self-justifying cop-out. A lot of the value of physics, from the perspective of society, has been generating understanding about the world that actually translates into manipulating the world.

Reaching the moon, superfast internet, GPS, microwaves, etc.

You are not wrong of course, but knowledge for the sake of knowledge is not always useful, and also not always worth funding, imo.


Agreed, and how is string theory not considered an attempt at "resolving inconsistencies" by the author? In fact, imho, any attempt at finding a unified theory is an attempt at resolving inconsistencies.

Does anyone else feel like the abstractions and models in physics have gone past the point where the casual outsider (even a technically and scientifically minded one) can intuitively understand them?

Because that's how I feel. There are so many things I just don't understand now like:

1. I originally thought the Heisenberg Uncertainty principle was a natural consequence of using particles (photons) for measurement. Instead, however, it seems to be a fundamental property of the universe, which I only learned after finding out most of the mass of hadrons comes from the relativistic motion of quarks, and it explains why hadrons don't collapse to a point.

2. What does it even mean to create more space? The universe is expanding. Ok, I can accept that. But what does it mean?

3. I find the models for dark matter and dark energy to be... unsatisfying. I realize there's experimental evidence for unobservable mass but it feels like a fudge.

4. Of course we still have no quantum model for gravity.

5. I don't really understand what a fundamental force really is. Like why does electromagnetism have a repulsive opposite but gravity doesn't? When I tried to look into this I ended up down some rabbit hole of "gauge forces" and got completely lost. Why is the Higgs Field not a force?

6. Why are some predictions of the Standard Model so incredibly accurate (like the magnetic moment of an electron IIRC?) while others are so incredibly inaccurate (eg IIRC the QFT prediction of vacuum energy is off by 120 orders of magnitude).

7. Why are there exactly three generations of particles (ignoring the Higgs)? What does a generation even mean?

I could go on. I don't for a second mean to suggest any of these notions are wrong. It's just that the models have gotten so complex (it seems?) that it just feels like something huge is missing, something that will eventually seem obvious in hindsight. Or am I just a lemur trying to figure out how an airplane works?


> Or am I just a lemur trying to figure out how an airplane works?

Unfortunately yes, all of us are.

This is not a problem with physics or abstractions. It's a problem with our intuition. Our intuition is based on evolution and life experience, which is all formed based on mostly solid objects from 1cm to 100m moving at 1m/s to 100m/s.

The universe however does not care what meatbags can experience with our senses. It doesn't mean that our complex math abstractions are necessarily correct - but they are more correct than casual intuition. You can train that intuition with enough work with the maths.

For example, you can map most of basic electricity to water flow and pressure, and some electromagnetic waves to waves in water - but you need to make a small jump to abstraction to combine both, and a large jump to get a gut feeling for special relativity to "feel right".

The crazy part is really that mathematical abstractions exist for all these things at all. There seems to be no natural reason that physics should be describable by small elegant formulas at all, let alone our experience of throwing rocks into a pond. Why isn't particle physics as messy as organic chemistry?


You are looking at it from the wrong way.

It is not nature that describes itself in math, it's people who describe in math what their current understanding is of how nature works. The more mathematical simplicity in the formula, the better we understand it.

Organic chemistry is harder because our powers to observe it are computationally and experimentally limited at this point.

Human intuition is based on limited sensors, boundaries which we have overcome with science and applied science over time. Our intuition had to be collectively replaced with rigorous mathematical methodology to incorporate these foreign sensors.


Yes. First, intuition depends on your experience. If you never studied physics, most things will be nonintuitive (heavy bodies don't fall faster than light ones? really?)

Second, modern 20th-century physics education (courses, textbooks) suffered a sustained corruption of methodology, first by scientific authorities who renounced the quest for understanding in favor of "modelling" and "prediction" (e.g. the authors of orthodox quantum theory and their less bright pupils perpetuating that attitude), and later by an institutionalized system of university research which propels tweaking and applying old ideas to the detriment of trying new ones or questioning past ideas that are too ingrained.

This leads to a large portion of theoretical physics publications being more and more about complex calculations where most applicators do not even try to understand "what is going on", they just assume the same quantum methodology with some tweaks (i.e. different configuration spaces, more dimensions, different Lagrangians, new fields that fix problems of the previous ones, tricks with removing some ugly series terms etc).

Sometimes these tweaks get fancy names (superstrings, loops, dark matter) but they are really an additional concept that needs to be put in to save the edifice from those radicals who would like to try actually new and incompatible ideas.

When you study 20th century physics yourself from original sources, you'll find the stuff taught currently actually has a highly varying degree of credibility. Some of it is rock solid, such as relativity, molecular theory and chemistry, nuclear physics, and solid state theory, and some of it is... well, more unfinished and less credible, such as the Standard Model, force unification, quantum gravity, dark matter, etc.

If you want to get some solid ground on which to build intuition, start with the rock-solid physics as known till 1905, then after that makes sense, learn about its problems (explanation of emission spectra, inconsistency of EM theory with Newtonian mechanics), then after that take a deep breath and read original papers on quantum theory and particle/nuclear physics.

This will take years to understand. The later theoretical stuff around Standard Model details (lepton generations, stability of particles, unification of gravity and QFT) is a decades old project that nobody knows how to finish. It is stuck for now, and has little relevance for understanding those previous things.


Hm. Comparing paradigm shifts with model based realism is not really valid criticism of either, yet understandable.

The folks who just added one more term to the old stuff to make it predict better - let's call them the old guard - necessarily did it to point out that old models can become slightly new ones too. Even if in the long run they will be seen as the evil holdouts.

Yet at the same time we know that just whipping up a new fancy maths model won't solve anything in itself. New models need to make new testable predictions.

And then even new models require lengthy fine tuning, which requires costly experiments.

Alas, textbooks are very often terrible, but not because they emphasize predictions and models over "understanding" - rather because they fail to elaborate on how to select the better of two models, how paradigm shifts happen, and how anomalies are ever present - and thus make practical model selection even harder. Plus they regularly fuck up the math explanation part, exactly because they use terrible language and models.

Finally, it's always data that cleans up the mess. Either practical usefulness - engineering, applied science. Quantum experiments, q-bits, and so on. And on the high-energy end cosmology and astronomy.

Anyone harping on about how the crisis is about politics usually wants to allocate more money to theorists, so we will finally get breakthrough theories. Yeah, great; we already have a lot of those, but without data we don't know which ones to take seriously.

Furthermore, pouring money into theory is a nice idea, and comparatively cheap (compared to a new collider), but it won't solve the very pragmatic employment question for the collider builders. (Who are out of luck anyway, because the era of building bigger underground circles seems to be over. They would gladly build anything, but they won't make good theorists, even if pundits' articles seem to imply there's a simple slider between theory and experimentation.)


Yeah, if the author of the parent comment is willing to put in the work to get the understanding they claim to want, this is the way to go.

As a layperson I recommend watching the PBS Spacetime channel on YouTube: https://www.youtube.com/channel/UC7_gcs09iThXybpVgjHZ_7g

Certainly the complexity of the experiments shows why theories are so complicated. The neutron wasn't even observed until 1932, with an experiment that fits on a tabletop. Even into the 1940s you could put together a cyclotron in a lab and discover a new particle. Or consider the observations of the cosmic background radiation, which were made with a 6 meter radio dish in suburban New Jersey. Now most research requires a facility that only governments are willing to fund. If it were easy, it would have been discovered by now.

The casual observer hasn't been able to understand the forefront of physics for a very long time. Maxwell's equations have been known for 150 years[1], but the average high school physics student hasn't the foggiest idea of what a differential equation is. At best, they have a vague understanding that electricity and magnetism are more-or-less interchangeable.

Once you get into invisible forces acting across very small distances, nothing about how the world works is 'intuitive'. The most precise description of it is... A bunch of math, and not the kind of math that people learn in their K-12.

[1] As has entropy, and the laws of thermodynamics. Yet even educated people often have no idea of what the laws of thermodynamics actually imply! You'd figure that people would take a little bit of an interest in them, given that they live in a society powered by combustion engines...


Good point. I work on laser amplifiers, and I struggle to get clear on what Planck said in 1899. I'm definitely not caught up with where physics was in 1920.

Hearing casual observers talking about quarks or dark matter just makes me run the other way. I know I don't know.


A great deal of (19th century) thermodynamics has its roots in the desire to engineer better steam engines. Entropy, as an example, was introduced as an abstract concept to get to engines with better efficiency. Even when statistical thermodynamics was suggested (which makes for a less abstract and more intuitive understanding of thermodynamics), it was strongly opposed by many of the pundits at that time.

> 2. What does it even mean to create more space? The universe is expanding. Ok, I can accept that. But what does it mean?

Consider several droplets of water on the surface of a latex-based birthday balloon. The balloon is inflated, and droplets spread further apart, yet each droplet retains its shape and size. If you measure the distance in droplets it appears that you have created space as more new droplets could now fit in between those placed earlier.

We only have two comprehension tools to help in grasping the universe - math and intuition. The above is intuition, and you know better than me where to find math. There isn't more "meaning" to it than that, just the tools.


Another way to think about it is just that if you had two solid particles, and measured the distance between them, you'd find it had increased over time.

Space everywhere is expanding; it's just that all the other forces over short distances ensure that everything pulls itself back together. The Big Rip - if dark energy is increasing in strength - is what happens when the rate of expansion exceeds the forces at various scales that are able to hold things together, until even the strong nuclear force can't withstand it.
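A toy numerical sketch of that picture: locally, metric expansion looks like Hubble's law, v = H0 * d. The round value for H0 and the function name below are mine, chosen for illustration only.

```python
# Toy illustration of metric expansion: the proper distance between two
# comoving points grows over time, which locally looks like a recession
# velocity proportional to distance (Hubble's law, v = H0 * d).
H0 = 70.0  # km/s per Mpc -- an illustrative round value, not a measurement

def recession_velocity_km_s(distance_mpc: float) -> float:
    """Apparent recession velocity of a comoving point at the given distance."""
    return H0 * distance_mpc

# Two "particles" 100 Mpc apart drift apart at ~7000 km/s, while a
# 1-metre ruler, held together by electromagnetic forces, doesn't stretch.
print(recession_velocity_km_s(100.0))  # 7000.0
```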


Everything you touch on - except the Heisenberg uncertainty (which is due to a fundamental property of any finite signal, shown through a Fourier transform) - is a big (really mind-boggling) anomaly of contemporary physics/cosmology.

Just the other day right here on HN was a link posted about dark energy, and how maybe it's just measurement calibration error. Poof, solved. Or not, we shall see.

Reading about the statistics and epistemology behind the experiments (Andrew Gelman's blog) and models (e.g. why hierarchical Bayesian models are preferable) will help a lot to cut through the problem of understanding and satisfaction. (Rarely can we have both for complex issues.)

Oh, and everything is a field. Fields are coupled (coupling constants, running couplings). Some fields have a non-null base energy state. (So vacuum energy is non-zero. Maybe. It depends how many and which fields your model deals with.)

Some kind of energy wiggle in some fields translates to one/two/a lot of wiggles in other/the-same fields. (See Feynman diagrams, and how there are an infinite number of possible but decreasingly probable interactions between "particles".)

What is a force? It's just one well separated aspect of the whole model. Ultimately we think all of them are coupled into one big interaction that involves every field. See the electro-weak unification.

Why doesn't gravity seem to be able to repel? Because maybe it's not a field; it's just the shape of spacetime warped by energy, and we haven't found negative energy (Einstein's General Relativity). Or it's just based on entropy and thus a very strange emergent property of our universe ( https://en.m.wikipedia.org/wiki/Entropic_gravity - bonus, it explains dark matter too, so maybe it's simpler - but it's just the ugly MOND (modified Newtonian dynamics) in disguise, noooo!). Okay, so maybe back to higher dimensions and branes and loops and strings? But nobody understands that! So we wait.

Why three? Because so far we have found three, and thus our models reproduce exactly that many.


Re: "Does anyone else feel like the abstractions and models in physics have gone past the point where the casual outsider (even a technically and scientifically minded one) can intuitively understand them?"

That's how I feel when using Twitter Bootstrap compared to the WYSIWYG days of VB-classic and Delphi. Like Quantum Physics, getting Bootstrap right relies on probability and killing of cats, or at least hair follicles. (Our shop probably needs a dedicated UI coder, but office politics won't allow it.)


> 1. I originally thought the Heisenberg Uncertainty principle was a natural consequence of using particles (photons) for measurement. Instead however it seems to be a fundamental property of the universe, which I only learned after finding out most of the mass of hadrons comes from the relativistic motion of quarks and it explains why hadrons don't collapse to a point.

> 2. What does it even mean to create more space? The universe is expanding. Ok, I can accept that. But what does it mean?

IME both QM and Relativity are much more easily understood if you start from the actual equations and/or an undergrad textbook. I don't think physics is - yet - beyond the technical and scientific minded amateur who is willing to read and understand some equations. But it's probably gone beyond the ability of any science journalist to render into prose.

> 3. I find the models for dark matter and dark energy to be... unsatisfying. I realize there's experimental evidence for unobservable mass but it feels like a fudge.

I think working scientists would agree with you at least as far as dark energy goes. Dark matter is pretty indisputably just some kind of matter that we can't see (see e.g. the Bullet Cluster) - there's still something to be solved in terms of figuring out what it actually is, but I don't expect that to be major new physics. But as to dark energy: yeah, it's a fudge. Everyone knows it's a fudge. But it's still our best description of reality. Even if you knew there was something wrong with epicycles, they were still the best way to calculate planetary orbits at the time.

> I could go on. I don't for a second mean to suggest any of these notions are wrong. It's just that the models have gotten so complex (it seems?) that it just feels like something huge is missing, something that will eventually seem obvious in hindsight. Or am I just a lemur trying to figure out how an airplane works?

I think models aren't complex so much as unfamiliar. We're getting further and further away from everyday experience, and so more and more of the fundamentals of reality have to be understood from mathematical first principles rather than everyday intuition.

The models are actually really good though. Certainly once I understood QM it seemed so clear and simple that it couldn't possibly not be true.


> We're getting further and further away from everyday experience, and so more and more of the fundamentals of reality have to be understood from mathematical first principles rather than everyday intuition.

How much of that is an insistence on thinking of things in terms of fundamental particles and definitive properties instead of fields and packets of energy? I guess the question is what the mathematical models are really about, and whether we're just having a hard time shrugging off Greek atomism and 19th century materialism when it comes to intuition?

Intuition could be built on top of a solid understanding of fields, backed by the math. The difficult part is connecting that to the classical world of our size that we experience.


> How much of that is an insistence on thinking of things in terms of fundamental particles and definitive properties instead of fields and packets of energy? I guess the question is what the mathematical models are really about, and whether we're just having a hard time shrugging off Greek atomism and 19th century materialism when it comes to intuition?

I don't think it's "Greek atomism and 19th century materialism" to imagine a world made of persistent physical objects in well-defined positions. That's the entirety of everyday life.


Consider the competing ontological views: everything is made up of water, the five elements, everything is a sphere, the world is a simulation, everything is ideas being perceived by the mind, the world is matter formed by the ideal forms, etc.

Those views were prominent among certain intellectuals at different times and go against the grain of everyday experience. Since physicists are trying to understand the fundamental nature of the world, I take it they're used to not accepting appearance as a guide, and are rather operating under whatever philosophical intuition is dominant at the time. Since 19th century physics confirmed the existence of atoms, and the periodic table of ordinary matter is made up of atomic bonds, it makes sense that physicists have been influenced by that guiding intuition.

Regardless, fields are probably a better building block for intuition than particles.


I'll take a shot at trying to answer some of your questions:

1. I originally thought the Heisenberg Uncertainty principle was a natural consequence of using particles (photons) for measurement.

That's the observer effect. When quantum mechanics was first constructed, it was believed that the uncertainty principle could be explained that way. As you mentioned, today, it is considered a more fundamental feature of the theory. As an analogy, consider how a signal sharply located in the time domain gets smeared out in the frequency domain, and vice versa. It's kind of like that.
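That time/frequency analogy can be checked numerically: for a Gaussian pulse, the product of the RMS widths in time and angular frequency stays at about 1/2 no matter how much you squeeze the pulse. A sketch (the helper name `spread` is my own):

```python
import numpy as np

def spread(values: np.ndarray, weights: np.ndarray) -> float:
    """RMS width of a distribution weighted by |amplitude|^2."""
    p = np.abs(weights) ** 2
    p = p / p.sum()
    mean = (values * p).sum()
    return float(np.sqrt(((values - mean) ** 2 * p).sum()))

# Sample a Gaussian pulse exp(-t^2 / (2 sigma^2)) and its FFT; the product
# of the time width and angular-frequency width comes out ~1/2 regardless
# of sigma: narrowing the pulse in time smears it out in frequency.
t = np.linspace(-50, 50, 4096)
freqs = 2 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])  # angular frequency

for sigma in (1.0, 2.0, 4.0):
    pulse = np.exp(-t**2 / (2 * sigma**2))
    spectrum = np.fft.fft(pulse)
    product = spread(t, pulse) * spread(freqs, spectrum)
    print(f"sigma={sigma}: width product = {product:.3f}")  # ~0.500 each time
```

The quantum-mechanical statement is the same mathematics with position and momentum in place of time and frequency, which is why the uncertainty principle survives even with no measurement disturbance in sight.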

2. What does it even mean to create more space? The universe is expanding. Ok, I can accept that. But what does it mean?

Specific volume (the average volume occupied by a unit of mass) increases. If the universe were spatially finite, the total volume would increase as well.

3. I find the models for dark matter and dark energy to be... unsatisfying. I realize there's experimental evidence for unobservable mass but it feels like a fudge.

Shrugs. If dark matter turns out to in fact be just a bunch of yet to discovered particles, that wouldn't be too strange. Dark energy is more of an unknown.

4. Of course we still have no quantum model for gravity.

That's an issue.

5. I don't really understand what a fundamental force really is. Like why does electromagnetism have a repulsive opposite but gravity doesn't? When I tried to look into this I ended up down some rabbit hole of "gauge forces" and got completely lost.

For now, there's a certain amount of arbitrariness involved: we could easily construct universes that worked differently. Furthermore, gravity is special: in general relativity it's not a regular force but a pseudo-force (freely falling bodies simply follow their inertial, geodesic paths). Also note that it can in fact manifest repulsively (eg via the cosmological constant). From the perspective of particle physics, it's also special because the hypothesized force carrier (the graviton) would have spin 2 instead of 1.

Why is the Higgs Field not a force?

To my knowledge, there should be an effect one could call Higgs force. It would be tiny.

6. Why are some predictions of the Standard Model so incredibly accurate (like the magnetic moment of an electron IIRC?) while others are so incredibly inaccurate (eg IIRC the QFT prediction of vacuum energy is off by 120 orders of magnitude).

Quantum Electrodynamics can be solved perturbatively. We know how to do that. Things that can't be handled that way tend to be hard.

Regarding vacuum energy, you'd have to account for every fundamental field there is to get it right, so it's no surprise we get it wrong. Also, given that we have no quantum theory of gravity, it's not even clear to me that this is the right approach.
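To illustrate how well the perturbative approach works where it applies: the very first term of the QED expansion (the Schwinger term from 1948) already matches the measured electron anomaly to about 0.15%, and the higher-order terms in alpha close most of the remaining gap:

```python
import math

# Leading-order (one-loop) QED result: a_e = (g-2)/2 = alpha / (2*pi)
alpha = 1 / 137.035999       # fine-structure constant
a_leading = alpha / (2 * math.pi)

a_measured = 1.15965218e-3   # measured electron anomalous magnetic moment

print(a_leading)                                  # ~0.0011614
print(abs(a_leading - a_measured) / a_measured)   # relative error ~0.0015
```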

7. Why are there exactly three generations of particles (ignoring the Higgs)?

Figure that out, get a Nobel prize. It could even be arbitrary, like how it's futile to ask why a particular snowflake looks different than another snowflake created in similar conditions: Random chance.

Hope that helps a bit.


> It's just that the models have gotten so complex (it seems?) that it just feels like something huge is missing

Why shouldn't it be complex? The universe is complex. It's amazing that it's so simple, really.


1. Blame stupid pop-sci books. Anyone reading an actual QM book will know it's an intrinsic property.

2. again pop-sci books.

3. You and anyone not blinded by non-scientific factors...

6. The math we use for those calculations is ad hoc bullshit, and no one has been able to place it on a solid foundation.


Personally I find it amusing Hossenfelder is now invoking the need for learning philosophy of science, given how hostile she's been to it before. See for instance

http://backreaction.blogspot.com/2016/08/the-unbearable-ligh...

that begins with: "Philosophy isn’t useful for practicing physicists. On that, I am with Steven Weinberg and Lawrence Krauss who have expressed similar opinions."

Though to be fair, she clarifies that she wishes philosophy wasn't so useless, and that:

"Philosophers in that area are necessarily ahead of scientists. But they also never get the credit for actually answering a question, because for that they’ll first have to hand it over to scientists. Like a psychologist, thus, the philosopher of physics succeeds by eventually making themselves superfluous. It seems a thankless job. There’s a reason I preferred studying physics instead.

Many of the “bad philosophers” are those who aren’t quick enough to notice that a question they are thinking about has been taken over by scientists. That this failure to notice can evidently persist, in some cases, for decades is another institutionalized problem that originates in the lack of communication between both fields."

This is the sort of reasoning that got me reading Hossenfelder in the first place, not the conspiratorial posts she writes now... :(


>Like a psychologist, thus, the philosopher of physics succeeds by eventually making themselves superfluous.

Yes, my doctor too has a thankless job of treating disease & injury in me, where once he succeeds, he has no purpose in life.


His purpose is the next patient!

A better analogy is "once my doctor successfully advocates for better health habits and I achieve them, they are purposeless."


I believe that was the point, phrased in sarcasm. Philosophers move on to the next thing after they hand problems to science.

I thought the article was not great. But the fact that she's changed her mind on the topic makes me like her a lot more. How can we learn more if we don't change our mind?

Nothing wrong with changing one’s mind. But one needs to be thorough enough to provide a rigorous rebuttal of the arguments made by one’s past self :-)

> How can we learn more if we don't change our mind?

I think it's also possible to learn without having strongly held beliefs about the subject beforehand. This alternative reminds me of Descartes (don't assimilate knowledge until you are sure of its veracity) and Bayes (keep track of degree of belief about traditionally non-probabilistic things). Maybe such an approach would help avoid getting trapped in local optima. E.g., I'd imagine it would be hard to climb out of the theist energy well once your world view were based on it.


I wasn’t very convinced by the 2016 text either, but the point is more that she’s gone from that clearly stated position to invoking the importance of ”philosophy of science” in her attacks on current practice, without much evidence (that is, without writing about it) that there has been any shift of conviction.

It looks more like Hossenfelder found something discarded in the shed and is temporarily using it as a club for want of something better. A slight intellectual dishonesty.


Hmm, some of the original co-founders of quantum mechanics had the opposite view: that philosophy in all its forms might be of use.

The philosophy of science is different from philosophy.

The philosophy of science is really a meta-field for science, while philosophy itself encompasses things like the philosophy of religion and other matters core to the human experience.

When studying something of a scientific nature, the human experience (religion, art, and the like) is the domain of the humanities and entertainment, not of the universe in general.


Short version, without the anger and bitterness:

Theoretical physicists' way of working is to put forward baseless mathematical models and build $40bn machines to prove them wrong. They should instead work on theoretical inconsistencies that have been known for a while.


The problem is that the theoretical inconsistencies are too small to be useful. For example, we know that there is probably a problem with the anomalous magnetic dipole moment of the muon (it's not certain, because it's only 3.5 sigma; it may be a fluke). Without more experiments, it's difficult to know how to modify the "Standard Model" to get the correct result. There are many alternatives, but without more precise experiments it's difficult to select the correct one. https://en.wikipedia.org/wiki/Anomalous_magnetic_dipole_mome...

So the alternative is to build a $40bn machine, look at the data and then try to imagine then how to fix the theoretical model. [There is a risk of overfitting the model, and finding patterns in the noise.]

Another alternative is to build a $40000bn (or more) machine and have enough precision to make the model obvious from the data. [I'm not sure this is possible, I guess with enough money, perhaps m000000re money.]


that's because experiment drives physics rather than mathematical consistency (no matter how much people pretend it's about "beauty"). plenty of mathematically consistent physical theories have been falsified by experiment and plenty of mathematically inconsistent physical theories have made precise and accurate predictions.

When was the last time an experiment had a result that wasn’t explained by theory (superluminal neutrinos aside)?

Observations of the cosmic microwave background, galactic rotation, gravitational lensing, and redshift led to the concepts of "dark matter" and "dark energy" that aren't yet explained by theory.

The cosmic microwave background was first detected in the 1960s, gravitational lensing was predicted by Einstein in the early 1900s, and redshift can be traced back to the latter half of the 1800s (as an extension of the Doppler effect)...

It's not the existence of these phenomena that's unexplained, but specific observations that don't fit our current theories.

This article seems like a decent introduction:

https://medium.com/starts-with-a-bang/five-reasons-we-think-...


MOND is a better explanation than dark energy.

There was an article linked here just a week or two ago. Its claim was that the distance measurements used in cosmology may be wrong on larger scales. With the correction there is no need for MOND or dark energy. This needs confirmation of course.

I have yet to see the "expected" galactic rotation curves that are contradicted by observation and lead to ideas about dark MATTER. I mean, I've seen the curves but can't find the math behind them. You often see weak references to Kepler's laws, which don't even apply, so that leaves me very skeptical.


MOND has nothing to do with dark energy, and the article suggested nothing about anything addressed in MOND.

MOND is an attempt to make dark matter an unnecessary assumption.

No opinion on whether some MOND will turn out to work better than dark matter.


> I'm mean I've seen the curves but cant find the math behind them. You often see weak references to Keplers law which doesnt even apply, so that leaves me very skeptical.

Huh?

This is directly derived from 3rd law


>> This is directly derived from 3rd law

Then it's wrong. I will need to see the derivation to find the error. I've seen indications of a couple of possible places it may be (based on simplifying assumptions people make incorrectly) but have not seen the actual derivation of the expected curve.


gimme some time, I'm busy till Jan 20, then I'll redo it. Just in case, kick me a message if I'm not heard from...

Some examples: Matter/anti-matter asymmetry, arrow of time direction, why CP violations, neutrino mass questions, why masses are what they are, dark matter and energy nature, what cancels out zero point energy, many structures on universe scale don't fit models, firewall paradox, why is gravity so weak, are there gravitons (other particles..), do magnetic monopoles exist (widely conjectured to work from models, none yet seen), why 3 generations of particles, proton radius discrepancies, exotic and pentaquark (and higher) particles, Navier-Stokes open problems, lots of superconductor and metamaterial questions results not explained theoretically, and so on...

Matter/anti-matter asymmetry is expected to be present. It is a basic feature of individual random walk instances using symmetrical laws that about half will be dominated by matter and the other half dominated by antimatter for long periods of time (although the "universe" may pass through pure energy states as it switches between the two). While there is a balance on average and in the long run, there is not for individual instances or points in time.

We called the dominating one for our current instance/state of the universe "matter": https://physics.stackexchange.com/questions/505662/why-is-ma...
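For what it's worth, the claim that a symmetric random walk spends long stretches on one side is a real mathematical fact (the arcsine law), whatever its relevance to baryogenesis. A toy simulation (illustrative only):

```python
import random

random.seed(1)

def frac_positive(steps=10_000):
    """Fraction of time a symmetric +/-1 random walk spends above zero."""
    pos, above = 0, 0
    for _ in range(steps):
        pos += random.choice((-1, 1))
        if pos > 0:
            above += 1
    return above / steps

fracs = [frac_positive() for _ in range(200)]
# How many walks spent >90% of their time on a single side?
lopsided = sum(f < 0.1 or f > 0.9 for f in fracs) / len(fracs)
print(lopsided)
```

By the arcsine law, roughly 40% of walks are this one-sided; a near 50/50 split of time between the two sides is actually the least likely outcome.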


>Matter/anti-matter asymmetry is expected to be present. It is a basic feature of individual random walk

It may be the cause, but it is not known to be the cause.

And it's not a random walk; black holes accumulate charge from pairs, making future radiation not symmetric.

My understanding is that people have probed this for some time and it's still inconclusive if it can generate the observed imbalance. Here's [1] a 1979 paper on the idea, with hundreds of citations, in case you want to poke at the literature.

If I remember, Cosmology by Weinberg has a chapter on various theories of how the imbalance may happen, none of them known to be all of or even part of the answer.

[1] https://journals.aps.org/prd/abstract/10.1103/PhysRevD.19.10...


Nice link, thanks.

Superconduction is still poorly understood. We don't have a good model that predicts what materials are superconductors.

It's likely though that in the case of superconductivity the problem is the complexity of the calculations, rather than a fundamental theoretical problem. It's like protein folding: we're nowhere near being able to do an ab initio calculation of the shape of a protein, but that doesn't mean that there is a fundamental problem with quantum mechanics.

maybe a bit facetious, but wave-particle duality at a time when we only had classical mechanics...?

thanks for the tl;dr.

I'd argue back to the author that "putting forward 'baseless' mathematical models" is "working on theoretical inconsistencies." When you're testing a black box for the content of the box without the ability to open the box, one may have little basis for an idea that might, maybe, could possibly provide useful results. That testing will definitely provide information, even if it's the basis of ruling out an entire class of tests.


And academia is failing too. I can’t remember which Nobel laureate mentioned that he could not win one today, as most researchers are stuck having to produce papers for the sake of keeping their grants. What would be needed is a lot of free time and the freedom to think ...


This is a much bigger issue. Research has been distorted by irrelevant bureaucratic productivity metrics. So the illusion of regular activity is rewarded, while anyone who takes ten years to explore a truly original ground-breaking idea is punished and excluded.

Good essay. Yes, the sociology and politics of the way we do science are overtaking the reproducible-learning aspect. Foundations for many things, like physics, are as solid as necessary for doing a lot of work, but by the time you get to the point where you should be testing, rearranging, and ferreting out flaws in the foundations, you're so indoctrinated into a culture that you don't have the mental tools necessary to do the required work. So instead you just chug along the way the last generation did, adding a decimal point here or there.

It's not wrong. It's just not changing over time. It is stagnant.

The nice thing about physics is that with new advances in astronomy and the lack of a unified theory, it keeps getting poked with reminders that there may be missing pieces. That's not true in many other fields.


> but by the time you get to the point where you should be testing, rearranging, and ferreting out flaws in the foundations, you're so indoctrinated into a culture that you don't have the mental tools necessary to do the required work.

Don't you rather believe that a much simpler explanation is that the incentives and terms for grants are at fault?


I believe I said that. You might be confusing the reason for various sciences to get stuck with the mechanism of how they actually get stuck. The reason is that groups of people have common characteristics over time. As for the mechanism, I'm with you. Always assume the simplest explanation unless there's some evidence otherwise. Missing the "mental tools required to do the work" can be as simple as not being popular in your field of study, or a character unable to get funding or provide the oversight of funding that is necessary for your science to advance. I believe if you understand the reason, you realize that you could very well end up playing whack-a-mole simply by trying to fix the various mechanisms. That would be a tragedy.

Stagnant? Or just mature, like a fine cheese and wine?

As an outsider, I don't know. There is a presumption that since things have changed over time, they will continue to change. This very well might not be true. I didn't want to get too Kuhnian, so I just used the word "stagnant" for effect.

Once again, as an outsider it doesn't seem to me that they are anywhere near "done", but they sure as heck look like a mature science. Physics and its children have given us amazing things. Spending a lot of time playing with math wasn't one of them. All sciences have one aspect in common: until you get to reproducibility, the conversation in the community tends towards groupthink over time. That's a human characteristic not related to any one field of study.


Mature implies that no further major improvement is required. That does not seem to be the case with the foundations of physics, given some of the fundamental unresolved inconsistencies.

As science historian John Horgan noted in The End of Science, the parallel in physics is with academic humanities departments becoming mired in "irony".

The End of (one type of) Physics, and the Rise of the Machines

https://www.math.columbia.edu/~woit/wordpress/?p=10680

Seems there are two possible outcomes. The deluge of data leads to better correlation which smooths over the flaws in current models. And corrects errors with some minor fudge factor that contains no further significance.

From Dark Matter to Galaxies with Convolutional Networks

https://arxiv.org/abs/1902.05965

Or something deeply profound is discovered. The thing which cannot be ignored. And instead leads to an explosion of new physics. Recognizing patterns of the latter class will perhaps always be the domain of the human operator.


The role of this "era" may be in reformulating quantum physics and, separately, general relativity in new ways that make the ideas more accessible to more people, and earlier in their lives. The goal could be to make of modern physics... the new classical physics. That is, we start to let go the crutches we still teach because it is thought that day-to-day life is more readily explained by Newtonian physics. We are now in era where most advances (e.g. smartphones among them) could not exist in their present form without modern physics.

Once more people accept the concepts of modern physics as a way of life (perhaps intuitively?), we will be in fertile territory for any potential new revolution in physics.


These theories have a very precise mathematical formulation and very weird unintuitive consequences. If you try to teach them without math, you only keep the weird unintuitive part and it's more unintelligible.

For quantum mechanics you have to know eigenvalues and eigenvectors. That is studied in the first years of university in a technical degree. I'm not sure it can be taught much earlier.

For Special Relativity you have to know Minkowski space. It's not so difficult; it could be moved to the first years of university.

For General Relativity you have to know curved spaces. It's not impossible to learn, but you can get a Ph.D. in Math or Physics without ever studying curved spaces.
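To make the QM prerequisite concrete: the allowed energies of a quantum system are the eigenvalues of its Hamiltonian matrix. A minimal two-level (spin-1/2) example, assuming only numpy (field strengths chosen arbitrarily for illustration):

```python
import numpy as np

# Spin-1/2 particle in a tilted magnetic field: H = Bz*sigma_z + Bx*sigma_x
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
H = 1.0 * sigma_z + 0.5 * sigma_x

# eigh: eigenvalues (allowed energies) and eigenvectors (stationary states)
energies, states = np.linalg.eigh(H)
print(energies)  # +/- sqrt(1.0^2 + 0.5^2)
```

That one `eigh` call is, conceptually, most of a first quantum mechanics course: diagonalize the Hamiltonian, read off the spectrum.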


Linear algebra (with diagonalization, not just Gauss-Jordan) could be pushed down to high school for motivated students, and is in some countries. The coordinate-system aspect of special relativity (the origin of time dilation and most of its "weird effects") only requires algebra. General relativity requires the full machinery of differential geometry, but advances in things like differential forms are pushing this down to the undergraduate level. Overall I would say that it could be done, but you would have to leave the unmotivated students behind.
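As a concrete instance of the "only requires algebra" point: time dilation is a single square root.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v, c=C):
    """Lorentz factor: a clock moving at speed v ticks slower by this factor."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Classic example: a muon at 0.99c lives ~7x longer in the lab frame,
# which is why cosmic-ray muons survive long enough to reach the ground.
print(round(gamma(0.99 * C), 2))
```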

Turtle Geometry gets as far as motion in curved spacetime using code in Logo. Dunno how many high schoolers have ever learned from it, but it's there. (It includes a nice concrete intro to vector algebra earlier, too.)

Re quantum mechanics without many prerequisites, I'm a fan of Feynman's book QED.


We can keep math, but switch to better theories, with plausible explanations.

A kind of Pilot Wave theory can explain quantum weirdness to laypeople with ease.

We can ditch the theory of relativity and calculate speeds relative to the CMB, which is much easier to understand.

We can ditch the Big Bang theory and, instead, accept that light is not immortal, because it ages with time. IMHO, the Dipole Repeller and Shapley Attractor are much more attractive and easier to explain than the Big Bang.


All three examples you gave have problems or inconsistencies and this is why they are not used. You are being downvoted because you are suggesting teaching formalisms that are known to be insufficient simply because they fulfill your personal criteria of intuitiveness.

We have no perfect theory that explains everything, so it's just a tradeoff: exchanging one set of inconsistencies for another set, but with better intuition. I'm doing it here, in my country.

The problem with current theories is that I only understand them while I'm reading them. It's like a piece of complex code, or a book with complex but boring text, like a phonebook. I can follow it while reading, but I cannot reproduce it once the book is closed.

Can we teach a phonebook to kids? Yep. Is it useful? Nope.

Recently, I did "quantum physics in one picture" experiment. Results are very good: lots of reposts, comments, interest in topic.


But it is not a tradeoff in the cases you picked, rather one set of formalisms has drastically more inconsistencies than the other. E.g. pilot waves: you gain having real numbers (which I personally see little value in) and you gain having a more mechanistic intuitive source of the interference (which is indeed interesting). However describing multiple interacting entangled particles becomes incredibly difficult, describing annihilation and second quantization which is needed for the quantum behavior of fields is not completely done yet, and (what I consider the most substantial problem) you can not work with finite level systems (i.e. anything but a spinless particle in a box is very difficult to describe by pilot wave theory).

In short, pilot waves were a worthwhile avenue of research, but we have seen they are incredibly cumbersome or even insufficient in many quantum mechanics problems.


Yep. Pilot Wave theory is an underdeveloped theory, but it helps to develop intuition. Walking droplets are even better for that. IMHO, it's better to use QM to solve QM problems in science, but to use walking droplets and Pilot Wave Theory to develop intuition for others. Walking droplets are easy to demonstrate. The double-slit experiment can be reproduced in a school lab. This way, quantum physics can be taught in school to children aged 12+, so they will be ready to solve much more complex problems by the time they are PhD students.

Entanglement is a hard problem for PWT. Photos of entangled photons[0] are intriguing, because they look similar to the behavior of walking droplets in some experiments (see the dotwave.org feed). I hope someone will be able to reproduce entanglement at macro scale. Currently, my top priority is to reproduce the Stern–Gerlach experiment at macro scale (I suspect that interference between the external field and the particle wave creates a channel which guides the particle into a spot, but it's better to see it once). Second priority is the creation of "photons" at macro scale. Entanglement will be third. IMHO, all of them require microgravity to reproduce in 3D.

[0]: https://phys.org/news/2019-07-scientists-unveil-first-ever-i...


With some caveats, I happily agree with the angle from this last comment! I agree PWT is a great way to get people hooked on quantum science, even if I consider it as a dead end for fixing the inconsistencies we have (semi-personal semi-professional opinion).

One problem is that physicists are not interested in lowering the bar to understanding advanced theories. Some say it's all fairly simple once you spend a decade learning some very advanced math. The art of teaching is in making the material more accessible, and at that I don't think much progress has been made.

>physicists are not interested in lowering the bar to understanding advanced theories

That is not true, geometric algebra is an example of a recent pedagogic improvement that is getting a lot of attention. The problem is that physics will never be easy enough for someone who is not prepared to think deeply, because it is one of the few areas where truly new ideas can be found. Virtually every area of learning involves repackaging concepts we have all known from childhood (people's motivations, stories, colors, that kind of thing) in specific ways. Major exceptions are physical tasks like learning to sew or play an instrument, and "esoteric" subjects like math and physics. In all of those cases you cannot learn by casually reading because the neurons in your brain are simply not prepared for it.


> is getting a lot of attention

Not really. People look at it, marvel, and move on.


Worse, they're snidely dismissive.

I've been interested in GA for years now because it helps me visualise and understand otherwise inscrutable mathematics.

Nobody, literally nobody mired in the traditional mathematics of theoretical physics can explain why the Universe is best represented using matrices of complex numbers with constraints on them.

"Shut up and calculate" or some variant is the common response to such probing questions.

More often, it's some variant of "Well, I can understand it, you need to study more." This is usually stated just politely enough not to be outright insulting. But if you keep asking probing questions, it turns out that they don't really understand either; the "study" didn't help them. They only got better at pushing the symbols around on paper. They're dismissive of such questions because they're too proud to admit their own ignorance.

Geometric Algebra (GA) was my "lightbulb" moment where I finally understood where Dirac matrices, Pauli matrices, and the like come from and why they have the structure that they do.

My logical conclusion was that GA is the far more elegant, clear, understandable mathematical structure that brings a wide range of Physical phenomena under a unified formulation. So clearly, it should be used for pedagogy.

Nobody agrees with that. The attitude is "well, that's nice, but it's mathematically equivalent so there's no benefit." which is just the stupidest thing I've ever heard.

Imagine if you saw a function called "add_num(a,b)" that computed the sum of two integers using the full bit-by-bit adder digital logic circuit simulated in software using boolean logic. Absolutely bonkers, insane code, right? Clearly this ought to be scrubbed from the codebase and replaced with a simple "+" operator, because we're not maniacs. Physicists would argue "no", it's equivalent, it's "working", so shut up, leave it and just move on.

Drives me batty.
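The analogy can even be made runnable. A sketch of the described add_num, assuming non-negative integers (the ripple-carry loop below is the software analogue of the bit-by-bit adder circuit):

```python
def add_num(a, b):
    """Integer addition simulated gate-by-gate: XOR gives the carry-less sum,
    AND finds carry positions, and the shift propagates carries one position
    left. Correct, but absurd compared to just writing a + b."""
    while b:
        carry = a & b
        a = a ^ b
        b = carry << 1
    return a

print(add_num(1234, 4321))  # 5555, the hard way
```

Both versions compute the same function; one of them is readable.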


If you haven't used it already, Versor[0] is nice to play with. GA is simple enough that even normal-ish teenagers can understand it and produce useful results (my sons are using it in a game they're building). Math isn't even close to my strong suit, but Dual numbers and GA make sense to me, and have made it a lot easier for me to do (seemingly, to me anyway) advanced stuff. :-)

[0] http://versor.mat.ucsb.edu/


I 100% agree with you in all respects — I don't come from a physics background but I hear you loud and clear. I think the 'why' is deep and psycho-historical in nature:

- we're exiting the "industrial" mindset where everyone is the same making the same products, to a wider topology of knowledge and skills (more and wider horizontals, more and bigger verticals, 'average' profiles become 'scattered'). This clearly drives a need to "learn a little bit of a lot of things" even at expert level.

- The walls and denial you expose here is to me but a symptom of the disease that current academia will either have to heal or die of. Seeing how Khan (and thousands of Udemy's after them, indies) changed the landscape, my money is on a major paradigm shift incoming for academia (it's already done, they just don't seem to know it yet as institutions, most of them). Lots and lots of great teachers around the world almost freely sharing incredible hands-on knowledge and insight.

- Some applied domains with dramatic tension of the demand side (lots of positions to fill) don't have the luxury of elitism and massively adopt "pragmatic" approaches especially in learning. Software dev, programming and tech in general is much like that — the "one liner" installs and 1-page "getting started", all the intelligence solely put into making things intelligible and usable is, frankly, quite humbling and inspiring in that field. A very good side of the SV/Cali culture. So, examples of how to proceed next really do exist.

Now, when I think back on topics that I hurt my head against for months or years, that a simple 20-minute video could 'unlock'... why do we not make it a staple of "teaching" to at least consider 2-3 angles, to make sure everyone's got a fair chance at getting at least one?

- On the topic of hubris and laziness, this is where physics went astray, imho. Too much hubris and not enough laziness. That was back in the 1980s and it took 40 years to realize, probably 10-20 more to "fix", if ever before we build a new system (see above).

That being said,

> Geometric Algebra (GA) was my "lightbulb" moment where I finally understood where Dirac matrices, Pauli matrices, and the like come from and why they have the structure that they do.

YES, please! Geometric algebra seems like the thing that could blow my mind too. I am very visual, to a fault maybe.

Would you have a 'favorite' resource to share? (book, course, youtube, whatever?)


GA is just "strongly typed" vector algebra. It recognises and embraces the inalienable fact that areas and volumes are fundamentally different to vectors and scalars.

The reason Physics "went wrong" is that in 3D space (only!) the mathematics of areas and vectors is coincidentally isomorphic, so it's possible to cheat and use only vectors and scalars and then everything "works". Similarly, volumes and scalars are easily confused as well, and appear to work fine.

GA has no such restrictions and the same formulas work in all dimensions, including high-dimensional or with degenerate metrics. Problems from classical geometry such as finding tangent lines to circles can be trivially extended to finding tangent hyperplanes to hyperspheres, even for very complex problems.

The formalities of GA force you to include things like the square of the unit pseudoscalar in some physics formulas that were accidentally dropped in the traditional form because in 3D this is just "1" and hence easily overlooked. This makes some formulas weirdly difficult to extend to become relativistic, when in fact the problem was just the "weak typing" of vector algebra.

Vector calculus also inherently requires a basis, which is an easy way to get bogged down in the weeds and get confused by issues with the algebra itself instead of the truly "hard" aspects of the problem.

Generally, the "lightbulb" moment for me was that Geometric Algebra has various subsets that are also closed algebras in their own right. For example, the "even" subset of a 3D GA is isomorphic to Quaternions, and the even subset of a 2D GA is basically the same thing as a Complex number. The various "named matrices" are just other subsets of 3D or 4D GAs. Physicists tend to avoid the full general case and simplify their algebras down to the special subset cases, using the historical names and greek symbols. We have to keep the symbols, you see, because otherwise you wouldn't be able to read 2000-year-old ancient greek texts, or... something.
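That subset structure is easy to show concretely. Below is a hand-rolled geometric product for 2D GA, with a multivector stored as its (scalar, e1, e2, e12) coefficients (an illustrative sketch, not a library API):

```python
def gp(a, b):
    """Geometric product in 2D GA; multivectors as (scalar, e1, e2, e12).
    Rules used: e1*e1 = e2*e2 = 1, e1*e2 = -e2*e1 = e12, e12*e12 = -1."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 + a1*b1 + a2*b2 - a3*b3,
            a0*b1 + a1*b0 - a2*b3 + a3*b2,
            a0*b2 + a2*b0 + a1*b3 - a3*b1,
            a0*b3 + a3*b0 + a1*b2 - a2*b1)

e1, e2, e12 = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(gp(e1, e1))    # (1, 0, 0, 0): vectors square to +1
print(gp(e12, e12))  # (-1, 0, 0, 0): the bivector squares to -1, like i

# Even-grade elements (scalar + e12) multiply exactly like complex numbers:
z = (3, 0, 0, 4)     # plays the role of 3 + 4i
w = (1, 0, 0, 2)     # plays the role of 1 + 2i
print(gp(z, w))      # (-5, 0, 0, 10), matching (3+4j)*(1+2j) = -5+10j
```

No separate definition of "i" was needed: the imaginary unit falls out as the unit bivector of the plane, which is the kind of demystification being argued for here.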

University Physics is actually a study of the History of Physical Philosophy. The computer science equivalent would be learning about abacuses for the entire first semester, then progressing to mechanical calculators in the second semester, vacuum tubes in the second year, and so forth, only to briefly touch on transistors by the end of the third year. Postgraduate research students would be finally told about modern silicon chips and software development, but by this point they're so used to wiring up breadboards manually that it's too late to teach them how to do anything properly.

Starting with something elegant like pure functional programming in the first year is how I studied Computer Science, but I only found out about Geometric Algebra existing after I graduated Physics. It's nuts.

For background reading:

David Hestenes is one of the few physicists trying to reformulate physics in terms of GA: http://geocalc.clas.asu.edu/html/Overview.html

There's lots of papers around: http://geocalc.clas.asu.edu/html/GAinQM.html

Wikipedia is an okay starting point, but not amazing: https://en.wikipedia.org/wiki/Geometric_algebra

Enkimute's "Ganja.js" online demos are amazing, unfortunately the source was written by one of those crazy maths people who think that terseness helps readability: https://enkimute.github.io/ganja.js/examples/coffeeshop.html...

Real industrial uses are few and far between, but at least a few folks have discovered that GA is ideal for robotics. Unfortunately, not everyone got the message, and most robotics software libraries are firmly vector/matrix based and have all the usual issues like numerical instability and gimbal lock. Fun stuff.


Hey there. I'm not sure you'll ever read this, but for the record. THANK YOU, so much.

So.. I've been dabbling with GA since we talked and it is an incredible framework!! I now understand your post loud and clear. It's a new dawn of math for me, I really mean that; Clifford is my new prophet (and I think this one's a keeper possibly for life, I don't know and can't imagine something better for the problem space). So much had not clicked with linear algebra for me, so much of matrices was obscure and had no representation in my mind... And GA's base objects and concepts are so, so elegant, and exquisitely intuitive.

Looking for a short conceptual intro I stumbled upon this channel/playlist: https://www.youtube.com/playlist?list=PLpzmRsG7u_gqaTo_vEseQ...

Turns out he's an outstandingly good teacher. Strong recommend.

I'll probably take a more "serious" course/book (with problems!) next — if anyone has a recommendation, please do!

Then make progress by working on actual stuff (I guess Hestenes' reformulations are a great starting point, retracing some of these following his reasoning).

And the ultimate goal would be to reformulate stuff myself, if I could — haha, that would be so great. More realistically, use GA for research in designing models and representations.

___

TL;DR: you brought Math back into my life. We were on a break (but kept calling each other..) for the last decade and a half. GA is really, really strong. Remind me again, why haven't we been teaching children this way for the past century? /s (sigh)

Much thanks again


Wow, thanks so much for all this. I've yet to digest it fully but it's a terrific intro, I love how you worded some of this. You should consider teaching! :)

I can't elaborate much, so just a few "mind blown" moments for posterity:

> GA is just "strongly typed" vector algebra.

That's one hell of a $1B slogan, at least around these parts! :) Shut up and take my money.

> in 3D space (only!) the mathematics of areas and vectors is coincidentally isomorphic, so it's possible to cheat and use only vectors and scalars and then everything "works".

I never realized that... there's indeed a lot of confusion in my mind between those concepts. I fail to see how "different" they're supposed to be; I guess I really need to go back to sane basics in that regard.
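To make the "coincidental isomorphism" concrete, here is a small sketch (my own, using plain tuples): in 3D the bivector a∧b and the cross product a×b carry the same three numbers, but the wedge product generalizes to any dimension while the cross product does not.

```python
# Hypothetical illustration of why areas and vectors can be conflated in 3D.
def wedge3(a, b):
    """Bivector components of a ∧ b on the basis (e23, e31, e12)."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def cross(a, b):
    """Ordinary cross product a × b (a vector, only defined in 3D)."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

a, b = (1, 2, 3), (4, 5, 6)
print(wedge3(a, b) == cross(a, b))  # True: identical components, in 3D only
```

In 2D the same wedge gives a single e12 component and in nD it gives n(n-1)/2 of them, so "number of plane directions" only matches "number of vector directions" when n = 3, which is exactly where the cheat works.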

> Geometric Algebra has various subsets that are also closed algebras in their own right.

Just wow. I love this. I actually need this.

> GA has no such restrictions and the same formulas work in all dimensions, including high-dimensional or with degenerate metrics. Problems from classical geometry such as finding tangent lines to circles can be trivially extended to finding tangent hyperplanes to hyperspheres, even for very complex problems.

So that is the real kicker for me, because it fits my problem space so well. I'm exploring high-dimensional models (basically letting complexity arise from the dimensionality of rather simple/elementary objects, rather than trying to shoehorn complex functions into low-dimensional space in the hope of pretty much randomly finding "better fits" — it's a strong desire to not interpret the data before the fact, to remove bias from modeling itself).

There's interesting research around geometric deep learning too, which seems largely informed by physics, and this is sort of the logical conclusion of that for big datasets.

I think industrial use may rise greatly based on this first take. But it's always a generational thing with culture — it takes ~25 years give or take for those who "grew up with it" to finally become the majority of the workforce and sway things their way. Same with politics — looking at you, academia. As you said, "but by this point they're so used to wiring up breadboards manually that it's too late to teach them how to do anything properly."

> It's nuts.

Yeah, it'll take time, never mind how infuriating in the meantime. But good on you, spreading the word about GA is exactly how we move forward, one post, one topic at a time. Eventually, we get there.


I'm trying to explain quantum physics using a single photo[0] (in Ukrainian, but you will get it). It has good adoption among regular people. It is based on a real physical experiment; just labels are added. BUT scientists go insane when they see it. They argue that quantum physics cannot be explained using a picture, because the only true way to explain quantum physics is with mathematics.

[0]: https://scontent.fiev21-2.fna.fbcdn.net/v/t1.0-9/79956387_10...


Also, let’s agree on a consistent interpretation of quantum theory.

As long as we keep teaching the reckless hand-waving that is the Copenhagen interpretation, we will keep confusing clear-thinking students.


The historical attitude has been, "it doesn't matter, shut up and calculate". There's been quite a bit of push recently to try and nail down exactly what QM means rather than just what it calculates (See: Sean Carroll, etc)

Because the sophons have been sabotaging our experimental results

Came here looking for this comment

But perhaps it is actually because we are located in the Slow Zone.

Great series to all who haven't read it!

How would those who haven't read it understand the reference?

If you're one of those people, I believe it's a reference to the "Three-Body Problem" series.


It is a really good series, even if it is a bit like reading a sci-fi story whilst being beaten over the head with a physics textbook.

I actually quite liked that. Adds something more to the text imo.

That's the best part!

I think that the article raises some interesting points, with some that I agree with and some that I do not.

I think it would have been helpful for the article to put the 40 years of no progress in perspective. Are we looking for progress on the scale of the theories of relativity and quantum mechanics, and so should we be comparing to the timescales between Newton and Einstein/Schrodinger? How should we think about the rate of progression in a ‘mature’ field such as physics? Should it be linear (big discovery every 40 years), faster (new discoveries are faster due to bootstrapping from other discoveries), or slower (diminishing returns)?


What actually is the foundation of physics? The observations or the theories?

We believe, as an assumption (or nearly as a matter of orthodoxy), that there are simple universal laws that govern consistent natural phenomena. One could argue that that is the foundation of our science of physics, in that if it is wrong, the whole thing falls down. But that has not "progressed" and really should not change... which seems consistent with the concept of a building foundation. Building foundations don't move and shouldn't move.

What about theories, which seem to be the focus of her blog post? Well we should be careful to distinguish between our theories and the fundamental laws we think they describe—the map vs the territory and all that. I would really hesitate to call our theories a foundation of physics. For one thing they are known to be provisional; intended to be changeable. That’s not how foundations usually work.

When observations contradict theories, the theories must move. From that perspective one could say that observations are more foundational than theories. Once a piece of evidence is properly observed, it doesn’t change.

And the thing is, we have collected major (I would argue foundational) observations in the last 40 years. We observed the Higgs boson and gravitational waves, and I would call both of those foundational.

That they agreed with existing theory is somehow being taken for a crisis? I guess it’s a crisis if your job is to come up with new theories and you’re lacking reasons to do so.

But there are plenty of mysterious observations yet to be explained. Many of the observations related to dark matter and dark energy fit within a retrospective 40-year time horizon. Call them astronomy if you like, but going back up to my second paragraph, we believe they should be explainable by our physical theories.


> What actually is the foundation of physics?

See Naturalism:

> 1. that there is an objective reality shared by all rational observers.

> 2. that this objective reality is governed by natural laws;

> 3. that reality can be discovered by means of systematic observation and experimentation.

> 4. that Nature has uniformity of laws and most if not all things in nature must have at least a natural cause.

> 5. that experimental procedures will be done satisfactorily without any deliberate or unintentional mistakes that will influence the results.

> 6. that experimenters won't be significantly biased by their presumptions.

> 7. that random sampling is representative of the entire population.

* https://en.wikipedia.org/wiki/Naturalism_(philosophy)

Some other approaches:

* https://en.wikipedia.org/wiki/Philosophy_of_science#Current_...

Basically you have to make some metaphysical assumptions before doing science can even get off the ground. If you believe that reality is an illusion (Buddhism? Hinduism?) then you're less likely to be interested in understanding the world's workings. If you think that things occur for capricious reasons (e.g., pagan gods being the cause of things), then there is no reason to ask "why?". If things happen not because of inherent properties but because of God's Will (Occasionalism), then who can understand the mind of God?

I've heard it argued that science mostly developed in (Western) Christendom because it brought together all of the above assumptions under its Aristotelian world view. If you look at the invention of the telescope in ~1600: it spread over the world within a couple of decades, but most cultures weren't really interested in it.


Another relevant segment of the Philosophy of science article is the realism/anti-realism dichotomy.

I find it an interesting line of reasoning that the current lack of progress is due to the default naturalistic approach whose sole purpose is finding "truth" vs. a more pragmatic, non-realist approach that would have a much more concrete purpose (e.g. solving particular problems). Truth for the sake of it with no practical experiments seems to have been a dead end.


It can be argued that it is up to engineers, not scientists, to develop the practical uses of 'raw' science.

It can. I think the lack of coupling between the two is the crux of the matter here.

The foundations of physics you are talking about sound more like foundations of scientific method, and are not specific to physics. Most people rightly call the theories and corresponding fundamental physical equations to be fundamentals of physics.

Also, you really wrote a bit too much.


The foundations are in the mathematical sciences, beginning with Galileo and Kepler and wrestled with by Descartes until Newton came up with a brilliant work on mechanics. Its focus has been on the mechanical, i.e. numerical and geometrical, principles of motion and not on substance. Studying not what it is but where it will be, giving us greater certainty of phenomena and thus greater self-determination in an unpredictable world.

This intellectual current is now upheld by the engineering sciences. The physicists are too glued to the Bohr Model and a particle universe to concern themselves with a new mechanics in light of the quantum wave phenomena discovered last century.


I dare say the author is right on many points but statements like:

"But for all I can tell at this moment in history I am the only physicist who has at least come up with an idea for what to do."

make it hard for me to share her point of view, especially as these ideas are not mentioned.


I'm not sure how you missed it, but she did give a recommendation. Attempt to resolve inconsistencies as a means of discovery.

This is not a very original recommendation. In fact a major selling point of String Theory is that one can manage to derive both Einstein's Field equations and quantum field theory scattering amplitudes from its equations. This approach is currently the only one that can claim that for itself. Of course people like Hossenfelder never made an effort to understand String Theory in detail, so they can only make first order observations about the current state of the field.

It is also not true that no progress has been made in the understanding of String Theory in the last 40 years and it still seems like the best bet that could eventually generate a fundamental theory. What is missing is still a lot though:

- We don't seem to possess the correct mathematics to develop a non-perturbative formulation of String Theory and there are too many potential string backgrounds that we could expand around.

- It is also hard to derive the matter content of low energy effective actions from most brane configurations.

String theory provided major insight into non-perturbative quantum field theory as well. There are tons of examples; let me highlight one of them: the discovery of the Amplituhedron (Arkani-Hamed et al., 2012) was preceded by the discovery of the BCFW recursion relation (Britto et al., 2005), which in turn was motivated by a relationship between perturbative Yang-Mills theory and the instanton expansion of a certain string theory in twistor space (Witten, 2003).


This is so general and vague that it is useless, all the more so considering that the author has been saying the same thing for the last 10 years (the same timeframe in which her career progression stopped), without a single paper proposing a somewhat-valuable idea. I hate to sound like Lubos Motl (for the cognoscenti) but Sabine's criticism is trite.

It's not general and vague if you're a practicing physicist and know what those inconsistencies are. For example, the Standard Model assumes neutrinos don't have mass, but they do.

> the Standard Model assumes neutrinos don't have mass

No, it doesn't. The original Standard Model from the 1970s did, but then neutrino masses were discovered and the Standard Model was modified to include them.


By which mechanism do neutrinos gain their mass?

AFAIK the seesaw mechanism is the one currently used to model neutrinos in the Standard Model.

I did not miss this statement; for me this does not constitute a recommendation. Indeed I think many of the researchers of whom she is critical could claim that this is what they're doing.

more than a tad unfair to accuse her of not offering suggestions by claiming her suggestion isn't a suggestion.

I think whether it is unfair would hinge on whether it is true that the thing isn’t a “recommendation”.

I have no firmly held answer to the question of whether it is.


But that's like telling me "run faster" if I am complaining that I can't run 100m in 10sec. I would think that almost all theoretical work revolves around resolving inconsistencies (such as between quantum field theory and general relativity). This advice is too generic.

She provides one of her ideas immediately prior to that quote:

”I have said many times that looking at the history of physics teaches us that resolving inconsistencies has been a reliable path to breakthroughs, so that’s what we should focus on."

And given that she has written at least one book on this general subject, I would guess that idea is elaborated in much greater detail there or on her blog.

https://backreaction.blogspot.com


"We just need to innovate"

K.


In this case, though, the idea that we need to resolve inconsistencies is the correct diagnosis and shines a light on the incredible failure of the prevailing institutions.

Take for example the idea that a photon is both a particle and a wave. This dogma has been force fed to students for decades without the glaring inconsistency being resolved, or even pointed to as a thing that needs resolving — something is either a particle or it’s a wave; if we observe properties of both, then we need a physical explanation for how one becomes the other. Something can’t be two distinct things. Logically, all I’m pointing out is that “A is A”.

Yet my perspective that there is an unsolved inconsistency here is considered heretical. “Shut up and calculate” is the reigning dogma.


A photon isn't both a particle and a wave. It's something else that happens to share some features with both.

>> A photon isn't both a particle and a wave. It's something else that happens to share some features with both.

But physicists have given up on figuring out what it IS. They have decided that having math that can predict the outcome of experiments is enough. They're not wrong, but it feels rather unsatisfying. IMHO there have been a couple avenues worth exploring that are being largely ignored.


> But physicists have given up on figuring out what it IS.

It depends on what you mean by that. What they've given up on is trying to find some sort of anthropocentric analogy. It's understandable that this is unsatisfying, since analogies are fundamental to how we understand things. Unfortunately, there's no reason to believe that a good analogy will exist.


>> Unfortunately, there's no reason to believe that a good analogy will exist.

That's no reason not to try. If there IS an objective reality I think it deserves a better description than just the math which characterizes its behavior.


If would be tremendous if someone can come up with one, I'm sure they'd be quite popular, but good analogies don't grow on trees.

We shouldn't be looking for analogies, we should be looking for theories consisting of precise concepts that directly explain what is happening in the universe.

Ultimately, math is the only "explanation" that physics will ever offer.

It's not satisfying because it's not intuitive. It's not intuitive because our billion years of evolution never exposed us to such experiences to observe. If we limit ourselves to only what's intuitive, it will necessarily limit our ability to make new discoveries.

Let's grant the point that there may be some phenomena in the universe the human mind is not equipped to understand (which is a nuanced position but we can ignore those nuances for now):

Why should we assume that the contradictions in contemporary theories are of this special, inexplicable type? Every era has contradictions, before they are resolved... But they'll only ever be resolved if people are trying to resolve them, meaning that they haven't resigned themselves to the idea that our minds haven't been gifted with the capacity to make sense of our experience.


I was thinking about this just last night.

One of my favorite books is called "Architecture of Matter"; it's a history of ideas about matter. One early idea was that matter is made of little tiny bits of stuff and that qualities of these little bits (such as being smooth or spiky) lead to macroscopic phenomena (like spiky bits being acidic).

The problem with this idea (and almost all others) is that it's just pushing the problem down a level: If matter is made out of little bits of matter, what are the little bits of matter made of?

FWIW, the wave-particle duality gets around this self-reflexive problem. Matter is made out of some other kind of stuff. But then, as you say, we still have the essential problem of "wtf is this stuff?" but we don't worry about that so much as long as our math describes the behavior of the stuff.

> They're not wrong, but it feels rather unsatisfying. IMHO there have been a couple avenues worth exploring that are being largely ignored.

What avenues? (Genuinely curious, not trolling.) It seems to me that the ultimate, existential question of what "stuff" actually is, is unanswerable (within the logical/scientific framework.)


>> What avenues?

Pilot wave theory is one. The other was a paper - forgive this explanation - the found equivalence between particle physics and fluid dynamics. Suggesting the objects in physics might be modeled as say vorticies in some kind of fluid (aether). The equivalence was IIRC only to first order, but the work to get there must have been a lot. I'm sure there are others.


Cheers!

Fair point, so then let me reframe the inconsistency as: there's a phenomenon that appears to exhibit properties of both a particle and a wave, but it can't be both. So what is it?

And the criticism of mainstream thought in physics would be: it's wrongly dismissive of my question, and ignores the important job of looking for the answer.


No, the mainstream physics reply is "It's a quantized excitation of the electromagnetic field." since that's a perfectly reasonable reply to a perfectly reasonable question.

A "height field" is a mathematical model of land.

The electromagnetic field is a mathematical model of what?


Electromagnetism is a U(1) gauge theory. If you take the accompanying classical geometry at face value, there's hidden state at each point in space, like some sort of dial. The absolute position of the dial is irrelevant, but the gauge potential tells you how the dial turns as you move from place to place. The electromagnetic field strength is something called curvature of the corresponding 'connection'. The mathematics are a bit involved, but morally speaking, I would say it tells you how the turning of the dial varies across space.
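In symbols (standard gauge-theory notation, added here for reference, not part of the original comment): the potential $A_\mu$ encodes how the "dial" turns from point to point, and the field strength is the curvature of that rule.

```latex
% U(1) field strength as the curvature of the connection A_mu:
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu
% A gauge transformation re-zeroes the dial pointwise but leaves F unchanged,
% matching "the absolute position of the dial is irrelevant":
A_\mu \;\to\; A_\mu + \partial_\mu \lambda
\quad\Rightarrow\quad
F_{\mu\nu} \;\to\; F_{\mu\nu}
```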

So, the EM field describes the strength of the EM force at every point of the mathematical model. The nature of the EM force is unknown. Right?

Let's talk about nature of EM force.

Analogy: look at a typical tropical cyclone. It rotates. Is it rotating because of an unknown property of an air molecule? No. It rotates because our planet rotates, while air molecules are just trying to keep their positions. I.e., it's the rotation of the planet + the inertia of the molecules.

Is it possible that the EM force happens because our local space is moving through global space along a non-linear trajectory, so it's just the non-linear trajectory of local space + the inertia of rotating and vibrating particles?


Not exactly only mathematical, though - you can feel it as you walk across.

I also feel the air field (wind), the height field (land), the light field (photons), and so on.

My point exactly - fields are real, they are not merely mathematical models. (It's just that some words acquire a more precise meaning, being formalized as part of a model, and while it's true that models may contain an additional "scaffolding" that has no analogue in reality, field is not one of those.)

A field is a mathematical abstraction, used in a mathematical model, to represent a physical thing.

Physical things are real. Mathematical abstractions are not. OpenGL is not real either, but it looks very real and accurately predicts reality. In OpenGL, a field is an array, e.g. "float[][][] field;".


That’s like saying, “Cat is a word used in a language to represent a physical thing.”

No, it's like saying that nature is built on a kind of vertex buffer, because OpenGL accurately predicts reality.

How about "it's a thing that does not have an analog at the macro scale in which humans exist, therefore any explanation that does not use terminology from our daily lives will be unsatisfactory"

It has an analog at macro scale: a particle surrounded by a wave. The double-slit experiment was reproduced at macro scale almost a decade ago!

Your perspective isn’t “heretical”, it’s ignorant. The “paradox” has been resolved since 1930 or so, in the form of quantum field theory.

We have a pretty good and intuitive explanation of what a photon (or other particles) is. It is an excitation of a quantum field and quantum fields stem from the underlying symmetries of space and time. It is incredibly powerful, incredibly simple, and incredibly illuminating idea, but it requires you to learn a couple of new words that humans did not need in their vocabulary when they were inventing agriculture. I am very comfortable claiming this is much simpler than any particle-related intuition as it requires way fewer "axioms". The difference is that you arbitrarily happen to have a mental image of what a particle is, but that does not make particles simpler. As a mental exercise, try to rigorously define what a particle is according to your intuition and explain why we should expect them to even exist.

That inconsistency was subject to an immense amount of scrutiny by thousands of scientists for over a hundred years. Albert Einstein received a Nobel Prize for resolving the inconsistency. The theory that ultimately resolved it is called quantum mechanics. That's no longer an inconsistency.

Better examples would be the inconsistencies between general relativity, which demands curved spacetime, and quantum mechanics, which prohibits curved space. This is a very real inconsistency at the heart of modern physics. Many, many people are actively working on solutions for it. Search for Grand Unified Theory or Theory of Everything for more information. Candidate theories include string theory and its derivatives, loop quantum gravity, etc. There are plenty more.

The problem is that designing experiments for these theories is Hard. Big-O Capital H Hard. My flight's about to board, maybe I can expand on it during my layover.

Dark matter is another example. Observations of the speeds and orbits of stars in galaxies and galaxies in galactic clusters are not consistent with our measurements of their masses. Plenty of candidates for dark matter have been and are continuing to be tested.

Candidates include MOND, which supposes that our theories of gravity need to be modified when acceleration is astronomically low. We have designed experiments to support or disprove these theories and most of the results have landed on the side of disproving them. (search for "bullet cluster" or "dark matterless galaxies")
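For a feel of what "modified when acceleration is astronomically low" means, here is a rough sketch (my own illustration with round, assumed numbers, not a fit to any galaxy) of the deep-MOND limit, where the predicted rotation speed stops falling with radius:

```python
import math

# Illustrative constants; M is a rough order-of-magnitude galaxy mass.
G  = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
a0 = 1.2e-10     # MOND acceleration scale, m s^-2
M  = 1e41        # enclosed mass, kg (assumed for illustration)

def v_newton(r):
    """Keplerian prediction: falls off as 1/sqrt(r)."""
    return math.sqrt(G * M / r)

def v_mond_deep(r):
    # In the deep-MOND regime (a << a0), a = sqrt(a_N * a0), so
    # v^2/r = sqrt(G*M*a0)/r  =>  v^4 = G*M*a0: a flat rotation curve.
    return (G * M * a0) ** 0.25

for r in (1e20, 5e20, 1e21):   # radii in metres, roughly 3-30 kpc
    print(f"r={r:.0e}  newton={v_newton(r)/1e3:6.1f} km/s  "
          f"mond={v_mond_deep(r)/1e3:6.1f} km/s")
```

The flat ~170 km/s curve this produces is the kind of behavior observed in real galaxies; the bullet-cluster observations mentioned above are a problem for MOND precisely because there the mass and the lensing signal separate in a way no acceleration law alone explains.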

Another candidate is MACHOs. ("MAssive Compact Halo Object") Basically the universe is teeming with small black holes, brown dwarfs, loose planets unassociated with any star, basically a lot of stuff we can't see. We have designed and executed experiments to search for these, but the results have concluded that there are insufficient such objects to explain the inconsistency.

The third candidate with traction is WIMPs, or "weakly interacting massive particles". Basically theorizing that there are other types of unobserved particles that have mass but do not interact via the electromagnetic force, which makes them very difficult to observe. There was hope that neutrinos could explain all these, especially when it was demonstrated that neutrinos have mass. However, experiments trying to bound the mass of the neutrino have shown they are not nearly massive enough to explain the observations. Experiments to find these particles are ongoing, but have not yet discovered anything we can't already explain. However, there's a problem: maybe they're just too difficult for any experiment to observe. In that case, we might be SOL.

These are just two examples. The rest of science supplies all the others.

The idea that this person is the only person probing inconsistencies is pure hubris. It's not a useful starting point for a conversation, unless the point of the conversation is to talk about how awesome you are and how much everyone else sucks. Which is all I got from this article.


>which demands curved spacetime, and quantum mechanics, which prohibits curved space. //

That's not inconsistent [as you've framed it]: curved time.


Personally I am not convinced a particle is a wave until collapsed. There is nothing in the double slit experiment that definitively proves that. It may be that light particles bounce off something and change trajectory, creating the illusion of there being a wave.

In the double slit experiment, if you rotate the slit 90 degrees, the interference pattern is rotated 90 degrees in the same direction. Doesn't this prove there is no wave involved?

If you make the slits larger, the pattern changes, and then vanishes.

If you make the slits circular, the interference pattern becomes circular.

If you make the slits triangular, the interference pattern itself becomes triangular.

All these things tell me that particles are not waves at any point; they just bump into something and take a different trajectory.

When, in the same experiment, a light detector is placed and the interference pattern disappears, this does not mean the wave goes away and collapses to a single particle; it means the detector produces particles that don't bounce off anything. What if the detector is placed near the particle beam emitter? Has it ever been tried? I don't know. If placing the detector near the emitter makes an interference pattern reappear, then we certainly have no collapsing of any wave.

What if we put the slits very close to the emitter? Do we get the same interference pattern? What if we put the slits further away from the emitter? Does the interference pattern change? If yes, then we certainly have no wave.
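For reference, the wave model does make definite predictions for these variations: the textbook two-slit fringe pattern depends only on wavelength, slit separation, and slit-to-screen distance, not on where the emitter sits. A minimal sketch with illustrative numbers of my own (small-angle approximation, ignoring the single-slit envelope):

```python
import math

lam = 500e-9   # wavelength: 500 nm light (assumed)
d   = 50e-6    # slit separation: 50 microns (assumed)
L   = 1.0      # slit-to-screen distance: 1 m (assumed)

def intensity(y):
    """Relative two-slit intensity at screen position y (small angles)."""
    delta = math.pi * d * (y / L) / lam   # half the phase difference
    return math.cos(delta) ** 2

fringe = lam * L / d                       # predicted fringe spacing
print(f"fringe spacing = {fringe*1e3:.1f} mm")   # 10.0 mm here
print(intensity(0.0), intensity(fringe / 2))     # bright at 0, dark mid-fringe
```

Changing d or L rescales the fringes exactly as observed, which is why the bare "bouncing particles" picture has to reproduce this same mathematics to compete; pilot-wave theory, mentioned elsewhere in this thread, is one framework that does.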

And something else regarding quantum entanglement: how can we be sure that the particles are not created with their properties in such a state that they merely appear entangled when measured? Why do we assume there is communication between the particles on the fly, rather than the two particles having related but not connected properties? We just assume that due to the other assumption that particles are waves that collapse.

Finally, how do we know that matter attracts matter, and that it is not the void that pushes matter into clumps? How do we know that the actual distance between the furthest points in the universe is the same as it ever was, and that simply new positions are created within the same distance? And that these positions are what push matter to clump together?

I'd love to sit down with an honest physicist to research these types of questions, a physicist that cares more about answering these types of questions than hunting for grants and fearing to go against the status quo, but it seems only crackpots are willing to do that.


I have become a bit more pessimistic about the state of discovery. Things have slowed despite the current generation having abundant access to overwhelming compute power and the internet, things that did not exist even 30 years ago. There has never been a better time to collaborate or prove out theoretical models, yet there has been a decrease in needle-moving discoveries.

Maybe the internet is an impediment to doing science. It certainly seems to shorten people's attention spans.

More importantly, the most brilliant minds (and there are still a LOT of them) are working for large companies on unimportant problems instead of doing research. Academia has lost all of its prestige, and companies pay ridiculously more.

I too worry this might be a big issue holding us all back. As well as poor diet and lack of proper exercise.

I've read several of Sabine's blog posts over several months. I think she has very good ends in mind and the courage to push back on corporate/academic inertia ... such inertia comes with any human organization ... On the negative side, she's big on complaining but small on alternatives. She's also a bit too blunt/dismissive of people -- this from a person who also dislikes corporate happy talk. As such, it's not clear whether she'd achieve distinction if she had a large budget, an institution, and a group of experimentalists. An Oppenheimer? No.

Aye, it’s easy to make criticisms (and Zeus knows the price of good science is eternal vigilance), but the easiest and most satisfyingly Ockhamite explanation of the slowdown in physics remains: all its low-hanging fruit is long since taken.

And while there’s no harm in pondering the philosophical origins of the scientific method while debating where to go next, we should take care not to go backwards either, as that way lies fractal navel fluff and bloody string theory.


You should open a diner: TZIF

When I lived in the Middle East, I imagined a chain of dry bars called: TAIT

(Up until about 6 years ago, the Omani and Saudi outlets would have been called TAIW)


Agree!

It’s amusing that you jocularly employ Zeus in this conversation. Galileo, Descartes, Newton, Maxwell, etc. did not have Zeus on their minds; maybe it would be wise for you not to either?


I've followed it for years, but I think lately (last ~year) she's become somewhat clickbait-y in some articles.

In particular the posts where she keeps highlighting the misguided fools who claim gravitational waves aren't real; she wrote a Forbes contribution that made a big deal of the LIGO people not responding to questions over Facebook, saw it as suspicious, and awarded points to the GW denialists as a consequence.

But maybe it's just that I now have a better understanding of what Hossenfelder is writing about, and can see for myself how thin the cases can be.


I believe in gravitational waves, but specifically with LIGO I wish they did a better experiment with the multimodal observations. An easy one would be: for an [n]-month period, blind all of the results and mix them with 3x fake data. Then instruct traditional EM astronomers to search for EM phenomena corresponding to these signals. If only 25% of them correlate, we know the multimodal search is not working.

Eh? You can’t point a telescope at an event that happened (here) years ago! Also, what if it ever came out that you’d wasted good night skies on fake events, preventing other astronomy from being carried out by interrupting it with ToOs (target-of-opportunity observations) that are known to be fake?! Non-transient observers get their time sniped enough already!!! You wouldn’t have any friends left in the funding agency, at best.

Anyway, most events are BH-BH mergers with no EM counterparts, so non-detection of those means nothing.


There is no believing in them. Gravitational waves have been observed, and you can't "believe" or "not believe" in them any more than you can "believe" or "not believe" in electricity.

The observations are still subject to debate. The signals are far below the noise and can only be discerned with fancy statistics, which some groups have failed to reproduce. See Hossenfelder's detailed critique at https://backreaction.blogspot.com/2019/09/whats-up-with-ligo...

This video is better, IMHO (same author):

Have we really measured gravitational waves?

https://www.youtube.com/watch?v=WWTvNlfkvoI


No, it is not a detailed critique, it is little more than a list of rumours. Hossenfelder is satisfied with describing the LIGO collaboration as suspicious and then backing off with "of course, I don't believe this".

If you believe that clickbait I feel bad for you, but it’s too low quality to spend time debunking.


I believe with exceptionally high degrees of confidence that the sun will rise tomorrow. I have this belief because the sun has been observed to rise every day for something like a few million times in a row and I've never once witnessed it not rise. This belief is so strong I could comfortably call myself certain of it, but it's still nevertheless a belief. For most practical purposes, 'certainty' means an exceptionally high degree of confidence, where the margin for doubt becomes too small to bother ourselves with. But true certainty seems like something that can only arise from pure mathematics, not through observations of the world. I am truly certain that triangles have three sides, whereas I am 'merely' exceptionally confident that gravity exists.
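One way to put a number on that kind of inductive confidence is Laplace's rule of succession. This is a toy sketch of my own, not something the comment invokes; the sunrise count is an illustrative round figure:

```python
from fractions import Fraction

def rule_of_succession(successes: int) -> Fraction:
    """Laplace's estimate: after n successes in n trials with no failures,
    the probability that the next trial succeeds is (n + 1) / (n + 2)."""
    return Fraction(successes + 1, successes + 2)

# A couple of million consecutive observed sunrises leave
# only a sliver of rational doubt about tomorrow's.
p_sunrise = rule_of_succession(2_000_000)
doubt = 1 - p_sunrise  # one part in 2,000,002
```

The point matches the comment: the residual doubt never reaches exactly zero, however many confirmations pile up, which is why this remains a (very strong) belief rather than mathematical certainty.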

A person can certainly disbelieve electricity, they would just be foolish to do so.

While I think a person would also be foolish to disbelieve gravitational waves, I think it wouldn’t be quite as foolish, because while we clearly measure gravitational waves, we don’t exactly use them to do (as opposed to “look at”) stuff, so a person’s everyday life is less impacted by disbelieving GW than by disbelieving electricity.


> foolish to disbelieve gravitational waves

How foolish of Newton!


Pardon, I meant given the current evidence. Newton didn’t have that evidence. I thought that was obvious.

And at first I had written “probably foolish” as a hedge, but thought “oh, I hedge the things I say too much, and it makes what I say less pleasant and more difficult to read. I’ll leave it out.”

By “foolish” maybe I really meant something closer to “probably reasoning incorrectly”.


The exact point of my comment is that the evidence is not as strong as the popular conception claims, as there has been no independent confirmation via a multimodal technique. To date there has been a single, unreproduced, non-independent multimodal observation... A good start, to be sure, but folks like yourself are still saying that it would be foolish to disbelieve in GWs.

"...mindless production of mathematical fiction..."

This derisive comment betrays the author's own hypocritical stance: claiming physicists are too close-minded while simultaneously ridiculing the role of advanced mathematics in formulating new physics hypotheses, arbitrarily declaring them mindless fiction.


Nope. The criticism is of searching for new physics by chasing mathematical elegance, instead of trying to explain observations.

Actually, Hossenfelder's great fight has been with the concept of "naturalness", a fight that has now been won by the LHC killing off all the theories based on that concept.

And it wasn't ever so simple as "chasing mathematical elegance instead of trying to explain observations", the problem has been that there was a theory that could explain almost perfectly everything within a certain region of physics, but can't easily be extended.

Thus you work on crazy schemes to extend the existing theory (all the sensible ones already having failed), or you are forced to make an entirely new framework, and that takes a lot of work before it's finished enough to even reproduce the results of the limited theory.

If you take the second route, you are very vulnerable to the "chasing mathematical elegance" slander, but it's not like the other guys are doing any better: there aren't actually any unexpected observations that need explaining within the reach of the existing theory.


>"the problem has been that there was a theory that could explain almost perfectly everything within a certain region of physics, but can't easily be extended."

Sounds a lot like overfitting.


Sounds more like a formula that needs more terms.

Compare the classical physics formula for momentum, p=mv, vs the relativistic formula, p=γmv. γ is almost 1 at low velocities; it only starts jumping up toward infinity as we get close to c.

The point being that the classical formula is pretty good in its zone of low velocities, but as soon as you get too far outside the implicit term's "constraints" the formula breaks down, and you need to add more to it to get it working for both low and high velocities. Which doesn't sound easy.
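To make the comparison concrete, here is a small sketch (function names and sample speeds are my own, chosen for illustration) showing how γ stays near 1 at everyday speeds and blows up near c:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_gamma(v: float) -> float:
    """Lorentz factor: gamma = 1 / sqrt(1 - v^2 / c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def relativistic_momentum(m: float, v: float) -> float:
    """p = gamma * m * v; reduces to the classical p = m * v when gamma is ~1."""
    return lorentz_gamma(v) * m * v

# At a car's speed the correction is utterly negligible...
gamma_car = lorentz_gamma(30.0)       # ~1 + 5e-15
# ...but near c the classical formula is off by a large factor.
gamma_fast = lorentz_gamma(0.99 * C)  # ~7.09
```

Plugging in 30 m/s vs 0.99c makes the "zone of validity" visible: the classical and relativistic momenta agree to many decimal places at car speeds, but differ by a factor of about 7 near light speed.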


Adding more terms sounds like more overfitting.

It sounds like that only because scientists reformulate their models in terms that people are familiar with. Actual physicists don't work in terms of the gamma correction factor; they work in tensor fields that don't look anything like conventional arithmetic.

But they can pare all that down to something expressed in terms people are familiar with. And that has the bonus purpose of helping them understand why the familiar terms were familiar: the "correction factor" is small under circumstances we encounter, and only becomes large under circumstances we rarely do.

If that intrigues somebody enough to learn the actual physics, they'll encounter a completely different and more-encompassing formulation which looks not at all like overfitting. One that turns out to be more elegant, in fact, cramming more information into less notation. But it's information nobody needs until they're doing fairly advanced physics, so we're not going to be teaching it in elementary school any time soon.


Don't worry about it, I think you're talking to a NN stuck in a local optimum around the term "overfit".

I don't see how what you wrote could possibly address concerns with adding parameters to the model until it explains current data so perfectly that it cannot generalize to future or other data.

This is not a "theory" problem, it has to do with matching the theory to observations.


If that’s what it sounds like then that is a failure of my description: the problem is nothing at all like overfitting, and casting it in that light would be pointless.

How many free parameters do "the foundations of physics" allow?

I was concerned a few years back when a "blip" at CERN resulted in theoretical physicists publishing 300+ different theories to explain it in a short period of time. All of these theories were presumably consistent with "the foundations of physics". And I guess that "blip" got rejected as not something worth explaining anyway.

Sounds like post-hoc overfitting to me.


Calling it ”overfitting” is actually you overfitting your model of how theory works to this one concept from machine learning.

The actual events of the 750 GeV peak are much closer to neural networks hallucinating, and the diversity of models created doesn’t strike me as evidence of overfitting...

Anyway, you clearly didn’t trust my summary, and I no longer trust you have an honest interest in learning more, so I’ll stop here.


If you can come up with 300 different explanations for a random blip, your theory is obviously very flexible. You are probably overfitting.

If your solution to the problem of not being able to generalize your model to new/other data is to add more parameters. You are probably overfitting.

These are the hallmarks of overfitting.
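As a toy illustration of those hallmarks (entirely my own construction, using numpy; it has no bearing on which side of the physics argument is right): fit a many-parameter polynomial to a few noisy points drawn from a simple linear law, and it matches the training data nearly perfectly while doing worse on fresh data from the same process:

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" process: a simple linear law y = 2x, observed with noise.
x_train = np.linspace(-1, 1, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, x_train.size)

# Fresh draws from the same process, never seen during fitting.
x_test = np.linspace(-1, 1, 50)
y_test = 2.0 * x_test + rng.normal(0.0, 0.1, x_test.size)

def fit_error(degree):
    """Mean squared error on training and held-out data for a polynomial fit."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

# The 10-parameter fit passes through every training point (train error ~0),
# yet generalizes worse than the 2-parameter line: the signature of overfitting.
lin_train, lin_test = fit_error(1)
big_train, big_test = fit_error(9)
```

The disputed question in the thread is whether the 300 collider models are analogous to the degree-9 fit; the sketch only shows what the statistical phenomenon itself looks like.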

Sorry that you cannot learn from others and only expect them to learn from you.


Look at this: theorists come up with something like 300 different models, and you declare they all used the same theory to do it? Of course they didn't! Why would you assume they all did the same thing?!

Further, of the various frameworks used, many will have been created by imposing some further symmetry on the standard theory, in effect decreasing the number of free parameters!


Can you point out one that was inconsistent with the foundations of physics? I suppose it is possible some were that radical.

So now you demand that a new theory shouldn't describe what the old theory describes? Just what do you put into the concept of "overfitting"?!

I find it hard to square her criticism of "chasing mathematical elegance, instead of trying to explain observations" with her criticism of the large and expensive colliders. The way I see it, the whole reason the expensive experiments are needed is that the current theories do explain all the 'local' and (comparatively) low-energy observations that we can make here on earth in ordinary conditions. We know that there are discrepancies between our theories and the large-scale / high-energy processes that we can observe in astrophysics, but if we actually want to probe and explore these discrepancies between theory and observation, then we need to produce some discrepant events to look at... which we can't do without the very expensive experiments that she shuns; e.g. the Higgs boson is not going to show up in a low-power particle accelerator, no matter how smart physics you do.

First, I disagree that physics, and its foundations, have not changed. Incrementalism is common in mature areas of study, but the cumulative effect is still felt.

Second, I am reminded of Thomas Kuhn's The Structure of Scientific Revolutions [0]. This work described exactly this state, with historical examples of the cycles whereby progress exhibits peaks and valleys: periods wherein little monumental progress is made, followed by brief frantic periods of discoveries, often stemming from the fertile ground laid by those who worked in plodding toil.

And so I am more inclined to believe we are in such a trough at the moment, and not even a particularly deep one. Various avenues of thought & experiment show ample potential to thrust us forward into one of Kuhn's scientific revolutions.

[0] https://www.amazon.com/Structure-Scientific-Revolutions-50th...


Consider a Venn diagram. 3 circles : the observable, the understandable, the communicable. Intersecting at a chubby triangle. That's physics. And it's pretty darn small compared to the rest of the diagram.

Maybe the triangle is exhausted. All mapped out. The limits of the method have been met. Time to find a new method.

Maybe?


Do we really want something fundamentally important as the foundations of physics to progress or change quickly?

History of science and the philosophy of science has shown that the foundations of sciences progress a little here and a little there until these "little progresses" gain enough momentum to create a paradigm shift. And we only recognize these "little progresses" in hindsight after the paradigm shift.

Technological advances also tend to progress science. We tend to believe that advances in science lead to advances in technology but historically, it's the other way around.

More likely than not, there are many "little progresses" being made toward an eventual paradigm shift, but until it happens, we won't recognize how important those "little progresses" are.


I really really want a reason to not dismiss this as vapid demagoguery running on the "Woman scientist challenges predominantly male establishment on stagnant paradigms" ticket because this is absolutely what it reads like.

"But for all I can tell at this moment in history I am the only physicist who has at least come up with an idea for what to do. "

This is one heavy claim (two actually), is there some place where she elaborates what that idea is in terms more specific than "resolving inconsistencies" and "more theorists" ?


I think she’s bitter and aggrieved but I’ve never seen evidence of her playing the gender card.

Not sure why the original response got flagged, but: I think that her being a woman in this context gives this kind of output more amplification than if she weren't, while at the same time diminishing her opportunities for honest collegial feedback. That's not her fault or an advantage she's deliberately taking, and not an advantage at all for a researcher, though maybe for an author/pundit.


Unpopular opinion:

The last time we had progress in the foundations of physics, we just got even more powerful world-destroying nuclear weapons. Maybe it's just too dangerous to advance physics outside of deeply classified government programs. In order to keep new physics from destroying the earth, funding is diverted to make-work projects for physicists working in cosmology and string theory that will never have practical significance.


Sometimes when I’m working on a coding or troubleshooting problem, I get stuck iterating on an issue when really I need to step back and think about solving things a different way. My impression is that this is what she’s advocating the physics community do. I think her overall tone is too negative and is turning a lot of people off, but overall I think she is a needed voice, just to make sure we’re on the right track.

Sabine has a great YouTube channel, too => https://www.youtube.com/user/peppermint78/videos.

On the topic of the LHC, particle physics, and future colliders, I like this video => https://www.youtube.com/watch?v=Go2TaEUQpF4.


Doesn't it seem likely that there are important natural phenomena of complexity that simply exceed human ability to comprehend them no matter how long we work to understand them and no matter what evidence we stumble upon? That that evidence will always remain mysterious until we first develop artificial intelligence (for example) capable of interpreting it?

Actually, string theory arose as a way of doing exactly what this person requests. It was noticed that quantum field theory and gravity do not go together, so an attempt was made to do something about this. So, it did not really work out? Well, the thing with this kind of science is that it is unknowable beforehand what you are or are not going to find.

The claim of sceptics along these lines is that string theory has produced nothing in 50 years and some string theorists appear to be in denial of that.

In science, as in pretty much any other discipline, you should always weigh someone's opinion by their believability / credibility. The credibility of someone's opinion on something is a function of their knowledge / experience of the subject and things related to it. To a first approximation, most sceptics have close to zero credibility making claims about string theory, as they have not produced scientific output remotely comparable to that of the proponents of string theory. This is simply because people like Witten tower far above most other working theoretical physicists.

This sounds suspiciously like argument from authority. You will be more convincing if you address the argument, not the alleged experience of those making it.

Please see my response to the previous objection in the thread (the sibling comment to yours) requiring specific examples of progress which would silence the critics - it remains unanswered.


Said skeptics are blissfully unaware of the contributions that working on string theory has made to other branches of physics and mathematics. To say it's produced nothing is to admit ignorance.

In the context of this article - physics progression - what specific theoretical discoveries or predictions has it produced that can be verified or disproven via experiment in the foreseeable future?

Point was, math is one thing, reality is another.

From an outsider's point of view (i.e. mine), it looks like physics has become too enamored with mathematics, "thinking" that reality can be totally described by mathematics. Which brings us to one of the author's points: physicists (and I guess most scientists) nowadays dismiss things like philosophy of science or epistemology; they just continue head-on with their "quest for knowledge", not realizing that for almost half a century now nothing big has been "found out".

There's nothing wrong with the way math is used in physics. Consistency is a good thing; whether calculations possess explanatory power is another question.

I didn't say the maths is wrong, I just said it has reached its limits. And "no big discovery" for the last 40 years kind of shows that some limits have been reached.

> Consistency is a good thing

Lots of close-but-ultimately-ineffectual "epistemic" systems were internally consistent, but in the long run they proved "deficient" (in that other, more efficient systems took their place). I confess I've never read Thomas Aquinas's work (to give just one example), but I'm pretty sure his "view of the world/reality" system is pretty consistent; I don't think there are any internal contradictions in his writings. The problem is that his internally consistent system wouldn't have allowed us to build combustion engines or modern electronics, so we have had to come up with other internally consistent "epistemic systems" that proved more efficient (because they allowed us to build and reason about combustion engines and modern electronics).

In the end, and in the great scheme of things, I don't think this theoretical-physics roadblock will be of any great importance to the general public; it looks like people are content with what they can already purchase based on past physics-related discoveries. Yeah, traveling through galactic wormholes or knowing for certain whether the Universe is finite would be nice things to have, but people just don't care, and there's nothing wrong with that.


If the past is any guide, whenever reality isn’t adequately described by mathematics we just come up with new mathematics.

I may be in the wrong here, but afaik mathematics has not "re-invented" itself since WW2; to take it a step further, I fail to see how we're doing different mathematics compared to what Newton and Leibniz put on the table.

Even if we look at the "greatest" post-WW2 maths result, Fermat's Last Theorem, I don't see any new reality-related insights that it has brought us. And I'll go another step further and say that even if we were to someday prove the Riemann hypothesis, I don't see how it would fundamentally bring new insights regarding "reality"/the physical world.

If anything, I dare say that in a certain way maths has tainted the physical world for us: it has made us believe that in the same way mathematics is "homogeneous", the physical world is too; if maths has numbers and "units", and if (1002 - 1000) = (2002 - 2000), then it also means that in the physical world we have homogeneous "stuff".

This is why physics has started using mystical-like language such as "elementary particles", which are seen as the "foundation of the physical world", with the implicit premise (if I'm wrong here, please correct me) that given a certain "elementary particle" (say, a boson), the boson close to it and the boson at the other "side" of the Universe are pretty much the same thing, almost identical, the same way the mathematical difference I mentioned above is the same, or the way two parallel lines are "the same".

Basically almost all the theoretical physicists have become Platonists by embracing mathematics no-questions-asked, when in fact they should have remained closer to Hume. And when reality hits them in the face pretty hard they resort to even more mysticism by "inventing" concepts like dark energy and the like.

Later edit: I see that this "homogeneous reality" theory even has a name, the cosmological principle [1], and as a close-enough Hume follower, Karl Popper was quick to dismiss it. Reading it, you have to wonder what those physicists had in mind when they wrote it down:

> Although the universe is inhomogeneous at smaller scales, it is statistically homogeneous on scales larger than 250 million light years.

like, why is 250 million light years ok and 240 million not? To say nothing of the fact that the "infinitely small" (the mystical-like elementary particles I mentioned above) is ignored completely in this discussion; it is also probably seen as "statistically the same". As I said, this "statistical sameness" has made us believe that more than half of the Universe we know of (68%, to quote Wikipedia) is made of the mother of all mystical thingies, "dark energy".

[1] https://en.wikipedia.org/wiki/Cosmological_principle


First off, mathematics did actually reinvent itself in the late 1940s, with the discovery of Category Theory, which, despite sometimes being called “abstract nonsense”, has found its way into theoretical physics. Further, there is nothing mystical at all in any of the concepts of modern physics, no more anyway than in the concept of, say, the atom. It is actually math that should be credited with removing the shroud of mysticism perceived by some, perhaps even many, of the uninitiated. (Physics is not an exception here: some people still look at a working computer as a miracle, for example.) And the homogeneity at scales at which there are just too many things to allow for much diversity should, too, be seen as one of the manifestations of the absence of any true mystery in the universe.

> there is nothing mystical at all in any of the concepts of the modern physics

Nothing mystical, but there certainly are deep questions, which is why we have the various interpretations of QM.


It has worked several times in the history of physics, though.

Indeed, theoretical physics has long ago stopped providing foundational explanations. It became strictly what essentially it had always been - a calculational tool. Whether calculations possess explanatory power is a question of psychology and sociology.

Sometimes the impossible takes us a little longer.

US school teacher, then geologist, J Harlen Bretz spent as much time as he could 'out in the field'. It was as a result of -extensive- observations that he arrived at his 'outrageous' Missoula Floods hypothesis. He spent 40 years defending his interpretation; he remained 'out in the field' most of that time.

His critics had spent -very- little time in the field. They knew he was wrong. In 1979, he was awarded Geology's top prize.


It stagnated when they started doing NHST (null-hypothesis significance testing), i.e. checking for a difference from "background", instead of collecting data and comparing it to the predictions of various theories to distinguish between them. The same thing has destroyed every field of research that adopted this approach.

Imagine if Einstein just predicted that the position of the stars would appear to be different during the eclipse, rather than displaced by an exact amount. The last 40 years has seen physics become more like the former (bad) than the latter (good).

“They do not think about which hypotheses are promising because their education has not taught them to do so.”

Pretty rich coming from someone who’s written a bunch of papers on doubly-special relativity and similarly unpromising hypotheses.


I do not agree that physics has not progressed. I do, however, believe there may be some dogmatic contamination in some processes that may have stalled progress. Gravity, for example: big G or little g, and why?

recent stuff on dark energy:

https://news.ycombinator.com/item?id=21974117

(by the blog's author Sabine Hossenfelder) https://youtu.be/oqgKXQM8FpU

(more on the confidence tldr Λ>1 still) https://youtu.be/7UNLgPIiWAg


This is my pet topic, but:

How can we expect to make progress in our theories if we haven’t even agreed on a consistent interpretation of today’s quantum theory?


Because explanatory stories aren't strictly necessary: While it certainly helps if we have them available because they allow us to reason intuitively, the hallmark of science is the predictive model.

In contrast, a bunch of explanatory stories lacking an underlying predictive model is what we call pseudo-science.


An "interpretation of a theory" is not an "explanatory story", it's a way to map the equations ("shut up and calculate") to reality.

It's not a contested notion that every theory needs an interpretation.


I would phrase things differently:

The way to map a theory to reality is via its predictions. The interpretation is how the theory fits into my mental model of reality.

In principle, reality could be strange enough that we are incapable of holding a good model of reality in our brains that evolved to avoid getting eaten by lions instead of doing quantum mechanics.

I certainly hope that's not the case, but neither can I rule it out.


I think you and I mean different things by “interpretation”.

> The way to map a theory to reality is via its predictions.

Agree, and I call this mapping an “interpretation”.

(I thought this was the general usage of the word in the scientific context, but I may be wrong.)

