
The models of quantum mechanics have already withstood experiments to a dozen decimal places. You aren't going to find departures just by banging around in your garage; you just can't generate enough precision.

The only way forward at this point is to start with the model and design experiments focusing on some specific element that strikes you as promising. Unless you're staring at the model you're just guessing, and it's practically impossible that you're going to guess right.



>You aren't going to find departures just by banging around in your garage

This kind of rhetoric saddens me. Someone says "design an experiment" and you jump to the least charitable conclusion. That people do this is perhaps understandable, but to do it and not get pushback leads to it happening more and more, to the detriment of civil conversation.

No, the experiment I had in mind would take place near the Schwarzschild radius of a black hole. This would require an enormous effort, and (civilizational) luck to defy the expectations set by the Drake equation/Fermi paradox. It's something to look forward to, even if not in our lifetimes!


> No, the experiment I had in mind would take place near the Schwarzschild radius of a black hole

I think the GP was thinking of more practical experiments, not science fiction.


I mean, you did just suggest that classical QM can be supplanted by your heavily underspecified finite(?)-state model, for which you provide essentially no details. You must admit that's pretty crank-y behaviour.


This is one of the reasons I believe science and technology as a whole are on an S-curve. This is obviously not a precise statement and more of a general observation, but each step on the path is a little harder than the last.

Whenever a physics theory gets replaced, it becomes even harder to make an even better theory. In technology, low-hanging fruit continues to get picked and the next fruit is a little higher up. Of course there are lots of fruits, and sometimes you miss one and a solution turns out to be easier than expected, but overall every phase of technology is a little harder and more expensive.

This actually coincides with science. Technology is finding useful configurations of science, and practically speaking there are only so many useful configurations for a given level of science. So the technology S-curve is built on the science S-curve.


I don't think this is strictly true. Rather, the problem seems to be that we, at some point, invariably assume the truth of something that is false. That makes it really difficult to move beyond, because we're working off false premises, and relatively few people are going out of their way to go back and challenge/rework every single assumption, especially when those assumptions are supported by decades (if not centuries) of 'progress.'

An obvious example of this is the assumption of the geocentric universe. That rapidly leads to ever more mind-bogglingly complex phenomena like multitudes of epicycles, planets suddenly turning around mid-orbit, and much more. It turns out the actual physics is far simpler, but you have to get past that flawed assumption.

In more modern times, relativity was similar. Once it became clear that the luminiferous aether was wrong, and that the universe was really friggin weird, all sorts of new doors opened for easy access. The rapid decline in progress in modern times would seem most likely to suggest that something we are taking as a fundamental assumption is probably wrong, rather than that the next door is just unimaginably difficult to open. This is probably even more true given the vast numbers of open questions for which we have de facto answers, yet those answers seem to defy every single test of their correctness.

---

All that said, I don't disagree that technology may be on an S-curve, but simply because I think the constraints on 'things' will be far greater than the constraints on knowledge. The most sophisticated naval vessel of modern times would look impressive but otherwise familiar to a seaman of hundreds or perhaps even thousands of years ago. Even things like the engines wouldn't be particularly hard to explain, because they would have known full well that a boiling pot of water can push off its top, which is basically 90% of the way to understanding how an engine works.


It's true that Ptolemaic cosmology stuck thinkers in a rut for a very long time; but what got us out of that rut was observation (and simplification). Copernicus saw that heliocentrism led to a simpler model that fit observation better (ironically, he wanted to recover Ptolemy's perfectly circular orbits!). In turn, Kepler's perfectionism led him to ditch the circular-orbit idea and yield the first accurate description of orbits as ellipses. Yes, transgression against long-held belief was necessary to move forward, but in every case the transgression explained observation. Transgression by itself is undesirable. In fact, transgression unmotivated by observation is what powers the dark soul of the "crank", who is at best a time-waster and at worst a spreader of mental illness.

Even Einstein did not produce special relativity, for example, out of whole cloth. He provided a consistent conceptualization of Lorentz contraction, itself the result of observing discrepancies in the motion of Jupiter's moons. The same could be said of the photoelectric effect, the ultraviolet catastrophe, and QM.

All this to say that your statement "The rapid decline in progress in modern times would seem most likely to suggest that something we are taking as a fundamental assumption is probably wrong" is unsupported. Nothing could be more popular than questioning fundamental assumptions in science today!

It could very well be that, as Sean Carroll puts it, we really know how everything larger than the diameter of a neutron works! Moreover, we know that even if we find strangeness at tiny scales, our current theories WILL remain valid approximations, just like Newtonian mechanics is a valid approximation of special and general relativity. The path to progress will not happen because a rogue genius finds something everyone missed and boldly questions long-held assumptions. Scientific revolution first requires an observation inconsistent with known models, but even the LHC hasn't given us even one of those. There is reason to think that GR, QM, and the standard model are all there is...until we do some experiments near a black hole!


> Copernicus saw that heliocentrism led to a simpler model that fit observation better.

That's not true, he didn't.

The geocentric model of the time was a better fit to the data than the Copernican model. What the Copernican model had was simplicity (at some cost to observational fidelity).

Making the heliocentric model approach (and eventually exceed) the accuracy of the geocentric model took lifetimes of work by many people.

As a kinematic model (a description of the geometry of motions as observed from Earth's reference frame), the geocentric model is still pretty darn accurate. There's a reason why: compositions of epicycles are a form of Fourier analysis -- they are universal approximators. They can fit any 'reasonably well behaved' function. The risk, and it's the same risk with ML and deep neural nets, is that one (i) could overfit, and (ii) could end up with a model that has high predictive accuracy without being a causal model that generalises.
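
A toy sketch of the point, assuming nothing fancier than a least-squares fit of harmonics (Python/numpy; the names are made up for illustration):

    import numpy as np

    t = np.linspace(0.0, 2.0 * np.pi, 400)
    # An "observed" motion that no single circle generates: a square wave.
    observed = np.sign(np.sin(t))

    def epicycle_fit(signal, t, n_epicycles):
        # Deferent (constant term) plus n_epicycles harmonics, fit by least squares.
        cols = [np.ones_like(t)]
        for k in range(1, n_epicycles + 1):
            cols.append(np.cos(k * t))
            cols.append(np.sin(k * t))
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
        return A @ coef

    for n in (1, 5, 50):
        err = np.mean(np.abs(observed - epicycle_fit(observed, t, n)))
        print(n, "epicycles -> mean abs residual", round(err, 3))

The residual keeps shrinking as you add epicycles, which is the universal-approximation property in action -- and also why a good fit alone never told anyone whether the Earth actually moves.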

The heliocentric model was proposed much, much earlier than Copernicus, but the counterarguments were non-ignorable. Reality, it turned out, was very surprising and unintuitive.


Truth be told, I don't know much about Copernicus. He may indeed have been right but for the wrong reasons! If so, he's a very good example against my point that observation must precede successful revolution. It seems strange that the Catholic church took him so seriously if his claim was supported by his enthusiasm and not observation. It's definitely something I'd like to learn more about - any book recommendations?


This history is absolutely fascinating. Let me find a blog post by Baez that covers a lot of that history.

I don't think this history says anything against your point -- sometimes the time is just not right for the idea -- and even classical science can be very unintuitive and weird, so much so that common sense can seem like a very strong counterargument against what eventually turns out to be the better model.

I of course learned this over many books, but my mind blanks on which one to suggest. I think biographies of Copernicus and Kepler would be good places to start.

Edit: you may find this interesting:

https://news.ycombinator.com/item?id=42347533

HN, do you know what happened to John Baez's blog that listed his multipart blog posts? They are a treasure trove that I do not want to lose. Azimuthproject too seems to have disappeared.


As a tangential hit on this issue, the relationship between the Catholic Church and science [1] is an interesting read. It's nowhere near as antagonistic as contemporary revisionist takes would suggest. In particular, the most famed example of this is Galileo (whose name is mentioned no less than 146 times on that fairly short page...), yet that was far more about interpersonal issues than about his concepts being an affront to theology. He wrote a book calling the Pope (at the time very much one of his supporters), through a barely veiled proxy, a simple-minded idiot. Burning bridges is bad enough, but burning one you're standing on is lunacy.

If one does genuinely believe in a God then the existence of science need not pose a threat to that, since there's nothing preventing one from believing that God also then created the sciences and rationality of the universe. The classical 'gotchas' like 'Can God create a stone so heavy that he could not lift it?' were trivial to answer by simply accepting that omnipotence does not extend to things which are logically impossible, like a square circle.

[1] - https://en.wikipedia.org/wiki/Science_and_the_Catholic_Churc...


An LLM could have kept the geocentric theory alive for another hundred or more years! Awesome.


I especially like your last paragraph. Even if our fundamental assumptions are wrong, current theories still work very well within appropriate bounds. And those bounds basically contain all practical scenarios here on earth. That's a big reason why it's hard to make progress on string theory, because we can't create scenarios extreme enough here on earth to test it.

So even if our fundamental assumptions are wrong and some new theory is able to explain a bunch of new stuff, chances are it won't impact the stuff we can practically do here on earth, because scientists have already been doing the most extreme experiments they can, and so far progress is still stalled on fundamental physics.


Copernicus and Kepler did interpretation, not observation. They explained observations, but geocentrism explained observations too, so heliocentrism wasn't unquestionably superior.


Heliocentrism from its earliest formulation was pretty bad for many reasons, including, as you mentioned, the desire to maintain circular orbits, as well as uniform velocities, epicycles, and more. You could easily pick a million holes in heliocentrism to 'disprove' it. And the geocentric view, as convoluted as it was, was observably accurate and predictive, with 'holes' being plugged by simply having the entire dysfunctional model absorb them -- e.g. by treating retrograde motion as a natural phenomenon, and otherwise just adding more epicycles.

Heliocentrism was most fundamentally driven forward by somebody with extremely poor interpersonal skills (which, much more than the theory itself, is the reason he spent his final days under house arrest), pushing ahead on his own somewhat obsessive bias.

Similarly with relativity. I have no idea what you mean by a 'consistent conceptualization' of Lorentz contraction, but length contraction was a completely ad hoc explanation for the Michelson-Morley experiment. Its correctness was/is more incidental than anything else. Einstein did not cite Lorentz (or anybody for that matter), and I do not think that was unfair or egotistical of him.

--

I'm also unsure of what you're referencing with Sean Carroll, but I'd offer a quote from Michelson, of the Michelson-Morley experiment, saying essentially the same thing: "The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote.... Our future discoveries must be looked for in the sixth place of decimals."

So convinced was Michelson that the 'failure' of his experiment was just a measurement issue that he made that comment in 1894, nearly a decade after his experiment and shortly before physics and our understanding of the universe exploded in revolutionary fashion thanks to a low-ranking patent inspector.


Max Planck famously said, "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."


Now we know how to prevent it: popularize ideas like "physics is mathematics", "shut up and calculate", "it's useless philosophy not worth thinking about", "nobody can understand it, so it's useless to even try". These also make a nice excuse for ignorance.


>I have no idea what you mean by a 'consistent conceptualization' of Lorentz contraction, but length contraction was a completely ad hoc explanation for the Michelson Morley experiment. It's correctness was/is more incidental than anything else. Einstein did not cite Lorentz (or anybody for that matter), and I do not think that was unfair or egotistical of him.

In "On the Electrodynamics of Moving Bodies"[1] Einstein checks his derivation against Lorentz contraction. It's on page 20 of the referenced English translation. Lorentz' model was ad hoc, E derived it with only 2 postulates (equivalence principle; c invariance). Lorentz was indeed cited, and the cite is useful to connect E's theory to real-world observation. This is true whether or not you want to get pedantic about the meaning of "cite" vs "reference".

1 - https://www.fourmilab.ch/etexts/einstein/specrel/specrel.pdf Originally "Zur Elektrodynamik bewegter Koerper"


> The rapid decline in progress in modern times would seem most likely to suggest that something we are taking as a fundamental assumption is probably wrong, rather than that the next door is just unimaginably difficult to open.

We actually know we have:

Bell’s inequality tells us that the universe is non-local or non-real. We originally preferred to retain locality (i.e., the Copenhagen interpretation) but were later forced to accept non-locality. But now we have a pedagogy and machinery built on this (incorrect) assumption -- which people don’t personally benefit from re-writing.
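
To make "tells us" concrete, the usual quantitative statement is the CHSH form of the inequality: with correlation functions E(a,b) for detector settings a, a', b, b',

    S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
    |S| \le 2 \ \text{(any local hidden-variable model)}, \qquad |S| \le 2\sqrt{2} \ \text{(quantum mechanics, Tsirelson's bound)}

and experiments repeatedly find S above 2, which is what rules out the local-and-real combination.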

Science appears trapped in something all too familiar to SDEs:

A technical design choice turned out to be wrong, but a re-write is too costly and risky for your career, so everyone just piles on more tech debt — or modern epicycles.

And perhaps that’s not a bad thing, in and of itself. E.g., geons were initially discarded because the math didn’t work out -- but with the huge asterisk that they might still be topologically stabilized. But the math there is hard, and so it makes sense to keep piling onto the current model until advances in modeling (e.g., 4D anyons) allow that idea to be explored again.

Similar to putting off moving tech stacks until someone else demonstrates it solves their problems.

But at least topological geons would explain one question: why does space look like geometry but particles look like algebra?

Because topological surgery looks like both!

- - - -

> clear that the luminiferous aether was wrong

Another interpretation is that the aether exists, but we’re also made of aether stuff — so we squish when we move, rather than rigidly moving through it (as per the theory tested by Michelson-Morley). That squishing cancels out the expected measurement in MM. LIGO (a scaled MM experiment) then works because waves in the aether squish and stretch us in a detectable way.

Modern theories are effectively this: everything is fields, which we believe to be low-energy parts of some unified field.


It's an S-curve only so long as intelligence doesn't increase exponentially as well. What would the story look like if an ASI existed?


It's just accelerated. AI is bound by physics just like everything else.

The S-curve is really about fundamental limits. Let's say ASI helps us make multiple big leaps ahead, I mean mind-blowing stuff. That still doesn't change the fact that there must be a limit somewhere. The idea that science and tech are infinite is pure science fiction.


Exponential increases in intelligence don’t imply that the universe is more complex to compensate.


The first turn in an S-curve can easily look like an exponential. ASI has physical limitations, so I don’t see why it wouldn’t take an S-curve as well, although at a much different rate than human intelligence.


To be fair, quantum mechanics was invented by guessing that energy might be quantized. It just happened to model the universe well.
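
For what it's worth, the guess in its standard form: Planck assumed an oscillator of frequency \nu can only hold energy in lumps of h\nu, which gives

    E_n = n h \nu, \quad n = 0, 1, 2, \ldots
    \langle E \rangle = \frac{h \nu}{e^{h \nu / k T} - 1}

The average energy reduces to the classical kT when h\nu \ll kT and is exponentially suppressed when h\nu \gg kT, which is what cuts off the ultraviolet catastrophe.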


Waves are quantized (one wave, two waves, ...), so energy transfers by waves are quantized too.


What you are describing is periodicity. That’s different from quantization.


One particular model: the electron g-factor.

Now go look up how precise a prediction the same model makes for the muon g-factor.



