"It's not just that scientists don't want to move their butts, although that's undoubtedly part of it. It's also that they can't. In today's university funding system, you need grants (well, maybe you don't truly need them once you have tenure, but they're very nice to have).
So who decides which people get the grants? It's their peers, who are all working on exactly the same things that everybody is working on. And if you submit a proposal that says "I'm going to go off and work on this crazy idea, and maybe there's a one in a thousand chance that I'll discover some of the secrets of the universe, and a 99.9% chance that I'll come up with bubkes," you get turned down.
But if a thousand really smart people did this, maybe we'd actually have a chance of making some progress. (Assuming they really did have promising crazy ideas, and weren't abusing the system. Of course, what would actually happen is that the new system would be abused and we wouldn't be any better off than we are now.)
So the only advice I have is that more physicists need to not worry about grants, and go hide in their attics and work on new and crazy theories, the way Andrew Wiles worked on Fermat's Last Theorem."
(new comment right below)
"Let me make an addendum to my previous comment, that I was too modest to put into it. This is roughly how I discovered the quantum factoring algorithm. I didn't tell anybody I was working on it until I had figured it out. And although it didn't take years of solitary toil in my attic (the way that Fermat's Last Theorem did), I thought about it on and off for maybe a year, and worked on it moderately hard for a month or two when I saw that it actually might work.
So, people, go hide in your attics!"
I don't have quite the same critical perspective as the blogger, but I think there's a certain misguided attitude underlying the phenomena observed by Shor.
Yesterday or the day before I was listening to the radio and someone with a physics background was talking about something (I think quantum entanglement) and started asserting that physics has basically figured out almost everything. This is probably a somewhat unfair paraphrase, but not too unfair.
What irritated me about it was the assumption that, if most of your predictions are correct, your model is almost entirely correct and just needs to be tweaked a bit. This is certainly true some of the time, but sometimes those little empirical cracks are what brings down a major paradigm and leads to another one: one that makes the same predictions in 99% of cases, but in the other 1% makes totally different predictions with very different implications.
This carries over to grant funding, etc., in that the prevailing community often assumes that what they're doing is fine, and all that's left are these little empirical tweaks. That's certainly helpful some of the time, but it seems to dominate too much. Academia needs to leave more room for people to fail at high rates with good ideas, to increase that small percentage of times when they succeed wildly.
Haha, no. We wish.
Epicycles worked very well and were highly accurate, because, as Fourier analysis later showed, any smooth curve can be approximated to arbitrary accuracy with a sufficient number of epicycles. However, they fell out of favour with the discovery that planetary motions were largely elliptical from a heliocentric frame of reference, which led to the discovery that gravity obeying a simple inverse square law could better explain all planetary motions.
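That Fourier point can be checked numerically. A minimal sketch (the eccentricity, grid size, and fixed-point Kepler solver are all my own illustrative choices): trace a Kepler orbit at uniform time steps as a closed curve in the complex plane, take its FFT, and watch the approximation error fall as more "epicycles" are kept.

```python
import numpy as np

# A Kepler orbit traced at uniform time steps is a smooth closed curve z(t)
# in the complex plane, so it has a Fourier series z(t) = sum_k c_k e^{ikt}.
# Each term is one "epicycle": a circle of radius |c_k| turning at frequency k.
N = 4096
M = np.linspace(0, 2 * np.pi, N, endpoint=False)  # mean anomaly (uniform time)
ecc = 0.3                                         # assumed eccentricity

# Solve Kepler's equation E - ecc*sin(E) = M by fixed-point iteration.
E = M.copy()
for _ in range(60):
    E = M + ecc * np.sin(E)

# Position with the attracting body at the focus (the origin).
z = (np.cos(E) - ecc) + 1j * np.sqrt(1 - ecc**2) * np.sin(E)

c = np.fft.fft(z) / N  # the epicycle coefficients c_k

def approx(n_terms):
    """Rebuild the curve keeping only the n_terms largest epicycles."""
    keep = np.argsort(np.abs(c))[::-1][:n_terms]
    out = np.zeros(N, dtype=complex)
    for k in keep:
        out += c[k] * np.exp(2j * np.pi * k * np.arange(N) / N)
    return out

for n in (1, 3, 5, 9):
    err = np.max(np.abs(z - approx(n)))
    print(f"{n:2d} epicycles: max error = {err:.1e}")
```

Keeping only the largest terms is exactly the epicycle game: each coefficient is one circle riding on another, and adding circles always buys more accuracy.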
A theory can explain the observations perfectly well and still be wrong, because the frame of reference is wrong. The worst part is that you can't figure that out until you've worked out what the correct frame of reference is, and looked at your observations in a new light.
Well strictly speaking, it wasn't wrong. It explained the observations perfectly well. What a heliocentric description brought was a simpler description that illuminated the principles behind it, in a way that enabled us to discover the inverse-square law of gravity, link that to Gauss's theorem for gravitation, explain it even from a more fundamental geometric perspective with general relativity, etc.
Revolutions happen when a new mental model - or frame of reference, or whatever you want to call it - can generate new kinds of math.
The old model is certainly wrong in the sense that it's not a good picture of how reality actually works.
If you really want to, you can still use epicycles for certain kinds of problem, just as you can use Newtonian physics for basic mechanics.
But this is engineering, not physics. These theories are useless for frontier research. They're absolutely wrong in the sense that their lack of completeness means they cannot be used to generate theory[n+1].
How do you distinguish "an entirely imaginary math artifact" from the "principles" that "reality works on"?
(Hint: planetary orbits are not ellipses once you take GR effects into account.)
Not if you define "wrong" as "inaccurate predictions". You can approximate ellipses with circles and epicycles to any desired degree of accuracy by putting in more epicycles. So you can match the predictions of ellipses to any desired accuracy with epicycles.
Also, as I noted, the actual orbits of the planets are not perfect ellipses once GR effects are taken into account. Have you proven mathematically that it is impossible to construct an epicycle model that makes more accurate predictions than perfect ellipses, based on the actual data (which confirms the GR predictions to within current observational accuracy)?
Super interesting. I'm a physics major (graduated) who didn't take GR, so I didn't know this. Want to learn GR now but very likely won't haha.
"It was not until Galileo Galilei observed ... the phases of Venus in September 1610 that the heliocentric model began to receive broad support among astronomers."
According to General Relativity, there's no such thing as a "wrong" frame.
Running through this exercise with some honesty can give one a greater understanding of why our physics is framed the way it is, and why it is that while all sorts of reference frames are valid, "inertial" reference frames are still important on their own merits.
Is there a deeper meaning to "Consider the speed of light in the Andromeda galaxy" that I missed? The speed of light is known to be constant in every reference frame.
You can reformulate all of physics into your Earthly non-inertial reference frame. You can formulate all of physics into a reference frame in which you personally are always stationary! Nothing stops you from doing it, and the physics will work, as much as they ever do (i.e., we know something's wrong with our theories). To the extent that the result is a hideous monstrosity, well, such is my point. Pondering the nature of that hideous monstrosity is something I think worth doing, at least for a bit. Not to the extent of actually writing the equations, though. It brings clarity to why inertial reference frames are so important that we almost consider "inertial reference frame" to be a single atomic word, because non-inertial reference frames are in general not very useful. (In specific they can be.)
But now I understand why Einstein wrote in his last book, after much thinking, that this perspective is wrong. He called it "unthinkable" for a good reason. The model I'm using also has relativity, but with an absolute frame. It also behaves differently in extreme situations, like the surface of supermassive black holes and the near field of a proton. In fact, I have much more relativity, but not everywhere, and it's paradox-free :)
What if I say there are frames of reference that are irrelevant to discovering the process that generates the observations?
Edit: Um, guys? I genuinely don't know what general relativity says and I didn't get the comment above. It'd be nice if someone explained.
The location, and speed with which you are travelling is what general relativity calls a "frame of reference", and none of them are "correct" or "incorrect", they're just predictors for what observations will be possible from that frame.
Then the weirdest part is that one of the consequences is that planetary bodies are large enough for that “speed of light must remain constant” rule to matter in a particular way, generating a warping of spacetime around them, the geometry of which perfectly explains gravity. Or, put another way, we stick to the Earth because time runs slightly faster at our heads than at our feet.
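The head-vs-feet figure is easy to put a number on. In the weak-field limit, the fractional rate difference between two clocks separated by height h is approximately g*h/c^2; this back-of-the-envelope sketch (a 1.7 m person and an 80-year lifetime are assumed values) shows the size of the effect.

```python
# Weak-field gravitational time dilation: a clock higher in a gravitational
# potential runs faster by a fractional rate of roughly g*h/c^2.
g = 9.81          # m/s^2, surface gravity
h = 1.7           # m, assumed height of a person
c = 2.998e8       # m/s, speed of light

frac = g * h / c**2                 # fractional rate difference, head vs feet
lifetime_s = 80 * 365.25 * 86400    # ~80 years in seconds
print(f"fractional rate difference: {frac:.2e}")
print(f"extra aging of the head over 80 years: {frac * lifetime_s * 1e6:.2f} microseconds")
```

So the effect is real but tiny at human scales: about half a microsecond over a lifetime.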
This youtube video explains it really well:
>> The location, and speed with which you are travelling is what general relativity calls a "frame of reference", and none of them are "correct" or "incorrect", they're just predictors for what observations will be possible from that frame.
OK, I see- "frame of reference" is a technical term, in General Relativity, that refers to your position in space, and determines what you can observe. Instead, I meant "frame of reference" as a more general "point of view" or "frame of mind" - a set of assumptions that give context to any observations and that inform interpretations of them.
Even going by the technical sense of a frame of reference, though, there are frames of reference that will not permit the correct identification of a process that generates a set of observations, or at the very least, they will tend to favour incorrect interpretations of the observations.
I think that is in keeping with what your comment says about a frame of reference in General Relativity allowing a range of physical observations.
Going from the observation that the speed of light is constant, regardless of how fast the light emitter is travelling relative to you, he made that the unbreakable assumption, and made the shape of spacetime flexible so as to always satisfy a constant speed of light. This theory was then confirmed when the light of a distant star was observed to bend while travelling through the strong gravitational field of our Sun during a total solar eclipse.
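The eclipse measurement referred to here was testing a specific number: GR predicts a deflection of 4GM/(c^2 R) for light grazing the Sun, roughly twice the naive Newtonian value. A quick sketch with rounded solar constants:

```python
import math

# GR prediction for light grazing the Sun: deflection angle = 4GM/(c^2 R),
# the number the 1919 eclipse expeditions set out to measure.
G = 6.674e-11     # m^3 kg^-1 s^-2, gravitational constant
M = 1.989e30      # kg, solar mass
R = 6.957e8       # m, solar radius
c = 2.998e8       # m/s, speed of light

theta = 4 * G * M / (c**2 * R)        # deflection in radians
arcsec = math.degrees(theta) * 3600
print(f"deflection at the solar limb: {arcsec:.2f} arcseconds")  # ~1.75
```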
Therefore the physics described by General Relativity has greater predictive power.
Quantum physics can also predict everything in general relativity, but doing so is a lot more complicated than using general relativity. However, quantum physics can explain things that happen on small scales that General Relativity cannot. Quantum physics has greater predictive power, but it's more convoluted. Like epicycles. Einstein didn't like quantum physics and spent a great deal of time trying to debunk it, but, well, he couldn't.
This is all to point out that one should not confuse predictive power with complexity. Ockham's Razor is a rule of thumb that prefers "simpler" explanations for things. But the predictive power of the two competing theories must be equal for that to apply.
My original comment is grounded in an assumption that predictive power is not enough to identify a theory as correct, and neither is simplicity. There's nothing to stop any number of theories from having the same predictive power and the same kind of complexity. Sometimes it's just very difficult to choose one above the others.
Did I come across as confusing predictive power with complexity?
EDIT: it's interesting you bring Occam's razor up. It's part of what I'm studying, in the context of identifying relevant information in (machine) learning. There are mathematical results (in the framework of PAC-learning) that say, basically, that the more complex your training data, the more likely you are to overfit to irrelevant details. At that point, you have a model that explains observations perfectly well, but is useless for explaining unseen observations (the really unseen ones, not those pretending to be unseen for the purpose of cross-validation).
...iiish. The result is that large hypothesis spaces tend to produce higher error. But, the size of the hypothesis space in statistical machine learning depends on the complexity of the data, as in the number of features. Anyway, I'm fudging it some. I'm still reading up on that stuff.
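The overfitting failure mode described above is easy to demonstrate with a toy example (the sine target, noise level, sample sizes, and polynomial degrees are arbitrary illustrative choices): a high-degree polynomial drives training error toward zero while doing worse on unseen data, i.e. it "explains observations perfectly well, but is useless to explain unseen observations".

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of overfitting: a high-degree polynomial can fit noisy
# training points almost exactly yet generalize worse than a simpler model.
def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(np.pi * x) + rng.normal(0, 0.3, n)  # true signal + noise
    return x, y

x_train, y_train = make_data(15)
x_test, y_test = make_data(200)

def mse(deg):
    """Fit a degree-`deg` polynomial to the training set; return (train, test) MSE."""
    coefs = np.polyfit(x_train, y_train, deg)
    train = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    return train, test

for deg in (3, 14):
    tr, te = mse(deg)
    print(f"degree {deg:2d}: train MSE = {tr:.4f}, test MSE = {te:.4f}")
```

The degree-14 fit interpolates all 15 noisy training points (near-zero training error) but oscillates wildly between them, so its test error is far worse than the modest degree-3 fit.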
Unfortunately, the two theories, while both being extremely successful and accurate in their predictions, are incompatible with one another. Quantum Field Theory has successfully combined Quantum Mechanics with Special Relativity, but that is all.
Which is to say: they break under conditions very unlike the everyday universe, which is important but also indicative that they are not that broken.
The incompatibility is important, though, because if there are any more card tricks we can do with physics to let us do interesting things, somewhere in that bit of incompatibility is where we must find them.
maybe some day we’ll find the grand unifying theory of the universe.
Also, "frame of reference" has a specific meaning in relativity but it also has a more general meaning regarding the framework within someone understands something. It's pretty clear from the context (imo) that this latter is what was meant in this comment.
"I'm not that kind of geek" is a bit of an in-joke so my bad for using it where the context is missing, but I thought it would work even so. The missing context is that a colleague used to tease me for my deplorable lack of a science background, although we did hit it off in terms of our fantasy and science fiction tastes. So, I was not the science kind of geek, although I was the science fiction and fantasy kind of geek.
Like I don't understand either Australia or Argentina, but I know that they are not the same!
I'm with you though, I find this to be extremely frustrating.
Worse still, many people do work that exists for the purpose of raising housing, food, and medical costs.
* what will people do with their free time? "It is a fearful problem for the ordinary person, with no special talents, to occupy himself, especially if he no longer has roots in the soil or in custom or in the beloved conventions of a traditional society. To judge from the behaviour and the achievements of the wealthy classes to-day in any quarter of the world, the outlook is very depressing!" He asked whether there might be a "general 'nervous breakdown'" of people "who cannot find it sufficiently amusing [...] to cook and clean and mend, yet are quite unable to find anything more amusing."
* While most of our needs would easily be fulfilled, he noted a second class of needs, jostling for relative status, that might never be: there are two classes, "needs which are absolute in the sense that we feel them whatever the situation of our fellow human beings may be, and those which are relative in the sense that we feel them only if their satisfaction lifts us above, makes us feel superior to, our fellows. Needs of the second class, those which satisfy the desire for superiority, may indeed be insatiable; for the higher the general level, the higher still are they."
And then maybe he got somewhat overly optimistic:
"The love of money as a possession – as distinguished from the love of money as a means to the enjoyments and realities of life – will be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease. All kinds of social customs and economic practices, affecting the distribution of wealth and of economic rewards and penalties, which we now maintain at all costs, however distasteful and unjust they may be in themselves, because they are tremendously useful in promoting the accumulation of capital, we shall then be free, at last, to discard.
Of course there will still be many people with intense, unsatisfied purposiveness who will blindly pursue wealth – unless they can find some plausible substitute. But the rest of us will no longer be under any obligation to applaud and encourage them."
Seems to me the massive inequality and concomitant struggle for relative status keeps most everyone working like crazy, even though we have many times more than what people had a century ago.
EDIT to add: Keynes biographer Lord Skidelsky has a book about it, How much is enough?:
(1) That enough people would use their new free time to innovate instead of just watching TV or playing World of Warcraft or something like that.
(2) That enough people would still choose to do the unpleasant but necessary tasks that provide the resources needed for everyone to eat, have housing, etc. Not to mention all the other stuff people seem to want.
What would a world where we did the former but not the latter look like? It would be something like everyone having to be, at least in some measure, an entrepreneur--everyone would be their own small business, having to figure out what product or service to sell to others in order to make a living, and having to decide for themselves what use of their time would best serve that goal. That might well, in the long run (i.e., after all the upheaval caused by people who were used to having someone else define their business objectives, now having to do it themselves, has died down), be a big improvement over what we have now. But the key incentive of having to make a living is still there.
The "basic" part means that the level of income is generally set at an austere level. Think of living in the minimal existence of a monk. Some people would do fine with that, and would choose it, but the vast majority of people I believe would elect to work for more.
Humans throughout their history have always reached for more, including the most extremely wealthy, who basically have a capitalism-granted equivalent of UBI, yet significant numbers of them still choose to work.
And all history shows that that level increases over time to a point where "minimal existence" is enough luxury to be unsustainable. This is by no means the first time that the option of the state doling out basic necessities to everybody has been considered. The Romans had their bread and circuses. Today it would be food stamps and cable TV and Facebook and Twitter. Same difference.
My sense is that that percentage is pretty low. Yes, there are people like pg or sama who continue to work and add value even though they don't have to, and I think that's an admirable thing to do. But I think there are many more people who, once they have enough wealth to not have to work, stop working for good and don't produce anything after that.
A lot of it produces ZERO economic returns. But the fallacy is that we need top-down institutions to move things forward. I would argue that we are better off abolishing intellectual property laws as well and allowing everyone to contribute to open source drugs the way they do in other sciences.
Watch these two videos:
Drive: the surprising truth about what motivates us
Clay Shirky: Institutions vs Collaboration
We need more collaboration and less capitalistic competition.
I know firsthand the righteous indignation that anarcho capitalist libertarians have at “violence” being used to redistribute wealth.
But these same libertarians ignore all the coercion used on the other side. They seem to want people to be FORCED to work out of fear of losing food and housing. Some freedom for the masses - the freedom to work or starve.
And of course property is a coercive institution just like government. It has to be enforced. So Disney World charges visitors entry fees and vendors rent, pays people to dress up like Mickey, and it's all top-down, and libertarians are OK with that. Next door is a city that's run democratically; if it wants to charge taxes and redistribute a basic income, how is that any worse than Mickey's?
> the fallacy is that we need top-down institutions to move things forward
That might well be true. And it has nothing whatever to do with what I said. In fact, abolishing top-down institutions would, if anything, make it more difficult to have a scheme like universal basic income (the topic we're discussing here) at all.
> of course Property is a coercive institution just like government. It has to be enforced
Is the only thing preventing you from appropriating your neighbors' property the fear of enforcement?
Property rights are agreements. If it is a net gain for all parties to follow an agreement, they will follow it, even in the absence of coercive enforcement.
The same can be said about the social contract. Is the only thing preventing you from running red lights the fear of enforcement?
For many people, yes. That's the only thing. And we violate property rights in many ways, like peeing in a forest that may be "owned" by someone. Or by using an idea that may be "owned" by someone.
Property rights become "States" if the organization is large enough.
Property rights are basically monopoly rights to exclude others, by force if necessary, from the use of a resource.
Sometimes this exclusion actively harms wealth creation. Especially if the resource is a public good.
This is a very telling question. Of course the answer is yes--if you qualify "running red lights" to mean "running red lights when it is clear that it is not going to cause any harm or violate anyone's property rights". For example, it's very late at night, it's an intersection with clear visibility in all directions, well lighted, and there is obviously no one else in sight. In such a case, yes, the only thing preventing me (and probably any reasonable person) from running the red light is fear of enforcement.
But of course that's because any reasonable person has the common sense to know that running a red light under circumstances where it will clearly violate no one's property rights and cause no one harm is not a crime; it's just a violation of an administrative rule, which in practice is used as a revenue source by localities, not to improve traffic safety.
And of course any reasonable person will not run a red light if it would risk causing harm or violating someone's property rights. But in that case, it is not because of fear of enforcement; it's because reasonable people understand that harming others or violating their property rights is a net loss for everybody, including them, so they have a good, rational reason not to do it and would behave the same even if there were no enforcement.
> we violate property rights in many ways, like peeing in a forest that may be "owned" by someone
If this does no harm, how is it a violation of property rights?
> Or by using an idea that may be "owned" by someone.
Ideas are different because there is no such thing as exclusive "ownership" of ideas. Governments create "ownership" of ideas by making laws, but that doesn't make ideas the same as physical objects. If I take your car, I deprive you of it; we can't both have it. If I take your idea, you still have it; I can't deprive you of it. That is a key difference.
What people do, however, is smuggle more risky ideas into proposals for less risky ones. But that in the end means they'll also be spending time on low-risk things.
A developed country can be recognised by the fact that the society doesn't rely on the goodwill of the rich.
A second difficulty is the "rest of their lives" part. It's quite hard to believe ROI would not be required when rich people are involved in some way or other. Charity is PR, and so the system will optimize for PR.
No idea how Cowen's thing will turn out, and I'm not sure the idea scales well, but I really like the concept.
I can't find it now in a quick search, but I remember reading that he thought every physicist should devote something like 10% of their time thinking about the foundations of physics/quantum mechanics. (What would he do with 100% of his time?)
>But if a thousand really smart people did this, maybe we'd actually have a chance of making some progress.
The problem is, as I understand it: suppose some people locked themselves in their attic and worked on physics problems; how does society know that they're actually working on physics and not merely twiddling their thumbs?
The whole publication-review-credit-tenure-grant circuit was invented to address exactly that situation. In order to replace it, you need some other way of convincing the funding bodies that their money is actually paying for something.
The only question then is whether there is enough philanthropic research around. Are there perhaps 2,000 different projects around the world getting something like 5M USD each from philanthropists? Or does the modern zeitgeist, which assumes science funding is a thing for governments to do, crowd it out?
That's just people being chicken. You know, someone got to research why wombat poop comes out in cubes:
I could spend a couple hours giving physicists off-the-wall ideas that have some degree of plausibility. All worth exploring rigorously IMHO.
That's true and also not true. Yes, we have a working theory that explains essentially everything with unprecedented accuracy as long as you don't wander too far outside of everyday length and energy scales. But on the other hand we still do not really understand basic quantum mechanics almost a century after its discovery or at the very least there is no consensus about what the theory actually says. Quantum mechanics is not even self-consistent.
Which is not to say people aren't trying. Conferences are held probing this exact question (e.g. can we come up with DM detectors that aren't just enormous tanks of cooled liquids?) and trying new strategies. It's not that the community isn't engaging in good faith with some of these proposals; it's that we haven't had a new hint of where to look from a collider experiment since the discovery of the Higgs, despite everyone's best efforts. Yes, we need some experimental ingenuity to push through this frontier, but I also agree with OP that physics is just being very stubborn about yielding any further secrets at present.
This does actually happen already. I'm not sure how widespread it is, but I've noticed it in a couple of Condensed Matter Theory groups. For example Imperial's group do research on Complexity and Networks including the "application of these principles to a variety of stochastic phenomena, ranging from ant colonies to cardiovascular biology, from sandpiles to earthquakes".
We don't know what we don't know. It's very likely that more discoveries will result in obvious changes that are visible in everyday life, like GPS.
My bet is on quantum mechanics. There's a lot there we don't understand or don't know how to make use of quite yet that nevertheless seems likely to have a very obvious effect on everyday life in the centuries to come.
It requires GR to make it work, but not GR to understand how it works. I.e., GPS would work fine -- in fact, be easier -- in a world without GR. The clocks on board the satellites depend on QM, but the idea of an accurate clock is easy. Same with orbits -- staged rockets came late, but orbital mechanics goes back a long way. Transistors are new, but amplification is old.
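The GR correction being discussed here is concrete enough to sketch: the standard textbook estimate for a GPS satellite clock combines a special-relativistic slowdown with a gravitational speedup (the orbit radius and constants below are rounded textbook values, not mission specs).

```python
# How much do relativistic effects shift a GPS satellite clock per day,
# relative to a clock on the ground?
GM = 3.986e14       # m^3/s^2, Earth's gravitational parameter
c2 = 8.98755e16     # m^2/s^2, speed of light squared
R_earth = 6.371e6   # m, Earth radius
r_orbit = 2.656e7   # m, GPS orbital radius (~20,200 km altitude)
day = 86400.0       # s

# Special relativity: orbital speed makes the satellite clock run slow.
v2 = GM / r_orbit                       # v^2 for a circular orbit
sr_us = -(v2 / (2 * c2)) * day * 1e6    # microseconds per day

# General relativity: weaker gravity aloft makes it run fast.
gr_us = (GM / c2) * (1 / R_earth - 1 / r_orbit) * day * 1e6

print(f"velocity effect: {sr_us:+.1f} us/day")
print(f"gravity effect:  {gr_us:+.1f} us/day")
print(f"net:             {sr_us + gr_us:+.1f} us/day")  # ~ +38 us/day
```

The net drift is about +38 microseconds per day; at the speed of light that corresponds to ranging errors on the order of 10 km accumulating daily if left uncorrected, which is why the satellite clocks are deliberately detuned before launch.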
The only guarantee made about the LHC was that it would prove or disprove the existence of the Higgs boson.
Believe me, experimental physicists are desperate to find the slightest deviation from the standard model. I spent 2 years on one such 'stab in the dark' rare decay analysis!
The SM's predictions have been tested to a rigor unparalleled in history. It predicts stuff like mass of W & Z bosons, fine structure constant, the measurements of which exceed an accuracy of 1 part per billion in some cases.
I was getting it jumbled up with the magnetic moment of the electron, which is predicted by the SM (well, the QED part anyway) to be slightly different from the 'classical' prediction.
Experimental measurement of this is accurate to one part per billion, and is consistent with the QED prediction.
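To make "consistent with the QED prediction" concrete, here is the start of that prediction: the anomalous moment a_e = (g-2)/2 as a power series in alpha, using the published two- and three-loop QED coefficients (higher-order and non-QED contributions are omitted here, so the last printed digits are only approximate).

```python
import math

# The electron's anomalous magnetic moment a_e = (g-2)/2 from the first
# few terms of the QED perturbation series in alpha.
alpha = 1 / 137.035999       # fine-structure constant (a measured input)
x = alpha / math.pi

a_e = 0.5 * x                 # Schwinger's one-loop term, alpha/(2*pi)
a_e += -0.328478965 * x**2    # two-loop QED coefficient
a_e += 1.181241456 * x**3     # three-loop QED coefficient

print(f"QED (3 loops): a_e = {a_e:.10f}")
print(f"measured:      a_e = 0.0011596522")
```

Note the direction of inference matters for the thread's argument: the series predicts a_e given a measured alpha, or alpha given a measured a_e, but it does not predict either one from nothing.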
The electroweak bits of the SM predict the W & Z bosons, along with their masses, which have also been measured, to around 1 part per 10,000, and match SM predictions.
EDIT: last but not least, the Higgs boson was also predicted by the SM, with a ballpark figure for its mass and other properties (how often it decays into photons, W bosons, quarks, etc.). So far all measurements of these properties are consistent with SM predictions.
This is seen as one of the big issues with the standard model, that it does not actually explain a lot of the characteristics of the fundamental particles like their couplings and masses.
The Higgs mass is indeed a free parameter, but the SM wouldn't work if its mass were greater than 200 GeV or so. The Higgs interacts with other massive particles, with a strength proportional to each particle's mass, and it influences certain processes (like W boson scattering); the rate at which these happen would deviate from experimental observation if the Higgs mass were over 200 GeV.
That's why the LHC was such a big deal: it reached the energies required for direct observation of a sub-200 GeV Higgs, so it would either find the SM Higgs or rule it out and invalidate the SM. Unfortunately, the former seems to have happened.
SM free parameters:
Indirect constraints on Higgs mass in SM (a bit technical, slide 6 chart is the key one, strongly influenced specs & mission of the LHC)
Eq. (13) in  is a prediction of the electron's magnetic moment given the fine structure constant. Eq. (15) in  is a prediction of the fine structure constant given the electron's magnetic moment. For the purposes of that paper (testing QED) it turns out to be more convenient to use Eq. (15).
> There is a most profound and beautiful question associated with the observed coupling constant, e – the amplitude for a real electron to emit or absorb a real photon. It is a simple number that has been experimentally determined to be close to 0.08542455. (My physicist friends won't recognize this number, because they like to remember it as the inverse of its square: about 137.03597 with an uncertainty of about 2 in the last decimal place. It has been a mystery ever since it was discovered more than fifty years ago, and all good theoretical physicists put this number up on their wall and worry about it.) Immediately you would like to know where this number for a coupling comes from: is it related to pi or perhaps to the base of natural logarithms? Nobody knows. It's one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the "hand of God" wrote that number, and "we don't know how He pushed his pencil." We know what kind of a dance to do experimentally to measure this number very accurately, but we don't know what kind of dance to do on the computer to make this number come out, without putting it in secretly! — Richard Feynman, Richard P. Feynman (1985). QED: The Strange Theory of Light and Matter. Princeton University Press. p. 129. ISBN 978-0-691-08388-9.
[my emphasis added]
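A small sanity check on the two numbers in the quote: the coupling amplitude Feynman gives is the square root of the fine-structure constant, so 0.08542455 and 1/137.03597 are the same quantity in two guises.

```python
import math

# The amplitude e in the quote versus the "inverse of its square"
# that physicists remember: e = sqrt(alpha) = sqrt(1/137.03597...).
inv_alpha = 137.03597
e_amplitude = math.sqrt(1 / inv_alpha)
print(f"sqrt(1/{inv_alpha}) = {e_amplitude:.8f}")  # ~ 0.08542455
```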
As far as I can tell, the fascination with it got started by the number being close to an integer, and maybe the remark in  that "In ancient Hebraic language letters were used for numbers, and Cabbala is the word corresponding to 137" played a role. But we know that it isn't an integer, and that it runs  like any coupling constant in QFT, so at best you could marvel about it taking on some particular value at some particular interaction energy, which would mean... what? I dunno. As Feynman also said ,
You know, the most amazing thing happened to me tonight... I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing!
This whole thread started because the top comment said that a physics theory predicted the value of the fine-structure constant. Which is wrong, as the fine-structure constant is one of the fundamental constants of the universe and one _whose value is not predicted by any theory_.
At this stage, two claims are in tension.
The first is your claim, that QED predicts the fine-structure constant once you measure the magnetic moment of the electron using several thousand Feynman diagrams.
The second is that Feynman himself literally wrote, in his book titled _QED_, that we have no idea how to predict the value of this constant.
Could it be that Feynman overlooked the fact that he, himself, predicted the value of the fine-structure constant? He thought that not being able to predict its value was such an unsolved problem as to call it "one of the greatest damn mysteries of physics"? That "all good theoretical physicists put this number up on their wall and worry about it"?
The long and the short of it is that I think you're missing something much deeper. Yes, _once you measure_ something that has a tight coupling with the fine-structure constant, you now know the value of the fine-structure constant. But, before you made that measurement you DO NOT KNOW and further CANNOT PREDICT the value of the fine-structure constant. If you could, you'd be able to claim your own Nobel prize.
> Which is wrong
No. I even provided you with the reference to the paper in question. Did you even try to read it?
> as the fine-structure constant is one of fundamental constants of the universe and one _whose value is not predicted by any theory_
This is where you go wrong, and where you misunderstand Feynman's point.
The correct statement is that all respectable theories of physics (to date), including the Standard Model, include an irreducible number of values which must be plugged into them "by hand". In other words, once you've written down your theory, there are some parameter values in it about which the theory itself gives you no guidance; you could give them different values, and the theory would still work. It would just be describing a universe with different properties than ours. In order to make it describe our universe, you need to get those values from experiment.
Feynman's point is that we don't have a theory without at least some such parameters (even string theory, which he disliked, has the string tension, and it skirts the need for more by randomly picking a vacuum, which sets the values of low energy theory "constants"). It is not that the fine structure constant has some particular status as "more fundamental" than others.
The choice of constants which you can determine by experiment is constrained by the theory, but generally not locked down completely; you can choose your set of constants, as long as they are independent, i.e. as long as measured constant #1 can not be determined by plugging measured constant #2 into the theory and doing some calculation. If constant #1 can be computed given the theory and constant #2, then they are not independent, and the choice between them is arbitrary; a convention.
The choice between the fine structure constant and the magnetic moment of the electron is one such arbitrary choice. Given one of them and the Standard Model, you can compute the other. And it turns out that it's actually more convenient to do it this way: measure the magnetic moment, then compute the fine structure constant. There is no reason at all to regard the fine structure constant as more of an "input to the model" than the magnetic moment, as you claimed at the start of this thread.
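A hedged sketch of that direction of computation (truncated series and rounded inputs, not the thousands-of-diagrams state of the art): the QED expansion of the electron's anomalous magnetic moment a_e in powers of α/π can be inverted numerically, so a measured a_e yields α.

```python
import math

# First few universal QED coefficients for a_e = sum_n C_n * (alpha/pi)^n
# (Schwinger's 1/2, then the known two- and three-loop values; higher orders
# and hadronic/electroweak contributions are omitted in this sketch).
C = [0.5, -0.328478965, 1.181241456, -1.9144]

def a_e(alpha):
    x = alpha / math.pi
    return sum(c * x**(n + 1) for n, c in enumerate(C))

# Measured anomalous magnetic moment (approximate value).
a_measured = 0.00115965218

# Invert the series by bisection: find alpha with a_e(alpha) = a_measured.
lo, hi = 0.001, 0.01
for _ in range(60):
    mid = (lo + hi) / 2
    if a_e(mid) < a_measured:
        lo = mid
    else:
        hi = mid
alpha = (lo + hi) / 2
print(round(1 / alpha, 2))  # close to the familiar ~137.04
```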
Needless to say, Feynman knew all this perfectly well. You are just taking away the wrong message from an attempt to popularize the topic. "QED" was a popular book, not a graduate text.
> It is not that the fine structure constant has some particular status as "more fundamental" than others.
Agreed, as I wrote above "as the fine-structure constant is one of the fundamental constants of the universe", it's in a class of fundamental constants (or one of the irreducible values to be plugged in, to use your phrasing).
> If constant #1 can be computed given the theory and constant #2, then they are not independent, and the choice between them is arbitrary; a convention.
Sure. You can choose one input over the other once you have made at least one other measurement.
Are you arguing something like, "there are X irreducible inputs to the Standard Model. For any particular input, a, you might be able to swap it out for a different one, g, so that you still have X irreducible inputs, but now they are a different set. Therefore, because we swapped a for g, a is not a fundamental constant"?
Do you take issue with the phrasing of the passages in this wikipedia article (https://en.wikipedia.org/wiki/Dimensionless_physical_constan...)? Specifically:
1) > ...physicists ... reserve the use of the term fundamental physical constant solely for dimensionless physical constants that cannot be derived from any other source.
2) > Fundamental physical constants cannot be derived and have to be measured.
And 3) its classification of the fine-structure constant as a fundamental physical constant?
That's part of what I'm saying. Some trivial examples from the Standard Model are the choice of angles you use to parametrize the CKM and PMNS matrices, the Weinberg angle vs the electroweak gauge couplings, and the scale at which you choose to fix those couplings.
Maybe it will help to call the prediction of values for one such set of parameters from the values of another such set a "horizontal prediction": you have one theory T, a value a of some parameter A, and you predict a value b of some other parameter B: b = T(a). It is "horizontal" because A is no more fundamental than B; you could equally well use T to predict a from b.
y = T(x) is of course the general form of any prediction of anything at all from theory T.
The reason you saw fit to "correct" walru1066 is that you implicitly expanded "prediction" to "prediction from a more fundamental theory". That's too long to write, so I'll call it a "vertical prediction": you have a more fundamental theory F with some set of parameters A and a less fundamental theory L with some set of parameters B, and you predict B from A using F: B = F(A). It is "vertical" because F is more fundamental than L.
How do we know that F is more fundamental than L, and not just an equivalent description of the same theory? That's easy: because the set A is smaller than the set B. :)
walrus1066 mentioned a prediction of the fine structure constant, and he was right; that's what's done in  (I'm pretty sure he was remembering that paper, but not the exact reference; who does?). It's a horizontal prediction. Like all proper predictions, it only works if the theory works, so it is a perfectly valid test of the theory (the topic of his post).
You saw "prediction" and expanded it to "vertical prediction", but that was never mentioned or intended.
> Do you take issue with the phrasing of the passages in this wikipedia article (https://en.wikipedia.org/wiki/Dimensionless_physical_constan...)?
I do not take issue with the full phrasing of it, which you snipped out. The complete sentence is
Other physicists do not recognize this usage, and reserve the use of the term fundamental physical constant solely for dimensionless physical constants that cannot be derived from any other source.
In other words, there is no consensus about whether dimensional quantities can be called "fundamental physical constant". The reason is obvious: once you've settled on a system of units (if you are doing fundamental physics, presumably natural units), you can always turn any dimensional quantity into a dimensionless one combined with a fixed dimensional factor.
I can imagine a parallel to this thread in that context: Somebody posts "the mass of the electron is a fundamental constant of the Standard Model", you reply "no it's not, it's dimensional, so it's not fundamental", and I end up writing a long post explaining that you can factor it into a dimensionless Yukawa coupling and a dimensional Higgs expectation value, so it's really fine to call it fundamental even by your definition (i.e. we do not currently have a more fundamental theory which predicts the mass of the electron, unless you are happy with it being a random value).
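To make that factorization concrete, a rough numeric sketch (approximate values, for illustration only): m_e = y_e · v / √2, with a dimensionless Yukawa coupling y_e and the dimensional Higgs expectation value v.

```python
import math

# Approximate Standard Model values (illustrative, not precision inputs).
v = 246.22       # Higgs vacuum expectation value, GeV (the dimensional factor)
y_e = 2.94e-6    # electron Yukawa coupling (dimensionless)

m_e_GeV = y_e * v / math.sqrt(2)
print(round(m_e_GeV * 1e6, 1))  # electron mass in keV, ~511
```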
Regarding this part of your question,
> Fundamental physical constants cannot be derived and have to be measured.
I have no problem with the first part of that sentence (can't be derived; that would require having a more fundamental theory) but the "have to be measured" is subject to interpretation. If you take it to mean directly measured, it's really too restrictive (just have a look at what really goes into determining the properties of short-lived elementary particles). If you allow for measuring some quantities and performing a bunch of calculations on the general form of a horizontal prediction (the only kind possible within the confines of a single theory) then fine.
As for "classification of the fine-structure constant as a fundamental physical constant", I have no problem with it (at the current state of knowledge).
Physics got stuck for a short while on the understanding of QM, and then promptly went into sour grapes mode and decided that it was meaningless to ask any deep questions about what QM actually meant. Since then it has been focused on mathematical formalisms and smashing particles instead of deep questions about what it all means.
The stagnation is real, and it's the physics community's own fault.
And you are misrepresenting this "taboo" you are talking about. Thinking about what quantum mechanics "means" is how we got breakthroughs in quantum computing and theoretical computer science. Similarly, there is plenty of exciting deep theoretical work in particle physics beyond the smashing-particles-together type of experiments.
I also think the taboo has lifted a bit, as it's now possible for mainstream physicists like David Deutsch and Sean Carroll to build their careers on quantum foundations work. But I still think the physics community has a lot of baggage from the second half of the twentieth century to let go of.
> Another comment-not-a-question I constantly have to endure is that I supposedly only complain but don’t have any better advice for what physicists should do.
> First, it’s a stupid criticism that tells you more about the person criticizing than the person being criticized.
To me, these feel closer in tone to a personal attack on her critics than to a discussion of their ideas.
There is also discussion of ideas in the article, which I have no problem with. But snippets like that feel like unnecessary salt that doesn't add anything. While her critics are (I agree, mostly) wrong, there is no reason to call them stupid.
There is certainly no shortage of these!
What we are short of are ideas that are:
- testable by experiment, with current technology
The point is, unconventional perspectives give you new ways of looking at a problem that can yield new experiments.
The physics of today would be unrecognisable to a scientist at the start of the 20th century. Indeed the physics of 1935 would be unrecognisable to a scientist at the start of the 20th century. Our understanding progressed enormously AND it did so because people put forward radical theories in complete rupture with the established, 300-year-old classical mechanics. There was no "traditionalist resistance" to it after it became apparent that classical mechanics failed to explain phenomena that quantum theory did explain.
>am astonished at how aggressively physicists resisted and suppressed "unconventional" interpretations of quantum mechanics (like the many worlds interpretation) in favor of the obviously-wrong Copenhagen interpretation [emphasis mine]
Ahahaha, you clearly have no idea what you're talking about. There is nothing "obviously wrong" about the standard Copenhagen interpretation (unless you have some new insight you would like to share), nor was there any suppression of ideas. Many debates have been waged in the past ~100 years, and many alternative interpretations have been put forward, like Bohmian theory, superdeterminism, or "shut-up-and-calculate".
>decided that it was meaningless to ask any deep questions about what QM actually meant
Physics, indeed all science, studies observable reality. Any "deep" questions about why, or about things not measurable, quantifiable, or empirically observable, are by definition outside the scope of science. It is therefore as unreasonable as complaining that geologists don't study epistemology. The answer is the same: it's outside the scope of their study.
I also don't like that your phrase seems to imply that what physicists study is not "deep" as opposed to your philosophical questions. There are many deep and beautiful ideas in physics.
This is wrong. Scientific theories provide predictive power, yes, but they should also provide explanatory power.
Which is to say, they should describe a model of the world because this is how we devise experiments, and experiments are clearly critical for science.
Two theories with equivalent predictive power but unequal explanatory power are not equally good theories.
Can you tell me how to calculate whether a system of particles will cause another system of particles to collapse or not?
Can you tell me under what circumstances a system of particles will evolve unitarily or not?
Can you shade the region of the spacetime diagram of EPR where the wavefunction is collapsed? How about in a delayed choice quantum eraser experiment?
If you tell me "you're not allowed to ask those questions" (or "hm, I never thought of that!"), then you're directly illustrating the complaint here about physics!
The common narrative is that Copenhagen, many worlds, and the other interpretations of QM are all equivalent, but they are not. Copenhagen adds an extra physical event-- collapse-- where the wavefunction is suddenly nonunitary. The burden is on Copenhagen to tell me how and when this happens, and in fact to prove that it happens at all. Many worlds, on the other hand, predicts the same phenomenon-- the apparent collapse of a wavefunction to an eigenstate-- without adding unitarity violation, or any other phenomena at all beyond the normal, extremely well-verified mechanics of multi-particle system scattering; it merely treats the environment as a multi-particle quantum system.
Many worlds is the null hypothesis (no, the extra worlds are not extra suppositions, they are predictions of known physical laws), and the burden is on Copenhagen to show that unitarity violation exists, and the burden is extremely high (possibly insurmountable?) for EPR and eraser experiments.
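A toy sketch of that claim (my own illustration, not from the thread): model "measurement" as a unitary interaction that entangles the system with its environment. The branches appear and the norm is exactly conserved; only an explicit Copenhagen-style projection breaks unitarity.

```python
import math

# System qubit a|0> + b|1>; environment/apparatus starts in |0>.
a, b = 1 / math.sqrt(3), math.sqrt(2 / 3)
state = [a, 0.0, b, 0.0]          # amplitudes over |00>, |01>, |10>, |11>

# "Measurement" as a unitary CNOT: the environment copies the system's basis state.
state = [state[0], state[1], state[3], state[2]]   # -> a|00> + b|11>: two branches

norm_after = sum(abs(c) ** 2 for c in state)       # unitary evolution: still 1

# A Copenhagen-style projection onto outcome 0 simply discards the other branch:
projected = [state[0], state[1], 0.0, 0.0]
norm_projected = sum(abs(c) ** 2 for c in projected)  # |a|^2 = 1/3: a nonunitary step

print(norm_after, norm_projected)
```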
When Copenhagen was devised, most believed that there was some "underlying state" in QM, and that measurement told you something about what the underlying state was. Bell's theorem should have sent a shockwave through the community which forced everyone to reevaluate fundamental assumptions and self-correct wrong ideas. But by accident of history, John Bell was too shy to publish in a major journal, and no one even read his result for four years. The implications of Bell's theorem were slow to diffuse through the community, and so the disruptive moment of reckoning that should have happened never came. By the time it was well-accepted, the narrative in physics had become "you're not allowed to ask about what quantum mechanics means, just shut up and calculate", and so the cognitive dissonance was cast aside. Annealing this attitude and this misstep away is taking excruciating decades.
EDIT: I also don't mean to imply that there aren't deep and beautiful things in modern physics. There are! But physics is concertedly avoiding asking deep questions in areas where (I strongly believe) it is most important. The claim that "the meaning of QM is outside the scope of science" is exemplary, I think, of that attitude.
You appear to be somewhat mistaken. The Copenhagen interpretation does not postulate any specific explanation or mechanism for wave-function collapse, merely that "upon measurement, the wave-function collapses into an eigenstate of the observable being measured". This, of course, is a physically verified phenomenon. Now, Copenhagen purposefully leaves the precise meaning of "measurement" undefined, since in my view there is no convincing empirical evidence that supports a specific mechanism for this phenomenon. Other interpretations posit mechanisms for this collapse (decoherence, which doesn't fully account for it; von Neumann "consciousness", which isn't empirical; etc.).
My biggest complaint about many-worlds interpretation is how it is in its essence a non-scientific theory, as it makes assertions about an unobservable reality. It postulates other parallel realities that by definition do not communicate. Again, this makes it intrinsically a non-scientific theory.
>Can you tell me under what circumstances a system of particles will evolve unitarily or not?
Everywhere except on measurement, in which the state collapses.
>How about in a delayed choice quantum eraser experiment?
I'm not familiar with this experiment.
>If you tell me "you're not allowed to ask those questions"
By all means, ask as many questions as you want. That's after all the essence of scientific endeavour. But it's not very nice to misrepresent other positions, nor is it to claim everyone else is deluded (without strong evidence on your side at least).
>Bell's theorem should have sent a shockwave through the community which forced everyone to reevaluate fundamental assumptions and self-correct wrong ideas.
Bell's inequalities force no re-evaluation. They simply prove that the search for a local hidden-variable theory is impossible. It certainly raised important ideas for research, but it does not do anything to discredit established QM.
>The claim that "the meaning of QM is outside the scope of science" is exemplary, I think, of that attitude.
If a physical argument can be made about this problem, then it's in the scope of physics. Otherwise, it is not. As simple as that.
No it's not. There are interpretations in which collapse does not exist, so these experiments are not measuring what you think they're measuring unless you already assume the conclusion.
Also, your view on other interpretations of QM are outdated. Arguably one of the more famous results on quantum mechanics, Bell's theorem, would have never existed if not for Bohmian mechanics.
Don't discount the value of explanatory power. The fact that other interpretations provide far more explanatory power than Copenhagen makes them far more valuable. Many important results in quantum foundations would have never happened if everyone were a Copenhagenist.
The major interpretations of QM are all similar in that they are not testable and don't really affect the results. It's metaphysics.
> the implications on conditional probabilities hold for other measurements throughout the entire spacetime, present and past. [emphasis theirs]
And the math in the paper which supports that statement works by keeping all the superpositions around and allowing us to project to them at any time. This is the picture of many worlds! Copenhagen says the opposite: When you make a measurement, the unmeasured superpositions go away. The paper confirms, that's nonsense! If you shade any region of the spacetime diagram where the superposition is gone ("collapsed"), you'll be wrong!
Also: "It is important to note that arriving at our conclusions did not require introducing new physics. We only relied on elementary quantum mechanics: not on novel ‘backwards time’ concepts, nor on any particular interpretation: we only used the Born rule ‘as is’. [...] With the remarks and intuition presented here, there really is no mystery whatsoever in any of the discussed experiments."
This experiment is not more mysterious than the rest of QM, but of course if you find the whole theory unsatisfactory you won't be satisfied with this explanation either.
Also, this is a fantastic example of the "don't ask questions" attitude I think is so shameful. If they had bothered to take the small step of asking, they'd have come to a clear, evidence-based refutation of Copenhagen! There are regions of spacetime after the measurement, where the wavefunction is not collapsed, which the authors explicitly point to. That's in direct contradiction to the premise of Copenhagen.
Edit: solving the problem in the "natural" order would give the same probabilities but is much more difficult to handle. You need to get a probabilistic outcome for the position of each single photon on the screen, which "collapses" the state and determines the wave function of the other photon. At this point, the first photon has been detected somewhere but it can only be labeled as "interference" or "not interference" later after the detection of the second photon. The probability of being labeled as "interference" or "not interference" does depend on the position (because the collapsed wave function depends on what the outcome of the previous measurement was). When everything is said and done, looking at the subset of events labeled "interference" there is an interference pattern and looking at the subset of events labeled "not interference" there is not an interference pattern. There is no mystery.
Before measurement, the state of the particle system is Σ α_i|i>. Copenhagen says, "after you measure the system, all but one of the α_i go to zero". The authors don't do that, and in fact say that you can't. Unless I am misunderstanding something, they keep all the α_i around at all times and project the measurement for each particle separately, regardless of whether it has been (or will be) measured somewhere else. There is nothing philosophically or physically wrong with this, as they point out, but it is different than what Copenhagen says you should do when you measure something.
And if you look at what it means (which they refuse to do), you'll see that after Alice makes her measurement, Bob's quantum state is still explainable in terms of a superposition. When is the measurement (joint or individual) "finished"? Answer, per the authors: Never. The superposition permeates spacetime. That's how we escape the need for a causal connection between Alice and Bob's measurements, and, naturally, that's how many worlds does it.
(And if we take it further and ask, when Bob makes his measurement, "What is the state of Alice's particles?" we'll see that she is in a superposition of being entangled with each of the superpositions of the particle, which remains in superposition before, throughout, and after our ~measurement of~ entanglement with it).
What he shows in that paper is that the order of the measurements is irrelevant. So he solves the problem in the reverse order, where it can be done easily by writing a few quantum states.
Note that QM is not about causal connections, it is about correlation. Once a pair is correlated, the correlation may appear when measurements are done. But it's not that observing one outcome here and now "causes" a particular outcome there and later (or before). One doesn't need to keep superpositions to think in "correlation" terms (instead of "causal" terms).
Using the projection postulate in the derivation: "Say we have indeed measured on B and got O_B = b_J. The state then collapses onto ..." He concludes: "None of this looks very surprising, but we want to stress that the total probability to find O_A = a_I and O_B = b_J does not depend on the place or time at which the measurements occur."
He also uses the projection postulate here in the usual way: "So the experimental outcome (encoded in the combined measurement outcomes) is bound to be the same even if we would measure the idler photon earlier, i.e. before the signal photon by shortening the optical path length of the downwards configuration. Then, if the idler is detected at D4 for example, the above state ‘collapses’ onto ..."
And yes, you don't need to ask "when does the wavefunction collapse" to manipulate joint probabilities of measurements that happen at disparate locations and times. In fact, that's my objection: If you do ask, you find that there is no consistent answer! (And I suggest it's because wavefunction collapse is not a thing the universe does).
Re: "interference tagging"-- do you have a link to some material? (I'd love to understand something specific before commenting).
EDIT: Also, I didn't see Appendix B at first-- The authors do understand and even advocate the Everettian view! Though I still don't quite understand/agree with their earlier timidity about finding and interpreting conflicts with Copenhagen.
I'm not sure what the problem is. Why do you have to "ask" if the answer doesn't really matter? What answer more consistent than "it doesn't really matter" would you like? Anyway, (standard) QM is a non-relativistic theory; QFT may be more satisfactory from that point of view.
Re: "interference tagging" - what I mean is that first you detect the photons and later check if they "did happen" to go through two slits (interference appears) or one slit (no interference). But the interference pattern is not visible for a single photon, and at that point the individual events are still a superposition of both possibilities (so of the events at a certain position, some will in the end be identified as coming through one slit and some as coming through both). Only after the second measurement is done do you know how to group the previously recorded events to see the interference. It's not that the later measurement causes interference to appear. Or at least it doesn't affect at all where the photons were detected; it just lets you know how to group the existing events to make it apparent (selecting only those where, once the full measurement on the pair has been done, the path taken remains uncertain).
Edit: maybe this picture helps https://upload.wikimedia.org/wikipedia/commons/thumb/c/c8/De...
If all the events are taken together there is no interference pattern. But when they are grouped according to where the second photon is detected in two cases there is still no interference but in the other cases complementary interference patterns appear.
You tell me; Copenhagen is the one that says collapse exists. It sounds like maybe we are on the same page that collapse isn't necessary to explain quantum mechanical observations? In that case, we are both Everettians :).
In some cases it is a safe approximation to ignore those extra states for the remainder of our experiment/calculations, but with a small change to the experiment we can make that a bad approximation.
a) You do the measurement first on the "screen" side (and project the quantum state of the pair of photons according to the measurement, the "extra universes" disappear). You do then the measurements on the "idler" side (and project again the quantum state according to standard QM).
b) You modify slightly the setup to reverse the order of the measurements. You do the measurement first on the "idler" side (and project the quantum state of the pair of photons according to the measurement, the "extra universes" disappear). You do then the measurements on the "screen" side (and project again the quantum state according to standard QM).
QM predicts that the outcomes in the original experiment (a) and the "reversed" experiment (b) are the same. And those predictions are verified empirically.
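A minimal sketch of that order-independence (illustrative, using a simple Bell pair rather than the actual optical setup): apply the Born rule and projection for the two measurements in either order, and the joint probability comes out the same.

```python
import math

# Entangled pair (|00> + |11>)/sqrt(2); amplitude index = 2*q0 + q1.
bell = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]

def measure(state, qubit, outcome):
    """Born rule + projection: return (probability, collapsed state)."""
    keep = [i for i in range(4) if ((i >> (1 - qubit)) & 1) == outcome]
    p = sum(abs(state[i]) ** 2 for i in keep)
    collapsed = [state[i] / math.sqrt(p) if (i in keep and p > 0) else 0.0
                 for i in range(4)]
    return p, collapsed

# Order (a): measure qubit 0 first, then qubit 1.
p0, s = measure(bell, 0, 0)
p1, _ = measure(s, 1, 0)
joint_a = p0 * p1

# Order (b): measure qubit 1 first, then qubit 0.
q1, s = measure(bell, 1, 0)
q0, _ = measure(s, 0, 0)
joint_b = q1 * q0

print(joint_a, joint_b)  # equal joint probabilities regardless of order
```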
Why do you say that his 1964 paper (I assume) was not read by anyone for four years?
The Copenhagen interpretation showed up when they were faced with a choice between preserving locality or determinacy.
They chose wrong — we know now QM is non-local, and that the underlying justification for the Copenhagen model is an extraneous philosophical proposition.
But much like an extra dependency in a software project, no one wants to remove it now that it’s used everywhere, and there’s a lot of “good enough” stuff using it.
That said, it seems to be one of the major impediments to a unified theory: by dropping the extraneous assumption, we have fewer things to reconcile with GR, and can start looking for GR geometries that have quantized non-local behaviors.
ie, dropping Copenhagen and giving geons another look is probably worth it. (And is basically what loop quantum gravity people are doing, as far as I can tell.)
What underlying justification are you talking about and what do you understand by "the Copenhagen model" precisely? At least in the Einstein vs Bohr debates the one denying that QM could be a complete theory because of its non-locality was Einstein, I think.
> Einstein's refusal to accept the revolution as complete reflected his desire to see developed a model for the underlying causes from which these apparent random statistical methods resulted. He did not reject the idea that positions in space-time could never be completely known but did not want to allow the uncertainty principle to necessitate a seemingly random, non-deterministic mechanism by which the laws of physics operated.
The underlying assumption for Copenhagen was to try to preserve locality by assuming non-determinism. However, there's no saving locality — and non-locality is enough to leave determinism intact — so there's no reason for the non-deterministic axiom.
I think I also meant “definite” instead of “deterministic”, but it works out the same.
I found this quote from David Lindley in a review of Adam Becker's book referenced in this thread (http://www.math.columbia.edu/~woit/wordpress/?p=10147):
"The problem with Copenhagen is that it leaves measurement unexplained; how does a measurement select one outcome from many? Everett’s proposal keeps all outcomes alive, but this simply substitutes one problem for another: how does a measurement split apart parallel outcomes that were previously in intimate contact? In neither case is the physical mechanism of measurement accounted for; both employ sleight of hand at the crucial moment."
> later someone sent me a glorious photoshopped screenshot (see above) which shows me with a painted-on mustache and informs us that Sabine Hossenfelder is known for “a horrible blog on which she makes fun of other people’s theories.”
> The truly horrible thing about this blog, however, is that I’m not making fun. String theorists are happily studying universes that don’t exist, particle physicists are busy inventing particles that no one ever measures, and theorists mass-produce “solutions” to the black hole information loss problem that no one will ever be able to test. All these people get paid well for their remarkable contributions to human knowledge. If that makes you laugh, it’s the absurdity of the situation, not my blog, that’s funny.
-- Hannah Arendt, "Vita Activa"
Heisenberg and others knew that, I'm sure many current scientists do too, but going by output, the situation is dire, the intertubes are full of people talking about being "unbiased" or "objective" even about human matters, when not even the hard sciences are truly objective! It's like people simply agreed that because they're not religious or whatever, they're now correct. If they are wrong about anything, since they supposedly are on the side of science, they can be sure that will be corrected, so they can consider themselves correct even today. Before you know it, you have superstitious people who think their superstition constitutes the absence, the opposite of superstition.
We went from Heisenberg to a chemistry teacher cooking crack while his assistant gets all bug-eyed about the power of "science, bitch!". Or this "critique" of the game Soma I saw recently, where the Youtuber mentions that "people who are smarter than you and I are saying the universe might be a simulation". What I see is people booby-trapping everything with stupidity.. making things so stupid, and then glorifying that, that injecting even a lick of seriousness into anything would cause massive offense, hurt a lot of egos. To me it's the analog of religious fundamentalists going to church 20 times a day to get hyped and primed with caricatures of those not of their flock, and fantasies of how great it's all going to be when those are gone.
But I'm not sure I can blame physicists for that. I don't know enough about what they do, I see what "everyone" else is doing, and that's horrid by itself. It's not the job of physicists to ask deep questions about what it all means, it's the job of everybody.
> They’re just interpretations and they all work equally well and produce the same predictions. It makes no sense to call any of them obviously wrong. It’s personal preference.
That's the impression I get as a non-physicist, that yes it's kind of an aesthetic preference, but on the other hand aesthetics are important and sometimes are pivotal in finding new insights.
Ugly may not be completely objective, but it isn't completely arbitrary either.
Epicycles can reproduce the apparent positions of the planets, but they do not predict the correct phases. Galileo observed the heliocentrism-compatible phases of Venus with his telescope. That was the crucial experiment, not simplicity.
I would greatly appreciate any advice.
All the papers are free online and authors will generally discuss their work with you if you have intelligent questions.
BTW, this is what I do. I freelance about 20% of the time and spend about 50% of my time reading physics papers. So far I haven't produced anything new, but I have greatly increased my intuitive understanding.
It seems that some examples might be useful here.
Which specific groups are trying to solve problems that don't exist?
What are some mathematically well-defined problems that aren't getting enough attention?
As for rewarding scientists for working on what's popular, that's a science-wide problem that stems from the way that science is funded and decades of inbreeding. Still, examples of how to break physics out of its funk on this score would also be useful.
They also mention a wildly optimistic "I'm not holding my breath" $20k per kg to Mars, which is already 4x higher than SpaceX's projected cost for a BFR launching cargo to Mars.
You have some people saying the university funding system is to blame by not accepting crazy ideas but we have all sorts of ideas in physics like:
- String theory. As best as I can tell, the only reason string theory exists is that if dimensions = 11, the equations of general relativity pop out. Importantly, though, string theory has made no testable predictions, and it's unclear when, or even whether, that will change.
- Supersymmetry. Interesting idea but no evidence of this yet.
Other more interesting ideas to me at least (again, as non-physicist):
- Octonion math underlying the Standard Model (maybe)
And some interesting experimental work:
- Possible violations of lepton universality from the LHCb detector. This was, last I heard, still well short of the 5-sigma threshold for statistical significance and could well disappear (as other bumps have, e.g. the one at 750 GeV), but it's interesting nonetheless.
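For context on the 5-sigma convention mentioned above: it corresponds to a tiny one-sided tail probability of a standard normal distribution, which is why 3-sigma "bumps" like the 750 GeV one routinely appear and then vanish. A quick sketch:

```python
import math

def one_sided_p_value(n_sigma: float) -> float:
    """One-sided tail probability of a standard normal beyond n_sigma."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

# The particle-physics discovery threshold: 5 sigma
print(f"5-sigma p-value: {one_sided_p_value(5.0):.3e}")  # roughly 2.9e-7

# A 3-sigma "evidence"-level bump, like many that later disappeared
print(f"3-sigma p-value: {one_sided_p_value(3.0):.3e}")  # roughly 1.3e-3
```

With thousands of mass bins being searched, a ~1-in-1000 fluctuation at 3 sigma is almost guaranteed to show up somewhere, which is the motivation for the much stricter 5-sigma bar.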
And there is a host of open problems with otherwise successful theories.
My favourite extremes here are the prediction of the magnetic moment of the electron, which agrees with experimental results to ~12 significant digits, and, at the other end, the QFT prediction of the energy density of the vacuum, which is ~120 orders of magnitude off.
Anyway, a lot of this exists in the current academic system.
This gets at the essential feature that seems to be driving string theory research (or at least a major one), but I think it's an overstatement as you state it. String theory is popular because it appears to hold out the hope of a theory of quantum gravity. But to my knowledge nobody has shown that the equations of general relativity pop out from an 11-dimensional string theory model; that is still vaporware. There are results which suggest (at least to string theorists) that that should be possible, but nobody has actually done it yet.
That's pretty much what I've been going for. Got my PhD in Chemical Engineering and an MBA, and now I'm starting my lab - or "startup", here - in pursuit of my research, basically into strong AI.
Bootstrapping is an interesting process. I mean, I have to keep the lights on somehow, though there's an inherent conflict of interest between keeping the lights on and doing the core work that the lights are there for.
Still, what can one do beyond keeping at it?
It is proven that physicists are, in fact, the most ignorant folks of all scientists. Real, proper physical models are always interdisciplinary, unified theories. The most ignored category of all.
I can assure everybody that the model that might get accepted in 100 years is already here. As I'm personally using one of those fringe models, I can assure you that using it in public will get you mostly negative points online and quite interesting conversations offline. Nice side note: you will be able to filter out non-scientific thinkers quite easily, and I can assure you there are lots of them in the "scene". Interestingly, chemists are much more open to different models; in fact, most of them know that our models are rough approximations at best.
It is funny when you think in models that explain everything but are quite far from the standard perspective. It becomes hard to explain effects, because the details obviously start to diverge the closer you look.
On the other hand, I think every adolescent is capable of grasping the model I'm using. (PS: I'm not the originator of the model; I seriously would never have been able to come up with such a minimal, absolutely logical, and elegant solution.)
Why not string theory? Because enough is enough.
The LHC “nightmare scenario” has come true.
Very similar theme of lack of verifiable theories.
I wonder what the experimental physicists have to say about this topic? I feel like theories are also driven by new observations. However, the observations that theorists have to go on are very indirect compared to those of 100 years ago. "The mass of this galaxy is out by x percent" doesn't give many clues as to what's wrong.
Compare that to the anomaly in Mercury's orbit, identified decades before Einstein published his relativity theories. ( http://archive.ncsa.illinois.edu/Cyberia/NumRel/EinsteinTest... )
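Mercury's orbit is a good example of what a sharp, checkable prediction looks like: the standard first-order GR formula for perihelion precession, Δφ = 6πGM/(c²a(1−e²)) per orbit, reproduces the famously anomalous ~43 arcseconds per century. A quick sketch:

```python
import math

# Constants and Mercury's orbital elements (SI, approximate)
GM_sun = 1.32712440018e20   # m^3/s^2
c = 2.99792458e8            # m/s
a = 5.7909e10               # semi-major axis, m
e = 0.2056                  # eccentricity
period_days = 87.969

# GR perihelion advance per orbit (radians), first-order formula
dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))

# Convert to arcseconds per Julian century
orbits_per_century = 36525 / period_days
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec:.1f} arcsec/century")  # ~43, matching the observed anomaly
```

A single clean number like this, disagreeing with Newtonian gravity and waiting for a theory to explain it, is exactly the kind of guidance today's theorists largely lack.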
If the need for funding from existing sources is hindering your research, try to find another way to support it, like by freelancing 20% of the time or Patreon.
(reference to the Three Body Problem, an awesome book)
The reason for physics research becoming more wide, limited & shallow instead of more narrow, broad & deep seems to stem from the foundations of mathematics.
See here for the foundations of math itself:
(If you struggle with comprehending the above, try drawing Venn diagrams of the logical operations involved to gain a geometric understanding of the matter.)
And here for the foundations of probability theory:
https://link.springer.com/chapter/10.1007%2F978-88-470-2107-... Gian-Carlo Rota - Twelve Problems in Probability Theory No One Likes to Bring Up, The Fubini Lectures, 1998 (published 2001)
Two decades old, but the title still rings as true today as it did twenty years ago, unfortunately.
And here, for the obligatory philosophy slap fight abound in statistics:
The philosophical solution to which basically boils down to this image macro:
But we lack a sufficiently advanced and logically consistent mathematical formalism for it, both because people mostly ignore the philosophical solution, out of ignorance [because from where else do you get the action of ignoring!], and, more importantly, because of the issues with probability theory mentioned above.
And here, a small shimmer of hope in the foundations of statistics:
There exists another, unrelated to the above presentation, avenue of highly interesting research out of Brazil, but their results haven't yet reached a stage of maturity where people throw together easily understandable powerpoint slides, which I'll neglect mentioning here for now, because I'd consider that bad etiquette.
Personally, I feel partial to blaming all of this on this Euclid translation error, albeit I say that in partial jest:
...which people still fall for, even in 2018, as exemplified in quite a few papers on the foundations of geometry published in recent years.
Physicists don't stand the furthest to the right in this xkcd comic, and out of frame, even further to the right from the already left out philosopher, there exists a recursive boxing match between numerous fields of science conveniently left out of the graphic to maintain a sense of strict hierarchy & order in a reality that lacks such hierarchy:
Also, I'd like to point out that the title of that blog post technically represents a statistically testable hypothesis.
On what premise? 'Theories' are human constructs, hence why physicists are so adamant about their Truth and Beauty. It's wrong to say that when most sciences rely heavily on Occam's razor (an aesthetic argument), for reasons unknown. It's pretty likely that the human brain is guided by both principles in modeling the world, and that should be reflected in the formulation of our theories.
The universe is not like a loaf expanding from dense batter to fluffy bread.
Instead, the chaotic vacuum produces, for want of a more accurate concept, particle-antiparticle pairs at random that exert a “pressure” seen as the Casimir Effect and a force that underlies the expansion of space-time.
These pairs are mostly ephemeral, but under certain conditions they can randomly transition to a stable state. This eventually results in matter. (It results in a lot of things, but we’re biased toward the minor component, matter, because we’re made of it.)
The process happens a lot in very empty space, and almost not at all in space that is constrained by the existence of matter already. This is why “dark matter” exists out there and not down here. The Casimir Effect will give this to you; constrain the available space and some wave equations are excluded, resulting in a measurable inward pressure.
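Whatever one makes of the larger picture being sketched here, the parallel-plate Casimir pressure itself is standard and measurable: for ideal plates, P = π²ħc/(240a⁴). A quick sketch of its magnitude:

```python
import math

hbar = 1.054571817e-34  # J*s
c = 2.99792458e8        # m/s

def casimir_pressure(gap_m: float) -> float:
    """Attractive pressure (Pa) between ideal parallel plates a gap_m apart."""
    return math.pi**2 * hbar * c / (240 * gap_m**4)

# At a 1 micron gap the pressure is tiny but measurable
print(f"{casimir_pressure(1e-6):.2e} Pa")  # ~1.3e-3 Pa

# It scales as 1/a^4, so at 100 nm it is 10^4 times stronger
print(f"{casimir_pressure(1e-7):.2e} Pa")
```

The 1/a⁴ scaling is why the effect only shows up at sub-micron separations; extrapolating it to cosmological claims, as the comment above does, goes well beyond what the formula establishes.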
Run the expanding universe backwards: space-time contracts and we have exactly what we have right now. Run it forward and space-time expands, again giving us exactly what we have right now. Of course, things are different, but the physics is unchanged. The universe doesn’t get hotter or cooler, there’s no era of total ionization or inflation.
The fundamental ground state of the universe is chaos. Anything can arise out of that chaos, but specific events are constrained by probability: some are so unlikely that we never see them; some are so likely that they are certain and they happen all the time.
Mathematics, so useful a tool in the past, cannot describe this situation. The only way to describe this system is by using the system itself; there are no shortcuts.
Math, philosophy, reason and order are inapplicable because they are only rules-based approximations of a chaotic state.
I'm unwilling to assume it "just randomly looks like that" given that a random distribution on a universal scale seems highly unlikely to show the large scale structures notable in the CMB.
So for instance the CMB did not support the big bang; it contradicted it. See: the horizon problem. To reconcile this, inflation theory was invented, which arbitrarily suggests that the universe hit the accelerator hard, then slowed back down. There's no logic, mechanism, rationale, or falsifiability. But it makes what we see fit what we predicted we'd see, so it's a pretty widely accepted part of modern physics. And this retrofitting now has a cascade effect that enables some theories to provide support for yet other theories, such as what you're proposing here, in that the CMB now 'supports' the big bang. It does so only if you add a very big asterisk.
 - https://en.wikipedia.org/wiki/Horizon_problem
If you'll reread it you'll see that I mentioned nothing about a Big Bang, but rather that the CMB provides evidence that the Universe expanded from a smaller state to its current state, which is larger than its starting state.
From the linked wiki:
"Differences in the temperature of the cosmic background are smoothed by cosmic inflation, but they still exist. The theory [Cosmic Inflation, evidenced by the Horizon Problem] predicts a spectrum for the anisotropies in the microwave background which is mostly consistent with observations from WMAP and COBE."
But physically what we observe does not make any sense. Like you probably know, nothing -- including action -- can be perceived to travel faster than the speed of light. The sun is about 8 light minutes away from us. If it somehow just suddenly disappeared, we'd still see it in the sky and continue to revolve around, what would 'now' be nothing, for about 8 minutes. The observation of its disappearance and the effective causality of its disappearance (and its effect on our orbit) would happen at or very near the exact same time.
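The 8-minute figure is just the Earth-Sun distance divided by the speed of light:

```python
# Light travel time from the Sun to Earth
AU = 1.495978707e11   # m, one astronomical unit
c = 2.99792458e8      # m/s

minutes = AU / c / 60
print(f"{minutes:.1f} minutes")  # ~8.3
```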
The problem with the CMB is that areas of space that should not be causally connected, since light itself has not had time to go from one to the other, seem to be causally connected. In other words, with our boiling pot in a kitchen, the eventual equilibrium that the kitchen reaches (if we assume that that entire little region is all of the space in existence) is going to vary quite substantially depending on whether you have, e.g., a 100 cubic meter kitchen or a 200 cubic meter kitchen. We should observe the two sides of space acting like two independent 100 cubic meter kitchens; instead we see them behaving like a single 200 cubic meter kitchen.
This is a major and unresolved problem that threw much of what we knew out the window. It directly contradicts the big bang. To 'resolve' it, we started creating arbitrary special conditions. Cosmic inflation is one of these. There is absolutely no reason to believe that cosmic inflation ever happened; its sole reason for existence is to work as a 'fix' to make what we observe fit what we thought we'd observe. This makes it illogical to use derivative things as "evidence". In particular, the nature of our current CMB is in no way meaningful evidence of inflation, because inflation was hypothesized, after the fact, in no small part to fit the CMB to what we thought we'd see! In other words, calling the CMB meaningful evidence of inflation is trying to support a hypothesis by treating the very observation the hypothesis was invented to explain as evidence for it.
Any not completely idiotic hypothesis will obviously be 'evidenced' by what it tries to explain. But we have a major problem when that 'evidence' becomes all you have to rely on, and that is exactly the case here.
> Math, philosophy, reason and order are inapplicable because they are only rules-based approximations of a chaotic state.
This is really a superlative cop-out. "My theory is so genius that math, philosophy, reason, and order are insufficient to describe it."
That's tantamount to saying that his theory is not based on data, logic, or even sound reasoning, and cannot be proven. Pretty much the definition of a crackpot theory. Of course, what he is actually admitting is that he doesn't understand and can't do the math.
Or how can you take someone seriously who says, "We’re stuck in a paradigm that doesn’t result in any valid fundamental predictions"? This is so obviously false that the only possible explanation is that he is ignorant about the foundations of modern physics, what they are, how we got here, etc. The field is freaking loaded with predictions that have been confirmed.
The tl;dr is that physics is not necessarily explainable, however much you want it to be. So it's very possible that the underlying phenomena of reality are simply out of reach of theories (i.e. they're incompressible).
> That's tantamount to saying that his theory is not based on data, logic, or even sound reasoning, and cannot be proven. Pretty much the definition of a crackpot theory.
You misunderstood the post, I think. It's not proposing a theory, or even a crackpot one. It's saying that maybe the underlying phenomena are out of reach of theories.
Wolfram’s article is irrelevant to this discussion. It is a question of epistemology.
Let's get hypothetical. Suppose the quantum-foam interface is, in fact, the event horizon of a black hole as observed from the inside. Since we see a bias towards matter, we could expect that the "outside" universe would see a bias towards what we call antimatter. Further, since we have black holes in our universe which we can observe, we should see a bias towards matter in their Hawking radiation.
=This post brought to you by a physicist from a not-terribly-accurate sci-fi novella.=
Nope, everyone thinks that dark matter exists everywhere around us. That's why we are funding and running all sorts of dark matter detectors that are looking inwards, not outwards.