(The bright spot is near absolute zero, in low, low energy physics. Lots of interesting experimental results from down there in recent years.)
The multiverse approach (the universe forks at every quantum event, and all those universes continue to exist, forking further) is the default you end up at given what we know now. Hawking once said it was "trivially true". But it's unsatisfying. It means most of the fundamental constants are arbitrary, for one thing. Then there's the anthropic principle (our universe works well enough to have life because we happen to be in a fork where the constants have values which make chemistry work). That's unsatisfying, too. This un-testable stuff is more philosophy than physics.
This, from a historical viewpoint, is a failure. From Lord Kelvin to Fred Hoyle, physics was about measurement and prediction. Theories which can't be grounded in experiment are of little use. Today's physicists are losers by historical standards. This has career effects. Los Alamos is a lot less prestigious than it once was, and there's been substantial downsizing.
The discovery of the transistor didn't bring an iPhone overnight. Same with many other scientific discoveries that had to go through a long engineering phase. I see a similar trend happening now with quantum computation, which has seen great leaps over the past 20 years and continues to accelerate in both academia and industry.
It's my opinion that, at least in America, we live in a society that is less intellectually disciplined and less favorable to intellectual pursuits. Anti-intellectualism, even in nominally educated portions of society, seems to be on the rise. This unfortunately has the effect of making science look stagnant to the larger population, and unless science explicitly produces gadgets of instant gratification, it'll remain somewhat obscure to folks uninterested in advancement for the sake of advancement.
As for the multiverse stuff, I'm not sure why you think it has anything to do with the arbitrary selection of constants. The multiverse interpretation of quantum mechanics doesn't say "anything can happen, and it all happens in separate universes"; it says, in simple terms, that non-deterministic events like measurements bifurcate the universe into separate branches. But QM still assigns each outcome of those non-deterministic events a probability, and every branch obeys the same laws of physics. In other words, it's more of a "butterfly effect" on our histories, with the physical constants staying genuinely constant, not a "splitting of universal laws" effect.
Yeah, I coulda been one of those guys: I even had a pretty good idea for an imaginary QC platform and a position in a major research institution. I couldn't live with myself: I got a job in the valley.
Physics and science in general looks stagnant to the larger population because it is actually stagnating. Stating otherwise is mendacity of the highest order.
Considering that the only way to even observe the Higgs is to build a multi-billion-dollar collider, I'd bet on that one.
Mate, what are you on about? We manipulate life almost at will, we have proven theorems that went unsolved for centuries, etc. Your limited imagination and apparent inability to see things in perspective don't make the rule.
Cure for cancer? Cure for any genetic disease? Antibiotic resistant bacteria?
Given the forum, let's assume you're a little more familiar with tech: Perfect encryption? NLP that understands meaning? Do you see now how it's a fallacy to take cutting edge scientific breakthroughs and demand instantaneous applications, and if these aren't met, discard the breakthroughs as having little meaning?
(background: PhD in Biophysics, I've cloned genes and run huge simulations of protein folding, as well as worked in genomics and pharmaceutical chemistry).
No, this is absolutely wrong. There are entire families of solutions to the measurement problem and we have no firm idea which one is correct. There are even multiple mechanisms to get behavior that looks like this but is based on very different, often less weird, principles (see e.g. Einselection).
> It means most of the fundamental constants are arbitrary, for one thing.
No, this is wrong and unrelated. You're confusing two ideas. One is where the universe "forks", as you say, during eigenstate collapse, the other is that there is a space of universes with different physical constants. These are unrelated. Eigenstate collapse doesn't change physical constants.
I think that the quantum computer guys think that they are about to deliver as well.
But that doesn't mean it's "frustrated". There's a lot of progress being made. There's just a long way to go. Quantum algorithms are hard.
Fusion guys have always claimed they're underfunded w.r.t. experiments and in fairness fusion does get kinda hot so I could see it being expensive.
I don't have an answer myself, but it seems that, to most people, "delivery" doesn't mean "when it's useful", but rather "when it's usable, useful, and generally available for use."
If we use that as a benchmark, quantum computing has definitely hit "usable". It is teetering on "useful". And the beginnings of "generally available for use" have been explored by a few industrial players [0, 1].
[0] Rigetti Computing's Forest: http://forest.rigetti.com/
[1] IBM Research's Quantum Experience: http://www.research.ibm.com/quantum/
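For a concrete sense of what "usable" means at this scale, here is a minimal sketch, in plain Python/numpy rather than either vendor's SDK, of the two-qubit Bell-state circuit that serves as the "hello world" on these services:

    import numpy as np

    # Minimal state-vector sketch of a two-qubit Bell-state circuit, the kind
    # of "hello world" program the services above let you run on real hardware.
    # Plain numpy, deliberately not tied to either vendor's API.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    I = np.eye(2)                                  # identity on one qubit
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])                # control = first qubit

    state = np.zeros(4)
    state[0] = 1.0                                 # start in |00>
    state = np.kron(H, I) @ state                  # Hadamard on qubit 0
    state = CNOT @ state                           # entangle: (|00> + |11>)/sqrt(2)

    probs = np.abs(state) ** 2
    print({f"{i:02b}": round(float(p), 3) for i, p in enumerate(probs)})
    # -> {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}

On the hosted machines you write essentially the same circuit in their own language, queue it, and get back measurement counts from real qubits rather than exact probabilities.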
The problem with fusion is commercial viability. Building something for billions of dollars that can barely put out more energy than you put in isn't commercially viable. At any rate, fusion seems more like a very tough engineering problem, the physics has been there for a long while.
With quantum computing there are also no real signs that a quantum computer will beat a "classical" computer at anything any time soon. It always was, and still is, very difficult to get quantum systems to scale; you need a large enough system to make a difference, and it may turn out to be impossible to build one. Again, a lot of this is more of an engineering problem, though admittedly the line between physics and engineering can be blurry, since it's not always clear whether something can't be done because you've hit a fundamental limitation or because nobody has found a clever enough design.
It's possible someone will eventually solve the problems in these areas (assuming there aren't fundamental reasons we're just not aware of yet why it's impossible) but IMO the odds are low and you need some sort of breakthrough.
John Martinis' group is planning to do that within the next two years. Ten years might be realistic for solving useful problems faster and cheaper, but the esoteric speedups are right around the corner.
One question I'd have is whether quantum simulation is indeed the fastest way to solve the quantum speckle problem on a classical computer. The other related thing to think about is whether the output of a quantum speckle computer can only be explained by the large state space and quantum mechanics or whether there are alternative explanations.
So definitely an interesting direction but even if they're successful we're still not quite there. When we have a quantum computer doing useful work with exponential speedup over classical computers that would be something and even then we only have a very limited set of algorithms at the moment.
They give arguments for this in the paper. Intuitively, if there's some fast way to classically sample a random quantum circuit then that means most quantum circuits should be easy to simulate classically. But that doesn't seem to be the case: we don't have algorithms that go fast on most circuits but go slow in highly specialized cases. It's the other way around.
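To put rough, purely illustrative numbers on that intuition (assuming a dense complex128 state vector and ignoring smarter tensor-network simulation methods): a brute-force classical simulation has to store and update 2^n complex amplitudes for n qubits, which stops being feasible somewhere around 50 qubits:

    # Back-of-the-envelope memory cost of brute-force state-vector simulation:
    # 2**n complex amplitudes, one complex128 (16 bytes) each. Illustrative only;
    # real simulators use smarter representations where the circuit allows it.
    for n in (30, 40, 50):
        bytes_needed = 2 ** n * 16
        print(f"{n} qubits: 2^{n} amplitudes, ~{bytes_needed / 2**30:,.0f} GiB of state")
    # 30 qubits: ~16 GiB, 40 qubits: ~16,384 GiB, 50 qubits: ~16,777,216 GiB

Whether that naive bound is really the best classical attack on random-circuit sampling is exactly what the paper's hardness arguments are about.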
> whether the output of a quantum speckle computer can only be explained by the large state space and quantum mechanics or whether there are alternative explanations.
Given that all the engineering was based on quantum mechanics, it would be very surprising if the machine worked based on something else. It's quite difficult to make things accidentally work like that. It'd be like trying to build a car, confirming it can take you from place to place, then realizing that actually you made a hang-glider by accident.
The mention of quantum computers is very interesting in the following sense: if one were able to build a sufficiently large quantum computer, this would provide strong evidence that the (at the moment rather hypothetical) theory that quantum mechanics is an emergent phenomenon of an underlying deterministic process (a cellular automaton), which the Nobel laureate Gerard 't Hooft has worked on for the last several years, is probably wrong. To quote p. 79-80:
"Such scaled classical computers can of course not be built, so that this quantum computer
will still be allowed to perform computational miracles, but factoring a number with
millions of digits into its prime factors will not be possible – unless fundamentally improved
classical algorithms turn out to exist. If engineers ever succeed in making such quantum
computers, it seems to me that the CAT is falsified; no classical theory can explain
On the other hand, if we seriously get into trouble building a sufficiently large quantum computer (despite our best efforts), this would, at least to me, provide evidence that 't Hooft is onto something, since this is a prediction that his Cellular Automaton Interpretation of Quantum Mechanics makes.
The 't Hooft idea is that there are limits on computation imposed by the workings of the universe: if a classical computer at the Planck scale, running at Planckian speed (help, help, I am using terms I can't understand!!!) and covering the whole of the universe, couldn't do it between the beginning of time and heat death, then nor can an arbitrary QC. Shor's algorithm factors in (log n)^3, but maybe we'll find that there are physical limits on the scale of QC that hold it below the hundreds of millions of qubits at which such a Planckian computation could be synthesised, and/or that such a computer can't run at high clock speeds, and because of this BQP algorithms can't be run on problems that are "not allowed", in the sense that solving them would somehow out-compute the universe.
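For scale, here is a back-of-the-envelope sketch of the gap the 't Hooft quote is pointing at, using the (log n)^3 figure quoted above for Shor and the standard heuristic complexity of the general number field sieve, the best known classical factoring algorithm; the numbers are leading terms only, not runtime estimates:

    import math

    # Illustrative operation counts for factoring an n-digit number N.
    # Shor: roughly (ln N)^3 quantum gates (the figure quoted above).
    # GNFS (best known classical algorithm, heuristic complexity):
    #   exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)) operations.
    # Leading terms only; these are not real runtime estimates.

    def log10_shor_ops(digits):
        ln_n = digits * math.log(10)
        return 3 * math.log10(ln_n)

    def log10_gnfs_ops(digits):
        ln_n = digits * math.log(10)
        exponent = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
        return exponent / math.log(10)

    for digits in (100, 10_000, 1_000_000):
        print(f"{digits:>9} digits: Shor ~1e{log10_shor_ops(digits):.0f} ops, "
              f"GNFS ~1e{log10_gnfs_ops(digits):.0f} ops")
    # A million-digit number: Shor ~1e19 ops vs GNFS ~1e660 ops.

Polynomial for the quantum machine, wildly super-polynomial for any known classical algorithm, which is exactly the kind of "computational miracle" the quote above is talking about.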
I don't think anyone has any reason to believe this is true in an engineering sense, but there is every reason to believe that large QC will be very, very difficult in an engineering sense. I think the people building one do expect to run into problems of this sort eventually, but at the moment they are delighted to find that what they can do at the lower end will create devices that are transformational for reasoning about some components of the physical universe - for example, simulating the interactions of very large molecules - maybe mapping seconds of a virus interacting with a cell membrane using months of QC time?
Edit: ah, yes, and also the slightly more generic mandatory XKCD: https://xkcd.com/678/
Many people have treated this as a hoax, or a joke. But the work is being done by Lockheed-Martin's Skunk Works, which has quite a track record. (The U-2, the SR-71, and stealth aircraft came out of there.) Lockheed-Martin is doing this with their own money, and last year, when asked about it, the CEO said it was doing well enough that they were putting more money in.
According to the researcher translation table, fusion is still a long ways off. https://xkcd.com/678/
Many of the scientists of that post-war era were hounded mercilessly for decades afterwards, throughout the Cold War.
If one were to try and obfuscate a scientific field, and introduce enough confusion such that whirlpools of disagreement ensnare the casual hobbyist, how would that take shape?
Conspiracy theory: in the early days after the H-bomb, there was research into finding some way to make an H-bomb without needing an atomic bomb just to get the energy to start the thermonuclear reaction. This would have resulted in a "clean" H-bomb, and also the ability to make smaller ones. This research was unsuccessful.
Maybe it was really a success. Too much of a success. What if it became too easy to do? Perhaps progress in fusion is deliberately held up because there's a way to make an H-bomb that doesn't require an A-bomb.
Why, isn't that un-testable as well, even more so than string theory, pretty much by definition?
Also, sigh. If you dig deep into the dark corners of the internets then I am sure that you can find fake anything. Focus on the beauty and the truth, people. For example:
 LIGO continues to work beautifully, as evidenced by its second detection of gravitational waves back in June: http://news.mit.edu/2016/second-time-ligo-detects-gravitatio...
 A fun but highly speculative 'bump' in the LHC data, which will probably go away but is fun to think about: https://profmattstrassler.com/2016/10/21/hiding-from-a-night...
 New precision results from a nice little experiment done 'on the side' at CERN: https://press.cern/press-releases/2017/01/cern-experiment-re...
 Or just a very accessible overview of particle physics in 2016: http://www.symmetrymagazine.org/article/2016-year-in-particl...
Also read Woit's sensible reply to that comment and MT's re-reply. I agree Woit is polarizing, but maybe not unnecessarily so.
Because the problems he points out are not hidden in the dark corners of the internets but are all over the mainstream media - and arxiv too - and backed with millions of dollars.
- There have been dozens of 'bumps' in the data that went away over the past several years. The Standard Model still stands and there's no BSM physics that we've found. I'd bet the bump goes away just like the recent 750 GeV bump.
- The magnetic moment of the proton and antiproton being the same is also not BSM physics at all.
The problem is that fundamental elementary physics continues to boringly grind along and validate the standard model and things that we already knew had to be found (Higgs, LIGO).
The result in the popular press, though, is that multiverse mania has taken over, which is a non-solution to the problem. You'll actually find arguments that we should stop thinking about alternatives to string theory and just assume it works because it's beautiful, even though it can never be measured - which is not a scientific argument.
500 years from now, if we haven't made any progress, we might wind up going "meh, probably string theory, but we'll never know", but it's too soon to throw in the towel yet.
In general, Symmetry is an excellent site; they describe themselves as "An online magazine about particle physics" and are funded by the US Department of Energy, via Fermilab and SLAC. Recommend having a closer look http://www.symmetrymagazine.org/
Don't just look away from a very real concern.
It's very confusing and I can't help but assume he doesn't mind misleading people.
I agree that reading this blog by itself is jumping into the middle of the conversation, and terms are not explained. The multi-verse has been discussed there for years, so occasional readers may assume it is the many-worlds stuff, but it is a very different beast.
> MWI is one of many multiverse hypotheses in physics and philosophy.
My understanding is that at the Big Bang there are a HUGE number of universes (the Multiverse) that are causally disconnected, with different coupling constants for the forces and very different physics. Within our universe we have our physics, which includes QM, and one way of viewing QM is the many-worlds view.
From reading the articles the author cited as examples of "Fake Physics" I have the impression that their authors simply took the analogies they used too far, thereby saying things that are simply not true. And while this surely is problematic I wouldn't call those articles "fake", as there seems to be no malicious intent behind the misleading analogies.
They get a lot more credit and traction here than they deserve. Nautilus is usually full of bull but pretending to be intellectual.
That's important to know as a critical thinker.
http://nautil.us/about (heading "Nautilus has received financial support from")
Seems a little unfair to only mention the philosophical outlier.
I see the list of "advisors" on that page has some pretty respectable names, too. That may not be worth much - who knows how much influence they have over the content - but having some venerable academics willing to associate their names with your output like that is surely at least a small signal in favour of legitimacy.
1. "publish or perish"
2. preserve credibility capital (a scientist's karma points)
I would nevertheless be very cautious when someone claims he can tell the difference between true and fake physics. There is a risk of error, and a higher risk of manipulation, of a bias introduced by leveraging the credibility force (2). His book, in which he spits in the soup, and his promotional web site, to which we have been directed, look very much like what he condemns.
I'm curious to see if that author is able to recognize real physics.
All of this reminds me a bit of the novel Three Body Problem where certain branches of academia (including hard sciences) are discredited as "too reactionary" or "reactionary philosophy."
The motives in some cases are clear: just as tobacco companies benefitted from the status quo, so now do oil companies.
However in physics, the motives may be more nuanced (and more of a territory capture that I don't understand).
Don't doubt for a second that legitimate climate researchers will be accused of doing fake research ... most likely within the year. I'm not sure what the physics community can do besides joining together against this and continually pushing back.
Let's stay transparent. Science is well-defined and important.
What that really tells me is that this is an area of study the Establishment does not want you to follow, so it will label it that way to apply social pressure, to mitigate the effects & discourage people from following it.
The thing is, nature has a way of not caring what the Establishment thinks. It does care on a surface level, but it also conspires to cause the collapse of the socio-information paradigms that the Establishment creates.
Given we have a soup of information & no grounding central authority to give us "objective reality", we ought to utilize other techniques. I don't happen to know what these techniques are, but I suspect that it has to do with network models, cognition models, perspectives (physical & information), attention schemas, faith, complexity, patterns, language, etc.
You're right as far as you go, but actually nature doesn't care what anyone thinks. One thing to question is how the establishment got to be established. If it was primarily through physical strength, heredity, or persuasion (as is more true for the bulk of history), then they are no more likely to be right than a plebe (except that p(persuasive | true) > p(persuasive)). But it's more true than in most of history that the odds of being an elite are much higher if you can consistently make correct predictions about the world.
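(Spelled out, since the whole caveat hangs on that parenthetical, the claim is just a Bayes flip; a quick sketch, assuming both probabilities are positive:)

    % The step implicit in the parenthetical above: if being right makes you
    % more persuasive, then selecting on persuasion weakly selects for being right.
    \[
      P(\mathrm{persuasive} \mid \mathrm{true}) > P(\mathrm{persuasive})
      \iff \frac{P(\mathrm{persuasive} \mid \mathrm{true})\, P(\mathrm{true})}{P(\mathrm{persuasive})} > P(\mathrm{true})
      \iff P(\mathrm{true} \mid \mathrm{persuasive}) > P(\mathrm{true}).
    \]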
It's true that we don't have a central authority to give us an objective reality, but we don't need one either. Reality is there if only you choose to observe it closely. I think the best technique for dealing with the soup of information is to find bona fide experts with a track record of providing correct explanations and giving what they say more weight.
It's up to the experts to make a persuasive case & to engage the audience to think for themselves. There's many a con artist who labels himself an "expert". That con artist may even have credentials to make his "authority" even more convincing.
Labels justify all sorts of things, worst of all telling one to stop thinking beyond the abstract representation of the label. In the domain that the label casts its perspective shadow upon, the con artist is free to take advantage of the audience's ignorance.
Let's suppose that many people find this model useful, because it allows them to dismiss such posts without having to critically engage them. They downvote/flag all such messages, and they feel good for having done so because they believe they are resisting an attempt to manipulate their opinions.
Do you have any problem with this?
I think a model can be bad at the former but good at the latter.
Useful to what systems & what aims? What's important to an Entity to engage what systems?