Fake Physics (columbia.edu)
127 points by maverick_iceman on Jan 21, 2017 | 85 comments

This is a frustrating era for physicists. In the mid 20th century they were like gods, giving the world the atomic bomb and the semiconductor. Today many branches of physics are stuck. The cosmologists can't find "dark matter", and aren't even sure it's necessary. The string theorists have pretty but un-testable theories. The fusion guys can't make fusion power work. Quantum mechanics and general relativity still haven't been reconciled. And nobody has a useful handle on gravity. These problems have been outstanding for decades now, with some churn, but little progress.

(The bright spot is near absolute zero, in low, low energy physics. Lots of interesting experimental results from down there in recent years.)

The multiverse approach (the universe forks at every quantum event, and all those universes continue to exist, forking further) is the default you end up at given what we know now. Hawking once said it was "trivially true". But it's unsatisfying. It means most of the fundamental constants are arbitrary, for one thing. Then there's the anthropic principle (our universe works well enough to have life because we happen to be in a fork where the constants have values which make chemistry work). That's unsatisfying, too. This un-testable stuff is more philosophy than physics.

This, from a historical viewpoint, is a failure. From Lord Kelvin to Fred Hoyle, physics was about measurement and prediction. Theories which can't be grounded in experiment are of little use. Today's physicists are losers by historical standards. This has career effects. Los Alamos is a lot less prestigious than it once was, and there's been substantial downsizing.[1]

[1] http://www.nytimes.com/2012/03/04/us/los-alamos-braces-for-d...

I think this assessment is magnificently incorrect. Over the past 30 years there have been very important experiments, many of which have reached the mainstream, from the Higgs discovery to the LIGO detections. And what about things closer to the HN crowd, like the communications infrastructure behind the Internet?

The discovery of the transistor didn't bring an iPhone overnight. Same with many other scientific discoveries that had to go through a long engineering phase. I see a similar trend happening now with quantum computation, which has seen great leaps over the past 20 years and continues to accelerate in both academia and industry.

It's my opinion that, at least in America, we live in a society that is less intellectually disciplined and less favorable to intellectuals. Anti-intellectualism, even in nominally educated portions of society, seems to be on the rise. This unfortunately has the effect of making science look stagnant to the larger population, and unless science explicitly produces gadgets of instant gratification, it'll remain somewhat obscure to folks uninterested in advancement for the sake of advancement.

As for the multiverse stuff, I'm not sure why you think it has to do with the arbitrary selection of constants. The multiverse interpretation of quantum mechanics doesn't say "anything happens, and it all happens in separate universes", but rather (in simple terms) that non-deterministic events like measurements bifurcate the universe into separate branches. But QM still posits that each of these non-deterministic outcomes occurs with a definite probability, and every branch obeys the same laws of physics. In other words, it has more of a "butterfly effect" on us, keeping physical constants truly constant, not a "splitting of universal laws" effect.
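A toy sketch of that point, in plain Python and entirely illustrative (the amplitudes are made up): branches differ only in the recorded outcome, while the probabilities and the "constants" are fixed by the same underlying state.

```python
import random
from collections import Counter

# Illustrative only: a qubit in the state a|0> + b|1>. On measurement,
# many-worlds says both outcomes occur, each in its own branch; the laws
# and constants are identical in every branch -- only the recorded result
# differs. The branch weights follow the Born rule: P(0) = a^2, P(1) = b^2
# (real amplitudes assumed here for simplicity).
a, b = 0.6, 0.8                # amplitudes, a^2 + b^2 = 1
random.seed(0)
N = 100_000
outcomes = Counter(0 if random.random() < a ** 2 else 1 for _ in range(N))
print(outcomes[0] / N)         # close to 0.36 = a^2
```

Sampling outcomes here stands in for "which branch you find yourself in"; nothing about the physics changes between samples.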

Higgs and LIGO are big nothingburgers compared to anything that happened in physics in any given year between 1910 and 1950 or so. Meanwhile "physicists" spend their careers doing piffle like writing programs for quantum computers that arguably will never exist in the corporeal world.

Yeah, I coulda been one of those guys: I even had a pretty good idea for an imaginary QC platform and a position in a major research institution. I couldn't live with myself: I got a job in the valley.

Physics and science in general looks stagnant to the larger population because it is actually stagnating. Stating otherwise is mendacity of the highest order.

I disagree with the assessment of LIGO being nothing: LIGO is the only direct physical confirmation of the existence of gravitational waves. It did a nice job of turning something we predicted from theory into an observation that is hard to deny (it's also a tour de force of engineering and management).

Why is it stagnating though? Lack of talent? Lack of funding? Is it that all the lowest hanging fruit is gone and the problems are becoming intractable to solve?

>Is it that all the lowest hanging fruit is gone and the problems are becoming intractable to solve?

Considering that the only way to even observe the Higgs is to build a multi-billion-dollar collider, I'd bet on that one.

Social problems, IMO. The physics career track is designed to produce this outcome. Peter Woit (the OP) wrote a decent book on the subject.

Look at how computing has changed in the past 30 years. From huge ugly boxes with CRT monitors to powerful touch computers in our pockets. How is this not physics? Sure, the principles and the foundations were laid out in the decades prior to that, but moving in smaller but important practical increments is also science.

That's a win for engineers I think. Isn't the stagnation argument saying that physicists aren't creating new foundations for the engineers of 2075?

> "Physics and science in general looks stagnant (...) because it is actually stagnant.

Mate, what are you on about? We manipulate life almost at will, we have proven theorems that were unsolved for centuries, etc. Your limited imagination and apparent inability to see things in perspective don't set the rule.

> manipulate life ... at will

Cure for cancer? Cure for any genetic disease? Antibiotic resistant bacteria?

Do you genuinely believe this is an insightful response? Without knowing your background, this makes it seem rather likely you are arguing from the 'larger population' point of view.

Given the forum, let's assume you're a little more familiar with tech: Perfect encryption? NLP that understands meaning? Do you see now how it's a fallacy to take cutting edge scientific breakthroughs and demand instantaneous applications, and if these aren't met, discard the breakthroughs as having little meaning?

His comment is spot on. We cannot manipulate life almost at will. We have some amazing tools, but the actual outcomes of manipulation are mostly a matter of selecting for rare, random positive outcomes, rather than intent and engineering leading to rational results.

(background: PhD in Biophysics, I've cloned genes and run huge simulations of protein folding, as well as worked in genomics and pharmaceutical chemistry).

I think you misunderstand: what you are saying is my point. I argue that, contrary to the parent comment's claim, science has not stagnated but has made amazing progress, and that judging scientific progress by looking only at new applications may not lead to a valid conclusion.

Why do others misunderstand? Maybe you say one thing and mean something else? It is clear you're arguing for the sake of arguing, and this community is worse off for it.

OK. I would say "discovery is stochastic", such that we do make amazing progress in a theoretical sense, although that progress doesn't seem to lead to tangible improvements.

Certainly our understanding of the world is improved, no? If you mean 'tangible' in the sense of 'solving a real-world problem', i.e. an immediate application, then I feel this is getting a little circular. I simply reject the claim one of the parent comments made, whereby science in general is not progressing. (edit: question mark)


May I just point out that 'science is stagnant' is not only hyperbole but wrong, as disproved by my examples highlighting amazing achievements in biology and mathematics. Unless you would like to propose that neither is an amazing feat.

> The multiverse approach (the universe forks at every quantum event, and all those universes continue to exist, forking further) is the default you end up at given what we know now.

No, this is absolutely wrong. There are entire families of solutions to the measurement problem and we have no firm idea which one is correct. There are even multiple mechanisms to get behavior that looks like this but is based on very different, often less weird, principles (see e.g. Einselection).

> It means most of the fundamental constants are arbitrary, for one thing.

No, this is wrong and unrelated. You're confusing two ideas. One is that the universe "forks", as you say, during eigenstate collapse; the other is that there is a space of universes with different physical constants. These are unrelated. Eigenstate collapse doesn't change physical constants.

Well, do what all the 'cool kids' in physics are doing: comp-sci, neuroscience, and finance. Seriously though, the 'heroic' physicists out there are moving to other fields and doing very well. They are smart people, and they can see that most of physics is stuck in a rut until a massive increase in funding comes along or some kid gets really lucky with the math. Neuro and the bio fields are getting shaken up by the incoming physicists; the nanoscope revolution has been going for about a decade. Finance has had quants for a while, but a pipeline to Wall Street has started to emerge in force recently. Comp-sci has always been inundated with physics undergrads and PhDs, and we see the result in Silicon Valley's wealth.

I think that the fusion guys think that they have made fusion work and are going to deliver it commercially in ~15 years (in ~5 years this claim will be either obvious or dead).

I think that the quantum computer guys think that they are about to deliver as well.

Having worked in quantum computing, I think we'll get fusion first. Essentially the problem is that all of our scalable quantum architectures are really just mathematical models; there's scant research on actually building anything larger than about ten qubits. When you consider the fact that error correction means you probably need about 500k physical qubits to do anything interesting (at least!), then, as I like to joke, even if we followed Moore's law it would take 32 years or so (and I doubt QC will scale that quickly). So: not anytime soon.
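The "32 years" quip is just doubling arithmetic. A quick sketch, using the comment's own assumed figures (~10 physical qubits today, ~500k needed, one doubling every two years):

```python
import math

# Figures taken from the comment above, purely back-of-envelope:
start, target = 10, 500_000
doublings = math.ceil(math.log2(target / start))  # 16 doublings needed
years = 2 * doublings                             # Moore's-law pace
print(years)                                      # 32
```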

But that doesn't mean it's "frustrated". There's a lot of progress being made. There's just a long way to go. Quantum algorithms are hard.

Fusion guys have always claimed they're underfunded with respect to experiments, and in fairness fusion does get kinda hot, so I could see it being expensive.


I think the following viewpoint is interesting. If we look back at the development of the electronic computer, it's hard to say there was a single point at which it "delivered". Was it the Atanasoff–Berry Computer, the Colossus, or the ENIAC (1940s)? Each of these misses something we commonly expect of a modern, generally useful computer (programmability, Turing completeness, or a stored-program design, respectively). Do we have to wait all the way until, say, the Apple II (1977) to consider it delivered?

I don't have an answer myself, but it seems to most people, "delivery" doesn't mean "when it's useful", but rather "when it's usable, useful, and generally available for use."

If we use that as a benchmark, quantum computing has definitely hit "usable". It is teetering on "useful". And the beginnings of "generally available for use" have been explored by a few industrial players [0, 1].

[0] Rigetti Computing's Forest: http://forest.rigetti.com/

[1] IBM Research's Quantum Experience: http://www.research.ibm.com/quantum/

The first successful commercial electronic computer was the UNIVAC I. About 18 were delivered to customer sites and operated for years. Earlier machines (ENIAC, SSEC, LEO, Ferranti Mark I) were one-off prototypes. (Two in the case of the Ferranti Mark I.)

I'm not sure about either of these.

The problem with fusion is commercial viability. Building something for billions of dollars that can barely put out more energy than you put in isn't commercially viable. At any rate, fusion seems more like a very tough engineering problem; the physics has been there for a long while.

With quantum computing there are also no real signs that a quantum computer will beat a "classical" computer at anything any time soon. It always was and still is very difficult to get quantum systems to scale, you need a large enough system to make a difference, and it may be impossible to build one. Again, a lot of this is more of an engineering problem, though admittedly the line between physics and engineering can be blurry, since it's not always clear whether something can't be done because you've reached a fundamental limitation or because you haven't found a clever enough design.

It's possible someone will eventually solve the problems in these areas (assuming there aren't fundamental reasons we're just not aware of yet why it's impossible) but IMO the odds are low and you need some sort of breakthrough.

> there are also no real signs that a quantum computer will beat a "classical" computer at anything any time soon

John Martinis' group is planning to do that within the next two years [1][2]. Ten years might be realistic for solving useful problems faster and cheaper, but the esoteric speedups are right around the corner.

[1] https://arxiv.org/abs/1608.00263 [2] https://www.youtube.com/watch?v=kgMWommXxU8

Just watched the talk. Definitely an interesting direction.

One question I'd have is whether quantum simulation is indeed the fastest way to solve the quantum speckle problem on a classical computer. The other related thing to think about is whether the output of a quantum speckle computer can only be explained by the large state space and quantum mechanics or whether there are alternative explanations.

So, definitely an interesting direction, but even if they're successful we're still not quite there. When we have a quantum computer doing useful work with an exponential speedup over classical computers, that would be something, and even then we only have a very limited set of algorithms at the moment.

> whether quantum simulation is indeed the fastest way to solve the quantum speckle problem on a classical computer.

They give arguments for this in the paper. Intuitively, if there's some fast way to classically sample a random quantum circuit then that means most quantum circuits should be easy to simulate classically. But that doesn't seem to be the case: we don't have algorithms that go fast on most circuits but go slow in highly specialized cases. It's the other way around.
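For intuition, here's a minimal pure-Python statevector sketch (the circuit, gate choices, and sizes are my own illustrative assumptions, not the paper's): the output probabilities of a random circuit form an uneven "speckle" pattern rather than a flat distribution, and reproducing that pattern classically means tracking all 2^n amplitudes.

```python
import cmath
import math
import random

def apply_h(state, q):
    """Hadamard on qubit q of a statevector (list of 2**n amplitudes)."""
    s = 1 / math.sqrt(2)
    for i in range(len(state)):
        if not (i >> q) & 1:
            j = i | (1 << q)
            a, b = state[i], state[j]
            state[i], state[j] = s * (a + b), s * (a - b)

def apply_phase(state, q, theta):
    """Phase rotation exp(i*theta) on the |1> component of qubit q."""
    ph = cmath.exp(1j * theta)
    for i in range(len(state)):
        if (i >> q) & 1:
            state[i] *= ph

def apply_cz(state, q1, q2):
    """Controlled-Z between qubits q1 and q2."""
    for i in range(len(state)):
        if (i >> q1) & 1 and (i >> q2) & 1:
            state[i] *= -1

random.seed(1)
n = 5
state = [0j] * (1 << n)
state[0] = 1 + 0j                     # start in |00000>

for _ in range(8):                    # a few layers of random gates
    for q in range(n):
        apply_h(state, q)
        apply_phase(state, q, random.uniform(0, 2 * math.pi))
    apply_cz(state, random.randrange(n), random.randrange(n))

probs = [abs(a) ** 2 for a in state]
print(max(probs) * len(probs))        # > 1: speckled, not uniform
```

The exponential blowup is visible in the data structure itself: `state` has 2^n entries, and nothing known lets a classical machine shortcut that for generic random circuits.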

> whether the output of a quantum speckle computer can only be explained by the large state space and quantum mechanics or whether there are alternative explanations.

Given that all the engineering was based on quantum mechanics, it would be very surprising if the machine worked based on something else. It's quite difficult to make things accidentally work like that. It'd be like trying to build a car, confirming it can take you from place to place, then realizing that actually you made a hang-glider by accident.

> With quantum computing there are also no real signs that a quantum computer will beat a "classical" computer at anything any time soon. It always was and still is very difficult to get quantum systems to scale, you need a large enough system to make a difference, and it may be impossible to build one. Again, a lot of this is more of an engineering problem, though admittedly the line between physics and engineering can be blurry, since it's not always clear whether something can't be done because you've reached a fundamental limitation or because you haven't found a clever enough design.

The mention of quantum computers is very interesting in the following sense: if one were able to build a sufficiently large quantum computer, this would provide strong evidence that the (at the moment rather hypothetical) theory that quantum mechanics is an emergent phenomenon of a deterministic process (a cellular automaton),

> https://arxiv.org/abs/1405.1548

which the Nobel laureate Gerard 't Hooft has worked on for the last several years, is probably wrong. To quote pp. 79-80:

"Such scaled classical computers can of course not be built, so that this quantum computer will still be allowed to perform computational miracles, but factoring a number with millions of digits into its prime factors will not be possible – unless fundamentally improved classical algorithms turn out to exist. If engineers ever succeed in making such quantum computers, it seems to me that the CAT is falsified; no classical theory can explain quantum mechanics."

On the other hand, if we seriously get into trouble building a sufficiently large quantum computer (despite our best efforts), this would, at least to me, provide evidence that 't Hooft is onto something, since this is a prediction that his Cellular Automaton Interpretation of Quantum Mechanics makes.

I think the largest simulation of a quantum computer on a supercomputer is about 40 qubits. Therefore quantum supremacy (of the weak sort) is 41 stable qubits. There are a number of groups out there building (working on funded projects to complete) machines with hundreds of stable qubits now, so my argument is that quantum supremacy of the weak sort is a matter of engineering. I think that quantum computers that factor numbers with hundreds of digits (like 2048-bit ones) require thousands of qubits, say a couple of orders of magnitude beyond the machines currently in train to be built.
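The ~40-qubit simulation ceiling follows from simple memory arithmetic; a sketch, assuming 16-byte complex128 amplitudes:

```python
# A full statevector of n qubits holds 2**n complex amplitudes. At 16 bytes
# per amplitude (complex128), memory doubles with every added qubit, which
# is roughly why ~40-45 qubits is the ceiling for brute-force simulation on
# a classical supercomputer.
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    print(n, statevector_bytes(n) / 2 ** 40, "TiB")
# 40 qubits already needs 16 TiB; every extra qubit doubles it.
```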

't Hooft's point is that there are limits on computation imposed by the workings of the universe: if a classical computer at the Planck scale, running at Planckian speed (help, help, I am using terms I can't understand!!!) and covering the whole of the universe, couldn't do a computation from the beginning of time to heat death, then nor can an arbitrary QC. Shor's algorithm factors in (log N)^3; maybe we'll find that there are physical limits on the scale of QC which hold it below the hundreds of millions of qubits where such a Planckian computer could be synthesised, and/or it may be that such a computer can't run at high clock speeds, and because of this BQP algorithms can't be run on problems that are 'not allowed' in the sense that they somehow solve the universe.
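To put rough numbers on the (log N)^3 claim, here's a sketch comparing Shor's gate-count scaling with the best known classical method (GNFS); the constants are illustrative assumptions, not careful estimates.

```python
import math

n = 2048                                  # key size in bits
shor_ops = n ** 3                         # Shor scales like (log N)^3 = n^3

# GNFS is sub-exponential: roughly exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)),
# with c ~ 1.9; purely an order-of-magnitude illustration.
ln_N = n * math.log(2)                    # ln N for an n-bit number
gnfs_ops = math.exp(1.9 * ln_N ** (1 / 3) * math.log(ln_N) ** (2 / 3))

print(shor_ops)                           # ~8.6e9: feasible, given the qubits
print(gnfs_ops > 1e30)                    # classically out of reach
```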

I don't think anyone has any reason to believe such fundamental limits exist in an engineering sense, but there is every reason to believe that large QC will be very, very difficult in an engineering sense. I think the people building one do expect to run into problems of this sort eventually, but at the moment they are delighted to find that they can do things at the lower end that will create devices transformational for reasoning about some components of the physical universe: for example, simulating the interactions of very large molecules, maybe mapping seconds of a virus interacting with a cell membrane using months of QC?

One of the things the fusion community is trying to do is scale down the size of the machine required. Basically, the view is that the volume of the facility determines the cost (interestingly, the stats presented seem to support this), so by building compact devices the price of the machine will decline accordingly. This would enable commercial implementations, if it works!

One thing to remember with quantum computation is that "large enough to be superior" is measured in 10s of units of quantum resources. That doesn't seem so far away.

Not sure if you're serious or joking (the parenthetical suggests to me you're serious), but AFAIK it's a well-known saying that fusion power has been in a permanent state of "to be delivered commercially in ~15 years" since the '50s or something like that...

Edit: ah, yes, and also the slightly more generic mandatory XKCD: https://xkcd.com/678/

I think his point was that it's now 15 years away, which is not the 20 years away it was 5 (and 20) years ago. In that sense it is significantly closer. Many of the real challenges may remain monetary as solar becomes cheaper, and funding of fusion has never been particularly high (perhaps $10-15B over 50 years). However, it is ultimately a complementary technology (as are Thorium reactors and others) for scales, places, and times when Solar or Wind aren't appropriate.


The real surprise here might come from Lockheed-Martin's fusion project.[1] They're pursuing an unusual approach to magnetic confinement. It's not a tokamak, and it's not a stellarator.

Many people have treated this as a hoax, or a joke. But the work is being done by Lockheed-Martin's Skunk Works, which has quite a track record. (The U-2, the SR-71, and stealth aircraft came out of there.) Lockheed-Martin is doing this with their own money, and last year, when asked about it, the CEO said it was doing well enough that they were putting more money in.

[1] http://www.lockheedmartin.com/us/products/compact-fusion.htm...

> deliver it commercially in ~15 years

According to the researcher translation table, fusion is still a long ways off. https://xkcd.com/678/

I agree about the lack of substantial progress, but for the record the standard many worlds theories still predict that most "constants" actually are constants; they're the same in every branch.

It's been a great era for condensed matter physics though. And the recent revolutions in nonequilibrium thermodynamics (fluctuation theorems and generalizations of the second law) promise an exciting time ahead!

At some point, I had considered the idea that after the invention of the atomic bomb, the cabal of scientists surrounding its discovery were asked to obfuscate further advancements, and seclude practical findings behind classified restrictions.

Many of the scientists of that post-war era were hounded mercilessly for decades afterward, throughout the Cold War.

If one were to try and obfuscate a scientific field, and introduce enough confusion such that whirlpools of disagreement ensnare the casual hobbyist, how would that take shape?

> At some point, I had considered the idea that after the invention of the atomic bomb, the cabal of scientists surrounding its discovery were asked to obfuscate further advancements, and seclude practical findings behind classified restrictions.

Conspiracy theory: in the early days after the H-bomb, there was research into finding some way to make an H-bomb without needing an atomic bomb just to get the energy to start the thermonuclear reaction. This would have resulted in a "clean" H-bomb, and also the ability to make smaller ones. This research was unsuccessful.

Maybe it was really a success. Too much of a success, even. What if it became too easy to do? Perhaps progress in fusion is deliberately held up because there's a way to make an H-bomb that doesn't require an A-bomb.

You might like Asimov's story "The Dead Past".

Like Majorana

You have reminded me of this book (not actually read it): https://www.worldswithoutend.com/novel.asp?ID=12078

> The multiverse approach

Why, isn't that un-testable as well, even more so than string theory, pretty much by definition?

Perhaps it's testable in the Sherlock Holmes way: eliminate everything else and be left with it as the only answer you can think of. Not a very convincing test, of course, but better than saying "we can't think of any possible way this might work".

Occam's razor comes to mind, too.

Oh jeez, you have no understanding of physics in any regard.

As usual I think that Woit's blog post is unnecessarily polarizing. If you decide to read his post then I would recommend you also read the excellent comment by Marty Tysanner on the same page.

Also, sigh. If you dig deep into the dark corners of the internets then I am sure that you can find fake anything. Focus on the beauty and the truth, people. For example:

[1] LIGO continues to work beautifully, as evidenced by its second detection of gravitational waves back in June: http://news.mit.edu/2016/second-time-ligo-detects-gravitatio...

[2] A fun but highly speculative 'bump' in the LHC data, which will probably go away but is fun to think about: https://profmattstrassler.com/2016/10/21/hiding-from-a-night...

[3] New precision results from a nice little experiment done 'on the side' at CERN: https://press.cern/press-releases/2017/01/cern-experiment-re...

[4] Or just a very accessible overview of particle physics in 2016: http://www.symmetrymagazine.org/article/2016-year-in-particl...

> also read the excellent comment by Marty Tysanner on the same page.

also read Woit's sensible reply to that comment and MT's re-reply. I agree Woit is polarizing but maybe not unnecessarily.

because the problems he points out are not hidden in the dark corners of the internets but are all over the mainstream media (and arXiv too), backed with millions of dollars.

- The detection of gravitational waves has been expected for decades based on the orbital decay of binary neutron star systems

- There have been dozens of 'bumps' in the data that went away over the past several years. The Standard Model still stands and there's no BSM physics that we've found. I'd bet the bump goes away just like the recent 750 GeV bump.

- The magnetic moment of the proton and antiproton being the same is also not BSM physics at all.

The problem is that fundamental elementary physics continues to boringly grind along and validate the standard model and things that we already knew had to be found (Higgs, LIGO).

The result in the popular press, though, is that multiverse mania has taken over, which is a non-solution to the problem. You'll actually find arguments that we should stop thinking about alternatives to string theory and just assume it works because it's beautiful, even though it can never be measured, which is not a scientific argument.

500 years from now, if we haven't made any progress, we might wind up going "meh, probably string theory, but we'll never know", but it's too soon to throw in the towel yet.

> very accessible overview of particle physics in 2016

In general, Symmetry is an excellent site; they describe themselves as "An online magazine about particle physics" and are funded by the US Department of Energy, via Fermilab and SLAC. Recommend having a closer look http://www.symmetrymagazine.org/

We need to make sure we're not ignoring motives we don't understand. That happened in the Republican primary, the general election, and perhaps now, too.

Don't just look away from a very real concern.

You have to dig down to the 2nd page of comments on his previous post about these articles to learn that when he says "multiverse" he isn't referring to the many worlds interpretation of QM:


It's very confusing and I can't help but assume he doesn't mind misleading people.

I don't think there is any intent to be misleading.

I agree that reading this blog by itself is jumping into the middle of the conversation, and terms are not explained. The multiverse has been discussed there for years, so occasional readers may assume it is the many-worlds stuff, but it is a very different beast.

Is it confusing? Is there a specific reason to think that the multiverse is related to the many worlds interpretation? I didn't know people thought they were the same thing.

From https://en.wikipedia.org/wiki/Many-worlds_interpretation

> MWI is one of many multiverse hypotheses in physics and philosophy.

I see your point, but Wikipedia isn't super reliable on the more unsettled parts of theoretical physics. For example, a cursory search gives me https://www.technologyreview.com/s/424073/multiverse-many-wo..., but while I can't judge https://arxiv.org/abs/1105.3796 on my own, it does appear the idea is new, and for something as new, and as vaguely defined as the multiverse (MWI itself is quite old), I probably should ask for more than a Wikipedia summary. I mean, it's fine by me if there's some kind of disagreement within physics, but to assume they are the same thing would require more, no?

I find that a bit misleading. While IAAP, this is not my area.

My understanding is that at the big bang there are a HUGE number of universes (the multiverse) that are causally disconnected, with different coupling constants for the forces and very different physics. Within our universe we have our physics, which includes QM, and one way of viewing QM is the many-worlds view.

"Multiverse" can have several different meanings, causally disconnected regions of spacetime in General Relativity, landscape ideas in String theory or all the worlds of the many worlds interpretation. However, since non of them so far produced actual, that is observable, physics, it is probably not important to distinguish between these meanings in a popular discussion.

The post seems to be a short rant about opinions concerning the idea of a "multiverse" published on various popular-science news websites. Isn't this idea just one of the several major interpretations of quantum mechanics? Any popularized discussion of quantum mechanics (or cosmology) could be labeled "fake physics", of course, but the point of this is not clear to me.

I think the problem is that it's quite challenging to explain something like the "Many Worlds" interpretation of quantum mechanics to a general audience that often doesn't have any prior knowledge of modern Physics or even linear algebra. So, in order to explain it in simple words, people use analogies. And analogies are probably the greatest tool we have in teaching, because they help people to understand a given aspect of a theory by comparing it to something they already know. However, when taken too far they will always break down. Hence, in order to do good popular science, we need to find analogies that fit very well with the theory which we want to explain, and when using them we need to make it very clear which parts of the analogous system can be mapped to the theory we explain, and which ones can't.

From reading the articles the author cited as examples of "Fake Physics" I have the impression that their authors simply took the analogies they used too far, thereby saying things that are simply not true. And while this surely is problematic I wouldn't call those articles "fake", as there seems to be no malicious intent behind the misleading analogies.

Interesting; I didn't know that Nautilus was backed by a religious organization, but in retrospect I am not surprised at all.

They get a lot more credit and traction here than they deserve. Nautilus is usually full of bull but pretending to be intellectual.

I like a lot of Nautilus articles, but like you I didn't know who funded it (i.e. Templeton).

That's important to know as a critical thinker.

For the record, the Templeton Foundation aren't their only source of funding, and the rest appear to be entirely secular:

http://nautil.us/about (heading "Nautilus has received financial support from")

Seems a little unfair to only mention the philosophical outlier.

I see the list of "advisors" on that page has some pretty respectable names, too. That may not be worth much - who knows how much influence they have over the content - but having some venerable academics willing to associate their names with your output like that is surely at least a small signal in favour of legitimacy.

I think a big part of the problem is we look at and model time backward. It isn't the point of the present moving from past to future, which physics codifies as measures of duration, but change turning future to past. Tomorrow becomes yesterday because the earth turns. So the present is a constant state of collapsing probabilities. Duration is just the state of the present, as events form and dissolve.

This makes time an effect of action, more like temperature, color, pressure, etc. Thermodynamic cycles are more fundamental than the linear, narrative effect of time. As high pressure is causation, while low pressure is direction. Time is not causal. Yesterday doesn't cause today. Energy is causal. Sunlight on a spinning planet creates the effect of days. As energy is conserved as the present, the past is consumed by the present, as much as the inertia of this energy directs present events.

Time is asymmetric because action is inertial. The earth turns one direction, not the other. Clocks run at different rates because they are separate actions, not because the fabric of spacetime is curved. A faster clock will simply expend energy quicker. That's why the twin in the faster frame ages quicker.

There are two forces sustaining fake physics:

    1. "publish or perish"
    2. preserving credibility capital (a scientist's karma points)
The former is what generates fake science, and the latter is what lets it prosper.

I would nevertheless be very cautious when someone claims he can tell the difference between true and fake physics. There is a risk of error, and a higher risk of manipulation, by introducing a bias that leverages the credibility force (2). His book, in which he bites the hand that feeds him, and his promotional web site, to which we have been directed, look very much like what he condemns.

I'm curious to see if that author is able to recognize real physics [1].


Isn't the best approach to 'fake physics' simply to publicly criticise the ideas, or, if that's too exhausting, to point to a previous criticism?

I think this is an important data point given the powerful organizations that benefit from discrediting academics (especially in the current political climate) -- some of this is happening in climate research, and if it's also happening in physics that matters.

All of this reminds me a bit of the novel The Three-Body Problem, where certain branches of academia (including the hard sciences) are discredited as "too reactionary" or as "reactionary philosophy."

The motives in some cases are clear: just as tobacco companies benefitted from the status quo, so now do oil companies.

However in physics, the motives may be more nuanced (and more of a territory capture that I don't understand).

Don't doubt for a second that legitimate climate researchers will be accused of doing fake research ... most likely within the year. I'm not sure what the physics community can do besides joining together against this and continually pushing back.

Let's stay transparent. Science is well-defined and important.

Whenever I try to load https://www.edge.org/response-detail/27129 the page crashes and Chrome says the page ran out of memory.

I'm surprised he didn't mention pilot wave theory. Cranks love it because it doesn't violate the notion of local realism and it has infected respected popular-science publications. But any "interpretation" of quantum mechanics that leads to violations of causality should be treated as illegitimate until proven otherwise.

Also, respected scientists like it because it dissolves many pseudoproblems of the orthodox quantum theory, such as the measurement problem, and because it doesn't require rethinking basic scientific philosophy the way Copenhagen did. I do not think it is a perfect replacement for the orthodox quantum theory, but what do you mean by "it leads to violations of causality"?

This sounds like the reaction against "Fake News", "Fake History", etc.

What that really tells me is that this is an area of study the Establishment does not want you to follow, so it labels it to apply social pressure, mitigating its effects & discouraging people from following it.

The thing is, nature has a way of not caring what the Establishment thinks. It does care on a surface level, but it also conspires to cause the collapse of the socio-information paradigms that the Establishment creates.

Given we have a soup of information & no grounding central authority to give us "objective reality", we ought to utilize other techniques. I don't happen to know what these techniques are, but I suspect that it has to do with network models, cognition models, perspectives (physical & information), attention schemas, faith, complexity, patterns, language, etc.

> The thing is, nature has a way of not caring what the Establishment thinks. It does care on a surface level, but it also conspires to cause the collapse of the socio-information paradigms that the Establishment creates.

You're right as far as you go, but actually nature doesn't care what anyone thinks. One thing to question is how the establishment got to be established. If it was primarily through physical strength, heredity, or persuasion (as is more true for the bulk of history), then they are no more likely to be right than a plebe (except that p(persuasive | true) > p(persuasive)). But it's more true than in most of history that the odds of being an elite are much higher if you can consistently make correct predictions about the world.

It's true that we don't have a central authority to give us an objective reality, but we don't need one either. Reality is there if only you choose to observe it closely. I think the best technique for dealing with the soup of information is to find bona fide experts with a track record of providing correct explanations and giving what they say more weight.

> I think the best technique for dealing with the soup of information is to find bona fide experts with a track record of providing correct explanations and giving what they say more weight.

It's up to the experts to make a persuasive case & to engage the audience to think for themselves. There is many a con artist who labels himself an "expert". Such a con artist may even have credentials that make his "authority" all the more convincing.

Labels justify all sorts of things, worst of all telling one to stop thinking beyond the abstract representation of the label. In the domain that the label casts a perspective shadow upon, the con artist is free to take advantage of the audience's ignorance.

Knowledge is seen as dangerous by people holding power over others. It always has been (e.g. the apple thing). Therefore, in an age where knowledge is more readily accessible to a much larger number of people, it behoves the groups in power to somehow thwart its development. A good way of accomplishing this is to use mass media to make the signal to noise ratio as small as possible. Which is itself accomplished nicely by paid, conscious, agents and amplified by the growing hordes of individuals humiliated, rather than motivated, by their lack of knowledge. The true becomes a moment of the false, the real a moment of the fake. I believe that "spaces", real or virtual, rather than techniques will develop for knowledge to flow and grow.

Is there any scientific theory outlandish enough that you are willing to call it nonsense on its face?

A better indicator of quality of a model is to ask, "is this model useful?". At the end of the day, models are representations of another system. Like tools, some are more useful than others.

Ok, so I propose (for the sake of argument) the following model: any message posted on an electronic message board that casts aspersions on "The Establishment" was actually written by an AI running on a computer in Siberia.

Let's suppose that many people find this model useful, because it allows them to dismiss such posts without having to critically engage them. They downvote/flag all such messages, and they feel good for having done so because they believe they are resisting an attempt to manipulate their opinions.

Do you have any problem with this?

"Useful" in that context typically refers to being able to make accurate predictions about the future.

Does it need to make accurate predictions or just inform good decisions?

I think a model can be bad at the former but good at the latter.

It depends ;-)

Useful to what systems & what aims? What's important to an Entity to engage what systems?

AI needs tuning. Doesn't seem to be reading the posts it's replying to. ("useful, because it allows them to dismiss such posts")

It's useful to think that God will reward you for hard work (i.e. the model gives useful, actionable, statistically accurate predictions), but it is not an accurate model for the mechanism by which external changes in your life occur. I vote against "is this model useful?" as an indicator of the quality of a model.
