To expand on that thought a little bit, with a hologram, the projected object is an illusion, and can be made to appear to move at arbitrary speed by e.g. rotating the projection apparatus. On a larger scale, it's possible to create the illusion of FTL movement by e.g. rapidly rotating a laser projector in space and then traveling a long distance from it, so that at a certain distance from the source, it appears that there is a projection from the source which is rotating faster than light.
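The distance beyond which the swept spot appears superluminal is just c divided by the angular rate. A quick sketch (the one-rotation-per-second rate is an assumption, not from the comment):

```python
import math

c = 2.998e8          # speed of light, m/s
omega = 2 * math.pi  # projector rotation rate, rad/s (assumed: one turn per second)

# The apparent sweep speed of the projected spot at distance d is omega * d.
# Beyond the radius below, the *spot* appears to move faster than light,
# even though no object or signal actually does:
d_ftl = c / omega
print(f"{d_ftl:.3e} m")  # ~4.77e7 m, well inside the Earth-Moon distance
```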
If this theory models the universe as a 3D (or more) projection from a 2D surface, why is it not possible to cause objects within our perceived 3D+ universe to appear to move faster than light by causing some sort of change to the 2D surface itself? I assume there is a reason this is not possible within the bounds of this theory, but I have no idea what that reason might be.
(Warning: pseudoscientific bullshit ahead)
Picture a transparent sphere, covered in tiny dots. Imagine that you live on one of these dots and the others are stars. Now, mark 'your' dot and look around on the sphere to find the dot that is farthest away, on the opposite side. If the internal radius of this sphere is 1 unit, then the distance to that dot (traveling on the surface of the sphere) is pi units. Imagine yourself in Edwin Abbott's Flatland, but thanks to relativity you know this flat universe to be finite yet unbounded, such that if you set out in any direction in a straight line you will eventually arrive back at your starting point.
Now, let's designate that point opposite you on the sphere as the Maximally Distant Star. No matter what route you take along the surface of the sphere, this is as far as you can go before you begin coming back. But suppose you were able to transit across the interior of the sphere instead of across the surface. It's still the farthest point from you, but by transiting through the volume instead of along the surface, the distance is only 2 units, or about 64% as far as it appears to the surface-dwellers.
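The surface-versus-chord arithmetic is easy to check. A minimal sketch (unit radius, as in the comment):

```python
import math

def surface_distance(angle_rad, radius=1.0):
    """Great-circle distance along the sphere's surface."""
    return radius * angle_rad

def chord_distance(angle_rad, radius=1.0):
    """Straight-line distance through the sphere's interior."""
    return 2 * radius * math.sin(angle_rad / 2)

# The antipodal point sits at an angular separation of pi radians.
along_surface = surface_distance(math.pi)  # pi units along the surface
through_bulk = chord_distance(math.pi)     # 2 units (the diameter)
print(through_bulk / along_surface)        # ~0.637, i.e. about 64% as far
```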
Of course, this realization is of little help in figuring out how you as a Flatlander can access that theoretical 'volume' so as to shorten your transit time. If you could, you'd seem to disappear at your existing location (most likely by shrinking down to a point, or possibly seeming to turn inside out, or both) and then reappear at your destination, assuming you hadn't been eaten by the great old ones that famous horror author J. Q. Likecraft says inhabit that forbidden space.
This world didn't last very long. Under our concept of time it happened long ago, but we can see evidence of it smeared across the sky in the cosmic microwave background. (A 2D image that should look the same no matter where you stand in our 3D space.)
I apologize for being so vague; I'm trying to develop something more solid related to this idea, but I'm at that in-between stage of not being ready to properly articulate it yet.
Space and time as absolutes appear to be a construct, a consequence of us being highly complex neural networks trained at a specific scale of the Universe - the "human scale" - as beings roughly 1 to 2 meters in size. Absolute time and absolute space appear to be very real on this scale. But as soon as you leave this order of magnitude, and climb either up or down on the scale of things, this turns out to be mere provinciality.
Time and space are first and foremost concepts (ideas, opinions, "facts")
Have you tried asking a mountain whether it ascribes the state "exist" to the concepts of time and space?
It is true that time and space exist for humans thinking about the mountain, however.
Time exists for the mountain. The mountain moves through time and is affected by time. The mountain is incapable of perceiving time. Neither can bacteria perceive dimensions, but they are still controlled by them.
A mountain is a part of spacetime. But it does not "exist" for the mountain.
Certainly the mountain is affected by the environment and can be modelled with spacetime concepts. But the mountain does not perceive anything, and therefore cannot compute isExist(spacetime).
Distance was already an illusion, insofar as it represents a non-unique projection of a spacetime using purely relativistic techniques. :) But the speed of light isn't going to be circumvented by any of these models. If anything, the models will explain this limit better.
I recommend the Hammock Physicist as a random physicist who can explain concepts like this at just the right level of detail for the HN audience. http://www.science20.com/hammock_physicist
Or at least they seem to rant against "leftist" scientists. http://www.science20.com/kevin_m_folta/the_scicomm_challenge...
Like, can we not treat "the people who say things like 'jewish elite' " and "people who are right wing" as being the same group?
I mean, I'm not saying they don't have an overlap
( It is probably true that P(a person is right wing | they say stuff about "the jewish elite") > P(the person is right wing) ),
but I'm concerned about the social incentives that might be created/contributed-to if they are treated as if they are the same.
... It's clearly a collaborative site with hundreds of authors, at least two of whom are right-wing authors, who have presumably got to the top being controversial and attracting attention from outside the normally quiescent community of readers talking about entropic gravity, thus convincing the algorithms their content is worth showcasing.
HERETICS, THE LOT OF THEM! BURN THE DATA CENTER WITH FIRE!!!
I wonder if books like Zelazny's Amber series will seem, in retrospect, more insightful.
You are awake, walking around a basketball gymnasium, 100x100x100 ft. A nice, large room. All other details preserved as well. Surely our little brains can't contain a 100x100x100 ft gymnasium; our little heads would explode trying to fit something so large into them. Yet, when we dream, we are perfectly capable of revisiting this large room in our mind, full spatial properties preserved.
While there are many encoding strategies, looking at how the brain encodes information (or a more simplistic neural network), this gives, in my opinion, a good intuition as to how distance and spatial properties can both differ and relate between two "universes".
You can also observe that our final "view" of the gymnasium is just the surface of the complex underlying thought-structure of the brain. We very much, on a daily basis, only see the surface of what is a very very complex process going on inside the brain.
Perhaps, one may even call that a projection.
Which reminds me of Feynman's theory that there was only one Electron and it was everywhere in the universe all at once over an infinitely long period of time, and could even travel in time if it had to.
Further, would such a projection suggest a possible mechanism for entangled particles to interact at distance (e.g. the particles are separated in 3D-space, but are co-located in 2D-space)?
The universe is currently not in the holographic phase. That is thought-provoking, though. Let's assume that this is plausible. During the holographic phase, the speed limit wouldn't apply. There are a few interesting theories that require FTL during the big bang.
What I find interesting is that the universe went through two very distinct phases and we have two distinct physical theories.
If you look at that photograph in normal light, it just looks like a lot of black and white lines and blotches smeared out over the surface instead of the original object that was photographed. In a holographic model of the laws of physics, the entire content of the universe is encoded somehow onto a 2D surface. If you could somehow see that surface, it would not be obvious that it encoded anything in particular. Where the analogy falls down is that there is no laser that projects a 3D image from this surface. There's just a set of 2D laws operating on the 2D information. To anyone on the "inside", as it were, the universe still appears to be fully 3D with 3D laws of physics.
"If string theory is a correct theory of Nature, then this implies that on some deep level, the separation between large vs. small distance scales in physics is not a fixed separation but a fluid one, dependent upon the type of probe we use to measure distance, and how we count the states of the probe." 
Brian Greene also explains why in an accessible way in Chapter 10 of The Elegant Universe, which I recommend.
We are small creatures, but our networks -- our brains and societies -- represent the most complex information-encoding geometries we've yet seen in the universe.
And I see the way that our curiosity reaches upward in scale, documenting the far corners and folds of the universe; and deeper, interrogating the tiny subatomic spaces; and forward and back, building models of the future and past of this point in time.
And we capture this knowledge and bring it into our tiny space, information encoded in structures along the skin of this rock floating in space.
And I wonder if that's not holographic in some way: That insatiable drive to compress information from massive scales of space and time into the tiniest of spaces...
But of course, this is just armchair philosophizing ;)
If so, I think you're selling humans a bit short. Our capacity isn't really all that limited. As long as we are able to continuously apply new information to gain better insights into reality, we will improve. Maybe we are woefully behind some other beings out there, but that doesn't mean we'll never get there. Our capacity for exceeding our physical limitations (e.g., we can detect neutrinos despite having no evolved senses for them) and for creating new physical realities (the coldest known temperature is not somewhere else in the universe, it's in a physicist's lab) shows that we are still on a path toward understanding so much more than we do today.
I guess I'm not sure what "obsessive pursuit of information" means in this case. Is it continuing to seek better explanations even when current ones serve a purpose adequately (i.e., the principle of fallibility)? Is it that once new explanations are available, people seek to apply those explanations to other domains and create further information?
I'd argue that both of those examples are positive practices that have resulted in better quality information and expanded valuable applications of that information.
I cannot see the progress that humans have made since the scientific revolution and write those improvements off as non-objectively positive things. Earth is not naturally hospitable to our form of life, and our ancestors suffered through extremely short, brutal, and unpleasant lives due to that. It is only through the pursuit of information (obsessive pursuit even, in the sense that we needed a large amount of information that is both reliable and expandable to apply meaningfully) that most humans live lives where we don't die from things like starvation, exposure to the elements, treatable diseases and so forth. New problems have certainly been introduced by the application of gained knowledge, but those problems will never be solved by not pursuing more (and better) answers.
Less information is never better than more information, and societies which advocate most liberally for the pursuit of information have reliably produced better conditions for their people than those which do not.
I think it's just evolution, nothing more, but that is not to say that evolution as a very simple principle has not made astounding 'discoveries'. We evolve, elephants evolve, ants evolve, and in that bloody and hungry process, beautiful things of incredible complexity emerge all over the globe.
I suspect that many of the replies to this comment who cynically cite variations of "the principle of mediocrity" would also do well to read it.
He makes the point that the single coldest place in the known universe is not anywhere in the depths of space (which gets down to about 3 kelvin), it's in a lab designed by humans and used for quantum mechanics research (200 nanokelvin).
Our capacity for information gathering and creativity has allowed us to create physical realities that cannot otherwise exist outside the influence of intelligent beings. Incredible to think about.
Hey, this is imaginative, even if it's armchair philosophy. Seems like you are thinking that the holography principle is self-similar at many scales.
> there is substantial evidence supporting a holographic explanation of the universe—in fact, as much as there is for the traditional explanation of these irregularities using the theory of cosmic inflation.
This is a bit misleading, especially the phrase "substantial evidence". I bet that the authors of the paper would not have used this phrasing. From the paper:
> We emphasise that the application of holography to cosmology is conjectural, the theoretical validity of such dualities is still open and different authors approach the topic in different ways.
Essentially, their paper shows that a holographic model cannot be ruled out simply by comparing the predictions it makes for the CMB to observation. It also gives some intuition for why a holographic model might make sense - at sufficiently early times in the Universe quantum and gravitational effects begin to coincide, and in other contexts people have modeled quantum gravity using "a quantum field theory with no gravity in one dimension less". The paper finds, however, that there is no empirical case to be made for discarding the standard model of inflation:
> We see that the difference between evidence for [the standard model] and HC predictions is insignificant, with marginal preference for HC, depending on the choice of priors.
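The "depending on the choice of priors" caveat shows up even in toy Bayesian comparisons: a more flexible model pays an Occam penalty in its evidence. A minimal sketch with coin flips, entirely unrelated to the actual CMB analysis:

```python
from math import comb

def evidence_fixed(heads, flips, p):
    """Marginal likelihood of a coin model with a fixed bias p."""
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

def evidence_uniform(heads, flips, grid=1000):
    """Marginal likelihood with a uniform prior over the bias,
    approximated on a grid (the exact value is 1/(flips+1))."""
    total = sum(evidence_fixed(heads, flips, (i + 0.5) / grid) for i in range(grid))
    return total / grid

heads, flips = 6, 10
fair = evidence_fixed(heads, flips, 0.5)    # simple model: coin is fair
flexible = evidence_uniform(heads, flips)   # flexible model: any bias allowed
print(fair / flexible)  # Bayes factor > 1: the data mildly favor the simple model
```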
If 'true', is the holographic universe 'merely' a mathematical tool that helps us solve problems, or is it a description of an objective reality, and the universe is 'really' a 2D surface, and our 3D perception is somehow illusory?
I understand this is partly a philosophy of science question, but would be interested to hear an expert opinion ...
Newton's model of gravity works well enough for approximation at human scales, but is flawed enough that you don't use it for astrophysics or microscopic realms.
If it walks like a duck and quacks like a duck, is it a duck? What if it's really something indistinguishable from a duck, but not actually a duck? What does that even mean? Our concept of "duck" is inextricably connected to our perception of it. A "duck" does not exist in the way we envision it, if we do not envision it.
So backing up to your question again, does this theory (or any other theory) explain everything about the universe, or is it more of a temporary patch that nobody believes is correct, but works better for one area, and worse for others? The latter. All models are the latter. The truth is unknowable.
As the parent says, to the extent that the model predicts our observations, it is consistent with reality. Many other models are also consistent with reality. This model may be simpler in some ways and less simple in others. If it is simpler in all ways and has predictions that are consistent with reality, we will use it in preference to other models.
Having said all that, I have no idea in which areas it is simpler, in which areas it is more complex and in which areas it conflicts with our observations.
It's like having a model that says "quack-like sound == duck". It's wrong. We know it's wrong. It may be useful when looking at birds, but it's 100% wrong when your hard drive produces the noise.
Of course it's a spectrum, but some of the things we hang on to and use are known to be false in a large way but still useful.
But the key is that we can't know. Maybe "God did it" is the truth and the rest is just appearances. Maybe our mathematical constructs do represent reality closely. Even if we build models that are isomorphic to our perception of reality, there is still no way to tell if they are in any way similar to the actual workings of reality. The FSM can certainly alter our brains so that we just believe we see things happening in a consistent way, even though they are not.
Like I said, it is tempting to believe that the simplest solution that we can imagine is close to reality. This is very unlikely to be the case, IMHO. It's fine to believe it is, but that is not science any more. You are entering into religion.
Newton's laws are not known to be "more wrong" than relativity. Relativity matches our observations more closely and is simpler in some respects, but it could be just as wrong. Or more wrong. We have no way of measuring how close to reality our models are. We can only measure how closely they match our observations. As one of the other posters quoted: All models are wrong. Some of them are useful.
Even when you have a model that is completely consistent with observations it does not make it more likely to be correct. Imagine creating some byzantine model with a myriad of exceptions to explain away any discrepancy. Obviously as long as we accept complexity, we can model anything. These models are not usually useful though.
So as much as I understand that you want to ask, "Is this model more likely to be closer to the truth than other models", it isn't a question we can answer in science. We can only answer the questions, "Is this model consistent with our observations?", and "Is this model simpler than another model?"
The article claims that the model is at least as consistent as other models while being simpler in some instances. I think work needs to be done to verify those claims.
We have a current best, or at least competing current bests. These may later be invalidated - that's fine, that's learning new things. It doesn't mean that everything we know now is indistinguishably-wrong as everything we thought we knew in the past. We can predict things now that we couldn't before.
We also have approximations that don't attempt to predict the system, they just produce useful results often enough to be retained - are these indistinguishable from all other attempts at describing reality?
How far do we take it? If someone claims pi is 3, it may actually be true! It'd just invalidate a ridiculous amount of what we think we know. By that standard, the claim is on equal footing with quantum mechanics, which appears to have concrete applications.
Perfect is the enemy of good.
> The article claims that the model is at least as consistent as other models while being simpler in some instances. I think work needs to be done to verify those claims.
Science approaches the underlying reality asymptotically.
> It's not amazing that we amaze ourselves.
This realization is super reassuring given the complexity of the discoveries broadcast on Hacker News. Sometimes it's overwhelming, but now I can think of it as a sort of... scale.
In my mind this is one of the problems that moral philosophy has fallen into... rule utilitarianism works because we are about as good at predicting how others would feel as we are for ourselves (i.e. mediocre, but not terrible). Most modern moral theory is so complex in order to deal with various oddities that we're terrible at using it in even the simplest cases.
It's kind of the same thing with the holographic idea. Information locality is 2D (information is "near" other information as if it were on a two dimensional plane), but our perception is that of a 3D universe.
(Standard disclaimer about loose analogies not reproducing mathematical equations...)
Some would say there is no meaningful difference between those two. E.g. Max Tegmark in 'Our Mathematical Universe'.
I'm not saying he's necessarily right, I'm just pointing out that such a viewpoint exists.
Also, I have absolutely no ability to penetrate what is being described by this article. Holograms work by applying lasers to surfaces and re-rendering the image relative to the original laser's point of view. How does the word 'holographic' apply?
Similarly, this theory is that our perceived reality of 3 spatial dimensions + time is actually fully contained on a 2d surface.
I think that this metaphor is flawed. Holograms project an illusion of depth, but they clearly are not 3 dimensional. The metaphor falls apart if your thought process includes the assumption that eyesight is an unreliable depth detector.
There are other methods for detecting x/y/z coordinates that do not rely on eyesight- consequently, I'm having a hard time reconciling what the metaphor describes.
I feel that Platz's material below is more descriptive and helps communicate the concepts in a far less confusing way.
Leonard Susskind on The World As Hologram (more dense/verbose)
"And what completes our incapability of knowing things, is the fact that they are simple, and that we are composed of two opposite natures, different in kind, soul and body. For it is impossible that our rational part should be other than spiritual; and if any one maintain that we are simply corporeal, this would far more exclude us from the knowledge of things, there being nothing so inconceivable as to say that matter knows itself. It is impossible to imagine how it should know itself.
"So if we are simply material, we can know nothing at all; and if we are composed of mind and matter, we cannot know perfectly things which are simple, whether spiritual or corporeal. ...
"Who would not think, seeing us compose all things of mind and body, but that this mixture would be quite intelligible to us? Yet it is the very thing we least understand. Man is to himself the most wonderful object in nature; for he cannot conceive what the body is, still less what the mind is, and least of all how a body should be united to a mind. This is the consummation of his difficulties, and yet it is his very being. Modus quo corporibus adhærent spiritus comprehendi ab hominibus non potest, et hoc tamen homo est. [The manner in which spirits are united to bodies cannot be understood by men, yet such is man---Augustine]."
It's called the evolution of free will. You might want to check that out if you'd like to know more about that perspective.
So a Player data-structure maps to a Player on screen, in the same way that some configuration of qubits on the surface of the universe corresponds to some thing from the "bulk" that we see.
In the long term string theory may contribute more to mathematics than to physics.
But my understanding is that so far all our evidence shows that spacetime is flat?
i.e. spacetime needs to be an Anti-de Sitter space. https://en.wikipedia.org/wiki/Anti-de_Sitter_space
But I didn't think many actually think we live in an ADS universe?
Also relevant to string theory, https://en.wikipedia.org/wiki/AdS/CFT_correspondence
What really matters is the space of independent parameters (or, if truly independent parameters do not exist, then the truly fundamental ones). That would be really nice to figure out one day.
" [...] our 3-D ‘reality’ (plus time) is contained in a 2-D surface on its boundaries."
What, exactly, do they mean by 'on its boundaries'?
You can think of the rules of physics as requiring 4 parameters to specify things like position, momentum, etc -- (x,y,z,t). Imagine that you have a category of states in that space, and the physical laws are functions that take states to other states in that same category.
Now imagine you have some functor which takes those states and translates them to a new space that requires only (x,y,t) to specify them, and translates the functions so that they now take those states to other states in that 2-spatial-dimension category. And there's a one-to-one correspondence, so you can move back and forth between the two categories. So if you have states A and B in 3D and A' and B' in 2D, you can go from A to A' to B' and back to B, and it's the same as going from A to B.
It turns out that the physical laws in the 3d category have gravity, and the ones in the 2d category might not. But the laws still correspond one-to-one.
The reason this is useful is that figuring out quantum gravity is really hard, and it might be easier to figure out the functor that translates 3d physics into 2d physics that lacks gravity, perform the calculations there, and then translate the result back to 3d physics.
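One way to picture the one-to-one correspondence is as a commuting diagram in code: map a state across, evolve it there, and map it back. Everything below is invented for illustration; the real duality relates field theories, and the actual reduction to one fewer dimension is glossed over here:

```python
# Toy sketch of the commuting diagram: a bijection between two state
# spaces, with the "boundary" dynamics defined so that mapping across,
# evolving, and mapping back equals evolving directly in the "bulk".

def to_boundary(state):
    """The 'functor': bulk state -> boundary encoding (invertible)."""
    x, y, z = state
    return (x + z, y - z, z)

def from_boundary(enc):
    """Inverse map: boundary encoding -> bulk state."""
    a, b, z = enc
    return (a - z, b + z, z)

def step_bulk(state):
    """Some invented 'physics' acting on bulk states."""
    x, y, z = state
    return (x + 1.0, y + 1.0, z)

def step_boundary(enc):
    """The corresponding dynamics on boundary encodings."""
    return to_boundary(step_bulk(from_boundary(enc)))

s = (3.0, 4.0, 5.0)
# A -> A' -> B' -> B is the same as A -> B:
assert from_boundary(step_boundary(to_boundary(s))) == step_bulk(s)
print("diagram commutes")
```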
I'm working on an automobile analogy but it's not easy.
The holographic principle means that if you take a sphere somewhere, the amount of entropy you can push there grows with r^2, and not with r^3 like you'd expect.
Am I correct?
The information stored is proportional to the surface, not volume.
But also, the holographic principle applies to more than black holes. For any volume, the maximum information you can put there is proportional to r^2. That also applies to all the volumes inside the volume you just measured... which means that a volume cannot be completely full (a weird case of apparent non-locality).
A hologram is a better analogue than information simply being stored on the surface because, outside of black holes, it apparently isn't all on the surface. (I guess for the black hole information loss problem, information being on the surface or in a hologram doesn't make much difference.)
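The r^2 scaling can be made concrete with the Bekenstein-Hawking formula S = k_B A c^3 / (4 G hbar), which ties the maximum entropy to horizon area rather than volume. A quick sketch (SI constants, rounded):

```python
import math

# Physical constants in SI units (rounded).
G = 6.674e-11      # gravitational constant
c = 2.998e8        # speed of light
hbar = 1.055e-34   # reduced Planck constant
k_B = 1.381e-23    # Boltzmann constant

def bh_entropy(r):
    """Bekenstein-Hawking entropy (J/K) for a horizon of radius r (meters)."""
    area = 4 * math.pi * r**2
    return k_B * area * c**3 / (4 * G * hbar)

# Doubling the radius quadruples the entropy bound (4x, not the 8x
# you'd expect if information capacity scaled with volume):
print(bh_entropy(2.0) / bh_entropy(1.0))  # 4.0
```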
Shine lasers on a 2-D surface, and interference patterns create a 3-D image in mid-air. You see a 3-D object, but all information is actually stored in 2-D.
This is the same idea.
<< Black holes are extremely dense and compact objects from which light cannot escape. There is an overall consensus that black holes exist and many astronomical objects are identified with black holes. White holes were understood as the exact time reversal of black holes, therefore they should continuously throw away material. It is accepted, however, that a persistent ejection of mass leads to gravitational pressure, the formation of a black hole and thus to the "death of white holes". So far, no astronomical source has been successfully tagged a white hole. The only known white hole is the Big Bang which was instantaneous rather than continuous or long-lasting. We thus suggest that the emergence of a white hole, which we name a 'Small Bang', is spontaneous - all the matter is ejected at a single pulse. Unlike black holes, white holes cannot be continuously observed rather their effect can only be detected around the event itself. Gamma ray bursts are the most energetic explosions in the universe. Long gamma-ray bursts were connected with supernova eruptions. There is a new group of gamma-ray bursts, which are relatively close to Earth, but surprisingly lack any supernova emission. We propose identifying these bursts with white holes. White holes seem like the best explanation of gamma-ray bursts that appear in voids. We also predict the detection of rare gigantic gamma-ray bursts with energies much higher than typically observed. >>
Even if the Big Bang was a continuous long-lasting process of energy/matter appearing, wouldn't the effects of General Relativity make it look like, from the perspective of observers later on within the Universe, that all the energy/matter appeared instantaneously?
> They found that some of the simplest quantum field theories could explain nearly all cosmological observations of the early universe.
This seems to be very, very old news. Like all theories, holography sounds very interesting. This article implies that a prediction made by holography has been observed. This is not the case, apparently. The math may be a lot more elegant, but what new predictions are there, and have we observed them? This is what the article claimed to be about; alas, it wasn't.
The latest Scientific American has an article on why some physicists are calling for rejection of cosmic inflation theory:
Although it seems to be paywalled.
Things like the Em Drive device and this are really making physics interesting again.
A different philosophy can result in a different theory of explanation and a different direction of model building.
What would a "holographic 2D space encoding 3D space" even mean, detached from an illusion perceived by someone observing it?
The universe is fake? Is this along the same lines as "everything is a simulation"?
Whether the holographic information is itself a simulation is another level of discussion entirely!
'appears' means that for us it is three dimensional.
'actually' means that the amount of stuff the universe can hold is limited by 2D surface area, not 3D volume. The information/matter contained in the universe lives on a '2D container surface', while the interactions between things seem to happen in a 3D universe.
In other words, the information/matter you can keep in a volume of space is limited by its surface area and not its volume. If you try to add more, it becomes a black hole.
Today everyone is very chill with using the Fourier transform to convert, say, a pile of voltage-vs-time measurements into frequency/phase and power, and back again, and to manipulate either side to see how the other side wiggles in response. This is all basic EE stuff; it's how your DSP or software-defined radio works (well, not exactly, but close enough).
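That round trip can be sketched with a naive discrete Fourier transform; pure Python and O(n^2), so strictly a toy, but it shows both directions (the 5-cycle test tone is an invented example):

```python
import cmath
import math

def dft(samples):
    """Naive discrete Fourier transform (O(n^2), fine for a demo)."""
    n = len(samples)
    return [sum(samples[k] * cmath.exp(-2j * cmath.pi * m * k / n)
                for k in range(n)) for m in range(n)]

def idft(spectrum):
    """Inverse transform: recovers the original samples."""
    n = len(spectrum)
    return [sum(spectrum[m] * cmath.exp(2j * cmath.pi * m * k / n)
                for m in range(n)).real / n for k in range(n)]

n = 64
samples = [math.sin(2 * math.pi * 5 * k / n) for k in range(n)]  # a 5-cycle tone
spectrum = dft(samples)

# For a real sinusoid, the energy concentrates in bins 5 and n-5:
strongest = sorted(range(n), key=lambda m: abs(spectrum[m]))[-2:]
print(sorted(strongest))  # [5, 59]

# And the inverse transform takes us back to the time domain:
recovered = idft(spectrum)
print(max(abs(a - b) for a, b in zip(samples, recovered)) < 1e-9)  # True
```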
Photo-negative, human-scale holograms use monochromatic light to store all kinds of interference-pattern stuff in splotches of black and white. If we were (are?) smart enough, we could print something out on photo paper, shine a laser on it, and a hologram would appear.
Another analogy which is pretty decent is X-ray crystallography, where about half a century ago a popular use of computers was to take a crystalline material and shine X-rays through it, making a very modern-art-looking photo of holographic interference patterns (sorta), which would be analyzed by a computer to reverse the transform and turn it back into a 3D model of the atoms in the crystal.
So a lot of this is almost cryptographic hash like in that flipping a couple bits on one side almost always results in a huge change on the other side.
So if instead of talking about atoms you take a film-and-laser hologram of two coffee cups, the splotchy hologram film doesn't really have a concept anymore of two coffee cups next to each other; you can't point to a splotch and say that's the cup on the left. You definitely can't slice the film in half, project each half, and expect one cup on each; more likely you'll get two holograms, each with two coffee cups and a signal-to-noise ratio roughly 3 dB worse.
Badly mixing analogies its like running a crypto algo on 2 bytes of data and asking where exactly the two bytes are in the output. Well, um, if the crypto algo is anything better than ROT13 they're both kinda smeared everywhere across the output of the algo.
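The "smeared everywhere" behavior is literal for cryptographic hashes: flip a few input bits and roughly half the output bits change (the avalanche effect). A quick sketch with SHA-256 (the coffee-cup strings are just examples):

```python
import hashlib

def digest_bits(data: bytes) -> str:
    """SHA-256 digest of the input, rendered as a 256-character bit string."""
    return ''.join(f'{byte:08b}' for byte in hashlib.sha256(data).digest())

a = digest_bits(b"two coffee cups")
b = digest_bits(b"twp coffee cups")  # one character (a few input bits) changed
flipped = sum(x != y for x, y in zip(a, b))
print(f"{flipped}/256 output bits changed")  # typically around half
```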
Really badly mixing analogies its like taking a .scad file storing a 3d object (or stl or cad program of your choice) and encoding it into one of those legacy 2-D QR codes and then asking where in the QR code can you see your 3-d object ...
The whole point of this is that we kinda sorta have a 2-D holographic picture of the cosmic background radiation. It's a very distinct picture. So whatever crazy simulations we try, we'd best make sure that if you hologram-ify the cosmic background radiation, it looks similar to what our telescopes have already seen. Kind of like stellar astrophysics isn't just making stuff up: your theories had better match what spectral analysis of starlight actually looks like. If your nifty stellar astrophysics theory predicts Sirius is a red giant or a black hole or something, then your theory is messed up, because we have lots of data about the real Sirius as opposed to theoretical models.
"By comparing the Bayesian evidence for the models, we find that ΛCDM does a better job globally, while the holographic models provide a (marginally) better fit to data without very low multipoles."
So the non-holographic model seems to be better globally even according to the paper.
If further research proves that holographic models are better, that's good too; let the best one win. But at the moment it still looks too early to conclude much.
'However, there exist classical solutions to the Einstein equations that allow values of the entropy larger than those allowed by an area law, hence in principle larger than those of a black hole. These are the so-called "Wheeler's bags of gold". The existence of such solutions conflicts with the holographic interpretation, and their effects in a quantum theory of gravity including the holographic principle are not yet fully understood.'
In the same way that two trains traveling at different speeds appear distorted to an observer, it would seem to me that we would need to observe the wave in real time to observe it accurately.
Wow, these flat-earthers are getting ambitious.
>We test a class of holographic models for the very early Universe against cosmological observations and find that they are competitive to the standard cold dark matter model with a cosmological constant (ΛCDM) of cosmology. These models are based on three-dimensional perturbative superrenormalizable quantum field theory (QFT), and, while they predict a different power spectrum from the standard power law used in ΛCDM, they still provide an excellent fit to the data (within their regime of validity). By comparing the Bayesian evidence for the models, we find that ΛCDM does a better job globally, while the holographic models provide a (marginally) better fit to the data without very low multipoles (i.e., l≲30), where the QFT becomes nonperturbative. Observations can be used to exclude some QFT models, while we also find models satisfying all phenomenological constraints: The data rule out the dual theory being a Yang-Mills theory coupled to fermions only but allow for a Yang-Mills theory coupled to nonminimal scalars with quartic interactions. Lattice simulations of 3D QFTs can provide nonperturbative predictions for large-angle statistics of the cosmic microwave background and potentially explain its apparent anomalies.
So, basically, the holographic models seem to be pretty good: in the regime where the theory is perturbative (roughly l > 30), they appear to fit slightly better than the current theories ("lambda CDM" models). The paper rules out some of the holographic models because they don't fit the data, while other models fit well. Finally, the non-perturbative regime (roughly l <= 30) might fit the data better if it's treated non-perturbatively, e.g. with lattice simulations.
This seems to be pretty close to what the article says...
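The "with vs. without low multipoles" distinction is easy to see in a hypothetical sketch (all numbers invented): if a model deviates from a power law mainly at low l, its goodness of fit over the full range can look much worse than over l > 30 alone.

```python
import numpy as np

# Hypothetical sketch (all numbers invented): model B differs from the
# stand-in power law A only at low multipoles, so over the full range it
# fits the fake data much worse, but restricted to l > 30 the two fits
# are nearly indistinguishable.

rng = np.random.default_rng(0)
ell = np.arange(2, 200)
noise = rng.normal(0.0, 5.0, ell.size)

model_a = 1000.0 / ell**0.95                    # stand-in "power law"
model_b = model_a + 30.0 * np.exp(-ell / 10.0)  # deviates only at low l
data = model_a + noise                          # fake observed spectrum

def chi2(model: np.ndarray, mask: np.ndarray) -> float:
    """Chi-square against the fake data with sigma = 5 per point."""
    return float(np.sum((data[mask] - model[mask]) ** 2) / 25.0)

full = np.ones_like(ell, dtype=bool)
high_l = ell > 30
print("full range:  A =", round(chi2(model_a, full), 1),
      " B =", round(chi2(model_b, full), 1))
print("l > 30 only: A =", round(chi2(model_a, high_l), 1),
      " B =", round(chi2(model_b, high_l), 1))
```

This is only an analogy for why the paper reports both a "global" comparison and one excluding l ≲ 30; the actual analysis uses Bayesian evidence, not a simple chi-square.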
Please read the conclusion section of the paper.
Epicycles also explain how planetary motion works, but that is not evidence of how it actually works; it's an explanation fitted to the data.
> We test a class of holographic models for the very early universe against cosmological observations and find that they are competitive to the standard ΛCDM model of cosmology. These models are based on three dimensional perturbative super-renormalizable Quantum Field Theory (QFT), and while they predict a different power spectrum from the standard power-law used in ΛCDM, they still provide an excellent fit to data (within their regime of validity). By comparing the Bayesian evidence for the models, we find that ΛCDM does a better job globally, while the holographic models provide a (marginally) better fit to data without very low multipoles (i.e. l≲30), where the dual QFT becomes non-perturbative. Observations can be used to exclude some QFT models, while we also find models satisfying all phenomenological constraints: the data rules out the dual theory being Yang-Mills theory coupled to fermions only, but allows for Yang-Mills theory coupled to non-minimal scalars with quartic interactions. Lattice simulations of 3d QFT's can provide non-perturbative predictions for large-angle statistics of the cosmic microwave background, and potentially explain its apparent anomalies.
The evidence appears to be "this is almost as good as state-of-the-art normal cosmic inflation theory." Which is to say, it's more a lack of proof against than strong evidence for; it's not providing explanations for otherwise unexplained observations.
That said, a not-quite-as-good match after only a few years' work is kind of promising. After as long as the standard model has been worked on, the match might be just as good.
We all have that new agey friend on FB who will inevitably share this nonsense. Do we really need to be dealing with it on HN? What's next, Minion memes?
There is a very well developed model of how this relationship could work in a simpler case. I suggest you have a look at this wikipedia page: https://en.wikipedia.org/wiki/AdS/CFT_correspondence
Hopefully this mathematical model will convince you that the holographic principle is, at the very least, more mature than a minion meme.
Simulations and probabilistic estimates can be applied only to fully observable environments.
Experiment, not simulation, is the criterion of validity in the scientific method; a simulation is not a valid experiment.
Hypotheses and predictions are guesses, not facts, and cannot be substituted for facts or logical premises.
'Patterns imprinted in it carry information about the very early Universe and seed the development of structures of stars and galaxies in the late time Universe.' [From the Bulletin]
If the universe is encoded on a 2D surface and projected into 3D, doesn't that sound like some design or intentional purpose encoded on the 2D surface?
In physics, all matter can be described as having "information" or "state" that describes it, and that information can't be destroyed. However, questions about how matter's information is preserved in a black hole led to equations describing how that information is "encoded" on the black hole's event horizon. As a funny side effect, those equations also happened to describe everywhere else in the universe, leading physicists to look around and ask themselves, "Are we holograms too?"
In this way, the Hologram theory states that all matter is really information on a 2D surface, and that "we" are the 3D holographic projection of that information.