They seem to be saying that to perform a quantum computation there is a minimum volume of space required, and that this minimum grows with the number of qubits.
Not a minimum volume, but a volume with a minimum boundary area proportional to the size of the states your computation explores. The weird thing about these holographic mappings is that stuff you would expect to be limited by volume is limited by surface area, of which there is rather a lot less.
The cool thing is that there are a few independent results of this kind, starting with the entropy of a black hole, so if you like to speculate on possible physics beyond QFT it gives you some material.
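If it helps to make the area scaling concrete, here is a minimal back-of-the-envelope sketch of the black hole case, using the standard Bekenstein-Hawking formula with hard-coded SI constants; the solar-mass example is just an arbitrary illustrative choice:

    # Bekenstein-Hawking entropy: S = k * c^3 * A / (4 * G * hbar).
    # The information content is set by the horizon *area*, not the enclosed volume.
    import math

    G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI units
    M_sun = 1.989e30                              # kg

    def horizon_bits(mass_kg: float) -> float:
        """Rough bit count for a Schwarzschild black hole of the given mass."""
        r_s = 2 * G * mass_kg / c**2              # Schwarzschild radius, m
        area = 4 * math.pi * r_s**2               # horizon area, m^2
        s_over_k = area * c**3 / (4 * G * hbar)   # entropy in units of k (nats)
        return s_over_k / math.log(2)             # nats -> bits

    # On the order of 1e77 bits for one solar mass; scales with area, i.e. mass^2.
    print(f"{horizon_bits(M_sun):.2e}")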
Even if you can imagine a baroque and spacious boundary that is to your liking, it remains the case that the simpler, economical boundary still exists, and you have to reckon with that one.
de Sitter Space: A model of the universe with a positive cosmological constant, leading to a universe that expands exponentially. It's a solution to Einstein's equations of General Relativity representing a universe dominated by dark energy.
Anti-de Sitter (AdS) Space: A spacetime with a constant negative curvature. It's the opposite of de Sitter space and is a solution to Einstein's equations with a negative cosmological constant. AdS spaces are commonly used in theoretical physics, especially in string theory.
Brane: Short for "membrane", in string theory and related theories, a brane is a physical object that generalizes the notion of a point particle to higher dimensions. A universe can be conceptualized as a 4-dimensional brane existing in a higher-dimensional space.
Conformal Field Theory (CFT): A quantum field theory that is invariant under conformal transformations, which are transformations that locally preserve angles but not necessarily distances. CFTs are important in studying phenomena like phase transitions and in string theory.
AdS/CFT Correspondence: A conjecture in theoretical physics that proposes a relationship (duality) between a type of quantum field theory (Conformal Field Theory) and a theory of gravity defined in an Anti-de Sitter space. This duality suggests that calculations done in one theory can be translated and used in the other.
AdS_5: This denotes a 5-dimensional Anti-de Sitter space, often used in the context of the AdS/CFT correspondence.
S^5: Refers to a 5-dimensional sphere, a higher-dimensional generalization of a usual sphere. In the context of the AdS/CFT correspondence, the theory of gravity is considered in a space that is the product of AdS_5 and S^5.
The universe as an "effective de Sitter brane" in an "Anti-de Sitter (AdS) space": This suggests that our observable universe, which approximates a de Sitter space due to its accelerated expansion, can be represented as a brane (a boundary or membrane) within a higher-dimensional Anti-de Sitter space. AdS spaces are characterized by a constant negative curvature, contrasting with the positive curvature of de Sitter spaces.
"Conformal 4-dimensional field theory is mapped to AdS_5×S^5": This is an instance of the AdS/CFT correspondence. It states that a 4-dimensional Conformal Field Theory (CFT), which is a quantum field theory invariant under conformal transformations, can be equivalently described by a 5-dimensional gravity theory in an Anti-de Sitter space (AdS_5) times a 5-dimensional sphere (S^5). This duality allows for the study of gravity in AdS spaces using CFTs and vice versa, providing insights into quantum gravity and string theory.
> But our universe adheres to AdS constraints. As far as we know.
No, as far as we know, it doesn't since the cosmological constant is slightly positive, so the universe would best be described by de Sitter spacetime, not anti-de Sitter.
The great-grandparent comment was talking about an "information density limit" and black holes in our universe, so I assumed the referenced sci-fi collection had something to do with holography. I don't know of any particular emphasis on AdS in that context before 1997. Elaborate?
Obviously you can stick GR in AdS, but AFAIK nothing about that would've seemed interesting with regards to holography before Maldacena, let alone plausibly providing inspiration to a fiction author.
To be less roundabout than my previous comment: I think Baxter may have been inspired by the holographic principle in general, but I doubt AdS crossed his mind at all when he was writing these stories in the 80s and early 90s.
(EDIT: or maybe Baxter was thinking about AdS but not about holography. I haven't read his work.)
Vacuum Diagrams is a collection of short stories Stephen Baxter wrote well before 1997, but the collection published in 1997 included new intertwining narrative stories about an AI named EVE to bind them together. EVE is an AI in a black hole? On the edge of one?
Stephen Baxter is known for sci-fi so hard it'll cut you. He may have just accidentally come up with similar concepts during the same year.
Think of the spaces in AdS/CFT as mathematical spaces, not physical spaces. It lets you take a model constructed in one space, translate it into another space, perform some calculations there that might be simpler, and then translate the results back.
This makes a lot of sense. Landauer's principle shows that the minimum energy to erase a bit is directly proportional to temperature, and thus to energy loop size / travel time at the speed of light. The lower the temperature, the smaller the loop radius, the lower the energy requirement. There is no primitive of storage in our universe; it's all delay-line memory.
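For a sense of the numbers, a minimal sketch of the Landauer bound E = k_B · T · ln 2 per bit erased (the temperatures below are just example values):

    # Landauer limit: minimum energy dissipated per bit erased, E = k_B * T * ln(2).
    # Directly proportional to temperature, as the comment above notes.
    import math

    k_B = 1.381e-23  # Boltzmann constant, J/K

    for T in (300.0, 77.0, 3.0):   # room temperature, liquid nitrogen, deep space (examples)
        E = k_B * T * math.log(2)
        print(f"T = {T:6.1f} K -> {E:.2e} J per bit erased")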
From what I understand, the cost of "erasure" is really just the cost of replacement. True erasure can't exist in a unitary universe. In the same way, the cost of "allocation" is effectively the cost of replacement too, since our universe is unitary and no information can actually be lost at the fundamental level.
Think virtual memory vs actual memory, forks, copy-on-write mechanics, etc. Are we juggling/managing memory or actually creating any? As far as we know, the universe itself is a reversible quantum supercomputer. There are no erasures and a reversible computer is 100% efficient.
If the formula is correct at all, it should apply to the reverse process of setting bits, not just deletion.
Thanks for the paper. Brownian computers are a cool idea; they seem like an exploitation of the ratcheting paradigm.
Entropy increasing is an illusion based on the specific selection of macroscopic observables / slices of configuration space that we use as inputs for entropy. There is no information lost, so much as some species living on certain observable sense-space slices become ignorant of, and unable to exploit, the new patterns of information flow. Chaos and order are two sides of the same coin. There will be Brownian computers, funnily enough, in chaotic environments, whereas there might be very compact and sharply defined high-efficiency solitary entities in low-energy environments.
That makes sense. The whole point of the theory about holographic spacetime is that the 3d universe is completely described by the information densities on its edge. This implies that if you need to contain a computation involving a certain amount of information, then you need to have at least that amount of information on the edge. Since the amount of information anywhere is not infinite, this also implies that you need a non-zero amount of edge surface for that computation and thus a non-zero volume.
TL;DR: If the volume is too small, you just cannot fit enough information inside and so you cannot do computations which require more information than that.
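A minimal sketch of that argument, assuming the holographic bound of at most one bit per 4·ln 2 Planck areas of boundary (the bit count N is an arbitrary example value):

    # Holographic bound: bits inside a region <= A / (4 * ln(2) * l_P^2),
    # so holding N bits needs a boundary of area at least A_min = 4 * ln(2) * N * l_P^2.
    import math

    l_P = 1.616e-35        # Planck length, m
    N = 1e68               # bits the computation must hold (illustrative value)

    A_min = 4 * math.log(2) * N * l_P**2       # minimum boundary area, m^2
    r_min = math.sqrt(A_min / (4 * math.pi))   # radius of a sphere with that area, m

    print(f"A_min = {A_min:.2e} m^2, r_min = {r_min:.2e} m")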
It turns out it is pointless, we just work in a very fad-driven industry. Once Google started computing in a black hole, it just automatically became popular, and eventually it became common knowledge. “Can’t get fired for computing in a black hole” they’d say.
Then the Hackernews guys put their server in a black hole that turned out to be a wormhole, one thing led to another, and posts were getting sprayed across the timeline. What a mess.
Throwing around general accusations of fads doesn't reduce a paper to part of a fad. Papers stand on their own merits, based on their reasoning, independent of any associations with fads.
I expect the properties of a black hole are simply a convenient context to talk about the holographic principle.
If the holographic principle holds true, it is true across any border separating one space from another.
Likewise, mathematicians prove things about infinities, which we will never encounter directly, but which have useful implications for things in math that can relate to things we might create or encounter.
The time police insist that I inform everyone that my post about accidentally posting across timelines is, in fact, just a joke, nothing serious, haha.
Of course! Once you know you are going to lose the time war you just rotate and reflect your polarity through your current Lorentzian manifold point to switch sides and win.
The trusty eternal Möbius Timeline Gambit. Never invented, just reused and reused ...
That’s why I stick with Oort cloud computing. It might take 22 days to get the request back, but the chance that anyone can see that data is astronomical.
People have been working at the intersection of black hole physics and quantum computing/information since the early 90s. This is a ripe area to work in, because this is where QM+GR are most likely to break.
While we are ultimately interested in the physical limits of computers in our universe, working within the context of the AdS/CFT correspondence gives us a precise framework for quantum gravity. As well, a fundamental observation in computer science is that the power of computers is robust to “reasonable” changes in the details of the computing model: classical computers can be described in terms of Turing machines, uniform circuits, etc. and the resources needed to solve a given computational problem will change only polynomially. Quantum computers are similarly robust. This robustness suggests understanding the power of computers in AdS is likely to yield insights that apply more broadly.
I'm decidedly not an expert in this field but as I understand it there are two points to doing the math in this way:
- We know how to describe the inside of a black hole mathematically from an AdS perspective.
- IF the AdS/CFT correspondence is true (likely but unproven), then you can generalize from "works inside black hole" to "works inside the normal universe".
It's more an exploratory step towards getting a better understanding of complexity theory for quantum computers than it is a practical result intended for doing computations inside black holes.
It's important to remember that we know very well that we don't live in an AdS space. It's actually not all that likely that many of these theories apply to a de Sitter space-time like our universe, though it remains to be seen.
That is, even if the AdS/CFT correspondence is true, it may still turn out that the dS/CFT correspondence is not, and so the results are not applicable to the physical universe.
You're currently possibly living inside one. Big Bang? - The initial collapse. Universe expansion? - Matter falling in. It may be unfalsifiable from inside though.
They describe computations that are forbidden despite the inputs to these computations being small and the description of the computation fitting easily inside the black hole.
Sure? That doesn't even matter for normal computers though. Busy beaver algorithms can be described in very few lines of code but can generate incredible complexity. It's not super difficult to devise an algorithm that would need many more bits of information than just its description+inputs to accurately describe all the state required, and that is in fact exactly what the authors of this paper did.
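As a sketch of that "tiny description, lots of state" point, here is a minimal Turing-machine simulator running what is commonly cited as the 4-state, 2-symbol busy-beaver champion; the program text is a handful of lines, but accurately describing everything it does while running takes far more. The transition table and the expected 107 steps / 13 ones are quoted from memory, so treat them as illustrative:

    # A tiny Turing-machine simulator. RULES is the commonly cited 4-state,
    # 2-symbol busy-beaver champion: (state, symbol) -> (write, move, next state).
    from collections import defaultdict

    RULES = {
        ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
        ('B', 0): (1, -1, 'A'), ('B', 1): (0, -1, 'C'),
        ('C', 0): (1, +1, 'H'), ('C', 1): (1, -1, 'D'),
        ('D', 0): (1, +1, 'D'), ('D', 1): (0, +1, 'A'),
    }

    tape, head, state, steps = defaultdict(int), 0, 'A', 0
    while state != 'H' and steps < 10_000:    # step cap as a safety net
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
        steps += 1

    print(steps, sum(tape.values()))   # expected: 107 steps, 13 ones for this table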
Limits on all computational complexity in a given regime are a quite different result from noting particular algorithms have high computational complexity.
And if these results and the holographic principle hold, then these limits would apply to all computers. Even “normal” ones.
> They seem to be saying that to perform a quantum computation there is a minimum volume of space required, and that this minimum grows with the number of qubits.
My first glitch happens when trying to parse "computations (...) which cannot be implemented inside of black holes".
My assumption is that whatever computer is used, it is falling between the event horizon and the singularity. The masses involved are the mass of the black hole before the object falls in and the mass of the object itself. The amount of computation that can be done depends on the mass of the object and the time available, and the time available is some kind of product of the mass of the object and the mass of the black hole. As the object reaches the event horizon, the horizon expands to accommodate it, but this expansion isn't super simple: as I understand it, the expansion propagates at the speed of light (the far side of the black hole takes time to be affected by the infalling object).
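On the "time available" part: for a Schwarzschild black hole, the standard GR estimate for the maximum proper time between crossing the horizon and reaching the singularity is roughly π·G·M/c³, which scales linearly with the black hole's mass. A minimal sketch, with purely illustrative mass values:

    # Rough upper bound on infall proper time inside a Schwarzschild black hole:
    # tau_max ~ pi * G * M / c^3, linear in the black hole's mass M.
    import math

    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

    def tau_max_seconds(mass_in_solar_masses: float) -> float:
        return math.pi * G * (mass_in_solar_masses * M_sun) / c**3

    for m in (1.0, 4.0e6, 1.0e10):   # stellar-mass, Sgr A*-scale, largest-known scale
        print(f"{m:.0e} solar masses -> {tau_max_seconds(m):.2e} s available")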