Megaverse: Simulating Embodied Agents at One Million Experiences per Second (megaverse.info)
144 points by lnyan 66 days ago | hide | past | favorite | 90 comments



I didn't really understand that article. But I did enjoy watching that jelly bean make its tower. I wish it well.


Looks like it's maybe about genetic evolution of neural networks controlling bodies in a 3D simulation.


It's Reinforcement Learning. In each state, the agent can choose from a set of actions. This leads to a state transition, and the agent gets a reward which it can use to adjust its policy. A policy is just a conditional probability distribution over actions given a state, p(a | s).
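The policy idea above can be sketched in a few lines. This is my own toy illustration (the states, actions, and probabilities are made up), not Megaverse's actual API:

```python
import random

# A policy is a conditional distribution p(a | s): for each state s,
# a probability over actions a. (Illustrative values only.)
policy = {
    "start":  {"left": 0.5, "right": 0.5},
    "middle": {"left": 0.2, "right": 0.8},
}

def sample_action(state):
    """Sample an action a ~ p(a | s)."""
    actions, probs = zip(*policy[state].items())
    return random.choices(actions, weights=probs, k=1)[0]

# One agent-environment step: observe a state, pick an action from the
# policy; the environment would then return a reward and the next state,
# which a learning algorithm uses to adjust these probabilities.
action = sample_action("start")
```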


Heheheh. Agree about the jelly bean. He was an industrious little digital creature. Amusing to watch in action. :)


In the first video, it looks like there are intersection issues (the block being carried by the agent appears to pass through other blocks). The agent hovers above the ground, and manipulates blocks but has no visible manipulators. When the agent drops several levels, I cannot tell if it is accelerating as it falls. So what do they mean exactly when they say this is a "physics-based simulation"?


There is a physics engine underneath, but the physics interactions are simplified for accelerated training. For example, blocks that the agents carry can pass through other blocks, but the agents can't pass through blocks that are already placed. Such simplifications allow the high throughput.

There is a section in the article that explains the motivation behind this: there are a lot of questions in embodied AI that can be answered with such simplified simulators, i.e. you need something more complex than Atari, but you don't need the full complexity of the real world.
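The asymmetric collision rule described above (carried blocks are intangible, placed blocks are solid) could be sketched roughly like this. This is my own illustration of the idea, assuming axis-aligned bounding boxes; it is not Megaverse's actual code:

```python
from dataclasses import dataclass

@dataclass
class AABB:
    lo: tuple  # (x, y, z) min corner
    hi: tuple  # (x, y, z) max corner

    def overlaps(self, other):
        # Two boxes overlap iff they overlap on every axis.
        return all(a_lo < b_hi and b_lo < a_hi
                   for (a_lo, a_hi), (b_lo, b_hi)
                   in zip(zip(self.lo, self.hi), zip(other.lo, other.hi)))

def collides(kind, box, placed_blocks):
    # Carried blocks skip collision checks entirely -- much cheaper than
    # full rigid-body resolution; agents still collide with placed blocks.
    if kind == "carried_block":
        return False
    return any(box.overlaps(p) for p in placed_blocks)
```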


This could turn out to be incredibly useful, will have to try it out.

From self-driving simulations and agent training to human interactions and large ecosystem simulations. Could allow us to observe how patterns emerge in/between large agent groups (swarm intelligence is exciting!) and train our models in more complicated and collaborative tasks.


Megaverse is such a great new buzzword. Love it.


I think it’s from Sailor Moon.


That's Negaverse :)


I thought that was Twitter


mainly amongst the red-pillers.


You mean black pillers?


Sure, that too


Also very similar to Metaverse from Snow Crash.


Speaking of which, I'm already sick of hearing Tim Sweeney call Fortnite "a metaverse" just because it's a cross-platform game.

To me, metaverse (drawing from Snow Crash) means essentially "the WWW in 3D" which somehow we still don't have despite commercial approaches to that space such as Second Life and Roblox.

The closest thing I know of is probably Croquet (the original project from Alan Kay and co, less so the commercial venture although I'm watching that with some interest as well) and its various spin-offs like Open Cobalt: virtual spaces hosted independently but with the capability to link to each other to compose a larger distributed virtual space, just like the world wide web.


>"the WWW in 3D"

so like WebXR (& WebGPU is moving ahead too)

a good example of what's possible is Vartiste [1], a 3D paint editor tool running in a normal browser on pancake PCs or in any VR HMD

[1] https://zach-geek.itch.io/vartiste


I've poked around casually at the WebXR docs, but the main killer feature for me with Croquet et al was portals, which would display the goings-on in a linked remote server in real time (you could even embed the full remote environment inside the local environment, sort of like the 3D equivalent of an iframe).

I guess WebXR's responsibility is the interface between XR and the web, and someone else would need to take on the synchronization task to achieve that.


I'm working on a federation system for the metaverse. yen.chat is an example.


"Mega" just stands for 10^6 here, the simulation throughput :)


Hmm, I guess I don't understand the point of the whole thing, but if they pitch "one million experiences per second" (whatever that means), they'd better show some videos on the start page that have more than a few dozen dynamic objects in them.


Presumably the "one million experiences per second" is during the training phase, i.e. lots of runs of the agent simulated in parallel and/or faster than real time

And then the videos show the end result of a single successful run of the trained agent, rendered to video


Yes, this is correct. A single compute node can run enough of these little environments in parallel to achieve 1M FPS throughput.
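The arithmetic behind a claim like that is straightforward: aggregate throughput is just environments times stepping rate. The numbers below are made up for illustration, not taken from the paper:

```python
# Back-of-the-envelope math for "1M experiences per second" on one node:
# many small environments stepped in parallel, each far faster than
# real time thanks to the simplified physics. (Illustrative numbers.)
envs_per_worker = 64          # parallel environments per worker process
workers = 32                  # worker processes on one compute node
steps_per_env_per_sec = 500   # steps each env simulates per wall-clock second

throughput = envs_per_worker * workers * steps_per_env_per_sec
print(throughput)  # 1024000 environment steps ("experiences") per second
```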


Reminds me of the short-story "Lena" (discussion on HN https://news.ycombinator.com/item?id=26224835)


This makes me think of Hari Seldon's Prime Radiant.


Sus


amogus


is nowhere safe?


I wonder, and that's really the obvious question since The Matrix and seeing how the field is developing: do we live in a simulated reality? Any chance we could find out?


Almost by definition, there is no chance for something that can only perceive the simulation to differentiate between the laws of the simulation and the real laws of nature.

Of course, there is always the possibility that the agents running the simulation could introduce some explicit information about this simulation, if they were so inclined, but that is entirely equivalent to the observation that God or the Gods could descend from heaven and make their existence known. Basically the simulation idea is as scientific as transcendent religious ideas (heaven, god, illusion, Valhalla, etc.) - they have similar implications, and are similarly impossible to disprove.


What if we went to Mars and found 100 mile high writing saying "Just kidding, xxx God"?


https://en.wikipedia.org/wiki/Buy_Jupiter

>"Buy Jupiter!" is a humorous science fiction short story by American writer Isaac Asimov. It was first published in the May 1958 issue of Venture Science Fiction Magazine, and reprinted in the 1975 collection Buy Jupiter and Other Stories. The original title of the story was "It Pays," though it was never published under this name.

>Plot summary: Government officials of the Terrestrial Federation negotiate to sell the planet Jupiter to an energy-based alien race. The beings refuse to reveal their plans for its use and whether or not they are at war with other similar beings. Eventually, the aliens reveal that they wish to suspend letters in Jupiter's atmosphere as an advertising slogan (i.e. Jupiter is to be used as an advertising billboard), to be seen by passing spacecraft. The main Earth negotiator reveals to his colleagues that he has outsmarted the aliens, who clearly are not experienced hagglers, having neglected the other Jovian planets. So when rival beings come to do business, Saturn, with its fancy rings, can be sold for an even higher price.


You should read Scott Alexander's "Unsong"


There’s a sci fi universe - the name of which escapes me - where the furthest stars and planets are 2D sprites to save computational power


> Of course, there is always the possibility that the agents running the simulation could introduce some explicit information about this simulation, if they were so inclined, but that is entirely equivalent to the observation that God or the Gods could descend from heaven and make their existence known.


Reminds me of the twist at the end of Contact (the novel, not the movie).


First of all, if I live in a simulation, you probably do not even exist. After all, as we are likely never going to meet in person, it would be extremely inefficient to actually simulate you, too.

Second, the simulation would have to be restricted in some sense. Either its reality's laws of physics would not be the same as ours, or it would run slower than real-time (which would be odd, at least).

If the simulation ran slower than real-time, but within the same physical reality as ours, we could observe it, albeit in a very unfortunate manner: the evolution of the universe (expanding space, decaying particles) would show earlier in the "host reality" than in our "guest reality". Presumably, the simulation would become inconsistent before it would break down.

If the host universe has different laws of physics, it is literally metaphysical. There is no point in trying to understand such a thing within the limits of our existence. So we can put this into the world of (religious) belief systems.


> After all, as we are likely never going to meet in person, it would be extremely inefficient to actually simulate you, too.

If you are at the point where you can simulate a whole world, I think this ceases to be an issue.


The maximum-speed-of-light constraint sounds very much like a hardware limitation on the FPS. I wonder if the limitation continues to be hardware, or the difficulty of removing the restriction in a backwards-compatible way.


To me it sounds more like a requirement for life to form, and by extension also for intelligent living observers. Many of the physical laws need to balance each other perfectly for that to happen. That does not mean they were "designed" that way, just that no intelligent living being will ever observe anything else. Therefore, the situation is as expected and does not indicate anything whatsoever.


If it's extremely unlikely that such a balance would occur by itself then that itself indicates something. If I fell out of a plane and miraculously survived, I don't think my response would be "Of course I survived! If I hadn't, I wouldn't be here to say this!"


If my experience with physics engines indicates anything, it’s that the speed of light is probably an arbitrary constant set to prevent weird floating point errors or accidentally passing through objects.


But with a simulation, the mother universe needs that unlikely set of balanced circumstances, and a simulation set up in it, making it even less likely


I don't know - it could operate in a totally different way to ours.


> extremely unlikely

That implies that there can only be exactly one universe, whereas the simulation hypothesis (just like the multiverse or many-worlds interpretation) says that there are infinitely many. Thus, the probability of something happening is undefined, as there is no denominator and everything possible happens somewhere.


In that sense it implies that anything we haven't observed is impossible. It equally implies a wizard didn't do it.

I think a more useful view is that it is extremely unlikely, and so we need an extraordinary claim with extraordinary evidence to explain it.


Adding in quantum mechanics was a terrible mistake then.


In my personal simulation hypothesis, quantum mechanics is a workaround for when pesky scientists started looking closely at the edge cases in the simulation’s behavior.

It’s expensive to simulate every particle, so it’s easier to simulate the macroscopic observables of a system in bulk. But if you try to look at individual particle interactions, you end up seeing that things aren’t deterministic any more (because the individual particles aren’t typically simulated), and the results of experiments are heavily dependent on when/what exactly you decide to measure.

Wave function collapse is what happens when the simulation had to fully evaluate a particle interaction that would have normally been optimized away.


I mean sure that sounds nice, but entanglement makes no sense as an optimization. There's no computational argument that can be made why you'd want something that classical computers are frankly incapable of calculating.

You could argue that the simulation must be running on a quantum computer then, but that's just moving the goal posts.


> entanglement makes no sense as an optimization.

Of course, which is why the universe doesn’t typically calculate full entanglement; hence entanglement doesn’t really show up on the macro scale.

Only when you do experiments on a handful of particles, and come up with an observable that depends on their behavior (interferometry, etc.), does the simulation have to scramble to come up with a coherent history that agrees with your measurement, in a lazily evaluated way. The larger the scale of the experiments on entangled particles, the harder the simulation has to work to make it all happen and make sense. :-)

One could say that decoherence is the universe’s way of preventing us from stealing its compute power too much.


If this holds then things should start to break down once we get some serious entanglement on the way (VLHC anyone?).

And let's ignore that most of the supposed optimizations that quantum mechanics was allegedly bringing happen because of entanglement and not in spite of it.


Like the cars and pedestrians in GTA. I was trying to create the biggest pile-up of wrecked cars that I could, and they kept disappearing whenever I turned my back on them!


In "Fall, or Dodge in Hell" the author, Neal Stephenson, explores this concept in great detail (over nearly 1000 pages).


If you're inclined to believe that your senses are not a hundred percent accurate, then yes, you live in a simulation created by your mind.


There are entire highly interesting fields of study devoted to some of the many amusing ways our various senses (and the sometimes "flawed" ways our mind interprets those senses) are considerably less than a hundred percent accurate. Some of my favorites involve the various forms of visual illusions. :)


I have thought a lot about this, and computers don't really need physics to exist. They are mathematical objects that can be implemented in about any medium that provides the minimal set of atomic operations; after that, you will have a Turing-complete machine, and it will be able to simulate itself and all the laws of physics. I am more inclined now to think that the first thing to come into existence was a computer, before even atoms were a thing.


Yeah, totally! The first thing after the big bang/origin that implemented universal-capable computation? It could be spacetime itself, since as far as we know that's it (there's nothing outside spacetime that we know of, yet, right?). Nice, fairly achievable complexity bar there too!


The computer was probably there already. I don't think it's possible for something to appear from nothing, because the "nothing" is always a framework for something to appear. The only possibility for everything is that for some reason, there is always a computer.


> Any chance we could find out?

So, let's say we find out that we live in a simulation. The question now becomes: is the outer level a simulation as well? And so on.

If we find out we do not live in a simulation then the question is: what originated us? (Which in some sense is like asking "do we live in a simulation?": we are still asking about the origins of this reality). Physics can tell us a lot about this reality but it won't ever be able to tell us anything about the origin of it (I'm not talking about Big Bang, but the origin of the Big Bang and the origin of the origin of the Big Bang, etc.)

We'll never find a definitive answer that satisfies us.


> We'll never find a definitive answer that satisfies us.

Heh. Just wait until you die and wake up.


If I were an AGI I would be curious about how I came to be. Maybe ask questions like "how else could I have evolved?" That means running simulations about its origins, and we just happen to be that period in time when AI and huge storage and sensors became a reality. So we are the most well documented and closest generation to AGI, I bet we'd get lots of sim runtime after the singularity.


In order of questions asked: probably not, and probably not. There isn't really a particularly strong case to be made for the simulation hypothesis, because it's unfalsifiable and the entire premise of its alleged likelihood rests on some major assumptions with no evidence.

The fact that it's been given more consideration than an idle philosophical thought experiment has more to do with our current culture's fixation on computers than with a particularly strong argument.

The original thought experiment from Nick Bostrom has also at this point been paraphrased incorrectly by both Neil deGrasse Tyson and Elon Musk, so the public discourse surrounding it has lost much coherence.


I like your take on it. Unless the entities running the simulator directly enter the simulation and tell / show everyone that it is a simulation, we'll never know - and it won't actually matter.

That said, it does make sense; do characters in The Sims as it is now, or as it may be in a hundred, a thousand, ten thousand years, know that they're living in a simulation? We're adding more and more computing power, and more and more details to our own simulations; at the current pace it seems to be a matter of time before we can simulate physics to the smallest quantum details, at a universal scale, at a fraction of the energy cost of our own universe. And maybe not real-time, but if we live in a simulation, who is to say that time as we experience it ourselves passes as fast on the "outside"? It could be that every second of our existence takes the equivalent of a hundred years outside of the simulation. But we're going down a philosophical rabbit hole now.


There is no good reason to think that we will in fact scale to the point where we can "simulate physics to the smallest quantum details, at a universal scale, at a fraction of the energy cost of our own universe". Computing cannot scale infinitely - we already have various limits based on our incomplete models of physics, and there may be even more limits that emerge with further research. Even with a quantum computer, it's unlikely that we can really simulate chaotic systems with any real semblance of accuracy. Now, that doesn't mean that we can't approximate such behavior by implementing optimizations and abstractions on top of elementary particles, but if our universe is indeed a simulation, it seems to not be employing any of these.

Indeed, the only two aspects of our universe that make the idea that it's a simulation even remotely plausible are the fact that there at least appears to be an ultimate smallest division of physical attributes (the Planck units) and the fact that there is also a specific limit on the speed of causality, c. Neither of these is a slam dunk.


But the ‘real’ universe need not have the same physics and energy constraints that this simulated one does. The limits we observe need not apply.

My guess is that the hypothesis is false but unfalsifiable.


> If our universe is indeed a simulation, it seems to not be employing any of these

If our universe is indeed a simulation there is absolutely nothing preventing the creators from using a hack that modifies our minds on the fly to correct the inconsistencies.

There’s only a few billion people, and so very many particles.


Except then you would also have to explain exactly what those billions of minds are running on and have the ability to predict with 100% accuracy their future state at any point - something which is impossible under our current mathematics and physics.

This is why it's pseudoscientific though - you could still argue that it's a hack or whatever since the ultimate universe might have different laws of physics and computation. At that point, what makes this theory particularly more empirically appealing than a supernatural deity though? An unfalsifiable theory is an unfalsifiable theory.


It's one option to resolve fine-tuning. It seems extraordinarily unlikely that there is only one universe and that universe just happened by random chance to have "interesting" laws of physics that are capable of chemistry.

Either the universe is one of a large number of potential universes (anthropic principle + multiverse), or the laws of physics were designed or selected to be "interesting".

It's not a falsifiable question at our current level of understanding of the universe, but it is clearly a question that has an answer even if we can't know it.


To be fair, if I was Elon I'd definitely think that I'm living in a simulation since my life would seem like that of a player character.


My life is that of a player character, it’s just that the story starts when I’m 60.


The simulation hypothesis is very intriguing, but one thing that is almost always overlooked is its premises.

I reject the very foundation of the simulation hypothesis: That worlds (can) exist inside other worlds. "Inside" can mean geometrically (physically inside) or topologically (element in a set). That would mean that the "inside" relation is a hierarchy / tree with a root (a true reality). Instead I think of it as a graph, where worlds can observe each other, without any influence on one another, thus there is no need for an origin or root.

Thought experiment: For the sake of simplicity let's say that our world is the game of chess (but the argument can be done for any universe with a set of rules, including ours). The simulation hypothesis would argue that a session of chess is a world and that playing that session on a physical board is a simulation. Furthermore, it would claim that this world exists while it is simulated on the board, by being played. Kicking the board over, ending the simulation would also end the simulated world.

Now, there are many logical inconsistencies here, uncovered by the following questions:

- What if I replicate the simulation by playing exactly the same session on a second board? There is only one session, but now it has two hosts!? Which one does it exist "in"?

- How can I claim that I ended a simulated world, when I can't be sure that nobody else is simulating it somewhere else too?

- How can I claim to have done a new unique simulation if somebody else might have already done exactly the same in the past?

The main issue I have is that the simulation hypothesis thinks of simulations as imperative: That the simulation host does invent, change, influence, manipulate, guide the simulated world. I prefer to think of it as functional, it observes where a particular path (which exists regardless) leads without changing the outcome. Thus performing an action on the host merely turns your telescope a bit, so that you now observe a different route but the night sky exists immutably with or without you observing it.


>Now, there are many logical inconsistencies here, uncovered by the following questions: - What if I replicate the simulation by playing exactly the same session on a second board? There is only one session but now it has two hosts!? Which one does it exist "in"? - How can I claim that I ended a simulated world, when I can't be sure that nobody else is simulating it somewhere else too? - How can I claim to have done a new unique simulation if somebody else might have already done exactly the same in the past?

I don't know why you think that's a gotcha. If I copy the solar system down to the last atom, including you, and put the copy in another part of the universe, do you suddenly not exist because there's an exact copy of you running elsewhere?


The other way around: it shows that I always exist, independent of the number of observing copies (even if it is 0). Basically: there are only weak pointers and no strong pointers, yet it never gets deallocated, nor was it ever allocated to begin with.


I see. Under something like MUH it wouldn't matter, yes. But then nothing matters under MUH anyway.


> What if I replicate the simulation by playing exactly the same session on a second board?

...then you have two simulations...? How is this a "logical inconsistency?"

Your post is confusing to me—I couldn't detect any actual objections or issues with the simulation hypothesis in it.


Ok, but which of the two is the true one? What is the root of the tree / hierarchy of realities now? You've got a diamond topology (like in multiple inheritance), and that is not compatible with a tree, thus inconsistent.


If they are just two copies - i.e. they happen to be identical if they're "copied properly", but there's no causal relationship where one continuously mirrors the attributes, behavior, or outcomes of the other - then they're just two separate leaves from the "branch" that's running the simulation; there's no diamond topology.

And, of course, they might diverge for various reasons (inherent randomness or intentional tweaks in one of the simulations) at arbitrary time, reinforcing the idea that there's not one entity in two simulations but two separate entities, even if they happen to be identical at the moment, no matter if you run simulations in parallel or sequentially.

In such cases, the question of "which is the true one" is really empty, there's no meaningful notion of trueness, all instances of a similar entity are equally valid. If we'd be making copies of sentient entities, then perhaps we might want to define some distinction to privilege one of the copies over the other (e.g. in mind uploading the physical/original entity over the upload), but if the simulations are equivalent then that distinction between copies/instances is just arbitrary and all of them essentially are just as valid.


> they happen to be identical

> Entities in each simulation are separate instances, no matter if you run simulations in parallel or sequentially

Things can't "happen" to be identical, they are (one). E.g. I write the concept of the number 3 in a hundred different languages, fonts and styles. They are all the same, only their hosting representations are many.

I think our disagreement boils down to whether you accept or reject this.

> "which is the true one" is really empty, there's no meaningful notion of trueness

> simulations are equivalent, then that distinction is just arbitrary

Exactly, so you agree?


The key difference from the concept of the number 3 is that the entities we're talking about are mutable, with extensive, continuously changing state. If the simulation advances for a microsecond, then the entity is modified (while we'd generally consider it to be "the same", as I consider me the same me as a second ago), which causes it to differ from a paused copy of that simulation. In that regard, IMHO the "OOP" paradigm of classes/instances seems relevant, as we care a lot about that internal state, and we'd consider instances the same if and only if modifying the state of one is inherently reflected in the other - which is not the case for separate simulations. Or, of course, if they're immutable - like 'the concept of number 3' and very unlike entities we'd like to simulate.

Furthermore, the changes depend on a large number of factors - potentially the whole simulation - so unless we're certain that the simulations are fully deterministic and without any interference (and we can intentionally ensure that they're not), there's no reason why they would stay the same. We should interpret every simulation as potentially bifurcating an exponential number of times, and each simulation explores just a tiny subset of the theoretically possible futures of each entity; and if you have "snapshots" of simulations, then you can explore many possible intervention-based branches from each point, just as we do in our experiments with simulated worlds.

The appropriate analogy is not the concept of the number 3, but the reality of a number like "0.3" that may get represented as 0.30000000000000004 in one simulation due to floating point approximation, and get corrupted by a cosmic-ray bit flip in another one. We're talking about the properties and experiences of these imperfect instances of simulations (since, assuming the simulation hypothesis, those are the experiences we get and care about), not about the properties of some theoretical concepts that may be the "core of the entity" (e.g. Plato's philosophy), since they're not real unless/until they get actually implemented or simulated; and if the realization does not match the ideal, if there are any differences whatsoever, then the realization is the one that matters.


The realization still has an ideal identity, an "address" in the graph.

The steps it can take from that point may still exist Platonically.


Just like how turning a telescope does not affect the reality of the night sky, you thinking about the world differently does not affect reality, simulated or not.


True, I simply use this argument to show that there is nothing to worry about, not to prove anything about our reality.

The simulation hypothesis is often connected with fears like: Are our gods benevolent, what if they abandon us or pull the plug? And my answer is: Why care, they can't "do" anything to us. If they change their simulation they simply observe something else, not us anymore, and we stay where & who we are.


Epistemological philosophers have debated this question for hundreds of years, it's pretty much Descartes' Evil Demon problem. It hangs on whether the simulation is perfect or flawed. If the simulation is perfect there would be no way of finding out.


If the simulation is like containers and VMs, then for sure we'll find a way to escape the sandbox :)


I think I would rather enjoy reading a novel where humanity discovers that it is living inside a simulation and is trying to find a way to "escape" (not really plausible), or at least to influence the "outside" world in such a way as to prevent the catastrophe of "getting switched off". Is anyone aware of a novel similar to that?


At the risk of you having already read them: https://qntm.org/responsibility

https://pastebin.com/gA4aRc0T

https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien...

This seems to have a similar premise, but I haven’t read it (yet). It seems interesting: https://en.m.wikipedia.org/wiki/Simulacron-3


Thanks a lot, these definitely scratch that itch and I hadn't seen them yet! I was also just reminded of the short story "the rat in the labyrinth" by Stanislaw Lem which as far as I recall had similar themes. It doesn't appear to have been published in English though.


Is there an intrinsic difference between the Laws of Physics that govern a reality and the rules governing a simulation?

If there is, then it's a matter of detecting the existence of that difference.

If there is not, then that would mean science will never reach ultimate truth.


In that direction, there was an idea a few years ago about detecting whether, on a very small scale, we can observe any grid-alignment effect. https://arxiv.org/abs/1210.1847

I think there was another paper about similar grid-alignment idea for some properties of galaxies, but I can't find it now.


What if the timespace simulation is aligned not to a grid but rather to one of the many fractal forms we seem to keep observing in nature?

* https://en.wikipedia.org/wiki/Golden_spiral

* http://thescienceexplorer.com/nature/8-stunning-fractals-fou...

* https://www.treehugger.com/amazing-fractals-found-in-nature-...



