monitron's comments

It's quite a bit more "realistic" if you imagine it as a simulation of the "Defragment Free Space" option that at least some defraggers had.

I humbly suggest that you read beyond the first few paragraphs. The author goes on to significantly expand the definition.


I did read the whole article, and I liked it. As an intro, it was a bit discombobulating to see this as the first paragraph.


For sure! But just because you don’t “see” it doesn’t mean it doesn’t have a huge impact on the feel of the movie, your impression of the characters, and on the storytelling, which is ultimately why these “invisible” decisions were made :)


I wrote my own home automation platform. I started >10 years ago when the existing options were pretty disappointing to me. It’s a web app that runs on wall mounted tablets in my house as well as on my and my family’s phones and computers.

It handles lights, fans, cameras, sensors, locks, doorbells, speakers and TVs, HVAC and more. It runs automations based on time, weather, presence of people, and other events. It tracks energy consumption, sends alerts, etc. There’s a strong focus on local control (as opposed to cloud).

My favorite thing about it is that the client and server run from the same codebase and share state over a WebSocket connection. I’m still working on it occasionally but mainly it just runs quietly and my people seem to like it. The whole thing is configurable (connectivity, behavior and interface) so theoretically it could be used by someone else, but for many reasons I doubt it ever will :)
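
Roughly, the state-sharing part looks like this (a minimal sketch, not my actual code; the state shape, message format, and port are all made up for illustration):

    // Minimal sketch of shared state over a WebSocket (Node server side,
    // using the `ws` package). All names here are invented for illustration.
    import { WebSocketServer, WebSocket } from "ws";

    // State shape shared by server and clients, since both build from one codebase.
    interface HouseState {
      lights: Record<string, { on: boolean; level: number }>;
      locks: Record<string, { locked: boolean }>;
    }

    type Patch = { path: string; value: unknown };

    const state: HouseState = { lights: {}, locks: {} };
    const clients = new Set<WebSocket>();
    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (sock) => {
      clients.add(sock);
      // New clients get the full state once, then incremental patches.
      sock.send(JSON.stringify({ path: "", value: state }));
      sock.on("close", () => clients.delete(sock));
    });

    // Called whenever a device changes; every tablet and phone sees it at once.
    function broadcast(patch: Patch) {
      const msg = JSON.stringify(patch);
      for (const sock of clients) sock.send(msg);
    }

    broadcast({ path: "lights.kitchen.on", value: true });

The nice part is that the state types are imported by both the server and the browser bundle, so the two sides can never drift apart.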


This was a lovely article. I only wish the author hadn't kept switching units (between µs and ns), making it hard to scan the tables for comparison.


They also switched from C to Rust halfway through the article, which was a bit jarring...


It looks like that was an addendum added later, and it was also benchmarked against the original C version.


1000 ns = 1 µs :)


We know.

On a phone at normal reading distance, with the article's styling, it's really hard to tell the difference between n and µ without zooming, and the decimal points get lost; scanning the tables is hard.


It's interesting that they used the Planet Express building from Futurama as one of their examples of 3D-inconsistency, because I'm pretty sure the exteriors are in fact computer-generated from a 3D model. Watch the show and you can see the establishing shots usually involve a smooth complex camera move around the building.


Agreed, most or all shots of the Planet Express building and Planet Express ship are 3D renderings, even in the original first few seasons. Beyond that, even some shots of Bender in space are 3D renderings, especially in cases where a complex and continuous shift in perspective is required.

Non-photo-realistic (NPR) 3D art goes back a surprisingly long way in animation. I rewatched the 1988 Disney cartoon "Oliver & Company" recently, and I was surprised to see that the cars and buildings were "cel-shaded" 3D models. I assumed that the movie had been remastered, but when I looked it up, I found that it was the first Disney movie ever to make heavy use of CGI[0] and that what I was seeing was in the original. The page I found says:

"This was the first Disney movie to make heavy use of computer animation. CGI effects were used for making the skyscrapers, the cars, trains, Fagin's scooter-cart and the climactic Subway chase. It was also the first Disney film to have a department created specifically for computer animation."

[0] https://disney.fandom.com/wiki/Oliver_%26_Company


> "This was the first Disney movie to make heavy use of computer animation. [...]"

Tron came out in 1982, six years before Oliver & Company.

https://en.wikipedia.org/wiki/Tron


I guess it depends on the definition of "heavy use." I know in Tron a few scenes were CG, and there were a few CG+live-action bits, but the majority was filmed on normal physical sets in high contrast, then painstakingly hand-processed[1] to add the neon "glow".

[1] https://filmschoolrejects.com/tron-costumes-glowing-effect/ Thanks, legions of Taiwanese animators (:


From your link:

> The 1982 Disney movie is privy to a remarkable number of firsts: the first feature-length film to combine CGI and live-action; the first talking and moving CGI character; the first film to combine a CGI character and a live-action one; the first fully CGI backgrounds… The list goes on and on.

Sounds pretty heavy to me.


And for the film OP mentioned, Oliver & Company:

> Eleven minutes of the film used "computer-assisted imagery" such as the skyscrapers, the taxi cabs, trains, Fagin's scooter-cart, and the climactic subway chase

I think Tron wins in terms of CGI.


But Disney financed and distributed Tron. It wasn't made by a Disney Studio, and most of the animation was outsourced to a Taiwanese studio because Disney wouldn't lend any of their own talent. So I think it's fair to say that Oliver & Company is the first Disney-made film to use CGI.


The Great Mouse Detective (1986) was earlier and the ending sequence is CG (printed out and traced onto cels so traditional 2D characters could be drawn on top).


That's a good point. What's funny is that "The Great Mouse Detective" was actually the film I was thinking of this whole time - I believe the ending sequence took place in Big Ben, and it looks quite good by 2024 standards. But I forgot the name of the movie and assumed it was "Oliver & Company" because Oliver is a plausible name for an English mouse :)


And large amounts of the "computer" graphics in Tron are hand drawn.


Still lots of CGI.


Probably meant “Disney animated feature”.


Found a pretty cool wireframe video of Oliver and Company.

https://m.youtube.com/watch?v=mix9rStOqoI

Now I am curious to watch it.


Cel-shaded 3D models?

Wait, you're telling me that computers have enabled us to get by with fewer artists, thereby replacing artists, for a long time now?!

Just like pretty much every industry out there?!

And that it's widely accepted so long as people get their cheap plastic goods from China?!

And that the current outrage won't even be remembered in 20 years?!


Kind of, though it hasn't replaced anyone. 3DCG just became a good-enough basis for artists to build on, which is exactly what AI bros have been fantasizing about and advocating for a couple of years now, yet they've been completely ignored and mocked.

Which suggests the AI hatred doesn't necessarily come from where the pro-AI side thinks it comes from; people potentially just find AI art rage-inducing.

Like, it's not that some specific technical aspect of AI is bad or could use improvement. It just sits on the wrong side of the uncanny valley, and the arguments clump around that.

That's the real problem with generative AI.


Isn't a lot of 3D in shows and games "faked" to look good to the viewer?

I remember seeing a blog write-up on what 3D animators do to make things look acceptable. Like making a character 9 feet tall because, when the camera panned past them, they looked too short at their "real" in-world height. Or archway doors that are huge but look "normal" to us at the perspective of the shot. Or having a short character stand on an out-of-scene blue box so that a conversation with a tall character doesn't look silly due to an extreme height difference. Or a hallway that in real life would be 1,000 feet long but looks about 100 in-world because of how the camera passes it, with each door on that 1,000-foot hallway 18 feet high, etc.

I wonder if shows like Futurama used those tricks as well, so that when you re-create the 3D space the animators were working in by reverse engineering like this, you see the giant doors and 9-foot people and non-Euclidean hallways, etc. Just because it looks smooth as the camera passes doesn't mean the actual 3D model makes sense from other perspectives.


I don't have a ton of experience in this realm but from what I've seen it does happen a lot -- looking good is often better than being right. A great example of this is the way they tilted the models for Zelda: A Link Between Worlds[0]. Basically everything in the world is tilted back so it looks better for the camera angle, which is designed to mimic the feel of A Link to the Past.

[0]: https://www.gameinformer.com/b/news/archive/2013/11/20/the-t...
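
As I understand it, the trick boils down to something like this before projection (a toy sketch; the angle and vertex layout are my guesses, not Nintendo's actual numbers):

    // Toy version of the "tilt everything back toward the camera" trick.
    type Vec3 = { x: number; y: number; z: number };

    // Rotate a vertex around the x-axis so upright geometry leans toward a
    // high, fixed camera, making front faces readable from a top-down angle.
    function tiltBack(v: Vec3, radians: number): Vec3 {
      const c = Math.cos(radians), s = Math.sin(radians);
      return { x: v.x, y: v.y * c - v.z * s, z: v.y * s + v.z * c };
    }

    // Applied to every vertex of a model before projection; the -20 degrees
    // here is a made-up figure just to show the idea.
    const leaned = tiltBack({ x: 0, y: 1, z: 0 }, (-20 * Math.PI) / 180);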


I saw some video on A Difficult Game About Climbing a while back. The things they did to make the guy appear to grip the rocks and such normally make the hands utterly bizarre when seen from the side.


Indeed, many animated shows that don't look 3D-animated have a 3D model somewhere in their pipeline these days. Even if there's no digital 3D model, there might be a physical model of the main locations in the studio for animators to refer to.


Yeah, Futurama used composited 3D elements from the very first episode in 1999. The vehicles are nearly always 3D.


The exteriors aren't generated from a single 3D model; they're generated from many 3D models of the same thing, which perhaps changed over time or between scenes, like the physical models of the Star Trek Enterprise.


I just want to meet the dev who wakes up in the morning, sits down with their cup of coffee, rubs their hands together and says "Boy, I just can't wait to make a website where scrolling feels like trying to run across an ice rink!"


Yep. I am a current Tesla Model 3 owner, really like my car, but plan to switch to another brand after running this one into the ground. Elon's behavior is awful and dangerous given his fame/money, and luckily, unlike in 2017, there are now other good EV options which are getting better all the time.


I recently saw a Tesla that had a “No Elon” magnet on the back. (Red circle over white background, with a diagonal red line through the word “ELON.”)

I talked to the owner who said that his wife had custom ordered it.


How is the charging situation? I remember that everyone announced NACS and compatibility with the Tesla network a while ago, has that happened or is it still just promises?


Technically still just promises, but the timelines are all in the 1-3 year range, which seems about right for a manufacturing and support line change. I think Hyundai/Kia plan to start shipping native NACS cars this year.


It's in progress, I think. Ford and Rivian just gained access to the network about a month or so ago, and I think that other manufacturers will be added over the next year or two.


> after running this one into the ground

Oh, come on. Don't wait another 25 years! /s


I think the coolest thing about this is the way time works. One step at each level is computed using a large number of steps at the level below. So as you zoom out you are sinking into an exponential vortex where each step takes days, years, millennia to change.

But ingeniously, the simulator smoothly scales its speed as you zoom, totally erasing the fact of time. I wish there was an indicator somewhere of how much game time was going by per real-life second.

EDIT: ...but in order to do that you'd have to declare one of the levels to be the "bottom" level, the one that runs in real time, and that would ruin the fractalness of it all...


The how-it's-done article describes the metapixel pattern concept, then gives you the "clock ratio" of the periodicity:

> In addition, the pattern has a period of 35328, so advancing 35328 generations will cause the generation at the meta level to advance by 1.

I would even say this time dilation is necessary because the pattern's self-similarity is across time, and if the two levels operate at different clocks, you need to slow down the next level as it comes up to the self-similar animated view of the prior level.

In other words, the structure at level n requires 35328 iterations of the self-similar structure at level n+1, so if you're bringing n+1 up to the self-similar view of n, you need to slow down n+1 as it's coming up to also hit the time-based self-similarity.

I wonder then if there's something like a time-invariant constant, maybe along the lines of the "computational complexity" of any view remaining constant across all levels of zoom.
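
Given that ratio, the game-time-per-real-second indicator the GP wished for is basically a one-liner (a sketch; the 60 generations/second base rate is just an assumption, only the 35328 figure comes from the article):

    // How many base-level generations elapse per real second at zoom depth k,
    // assuming the innermost visible level runs at `baseGenPerSec`.
    const META_PERIOD = 35328; // generations per 1 meta-generation, per the article

    function generationsPerRealSecond(zoomDepth: number, baseGenPerSec = 60): number {
      // Every level of zoom-out multiplies elapsed base generations by META_PERIOD.
      return baseGenPerSec * Math.pow(META_PERIOD, zoomDepth);
    }

    console.log(generationsPerRealSecond(2)); // 60 * 35328^2 ≈ 7.5e10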


For some reason what really sticks out to me is how when you zoom out, you always come out of a different part of the circuit, so it doesn't feel like the normal repetitive fractal, you're seeing all the parts of the machine from different chronological angles or something.


Interestingly, the author's blog post [1] mentions that this is due to technical limitations and not specifically done by choice, although I do like the effect.

> When we zoom in, we discard the information of the largest meta-metapixel, so it is not always possible to reconstruct the same scene when we zoom out again.

> Since zooming in and out can be continued indefinitely, we also realize that we need infinite information to accurately represent "our current position".

[1] https://blog.oimo.io/2023/04/10/life-universe-en/


Todepond's "Screens in Screens" explores infinite zoomability!

https://www.youtube.com/watch?v=Q4OIcwt8vcE

"Another thing I know about screens is, you can go into them as far as you want forever. And you can come out of them too, but you might not end up in the same place that you started. You might get lost. Or you might not."


That's because gravity slows down time.

So the higher you go, the less you're impacted by gravity, and the faster time goes.

See https://en.wikipedia.org/wiki/Gravitational_time_dilation


I suspect this simulation does not try to incorporate that ;)


Haha but it feels weirdly metaphysical, right? Like maybe there is some sort of deeper analogy there.


It's universes simulating universes all the way down?


Scale-free infinity.


Well, you could just say the level you start at is real time, and the second hand moves slower for lower levels and faster for higher levels?


Exactly - time is relative to our reference frame, so that would be the best arbitrary choice.


This is fun and has a great symmetry but my brain breaks at the idea of the infinitely fast and large computer that would be required to run it all :)


Imagination is fun. I'm pretty sure I can make your brain break a little bit more. Imagine they're all looped, and there is no fastest super-infinite computer at the bottom level.


The levels are stationary with respect to each other though, so in the same reference frame.


Since the grid is infinite, it should be possible for one level to be completely identical in its history to the next level. Thus you could identify all levels with each other to really be the same level. Then both space and time are circular in their scale. (The now-single level could nevertheless still be non-repeating both in space and time!)


The most recent video from Space Time on YouTube talks about something like this, related to the holographic principle.


Note that this is also how our own world works. Typical time scales of processes get shorter as the length scales shrink. Somehow it feels inevitable for a dynamic multiscale structure.


In the real world, there is a limit set by the speed of light and the Planck scale. And more generally, physics is very much not scale-invariant.


Armchair physicist moment: isn't this a straightforward outcome of c as a global "speed limit"? Effects can't propagate faster than c, so I would expect that interaction "latency" decreases linearly with scale.


And of course, who is to say that Reality/the Omniverse isn't the same way? :)


I'm fairly sure it's all an illusion.


My read on the author's explanation is that he did actually solve the "consistency with time" and "consistency in position" problems that would otherwise make it merely "round up to real-looking" at some zoom seam.

It's not cheating, but the trick is it's pre-computed.

Basically the structural patterns repeat, and you can pre-compute the patterns (which are also the sub-patterns), then tile the configurations and their transitions "across time". A naive approach takes a few TB to store it all, but there are some clever optimizations that can bring it down. Finally, once a "solution" through the tiling space was found, the author encoded it into a 4 MB PNG and built a display with some clever shaders.
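
For intuition, the "precompute transitions, then just look them up" idea in its most basic form looks like this (my own toy sketch of the general technique, not the author's actual tiling scheme):

    // Toy sketch of the "precompute transitions" trick: for each of the 2^16
    // possible 4x4 blocks, store the next state of its inner 2x2 in a table,
    // so stepping the world later is nothing but table lookups.
    const TABLE = new Uint8Array(1 << 16);

    function bit(block: number, x: number, y: number): number {
      return (block >> (y * 4 + x)) & 1;
    }

    for (let block = 0; block < 1 << 16; block++) {
      let result = 0;
      for (let y = 1; y <= 2; y++) {
        for (let x = 1; x <= 2; x++) {
          let n = 0; // live neighbors of cell (x, y)
          for (let dy = -1; dy <= 1; dy++)
            for (let dx = -1; dx <= 1; dx++)
              if (dx !== 0 || dy !== 0) n += bit(block, x + dx, y + dy);
          const alive = bit(block, x, y) === 1;
          if (n === 3 || (alive && n === 2)) result |= 1 << ((y - 1) * 2 + (x - 1));
        }
      }
      TABLE[block] = result;
    }
    // Stepping any 2x2 region is now TABLE[encoded 4x4 neighborhood].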


I'm not saying it isn't technically impressive and super cool - but he clearly can't actually be computing infinitely recursive levels of GoL, so in that sense it's an illusion.


But it can be displayed, partially.


Uh, yeah clearly. We're looking at it.


Pardon my poor vocabulary to describe the following:

But if the patterns are precomputed at whatever timescales, the patterns repeat at a predictable interval, based on the scale of time.

So, I wonder what each layer of pattern looks like from the perspective of a prime or true-prime number scale digit...

Like, if the real-time baseline is 1, and you look at the pattern in a still frame at scale position 1: what does the pattern look like at the time scale of the primes and true primes?

Since the primes and true primes are known discretes, one could think of them as discrete dimensions of timescale, based on the pattern represented at timescale position 1 (real time).

And what if the timescale at position 1 is some reflection of the pattern at the prime and true-prime timescales... (assuming a dimension simply means the rate at which time flows at that energy level)

Does that make sense?


Relevant xkcd: https://xkcd.com/505/


Yes! The delightful reason why this configuration can repeat within itself is that this Game of Life pattern is a machine with the rules of Game of Life built into it. It's calculating and displaying the game step-by-step. If you watch the machinery you can see the "data" flying around between cells and then the cells turning on and off.

This pattern isn't unique – you could certainly come up with infinitely many more machines in Game of Life that do the same thing. But yes, most Game of Life patterns aren't fractal like this.
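
For reference, the rule all that giant machinery is grinding out, one meta-cell at a time, is just this (a bare statement of Conway's B3/S23 rule; the function name is mine):

    // Conway's Game of Life rule (B3/S23): a dead cell becomes alive with
    // exactly 3 live neighbors; a live cell survives with 2 or 3.
    function nextCellState(alive: boolean, liveNeighbors: number): boolean {
      return liveNeighbors === 3 || (alive && liveNeighbors === 2);
    }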

