Generative Image Dynamics (generative-dynamics.github.io)
346 points by hughes 11 months ago | 29 comments



This is super cool. Cinemagraphs have always been a bit of a passion of mine, and I try to bring that feeling of subtle stillness into a lot of the work I do, whether it’s marketing or shooting, so I can see this becoming a regular tool.

The trick to a 10/10 cinemagraph: the more subtle it is, the bigger the impact. You almost want the viewer to think it’s a still photo before their brain kicks in and thinks “wait, something isn’t normal here, this isn’t a photo, it’s a video.”


Any good examples you can share, please?


There's a subreddit for that:

Check out www.reddit.com/r/cinemagraphs/top



These just look like vibrating images on my phone


Impressed and saddened by how toxic HN can be sometimes.

Just sharing some art here, and getting downvoted without even a comment about it. And even getting flagged

@dang care to check what’s going on here?


Maybe because it could be construed as "self-promotion", especially since you didn't mention that you're the author of those tweets.


The tree has severe distortion when dragged from the edge. Still an interesting idea.


You'd probably have to combine this with segmentation and generative infill for the background layers, but luckily there's been a lot of progress there!


I wonder why, in the first picture (red rose), the flower in the background also moves, but we don't see the same effect in the third picture (tree). I also find it impressive that the amount of motion differs between the first and second pictures; could it be because the density around the pointer is taken into account?
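
Pure guess, but a simple distance falloff on the drag force would produce exactly that pattern. A toy sketch of what I mean (Python; every name and number here is made up, not anything from the paper):

    import numpy as np

    # Speculation: if the force each pixel feels decays with distance from
    # the pointer, foliage near the pointer moves a lot while the far
    # background barely responds, unless the motion is coupled across the
    # whole object (as it seems to be for the rose).
    def drag_response(px, py, pointer, sigma=30.0):
        dist2 = (px - pointer[0]) ** 2 + (py - pointer[1]) ** 2
        return np.exp(-dist2 / (2 * sigma ** 2))  # Gaussian falloff in [0, 1]

    print(drag_response(100, 100, pointer=(100, 100)))  # 1.0 under the pointer
    print(drag_response(250, 100, pointer=(100, 100)))  # ~0 in the far background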

The slo-mo ones are super relaxing to watch!


I don't know why but I reacted with slight fear to the rose ones.


Nice to see Google researchers continuing to publish open papers with bonus demos. Won't beat a dead horse about Google failing to productize or open source their AI research.


This is so cool. Not earth-shattering or productivity-enhancing, but still really cool.

I could definitely see this becoming a standard feature on desktop and phone wallpapers.

Could also see it being applied selectively to photos in things like historical documentaries -- especially if it can handle the gentle movement of water and clouds as well.


They used WebGL for the demo. Nice!


This would be crazy in a video game. Walking through a bush and dragging the plant with you


> This would be crazy in a video game. Walking through a bush and dragging the plant with you.

But game physics can already handle stuff like that; there's no need for GenAI.


I've been playing Red Dead Redemption 2 recently. When it comes to 3D nature scenes, this game is state of the art and incredible in its attention to detail.

Still, one of the things that stands out to me is how fake the small movements of leaves, grass, and flowers in the wind still feel. The actual physics of tree branches as you push past them isn't bad, but you can't run the physics engine for every leaf. The wind in trees and grass is still just a randomized oscillation of some kind that doesn't quite feel real.
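
For concreteness, the usual cheap wind is a few summed sinusoids per vertex, something like this toy Python sketch (all constants invented):

    import math

    # Toy "randomized oscillation" wind: sum a few sinusoids whose phases
    # come from vertex position, so nearby leaves move similarly but not
    # identically. It loops forever with a fixed spectrum, which is exactly
    # why it reads as fake up close.
    def wind_sway(x, y, t, strength=0.05):
        phase = 1.7 * x + 2.3 * y              # cheap per-vertex phase offset
        sway = (math.sin(t + phase)
                + 0.5 * math.sin(2.3 * t + 1.3 * phase)
                + 0.25 * math.sin(5.1 * t + 0.7 * phase))
        return strength * sway                 # horizontal offset for this vertex

    print([round(wind_sway(0.1 * x, 0.0, t=2.0), 3) for x in range(5)])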

So I was immediately struck that this technique could be an improvement, perhaps via animated billboards. But it would come with its own issues: it can't be done in real time any more than full physics can, for example, so it would be hard to tie into real-time weather systems. And in that case, since you're effectively playing back a transparent video, there are probably easier ways to make one.



I'm still waiting for video games to adopt Stable Diffusion, GPT, and other GenAI models. The tech is there, but I guess the inertia in the industry doesn't allow us to have nice things yet.


You'll see those things! We already have DLSS, for instance. But unfortunately we can't simply glue expensive black boxes onto games and ship them; wrangling performant, richly interactive media is difficult enough without these models. The ML + gaming fusion space is barely in its infancy. We need to explore what's practical and discover patterns for doing it.

Even without further breakthroughs, the next 5 to 10 years will be incredible. I'm so excited.


Wouldn’t say the tech is there yet. It still needs a lot of human input and direction, so slapping it into a video game would just be immersion-breaking whenever it randomly generates something out of character.

There are less impactful ways to implement it, like dynamically generating the paintings in a museum, but that’s in “a little gimmicky” territory.


Temporal stability has been solved for a while [1]; there's just nothing of that grade for diffusion models currently. From the above, tagged geometry guiding a temporally stable neural renderer seems to be the way to go for games, but this needs to be confirmed in the trenches. The industry has a lot to digest, and of course the hardware should improve quite a bit.

[1] https://isl-org.github.io/PhotorealismEnhancement/


I think quite a bit of this, like texture generation, has already been in use for some time, just on the developers' machines pre-release. Using these technologies client-side in real time will be something else entirely; consumer hardware is lagging behind for that, even with optimizations such as quantization.


What do you mean? Lots of games have physics that interact with flora.


Wow, that's a neat idea! That could potentially be pretty cool. It'd almost be like a form of photogrammetry, but for physics. Kinegrammetry, maybe? I wonder what the storage efficiency and performance would look like. Perhaps something like this could be adapted into a framework for object modeling.


This suffers from the same small-motion-vector limitations as EbSynth.


I think the achievement here is mostly about generating the image dynamics: for example, if there's a cat in an image, the model understands that cats need to breathe, so the dynamics show the lungs contracting. The paper then covers how to translate the image dynamics and the image itself into a seamless video. I could be wrong, though.
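
My rough mental model, to make that concrete (Python; the shapes and numbers are invented, this is not the paper's actual pipeline): the network predicts a few Fourier coefficients of motion per pixel, and each output frame comes from warping the still image along the decoded trajectories.

    import numpy as np

    H, W, K, T = 64, 64, 4, 60                     # image size, #frequencies, #frames
    rng = np.random.default_rng(0)
    coeffs = rng.normal(size=(K, H, W, 2)) * 0.5   # stand-in for predicted per-pixel coefficients

    # Decode: sum K low-frequency sinusoids per pixel to get the displacement
    # field for frame t, then warp the input image with it.
    def displacement(t):
        d = np.zeros((H, W, 2))
        for k in range(K):
            freq = (k + 1) / T                     # natural motion is dominated by low frequencies
            d += coeffs[k] * np.sin(2 * np.pi * freq * t)
        return d                                   # (H, W, 2) pixel offsets

    fields = [displacement(t) for t in range(T)]   # one warp field per output frame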


That's one step away from Harry Potter-style photo frames for static photos.


Wow! This seems surreal, can't wait to have it integrated into Photoshop.



