Efficient high-resolution image synthesis with linear diffusion transformer (nvlabs.github.io)
218 points by Vt71fcAqt7 1 day ago | 42 comments





This looks like quite a huge breakthrough, unless I'm missing something?

~25x faster performance than Flux-dev, while offering comparable quality in benchmarks. And visually the examples (surely cherry-picked, but still) look great!

Especially since with GenAI the best way to get good results is (imo) to just generate a large number of images and pick the best. Performance like this will make that much easier/faster/cheaper.

Code is unfortunately "(Coming soon)" for now. Can't wait to play with it!


  > surely cherry-picked
As someone who works in generative vision, this is one of the most frustrating aspects (especially for those with fewer GPU resources). There's been a silent competition to show the best images rather than random results (and even when "random" results are shown, they may be a selected batch). So it is hard to judge actual quality until you can play around with it.

Also, I'm not sure what laptop that is, but they say 0.37s to generate a 1024x1024 image on a 4090. They also mention that it requires 16GB VRAM. That laptop looks like an MSI Titan, which has a 4090, and correct me if I'm wrong, but I think the 4090 is the only mobile card with 16GB?[0] (I know most desktop cards have 16.) The laptop demo takes 4s to generate a 1024x1024 image, but laptop 4090s are chopped down quite a bit compared to the desktop card.[1]

I wonder if that's with or without TensorRT

[0] https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_proces...

[1] https://gpu.userbenchmark.com/Compare/Nvidia-RTX-4090-Laptop...


0.37s is only 11x away from realtime 30fps. I wonder if that will enable some cool new popular application for it besides batch image generation.
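(At 30fps each frame gets 1/30 ≈ 0.033s, and 0.37 / 0.033 ≈ 11.)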

You can do much much better with GANs at that resolution. I'm sure you could combine the two for upsampling

Truthfully, I've had astonishing results from Stable Diffusion 1.4 on an M1 Mac, given the right inputs ...enough to throw my hands up and declare it a sort of magic (except for the presence of Getty Images watermarks randomly scattered around my results).

Nonetheless, as an art director, nothing I'd put into production. I guess that's because what I'm focused on is tickling the client base with something original.


Magic in what way? They sure are impressive tools, but like all AI, they do not have an eye for finer detail. Oddly, I'm really not sure most ML researchers have an eye for this either. But then again, most people I know who work in generative vision have no artistic hobby, so I'm not sure how they can feel they can properly evaluate works. It's the subtle details that matter.

Maybe add 'watermark' to the negative prompt?
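Something like this with the diffusers library (a rough sketch; the model id, prompts and device are just placeholders, and on an M1 you'd use "mps" instead of "cuda"):

    from diffusers import StableDiffusionPipeline
    import torch

    # Load SD 1.4 (the model mentioned upthread); fp16 keeps VRAM use modest
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")

    # The negative prompt nudges generation away from watermark-like artifacts
    image = pipe(
        "a lighthouse at dusk, oil painting",
        negative_prompt="watermark, text, logo, signature",
    ).images[0]
    image.save("no_watermark.png")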

The GeForce RTX 3080 Mobile and GeForce RTX 3080 Ti Mobile also have 16 GB versions as noted directly above the linked section on [0].

Thanks! I forgot about that (usually mobile cards have less VRAM, not more, lol). I don't necessarily doubt the paper's generation claim, but there are of course many factors behind what that number actually represents.

>This looks like quite a huge breakthrough, unless I'm missing something?

Looking at their methodology, it seems like it's more of an accumulation of existing good ideas into one model.

If it performs as well as they say, perhaps you can say the breakthrough is discovering just how much can be gained by combining recent advances.

It's sitting on just the edge of sounding too good to be true to me. I will certainly be pleased if it holds up to scrutiny.


If you look closer at the benchmarks, it seems to be slightly worse than FLUX [dev] on prompt adherence and quality. Still, the best thing is to evaluate the results oneself, and the track record of PixArt Sigma (from the same author?) is pretty good!

If you generate 25x more images, you can afford to cherry-pick.

That transfers computer time to user time. It's great when you want variations, less so when you want precision and consistency. Picking the best image tires the brain quite quickly: you have to weigh at-a-glance quality without letting it override detail quality.

I'd be curious to see how a vision model would do if it were finetuned to select the image that best matches given criteria.

It's possible that you could do O1 style training to build a final stage auto-cherrypicker.


It would be interesting to have benchmarks that take this into account (maybe they already do or I’m misunderstanding how those benchmarks work). I.e. when comparing quality between two different models of vastly different performance, you could be doing best-of-n in the faster model.

That sounds like it could be an interesting metric. Worth noting that there is a difference between an algorithmic "best of n" selection (via e.g. an FID score) vs. manual cherry-picking, which takes more factors into account, such as user preference, and also takes time to evaluate, which is what GP was suggesting.

This is a bit pedantic, but FID score wouldn't really be a viable metric for best of n selection since it's a metric that's only computable for distributions of samples. FID score is also pretty high variance for small sample sizes, so you need a lot of samples to compute a meaningful FID score.

Better metrics (assuming goal is text->image) would be some sort of inception score or CLIP-based text matching score. These metrics are computable on single samples.
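A minimal sketch of what a CLIP-based best-of-n pick could look like (using the CLIP model from the transformers library; the model id and prompt handling here are just illustrative):

    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def pick_best(images, prompt):
        # Score every candidate image against the text prompt, keep the best match
        inputs = processor(text=[prompt], images=images,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            scores = model(**inputs).logits_per_image.squeeze(-1)
        return images[scores.argmax().item()]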


Yeah I’d likely just pick the best scoring one (that is, the pick is made by the evaluation tool, not the model) - to simulate “whatever the receiver deemed best for what they wanted”.

There really are some “free lunches” in generative models. Really impressive work by this group. Ultimately, their model may not be the winner, because so much of what makes a good image gen model is the images and captioning that go into it, and the fine-tuning for aesthetic quality — something Midjourney and Flux both excel at. But the architecture here certainly will get into the hands of the people who can make the next great model.

Looking forward to it. This space just keeps getting more interesting.


This comes from the same group as the EfficientViT model. A few months ago, their EfficientViT was the only modern, small ViT-style model I could find with raw PyTorch code available. No dependencies on the shitty frameworks and libraries that other ViTs use.

None of this means much to me unless I can actually use it. Sorta like how Sora has been totally overshadowed by Kling, Runway, Minimax.

You have to release your model in some fashion for it to be impressive.


On the subject of such high quality video synthesis: have there been any such models whose weights are actually available? It strikes me that for image synthesis there have been a lot of amazing local models, but I can't remember seeing anything impressive for video that can be run offline.

CogVideoX seems to be the best offline model so far

The highest quality ones I mentioned are available via API or web client, but that's enough for me to be happy.

In my opinion, what's missing from this "image GenAI" tech is the ability to generate subsequent images consistently.

That would be useful for e.g. book illustration, comic strips, icon sets. Otherwise, people would think you picked those images from all over the internet rather than from one source/theme.


>We introduce a new Autoencoder (AE) that aggressively increases the scaling factor to 32. Compared with AE-F8, our AE-F32 outputs 16× fewer latent tokens,

Basically they compress/decompress the images more, which means they need less computation during generation. But on the flip side this should mean less variability.
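Rough token counts for a 1024x1024 image, assuming one token per latent pixel (my reading, not something I've verified against their code):

    def latent_tokens(image_size: int, downscale: int) -> int:
        # Number of latent "pixels" (tokens) after spatial downscaling by the AE
        side = image_size // downscale
        return side * side

    print(latent_tokens(1024, 8))   # AE-F8:  128 * 128 = 16384 tokens
    print(latent_tokens(1024, 32))  # AE-F32:  32 *  32 =  1024 tokens, 16x fewer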

Isn't this more of a design trade-off than an optimization?


It might not be compressing more (haven't yet looked at the paper). You can have fewer but larger tokens for the same amount of data.

Fewer but larger tokens would decrease the workload by having fewer things to compare against, balanced against a higher workload per comparison. For normal O(N²) attention that makes sense, but the page says:

> We introduce a new linear DiT, replacing vanilla quadratic attention and reducing complexity from O(N²) to O(N)

So not sure what's up there.
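For context, the usual linear-attention trick (not necessarily Sana's exact formulation) drops the softmax and regroups QK^T V as Q(K^T V), so the cost grows linearly in token count rather than quadratically. A rough sketch with a ReLU feature map:

    import torch

    def linear_attention(q, k, v, eps=1e-6):
        # q, k, v: (batch, tokens, dim). A ReLU feature map stands in for softmax.
        q, k = torch.relu(q), torch.relu(k)
        kv = torch.einsum("bnd,bne->bde", k, v)             # aggregate K^T V once: O(N * d^2)
        norm = torch.einsum("bnd,bd->bn", q, k.sum(dim=1))  # per-query normalizer
        return torch.einsum("bnd,bde->bne", q, kv) / (norm.unsqueeze(-1) + eps)

So the fewer-tokens win from the AE and the linear-attention win presumably stack, rather than one replacing the other.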


Does this finally solve the class of "6 fingers/hand" problems?

That problem can be fixed through careful fine-tuning, at the cost of losing some generality because the model is punished for drawing bad fingers. The new method outlined in the paper operates in a highly spatially-compressed latent space, but with more channels than previous models, so each latent pixel has 2x the information content of Flux and 8x that of SDXL. I do wonder whether the high spatial compression means that high resolution features like fingers will be messed up. On the other hand, the higher channel count in the latent space gives the model more detail per pixel to work with… I guess we'll just have to see.
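(For reference, and as I understand the channel counts: SDXL's VAE has 4 latent channels at 8x downscaling, Flux has 16 at 8x, and Sana's AE-F32 has 32 at 32x, which is where the 2x and 8x per-latent-pixel figures come from.)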

Image models are going to be widely available. They'll probably be a dime a dozen soon. It's great that an increasing number of models are going open, because these are the ecosystems that will grow.

3D models (sculpts, texture, retopo, etc.) are following a similar trend and trajectory.

Open video models are lagging behind by several years. While CogVideo and Pyramid are promising, video models are petabyte scale and so much more costly to build and train.

I'm hoping video becomes free and cheap, but it's looking like we might be waiting a while.

Major kudos to all of the teams building and training open source models!


> (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024 × 1024 resolution image.

[flagged]


Do you believe that human artists should pay license fees for all the art that they have ever seen, studied or drawn inspiration from? Whether graphic artists, writers or what have you.

I'm still trying to figure out which side to be on. On one hand I agree with you - there would be little modern art if it wasn't for centuries of preceding inspiration.

On the other hand, at least one suit was making headway as of 2024-08-14, about 2 months ago.[0] It seems like there must be some merit to GP's claim if this is moving forward. But again, I'm still trying to figure out where to stand.

[0] https://arstechnica.com/tech-policy/2024/08/artists-claim-bi...


They can both be true.

The learning process is similar, and it isn't identical.

Humans and AI both have the intellectual capacity to violate copyright, but also human artists generally know what copyright is while image generators don't (even the LLMs which do understand copyright are easily fooled, and many of the users complain about them being "lobotomised" if they follow corporate policy rather than user instructions).

And while there's people like me who really did mean "public domain" or "MIT license" well before even GANs, it's also true that most people couldn't have given informed consent prior to knowing what these models could do.


Or not. They claimed a big win but it was not at all that. It was essentially not completely falling at the first hurdle. All bar one of their claims were dismissed.

The remaining claim may not be a good claim, but it isn't completely laughable.

https://cdn.arstechnica.net/wp-content/uploads/2024/08/Ander... Order-on-Motions-to-Dismiss-8-12-2024.pdf

> In October 2023, I largely granted the motions to dismiss brought by defendants Stability, Midjourney and DeviantArt. The only claim that survived was the direct infringement claim asserted against Stability, based on Stability’s alleged “creation and use of ‘Training Images’ scraped from the internet into the LAION datasets and then used to train Stable Diffusion.”

I think you could have grounds for saying that construction of LAION violates copyright which would be covered by this. It doesn't necessarily mean training on LAION is copyright violation.

None of this has been decided. It might be wrong.

The rest of the case was "Not even wrong"


Human artists get in copyright trouble if they spam out a copy of something they studied and sell it. The businesses using AI artists do not seem to.

Artists who think that their copyright has been infringed upon are free to sue, just as they do when the alleged plagiarist is a human. I fail to see the difference.

Scale.

The cost of the electricity needed to create an image was about the same as the cost of hiring someone at the UN abject poverty threshold to examine it for 10 seconds… and that was with 2-year-old models and hardware:

https://benwheatley.github.io/blog/2022/10/09-19.33.04.html

(There's also trademark issues; from the discussions, I think those are what artists actually care about even though they use the word "copyright").


Image generation models don't do that either

Copyright means you own the right to reproduce a given work. It doesn't mean you own the ideas behind that work. If that were true, then all of modern music would instantly be a copyright violation.

Did you see the results of the Marvin Gaye / Pharrell Williams case? Sadly, it's getting pretty close to that.

Exactly what the labels want, since if that type of thing keeps going their way they will soon own not just every song like they currently do, but every future song forever.

This argument is only fair if you also think human artists should be banned, from birth, from ever looking at any other art. After all that would be training on stolen copyrighted work.


