"Stable diffusion runs on under 10 GB of VRAM on consumer GPUs, generating images at 512x512 pixels in a few seconds. This will allow both researchers and soon the public to run this under a range of conditions, democratizing image generation."
I mean - someone can probably get it running on a Commodore 64 eventually. But I wonder how well it's going to run. If it runs on the GPU then it might be plausible but don't models like this have a 100x or so slowdown on CPU?
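The "100x or so" figure is folklore and varies wildly by model and hardware, but it's easy to get a rough number yourself. A hedged sketch below times a small conv net (not Stable Diffusion itself, just an illustrative stand-in) on CPU, and on CUDA if you have it:

```python
# Rough micro-benchmark: forward-pass time on CPU vs GPU for a toy conv net.
# This is NOT Stable Diffusion; it just illustrates how to measure the
# CPU/GPU slowdown factor people quote. Numbers will vary a lot by hardware.
import time
import torch

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(64, 64, 3, padding=1),
)
x = torch.randn(1, 3, 512, 512)  # one 512x512 RGB image

def time_forward(device: str, reps: int = 3) -> float:
    """Average seconds per forward pass on the given device."""
    m, inp = model.to(device), x.to(device)
    with torch.no_grad():
        m(inp)  # warm-up (kernel compilation, memory allocation)
        if device == "cuda":
            torch.cuda.synchronize()  # GPU work is async; wait before timing
        start = time.perf_counter()
        for _ in range(reps):
            m(inp)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / reps

cpu_t = time_forward("cpu")
print(f"CPU: {cpu_t * 1000:.1f} ms/pass")
if torch.cuda.is_available():
    gpu_t = time_forward("cuda")
    print(f"GPU: {gpu_t * 1000:.1f} ms/pass ({cpu_t / gpu_t:.1f}x speedup)")
```

Note the `torch.cuda.synchronize()` calls: CUDA ops are launched asynchronously, so without them you'd mostly be timing kernel launches rather than actual compute.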
Someone else said GPU PyTorch on the M1 is far from ready, so I'm wondering if this will be CPU instead.
That's not bad. Rough back-of-the-envelope math puts that at a ~30x slowdown compared to the times on Discord, but then I've got no idea how fast it will be on my own kit.
I get in the region of 2-3 minutes from Disco Diffusion etc. on a mobile 3080.
Looking at the repo[1], it uses PyTorch, so you may be able to. PyTorch released GPU acceleration for Apple M1 earlier this year in v1.12. But looking at the environment.yaml file, they pin PyTorch v1.11, so you might not be able to upgrade to v1.12 without issues.
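If you do get v1.12 installed, the M1 GPU backend shows up as the "mps" device. A small sketch of picking the right device, with a guard for older PyTorch where `torch.backends.mps` doesn't exist yet (device names and the version boundary are as documented by PyTorch; the function name is just my own):

```python
# Hedged sketch: choose the best available PyTorch device.
# torch.backends.mps (Apple Metal) only exists from PyTorch 1.12 on,
# so on the pinned v1.11 we fall back to CPU via getattr().
import torch

def pick_device() -> torch.device:
    # Prefer CUDA (discrete NVIDIA GPUs), then MPS (Apple Silicon), else CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    mps = getattr(torch.backends, "mps", None)  # absent before 1.12
    if mps is not None and mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

print(pick_device())
```

You'd then move the model and inputs over with `.to(pick_device())` as usual.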
Despite being 10x slower - is it still doable with a few cups of coffee and some waiting? I have no idea how long generating these images takes in wall-clock time.
Oooooh I can actually run this at home!