
"Stable diffusion runs on under 10 GB of VRAM on consumer GPUs, generating images at 512x512 pixels in a few seconds. This will allow both researchers and soon the public to run this under a range of conditions, democratizing image generation."

Oooooh I can actually run this at home!



Can it be run on Apple M1/M2, if they have 16+ GB of RAM?


The founders just confirmed in their Discord chat that it will be able to run on M1 Macs.


I mean, someone can probably get it running on a Commodore 64 eventually. But I wonder how well it's going to run. If it runs on the GPU it might be plausible, but don't models like this see something like a 100x slowdown on CPU?

Someone else said GPU PyTorch on the M1 is far from ready, so I'm wondering if this will end up running on the CPU instead.


Per the talk Mostaque gave today in the Discord, it runs on the CPU on the M1 at about 4-5 minutes per generation.


That's not bad. A rough back-of-the-envelope calculation puts that at around a 30x slowdown compared to the times quoted on Discord, but then I've got no idea how fast it would be on my own kit.

I get in the region of 2-3 minutes from Disco Diffusion etc. on a mobile 3080.


"Will" as in it does now, or as in it's slated to?


Looking at the repo[1], it uses PyTorch, so you may be able to. PyTorch shipped GPU acceleration for Apple M1 earlier this year in v1.12, but the repo's environment.yaml pins PyTorch at v1.11, so you might not be able to upgrade without issues.

[1] https://github.com/CompVis/stable-diffusion
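
For what it's worth, device selection in PyTorch would look something like this, assuming you've upgraded to v1.12+ for the MPS backend (the repo pins v1.11, so no guarantees it works with their environment):

    import torch

    # Prefer CUDA (Nvidia), then Apple's MPS backend (PyTorch 1.12+),
    # then fall back to plain CPU.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    x = torch.randn(1, 3, 512, 512, device=device)
    print(device, x.mean().item())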


The PyTorch M1 GPU backend is still a work in progress; see the ops coverage tracking issue at https://github.com/pytorch/pytorch/issues/77764
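
There is an escape hatch for the missing ops, though: the PYTORCH_ENABLE_MPS_FALLBACK environment variable makes unsupported operators fall back to the CPU instead of raising. A rough sketch (untested with this particular model, and the variable has to be set before torch is imported):

    import os

    # Must be set before importing torch; otherwise ops missing from
    # the MPS backend raise NotImplementedError instead of quietly
    # running on the CPU.
    os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

    import torch

    device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
    print(device)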


It can be run on the CPU. GPU acceleration on an M1 MacBook is very slow compared to an Nvidia equivalent: 10-15x slower in my tests.
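
If anyone wants to reproduce the comparison, this is roughly how I timed it, using Hugging Face's diffusers wrapper rather than the CompVis scripts (so treat the exact calls as a sketch; it also assumes you've accepted the model license and downloaded the weights from the hub):

    import time
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
    pipe = pipe.to("mps" if torch.backends.mps.is_available() else "cpu")

    start = time.time()
    # One 512x512 image at the default 50 denoising steps.
    image = pipe("an astronaut riding a horse",
                 num_inference_steps=50).images[0]
    print(f"generated in {time.time() - start:.0f}s")
    image.save("out.png")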


Even if it's 10x slower, is it still doable with a few cups of coffee and some waiting? I have no idea how long generating these images takes in wall-clock time.


No, that's referring to the amount of RAM needed by the (presumably Nvidia) GPU.


The M1 has that as unified memory shared between the CPU and GPU.


Yes... but it's not an Nvidia GPU.



