Despite its powerful output and advanced model architecture, SDXL 0.9 can run on a modern consumer GPU: it needs only Windows 10 or 11 or Linux, 16 GB of RAM, and an Nvidia GeForce RTX 20-series graphics card (or equivalent or better) with at least 8 GB of VRAM. Linux users can also use a compatible AMD card with 16 GB of VRAM.
I’m guessing that it will work eventually, though I’m not sure who will make that happen.
I've used Apple's port of Stable Diffusion on my Mac Studio with M1 Ultra and it worked flawlessly. I could even download models from Hugging Face and convert them to a CoreML model with little effort using Apple's conversion tool documented in their Stable Diffusion repo [1]. Some models on Hugging Face are already converted – I think anything tagged with CoreML.
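For anyone curious, the conversion was roughly a one-liner for me. This is from memory, so double-check the flags against the repo's README; the model name and output directory here are just examples:

```shell
# Convert a Hugging Face checkpoint to Core ML with Apple's
# ml-stable-diffusion conversion script (flags from memory -- see the repo).
python -m python_coreml_stable_diffusion.torch2coreml \
    --convert-unet --convert-text-encoder --convert-vae-decoder \
    --model-version runwayml/stable-diffusion-v1-5 \
    -o ./coreml-sd-1.5
```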
I have an M2 MBP with 64 GB of RAM. Performance with the older models is very good in my opinion; it feels faster running locally than on DreamStudio does. I don't have benchmarks, but in any case the performance is not bad.
I’ve had good results with SD 1.4/2 using MPS acceleration on similar hardware (M1 Max, though with 64 GB). No stability issues with MPS, either. I’d say don’t rule it out just yet.
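If it helps anyone, the device selection is the only Mac-specific part when using diffusers. A rough sketch (`pick_device` is just a name I made up; the fallback order is my own preference):

```python
def pick_device() -> str:
    """Pick the best available torch device string, falling back to CPU.

    "mps" is PyTorch's Metal backend, which is what accelerates
    Stable Diffusion on Apple Silicon.
    """
    try:
        import torch
    except ImportError:
        # No PyTorch installed at all -- nothing to accelerate with.
        return "cpu"
    if torch.backends.mps.is_available():
        return "mps"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"

# Usage with diffusers would then look like:
#   pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
#   pipe = pipe.to(pick_device())
```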