The name "OpenAI" to me implies being open-source.
I have an RTX 3080 and will likely be buying a 4090 when it comes out. Will I ever be able to generate these images locally, rather than having to use a paid service? I've done it with DALL-E Mini, but the images from that don't hold a candle to what DALL-E 2 produces.
If you've got 60GB available to your GPU, then maybe you can get close.
I'm really curious if Apple's unified memory architecture is of benefit here, especially a few years from now if we can start getting 128/256GB of shared RAM on the SoC
I'm not sure if any current or next-generation GPU even has enough power to run DALL-E 2 locally.
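For a rough sense of scale, here's a back-of-envelope sketch of the memory needed just to hold model weights at inference time. The 3.5B figure is the decoder size reported in the DALL-E 2 paper; the full system (prior, CLIP, upsamplers) and the activations would add more on top, so treat these as loose lower bounds, not a real requirement.

```python
# Back-of-envelope estimate: GiB of GPU memory to hold a model's
# weights alone, ignoring activations and other components.
def weights_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory in GiB for n_params parameters at the given precision."""
    return n_params * bytes_per_param / 1024**3

# ~3.5B-parameter decoder, in half vs. full precision.
for precision, nbytes in [("fp16", 2), ("fp32", 4)]:
    print(f"3.5B params in {precision}: {weights_gib(3.5e9, nbytes):.1f} GiB")
```

In fp16 that's roughly 6.5 GiB for the decoder weights alone, which is why a 10GB 3080 gets tight once you add the other model components and activation memory.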
Anyway, OpenAI is unlikely to release the model. The situation will likely be like it is with GPT-3; however, it's also likely another team will attempt to duplicate OpenAI's work.
Thanks to the amazing @lucidrains, there's already an open-source implementation of DALL-E 2: https://github.com/lucidrains/DALLE2-pytorch, and a pretrained model for it should be released within this year.
The same person is also at work on an open-source implementation of Google's Imagen which should be even better (and faster) than DALLE-2: https://github.com/lucidrains/imagen-pytorch.
This is possible because the original research papers behind DALLE-2 and Imagen were both publicly released.