Magic3D looks like an improvement on DreamFusion [1], so it's a shame the code and models aren't being released.
What is public right now is StableDreamFusion [2]. It produces surprisingly good results on radially symmetric organic objects like flowers and pineapples. You can run it on your own GPU or in a Colab.
Or, if you just want to type a prompt into a website and see something in 3D, try our demo at https://holovolo.tv
Magic3D doesn't appear to require any additional training, so open-source implementations like StableDreamFusion should be able to adopt its improvements fairly easily.
They use eDiffi (https://deepimagination.cc/eDiffi/) as the text-to-image diffusion model, which could be swapped for Stable Diffusion or another model.
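The reason the diffusion backbone is swappable is that these methods optimize the 3D scene through score distillation sampling (SDS), which only needs a model that predicts the noise added to an image; the U-Net's Jacobian is skipped, so any text-to-image diffusion model plugs in. A minimal numpy sketch of the SDS update, with a toy stand-in denoiser replacing the real diffusion model and a flat vector standing in for the rendered image (everything here is illustrative, not the actual DreamFusion/Magic3D code):

```python
import numpy as np

rng = np.random.default_rng(0)

def dummy_denoiser(x_noisy, t, prompt_embedding):
    # Toy stand-in for a real text-conditioned diffusion model
    # (e.g. Stable Diffusion or eDiffi): predicts the noise in x_noisy.
    # This fake version just pulls x toward the "prompt" target.
    return x_noisy - prompt_embedding

def sds_gradient(x, prompt_embedding, t=0.5, w=1.0):
    # Score Distillation Sampling: noise the rendered image, have the
    # diffusion model predict that noise, and use w(t) * (predicted - actual)
    # as the gradient on x -- skipping backprop through the denoiser,
    # which is the key DreamFusion trick.
    eps = rng.standard_normal(x.shape)
    alpha = 1.0 - t  # toy noise schedule
    x_noisy = np.sqrt(alpha) * x + np.sqrt(1.0 - alpha) * eps
    eps_pred = dummy_denoiser(x_noisy, t, prompt_embedding)
    return w * (eps_pred - eps)

# Gradient-descend a "rendered image" toward the prompt target.
# In the real pipeline, x would be a NeRF rendering and the gradient
# would flow back into the NeRF's parameters.
target = np.full(4, 2.0)   # toy prompt embedding
x = np.zeros(4)            # toy rendered image
for _ in range(200):
    x -= 0.05 * sds_gradient(x, target)
```

With the toy denoiser above, x settles near target / sqrt(alpha) rather than target exactly, which illustrates why the weighting w(t) and noise schedule matter in the real methods.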
[1] https://dreamfusion3d.github.io/
[2] https://github.com/ashawkey/stable-dreamfusion
[3] https://holovolo.tv