Magic3D looks like an improvement on DreamFusion [1], so it's sad to see that the code and models are not being made public.

What is public right now is StableDreamFusion [2]. It produces surprisingly good results on radially symmetric organic objects like flowers and pineapples. You can run it on your own GPU or in a Colab notebook.
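For anyone who wants to try it: per the stable-dreamfusion README (flags may have changed since), a run looks roughly like this, here wrapped in Python. The prompt and workspace name are just examples.

    # Kicks off a stable-dreamfusion run; the CLI flags follow the
    # repo's README at the time of writing and may change.
    import subprocess

    subprocess.run(
        ["python", "main.py",
         "--text", "a DSLR photo of a pineapple",  # the prompt
         "--workspace", "trial_pineapple",         # output directory
         "-O"],                                    # README's recommended preset
        check=True,
    )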

Or, if you just want to type a prompt into a website and see something in 3D, try our demo at https://holovolo.tv [3]

[1] https://dreamfusion3d.github.io/

[2] https://github.com/ashawkey/stable-dreamfusion

[3] https://holovolo.tv



Looks like Magic3D doesn't depend on any additional training, which means that open-source methods like StableDreamFusion can be adapted to this new method quite easily.

They use https://deepimagination.cc/eDiffi/ as the text-to-image diffusion model, which can be replaced with Stable Diffusion or something else.
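The core trick that makes the swap possible is Score Distillation Sampling (SDS) from the DreamFusion paper: the diffusion model stays frozen and is only used to score renderings of the 3D scene. A minimal PyTorch sketch of the idea; `render` and `diffusion` are hypothetical stand-ins, not any repo's actual API:

    import torch

    def sds_step(render, diffusion, text_emb, alphas_cumprod, optimizer):
        # Render the current 3D scene from a random camera; gradients
        # flow back through the renderer into the 3D parameters.
        image = render()                                 # (1, 3, H, W)

        # Diffuse the rendering to a random noise level t.
        t = torch.randint(20, 980, (1,))
        alpha_bar = alphas_cumprod[t].view(1, 1, 1, 1)
        noise = torch.randn_like(image)
        noisy = alpha_bar.sqrt() * image + (1 - alpha_bar).sqrt() * noise

        # The frozen text-to-image model predicts the added noise,
        # conditioned on the prompt; no gradients into the diffusion model.
        with torch.no_grad():
            noise_pred = diffusion(noisy, t, text_emb)

        # SDS: the gradient w.r.t. the image is (noise_pred - noise),
        # so optimize a surrogate loss with exactly that gradient.
        loss = ((noise_pred - noise).detach() * image).sum()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Since nothing in this loop ever updates the diffusion model, replacing eDiffi with Stable Diffusion only changes what sits behind the `diffusion` call.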


Note that [3] https://holovolo.tv does not generate a mesh; it runs the generated image through a depth-map estimator to create a parallax effect.
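That is a much cheaper trick than true text-to-3D. A rough sketch of the idea, using MiDaS from torch.hub as the depth estimator; the site's actual model and warping are assumptions on my part:

    import cv2
    import numpy as np
    import torch

    # MiDaS predicts relative inverse depth (larger = closer to camera).
    midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
    midas.eval()
    transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

    img = cv2.cvtColor(cv2.imread("generated.png"), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = midas(transform(img))
        depth = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=img.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze().numpy()

    # Parallax: shift near pixels more than far ones as the "camera" moves.
    depth = (depth - depth.min()) / (depth.max() - depth.min())
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    max_shift = 8.0  # pixels, an arbitrary choice
    frame = cv2.remap(img, xs + depth.astype(np.float32) * max_shift, ys,
                      cv2.INTER_LINEAR)

Rendering a few frames with different shifts gives the wiggle/parallax effect, with no geometry ever being built.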


Dream Fields, the precursor to all this work, is available: https://colab.research.google.com/drive/1u5-zA330gbNGKVfXMW5... The meshes are so-so, resolution-wise. I made a bunch of architecture studies with Dream Fields: http://delta.center/20102020-ar-platform#/architecture-ai3d/. I can't wait until the new versions are public; they will be so useful. I did not get good results with stable-dreamfusion; I thought Dream Fields was better.
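On the meshes being so-so resolution-wise: Dream Fields optimizes a density field, and a mesh is typically extracted by sampling that field on a fixed grid and running marching cubes, so the grid resolution hard-caps the detail. A sketch, with `density_fn` as a hypothetical stand-in for the trained field:

    import numpy as np
    from skimage import measure

    def extract_mesh(density_fn, resolution=128, threshold=10.0):
        # Sample the density field on a resolution^3 grid over [-1, 1]^3.
        xs = np.linspace(-1, 1, resolution)
        grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
        density = density_fn(grid.reshape(-1, 3)).reshape(
            resolution, resolution, resolution)

        # Marching cubes: features smaller than one grid cell
        # (2 / resolution) simply cannot appear in the output mesh.
        verts, faces, normals, _ = measure.marching_cubes(
            density, level=threshold)
        return verts, faces

Bumping the resolution helps but the cost grows cubically, which is presumably why the higher-resolution follow-ups like Magic3D move to an explicit mesh stage.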



