It's getting tiring seeing 3D model generation papers throw around "high quality" to describe their output, then gloss over nearly all of the qualities that actually matter for a 3D model in real production contexts. Have they figured out how to produce usable topology yet? They don't talk about that, so probably not.
3D artists are begging for AI tools which automate specific tedious but necessary tasks like retopo and UV unwrapping, but tools like the OP do the opposite, skipping over those details to produce a poorly executed "final" result and leaving the user to reverse engineer the model in an attempt to salvage the mess it made.
If gen3D is going to be a thing then they need to listen to the people actually doing 3D work, not just chase benchmarks invented by other gen3D researchers. Some commentary on a similar paper about how they are trying to solve the wrong problems: https://x.com/rms80/status/1801362145600254211
> Have they figured out how to produce usable topology yet?
One recent attempt at it is Mesh Anything [1], which tries to generate triangle meshes with sensible topology [2] for a given point cloud. But it's still in the early stages: it can fail somewhat dramatically on meshes with smooth and concave parts [3], and it has a hard cap on the number of triangles it can generate.
> If gen3D is going to be a thing then they need to listen to the people actually doing 3D work, not just chase benchmarks invented by other gen3D researchers.
This has been an issue for much longer than generative AI. In graphics, Siggraph papers have been claiming “high quality” for fully automated methods for decades, while producing lower quality than human-created assets or human-in-the-loop tools. The same is true of lots of other things in computer science; academia is just prone to over-automating because it’s fun and clever and because we can.
This is probably the natural course of events. People publishing papers and projects are going for maximum wow, not maximum quality. Often the authors are smart grad students who have no access to 3d professionals and need to get something published asap to meet funding/graduation goals.
We shouldn’t necessarily complain about it or get tired, but instead work on tech transfer: borrow these ideas and bring them to 3d tooling. It takes time, and we have to vet them and figure out which will actually help people. Sometimes the ones that get lots of attention turn out to have no staying power.
> if gen3D is going to be a thing then they need to listen to the people actually doing 3D work, not just chase benchmarks invented by other gen3D researchers.
Completely agree. I used to take an interest in research into non-photorealistic rendering. The same thing happened there: paper after paper detailing 'novel' approaches to yet another cross-hatching algorithm that no artist was ever going to use.
Yes, but a fair chunk only produce meshes as an afterthought. They often use a neural representation (or gsplats or density fields) and bolt on mesh generation for portability.
(This particular project does seem to be specifically about meshes but I'm addressing your broader point)
Not every application of 3d generation necessarily needs meshes. Or rather, it might not need meshes in the not-so-distant future.
We'll see, but we've heard this story before with other proposed alternatives to mesh-based pipelines. Voxels and point clouds were all the rage in research for a while, poised to take over as soon as the last few pieces came together, but they never did, and we still use meshes for nearly everything, either by creating meshes directly or by turning some other type of data into a mesh. We still need to figure out how you're even supposed to edit static neural assets at a fine-grained level, never mind animate them with precision, never mind do all that efficiently.
I'd argue that even 2d generated art isn't very usable beyond dropping it into your blog every few paragraphs in hopes of keeping readers hooked, and even then such art is immediately recognisable as having been prompted into existence.
Good luck using it for logos, serious graphic design for the web, UX work, or generating game assets that feel like they all fit together.