The OC mentioned "static lighting". If they meant the lights stayed fixed while the platform was spinning, then the lighting would be inconsistent: the object's illumination would change with each photo. To get consistent lighting, you'd have to fix the lights to the platform so they spin with the object while you take the pictures.
I think you just nailed why I've been having a hard time with my photo set: it's the lighting. Well, crap, because I don't have access to the statue or the studio anymore. Thanks for the tip.
You could try generating per-view depth maps, fusing them into a point cloud, and meshing from there (rough sketch below). (I suspect splats as an intermediate may reduce your accuracy.)
I'm not aware of a fully baked workflow for that, though one may exist. The first step has gotten really good: the recent single-shot AI depth models are visually impressive (I don't know about their metric accuracy).
The ones I'm aware of are DUSt3R and the newer MASt3R.
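For the point-cloud-and-mesh part, a minimal per-view sketch with Open3D might look like the below. The file names, intrinsics, and Poisson depth are all placeholder assumptions; a real pipeline would also need the per-view camera poses to fuse multiple views into one cloud (DUSt3R-style models estimate those too).

    import numpy as np
    import open3d as o3d

    # Assumed inputs: a per-view depth map (H x W, metres) from a depth
    # model, plus camera intrinsics. All values here are placeholders.
    depth = np.load("view_000_depth.npy")
    H, W = depth.shape
    fx = fy = 500.0                 # assumed focal length in pixels
    cx, cy = W / 2.0, H / 2.0       # assumed principal point

    intrinsic = o3d.camera.PinholeCameraIntrinsic(W, H, fx, fy, cx, cy)
    depth_img = o3d.geometry.Image(depth.astype(np.float32))

    # Back-project the depth map into a point cloud in camera space.
    pcd = o3d.geometry.PointCloud.create_from_depth_image(
        depth_img, intrinsic, depth_scale=1.0, depth_trunc=10.0
    )

    # Poisson meshing needs oriented normals; point them at the camera,
    # which sits at the origin in camera space.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30)
    )
    pcd.orient_normals_towards_camera_location(camera_location=np.zeros(3))

    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9
    )
    o3d.io.write_triangle_mesh("view_000_mesh.ply", mesh)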
Photogrammetry generally assumes a fully static scene. If the camera can see both static parts of the scene and rotating parts, the algorithm may struggle to match features consistently between images.
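One common workaround (just a sketch, not something from this thread) is to mask out everything but the object before feature detection, so only the rotating geometry contributes matches; many photogrammetry tools also accept per-image masks. The file names and the ORB/BFMatcher choice here are assumptions:

    import cv2

    # Assumed masks (255 = rotating object, 0 = static background), e.g.
    # from background subtraction or a segmentation model.
    img0 = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)
    img1 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)
    mask0 = cv2.imread("frame_000_mask.png", cv2.IMREAD_GRAYSCALE)
    mask1 = cv2.imread("frame_001_mask.png", cv2.IMREAD_GRAYSCALE)

    # Detect features only on the object, so the static background can't
    # anchor the matches to non-rotating geometry.
    orb = cv2.ORB_create(nfeatures=2000)
    kp0, des0 = orb.detectAndCompute(img0, mask0)
    kp1, des1 = orb.detectAndCompute(img1, mask1)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)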