
Why would that make a difference?



The OC mentioned "static lighting". If the lights stay static while the platform spins, the lighting on the object is inconsistent: the object is lit from a different direction in each photo. To get consistent lighting you would have to fix the lights to the platform so they spin with the object while you take the pictures.


I think you just nailed why I have been having a hard time with my photo set. It's the lighting. Well crap, because I don't have access to the statue or studio again. Thanks for the tip.


You could try generating per-view depth maps, going to a point cloud and meshing from there. (I suspect splats may reduce your accuracy as an intermediate.)

I’m not aware of a fully-baked workflow for that — though it may exist. The first step has gotten really good: the recent single-shot AI models for depth are pretty visually impressive (I don’t know about metric accuracy).

The ones I’m aware of are DUST3R and the newer MAST3R:

https://github.com/naver/dust3r
https://github.com/naver/mast3r
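If it helps, here is a minimal sketch of the "depth maps → point cloud → mesh" step using Open3D (my choice, not something from this thread). It assumes you already have per-view depth maps in a metric-ish scale plus camera intrinsics and per-view poses from whatever model or capture rig you use; the filenames, intrinsics, and parameters are placeholders.

import numpy as np
import open3d as o3d

# Hypothetical inputs: one color + depth image per photo, plus a 4x4
# world-to-camera extrinsic for each view. Replace with your own data.
views = [
    {"color": "view_000.png", "depth": "depth_000.png", "extrinsic": np.eye(4)},
    # ... one entry per photo
]
# Placeholder intrinsics: width, height, fx, fy, cx, cy.
intrinsic = o3d.camera.PinholeCameraIntrinsic(1920, 1080, 1400.0, 1400.0, 960.0, 540.0)

combined = o3d.geometry.PointCloud()
for v in views:
    color = o3d.io.read_image(v["color"])
    depth = o3d.io.read_image(v["depth"])
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth,
        depth_scale=1000.0,  # assumes 16-bit depth stored in millimetres
        convert_rgb_to_intensity=False)
    # Back-project this view's depth map into a world-space point cloud.
    pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
        rgbd, intrinsic, extrinsic=v["extrinsic"])
    combined += pcd

# Clean up, estimate normals, then mesh with Poisson reconstruction.
combined = combined.voxel_down_sample(voxel_size=0.002)
combined.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(combined, depth=9)
o3d.io.write_triangle_mesh("statue_mesh.ply", mesh)

The quality hinges almost entirely on how consistent the depth maps and poses are between views; the meshing step itself is the easy part.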

Good luck!


Photogrammetry generally assumes a fully static scene. If there are static parts of the scene which the camera can see and also rotating parts, the algorithm may struggle to properly match features between images.
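To make the matching step concrete, here is a rough illustration with OpenCV. Real SfM pipelines typically use SIFT-like features and RANSAC-based geometric verification rather than this exact setup; the filenames are hypothetical.

import cv2

# Two consecutive turntable shots (hypothetical filenames).
img1 = cv2.imread("shot_000.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("shot_001.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe local features in each image.
orb = cv2.ORB_create(nfeatures=4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two views. SfM treats consistent matches as
# evidence of one rigid camera motion over a static scene; matches on the
# static background and matches on the rotated object imply two contradictory
# motions, which is what derails the reconstruction.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative matches")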


I think it's common to have dots on the rotating disk that the object is placed on.


Sure, but if the background has a lot of features it will still confuse the algorithm unless it has some special settings for ignoring the background.
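The usual workaround is to mask out everything except the object and turntable before feature extraction, so the matcher never sees the background. A minimal sketch with OpenCV, assuming (purely for illustration) that a circular mask around the turntable is good enough; in practice you might use a segmentation model or hand-drawn per-view masks instead, and most photogrammetry packages can accept such masks.

import cv2
import numpy as np

img = cv2.imread("shot_000.jpg", cv2.IMREAD_GRAYSCALE)

# Hypothetical mask: keep only a circular region around the turntable.
mask = np.zeros(img.shape, dtype=np.uint8)
cv2.circle(mask, (img.shape[1] // 2, img.shape[0] // 2),
           min(img.shape) // 3, 255, -1)

orb = cv2.ORB_create(nfeatures=4000)
kp, des = orb.detectAndCompute(img, mask)  # features detected only inside the mask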



