Hacker News
How to automatically choose a camera viewing angle for any 3D model (cubehero.com)
106 points by iamwil on May 16, 2013 | hide | past | favorite | 11 comments



This approach uses the bounding box and its longest dimension to choose a camera distance and angle. It works better than a static pose, and it's computationally cheap.
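For concreteness, here's a minimal sketch of that heuristic in Python/NumPy. The field of view and the fixed 3/4 viewing angle are my own assumed defaults, not values from the article:

```python
import numpy as np

def frame_bounding_box(vertices, fov_deg=50.0, elev_deg=30.0, azim_deg=45.0):
    """Place a camera so the model's bounding box fits in view.

    Distance comes from the longest bounding-box dimension and the
    camera field of view; the angle is a fixed 3/4 view.
    """
    vertices = np.asarray(vertices, dtype=float)
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    center = (lo + hi) / 2.0
    longest = (hi - lo).max()
    # Distance at which the longest dimension just fills the frustum.
    dist = (longest / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    elev, azim = np.radians(elev_deg), np.radians(azim_deg)
    offset = dist * np.array([
        np.cos(elev) * np.cos(azim),
        np.cos(elev) * np.sin(azim),
        np.sin(elev),
    ])
    return center + offset, center  # camera position, look-at target
```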

However, it can oversimplify for objects that are significantly asymmetrical. A good place to go from here is statistical measures of object recognizability:

Betke & Makris, Information-Conserving Object Recognition.

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.44.4...

For 3D models, you can use hierarchical space carving to simplify the mesh first. You don't need great resolution; you just need better resolution than a rectangular bounding box.
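To illustrate the "better than a bounding box" point: even a crude voxel occupancy grid captures the shape far better than a single box. This isn't real space carving (no visibility constraints, just vertex binning), but it's in the same spirit:

```python
import numpy as np

def voxelize(vertices, resolution=16):
    """Coarse voxel occupancy grid from mesh vertices.

    A cheap stand-in for a carved model: mark the cells that contain
    vertices, giving a blocky approximation of the shape.
    """
    v = np.asarray(vertices, dtype=float)
    lo, hi = v.min(axis=0), v.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero
    idx = np.minimum(((v - lo) / span * resolution).astype(int),
                     resolution - 1)
    grid = np.zeros((resolution,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```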

(Note: PCA can be used for recognizability, but it isn't ideal: relatively high complexity, and overly general.)


There is a lot of academic work in this area that's relevant, dating back at least 10 years. I'm surprised it wasn't mentioned.

Maximizing the entropy of projected normals works well, for example, like this paper by Polonsky et al.: http://www.cs.technion.ac.il/~gotsman/AmendedPubl/Oleg/Polon...
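A rough sketch of the viewpoint-entropy idea, not the paper's exact formulation: score a view direction by the entropy of the visible faces' projected areas. Backface culling stands in for proper visibility testing here:

```python
import numpy as np

def viewpoint_entropy(face_normals, face_areas, view_dir):
    """Entropy of visible projected face areas for one view direction.

    Faces whose normals point toward the camera contribute their
    projected area; a more even distribution of projected areas means
    higher entropy, i.e. a more 'informative' view. Crude visibility:
    front-facing only, no occlusion testing.
    """
    face_normals = np.asarray(face_normals, dtype=float)
    face_areas = np.asarray(face_areas, dtype=float)
    view_dir = np.asarray(view_dir, dtype=float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    cos = face_normals @ view_dir
    visible = cos > 0
    proj = face_areas[visible] * cos[visible]
    p = proj / proj.sum()
    return -(p * np.log(p)).sum()
```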

A related problem is automatically figuring out the "natural orientation" of a mesh, so your automatic renderings don't look upside down by accident. Fu et al. use pretty simple machine learning tools: http://www.cs.ubc.ca/nest/imager/tr/2008/upright_orientation...


Secord et al. also did a perceptual study in which they determined which features were most correlated with user viewpoint preference and built models for viewpoint preference based on this information: http://gfx.cs.princeton.edu/pubs/Secord_2011_PMO/index.php


Interesting. The OP got me thinking along the lines of manually tagging salient features of each model (as well as ranking models by salience, either manually or automatically based on criteria related to the object that the model represents).


I think in this particular case you should be able to use the coordinate system in the STL for that. For most things meant for printing, you'll want to see how the object would come out of the printer, so you'll have an actual up direction defined before you do this. That should save a lot of computation.


I'm curious whether you could take the 3D vertex coordinates and use principal component analysis to find an appropriate viewing angle. Example: http://www.youtube.com/watch?v=BfTMmoDFXyE

PCA is often overused but for this specific use case it seems very well suited.
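A minimal sketch of what that could look like, assuming you just want a view direction rather than a full camera pose: the principal axis with the least variance is the model's "thin" direction, and viewing along it shows the two largest extents.

```python
import numpy as np

def pca_view_direction(vertices):
    """Suggest a camera direction from the principal axes of the mesh.

    Returns the eigenvector of the vertex covariance matrix with the
    smallest eigenvalue: looking along it faces the model's broadest
    silhouette.
    """
    v = np.asarray(vertices, dtype=float)
    centered = v - v.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]  # direction of least variance
```

One caveat, per the note above: for meshes with dense vertex clusters in one region, the covariance is biased toward those clusters, so weighting by face area may work better than raw vertices.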


Thanks! I hadn't thought to use PCA, but I'll try it out, and report back, along with other peoples' suggestions!


Hmm, my (admittedly naive) idea would be to render several shots of the product from different quasi-random angles and look at the final entropy/contrastiness of each image to find the most interesting one (the one that captures the most detail).
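Something like this, where `render` is a hypothetical function mapping an angle to a grayscale image (the renderer itself is out of scope here), and the score is the Shannon entropy of the intensity histogram:

```python
import numpy as np

def image_entropy(gray, bins=64):
    """Shannon entropy (bits) of a grayscale image's intensity histogram.

    A cheap proxy for how much detail a rendered shot captures;
    expects intensities in [0, 1].
    """
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def best_view(render, angles):
    """Pick the angle whose rendering scores the highest entropy."""
    return max(angles, key=lambda a: image_entropy(render(a)))
```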


And if fine-tuning of the angles is desired, a genetic algorithm should work great for optimization in this case.
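A bare-bones sketch of that, over (elevation, azimuth) pairs. Truncation selection plus Gaussian mutation are my own choices here; `score` can be any fitness function, e.g. the image entropy of a rendering from that angle:

```python
import random

def optimize_angle(score, pop_size=20, generations=30, sigma=10.0):
    """Tiny genetic algorithm over (elevation, azimuth) angle pairs.

    Keeps the fittest quarter each generation (elitism) and fills the
    rest of the population with mutated copies of those parents.
    """
    pop = [(random.uniform(-90, 90), random.uniform(0, 360))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        parents = pop[: pop_size // 4]       # keep the fittest quarter
        children = []
        while len(parents) + len(children) < pop_size:
            elev, azim = random.choice(parents)
            children.append((elev + random.gauss(0, sigma),
                             (azim + random.gauss(0, sigma)) % 360))
        pop = parents + children
    return max(pop, key=score)
```

Since the elite parents survive unchanged, the best score never regresses between generations.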


Curious as to how the pictures would compare to a simple principal component analysis algorithm.


I see a lot of pictures with parts chopped off, though. The very first picture has the top clipped. In the Cubehero screenshot there are multiple chopped-off ones, including the top-right one. Why chop off a small part?



