Modeling my Grandpa with 3D Photogrammetry (2021) (jeffgeerling.com)
162 points by breathenow on Jan 1, 2023 | 25 comments



Scaniverse from Niantic is (really) free on iOS and makes this almost trivial: https://scaniverse.com

Apple’s Object Capture looks impressive too: https://developer.apple.com/augmented-reality/object-capture...


As far as I can see, it uses LiDAR. Unlike photo-based PG, LiDAR does not get confused by lighting conditions. It is also good for large scenes and environments. The downside of Apple's LiDAR is that the mesh is too low resolution for most practical uses.


I've been playing around with LiDAR apps and I agree; they are amazing for quickly building up models, but especially for anything smaller than room scale, the resolution is too low for detail work (like 3D-printing reproductions) without a lot of touch-ups.

More advanced apps seem to combine the data but take a bit longer to come up with a usable model or require post processing.


Scaniverse uses both LiDAR and photogrammetry. There are some examples here:

https://sketchfab.com/scaniverse/collections/made-with-scani...

In general, I think the iPhone LiDAR works quite well, even at small scales.


The LiDAR on the iPhone is limited, but fun. Commercial lidars are hella expensive (last time I looked).

Looked at your sketchfab collection. Lovely work.

For those who don’t know, Sketchfab allows you to preview the mesh using the layers button. Your bread roll model is nicely detailed with quite a tight mesh.

It is possible to model anything in 3D, but for complex (organic or natural) objects, nothing beats PG. I would certainly use your bread roll model in the middle distance. However, for close-up ‘hero’ shots, all PG-acquired models fail unless they are significantly retopologized. The outcome would be a tidy quad mesh.

After retopologizing, I would then have to project the original textures onto the new model.

The original RGB information would then have to have its lighting information removed, either in Photoshop or Agisoft De-Lighter. In this way, I would be able to light it independently inside my compositor or 3D app.

The final task would be to cook up some kind of roughness map. Likely I would derive this from the RGB map.
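
To sketch that last step in Python (Pillow and NumPy are my own choice of tooling here, and the file names are placeholders rather than anything from a real pipeline): a crude roughness map can be derived by inverting the luminance of the de-lit RGB map and stretching its contrast, something like:

    # Crude roughness-from-albedo heuristic (assumes Pillow + NumPy).
    # Dark, matte-looking areas of the de-lit RGB map become high roughness,
    # bright areas become smoother; purely a starting point for hand-tuning.
    import numpy as np
    from PIL import Image

    albedo = Image.open("delit_albedo.png").convert("L")   # luminance only
    lum = np.asarray(albedo, dtype=np.float32) / 255.0

    rough = 1.0 - lum                                       # invert: dark = rough
    rmin, rmax = rough.min(), rough.max()
    rough = (rough - rmin) / max(rmax - rmin, 1e-6)         # stretch to 0..1

    Image.fromarray((rough * 255).astype(np.uint8)).save("roughness.png")

From there it is easy to paint out the areas where the heuristic gets it obviously wrong.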

For those who are interested, my link below leads to a resource that details this process.


> Unlike photo-based PG, LiDAR does not get confused by lighting conditions.

I tried to use the app that GP mentioned inside a dark room. This confused the app, and it thought I was in motion.


Most of them use the system AR SDKs, which still rely heavily on the cameras for feature detection. If it's too dark, they have a hard time anchoring themselves to the scene.


What I'd love is software that starts with a scan and then calculates a confidence level for all the vertices, which it would then let you edit with exact measurements or even intelligent questions like "is this part circular?"
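
To make the first half of that concrete, a toy sketch (NumPy/SciPy, with made-up data; none of this is from an existing tool) could score each mesh vertex by how many raw scan points actually land near it, then flag the unsupported ones for editing:

    # Toy per-vertex confidence: count raw scan points near each mesh vertex.
    # The arrays, radius, and threshold below are invented for illustration.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    scan_points = rng.normal(size=(5000, 3))      # stand-in for the raw scan
    mesh_vertices = rng.normal(size=(200, 3))     # stand-in for the fitted mesh

    tree = cKDTree(scan_points)
    support = np.array([len(tree.query_ball_point(v, r=0.1)) for v in mesh_vertices])
    confidence = support / max(support.max(), 1)  # 0 = pure guess, 1 = well observed

    print(f"{(confidence < 0.2).sum()} vertices look like low-confidence guesses")

The "is this part circular?" half is the hard part, of course; that would be a constraint solver sitting on top of something like this.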


Somewhat related, Apple's RoomPlan is really cool.


If anyone is interested in an alternative to the software mentioned in the article, I've had very good results with Meshroom[0], and it works on Linux.
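
It also has a headless mode if you want to script a whole folder of photos. A minimal sketch, assuming the meshroom_batch binary from a recent release is on your PATH (older releases shipped it as meshroom_photogrammetry) and with placeholder paths:

    # Minimal headless Meshroom run; adjust the paths for your own capture.
    import subprocess
    from pathlib import Path

    photos = Path("scan_photos")     # folder of overlapping photos of the subject
    out = Path("meshroom_out")
    out.mkdir(exist_ok=True)

    subprocess.run(
        ["meshroom_batch", "--input", str(photos), "--output", str(out)],
        check=True,
    )
    # The textured mesh should end up in the output folder once the pipeline finishes.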

References:

0: https://github.com/alicevision/Meshroom


I did something similar over a decade ago as part of my master's thesis back in university [1]. IIRC this was before most of the tools mentioned in the article existed. If you don't have anything other than a camera at hand (and no recent iPhone), this should be the most straightforward/cost-effective way to generate metrically correct 3D models.

[1]: https://youtu.be/9hAadMszs5k


Can we call this photogranpammetry then?


If you have an iPhone, I warmly recommend Luma AI for these sorts of scans. Their NeRF (neural radiance field, not the toy) based photogrammetry pipeline is close to magic.


I heard that the mesh from NeRF is rather shabby. Not good enough for 3D printing or (in our case) VFX. Was this your experience?


Actually I haven't tried prints! Very good point. The visual quality, though, is really good. The thing it gets right out of the box is specular surfaces, which are really hard for traditional photogrammetry. This makes capturing even shiny objects trivial (which might be impossible with trad. photogrammetry).


For anyone interested…

This is our educational module on photogrammetry: https://rmit.instructure.com/courses/87565/modules/933562

These are the tools we use: https://rmit.instructure.com/courses/87565/pages/photogramme...


I just scanned in a bunch of photos of my relatives... this article got me thinking: could I use those photos to build a 3D model of them? (They're all deceased, many before I was born.)


The name OpenMVS is all kinds of confusing for people interested in retrocomputing.

https://en.wikipedia.org/wiki/OpenVMS

https://en.wikipedia.org/wiki/MVS


I wonder if anyone has experience with RealityCapture (https://www.capturingreality.com/realitycapture). They also have an iOS app, but it seems the original service was for generating 3D models from photos like the ones the author took.


I tried Meshroom and RealityCapture (both on Windows) and the latter yielded better results in a fraction of the time. YMMV.



Does anyone have experience with anything like this on android?


I suggest using NeRFs instead of point clouds... it's a fun experience with the toolkit.


> turning your grandpa into a ketchup cap

Jeff, I suspect your scientists were so preoccupied with whether they could they didn't stop to think if they should.


This is great but I don't have (or want) a Mac!

Perhaps the platform should be in the title.



