As far as I can see, it uses LiDAR. Unlike photo-based photogrammetry, LiDAR does not get confused by lighting conditions, and it works well for large scenes and environments. The downside of Apple's LiDAR is that the mesh is too low resolution for most practical uses.
I've been playing around with LiDAR apps and I agree; they are amazing for quickly building up models, but especially for anything not room-scale, the resolution is too low for detail work (like 3D-printing reproductions) without a lot of touch-ups.
More advanced apps seem to combine the LiDAR and camera data, but they take a bit longer to come up with a usable model or require post-processing.
The LiDAR on the iPhone is limited, but fun. Commercial LiDAR units are hella expensive (last time I looked).
Looked at your sketchfab collection. Lovely work.
For those who don’t know, Sketchfab allows you to preview the mesh using the layers button. Your bread roll model is nicely detailed with quite a tight mesh.
It is possible to model anything in 3D, but for complex (organic or natural) objects, nothing beats photogrammetry. I would certainly use your bread roll model in the middle distance. For close-up 'hero' shots, however, all photogrammetry-acquired models fail unless they are significantly retopologized; the outcome of that would be a tidy quad mesh.
Once the model has been retopologized, I would then have to project the original textures onto the new mesh.
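In Blender that projection step is typically a selected-to-active bake; here's a minimal Python sketch of it (object names are placeholders, and you still need an image texture node selected as the bake target in the retopo mesh's material):

    import bpy

    src = bpy.data.objects["BreadRollScan"]    # dense photogrammetry mesh
    dst = bpy.data.objects["BreadRollRetopo"]  # clean retopologized quad mesh

    # Selected-to-active baking needs Cycles, the source mesh selected,
    # and the bake target as the active object.
    bpy.context.scene.render.engine = 'CYCLES'
    src.select_set(True)
    dst.select_set(True)
    bpy.context.view_layer.objects.active = dst

    # Project the scan's colour detail onto the retopo mesh's UVs.
    bpy.ops.object.bake(type='DIFFUSE', pass_filter={'COLOR'},
                        use_selected_to_active=True, cage_extrusion=0.02)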
The original RGB information would then have to have its lighting information removed, either in Photoshop or in Agisoft De-Lighter. That way I would be able to light it independently inside my compositor or 3D app.
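If you want to script a crude version of that de-lighting step yourself, dividing out a heavily blurred luminance estimate flattens the large-scale shading (a naive heuristic of mine, not what De-Lighter actually does; the file names are placeholders):

    import numpy as np
    from PIL import Image
    from scipy.ndimage import gaussian_filter

    rgb = np.asarray(Image.open("diffuse.png").convert("RGB"), np.float32) / 255.0

    # Estimate low-frequency lighting as heavily blurred luminance, then
    # divide it out so roughly only the surface colour (albedo) remains.
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722], np.float32)
    shading = gaussian_filter(lum, sigma=64)
    albedo = np.clip(rgb / np.maximum(shading[..., None], 1e-3) * 0.5, 0.0, 1.0)

    Image.fromarray((albedo * 255).astype(np.uint8)).save("albedo.png")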
The final task would be to cook up some kind of roughness map. Likely I would derive this from the RGB map.
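A cheap first pass at that, assuming you already have the de-lit colour map: remap its luminance into a sensible roughness range and then hand-paint corrections where the heuristic is wrong:

    import numpy as np
    from PIL import Image

    # Crude heuristic: use the de-lit map's luminance as a roughness
    # starting point. For a floury roll the bright areas are the matte
    # ones; swap in (1.0 - lum) if your material works the other way.
    lum = np.asarray(Image.open("albedo.png").convert("L"), np.float32) / 255.0
    rough = np.clip(lum * 0.6 + 0.3, 0.0, 1.0)  # keep it off mirror-smooth

    Image.fromarray((rough * 255).astype(np.uint8)).save("roughness.png")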
For those who are interested, my link below leads to a resource that details this process.
Most of them use the system AR SDKs, which still rely heavily on the cameras for feature detection; if it's too dark, they have a hard time anchoring themselves to the scene.
What I'd love is software that starts with a scan, calculates a confidence level for every vertex, and then lets you edit the model with exact measurements or even intelligent questions like "is this part circular?"
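Something like this toy heuristic is what I have in mind for the confidence part (entirely made up, not an existing tool): score each vertex by how densely its neighbourhood is supported by scan points, and flag the weakly supported ones for manual input.

    import numpy as np
    from scipy.spatial import cKDTree

    verts = np.random.rand(10000, 3)   # stand-in for scanned vertices
    tree = cKDTree(verts)
    radius = 0.02                      # support radius in scene units

    # Count neighbours inside the radius; dense local support -> high confidence.
    neighbours = tree.query_ball_point(verts, radius)
    counts = np.array([len(n) - 1 for n in neighbours])
    confidence = counts / counts.max()

    flagged = np.where(confidence < 0.2)[0]  # vertices to ask the user about
    print(f"{len(flagged)} vertices flagged for review")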
If anyone is interested in an alternative to the software mentioned in the article, I've had very good results with Meshroom[0], and it works on Linux.
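It also ships a headless batch binary if you want to script it; roughly like this (from memory, so check meshroom_batch --help on your version for the exact flags):

    import subprocess

    # Headless run of Meshroom's default photogrammetry pipeline:
    # a folder of photos in, a textured mesh out.
    subprocess.run(
        ["meshroom_batch", "--input", "photos/", "--output", "model_out/"],
        check=True,
    )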
I did something similar over a decade ago as part of my master's thesis back in university [1]. IIRC this was before most of the tools mentioned in the article existed. If you don't have anything other than a camera at hand (and no recent iPhone), this should be the most straightforward/cost-effective way to generate metrically correct 3D models.
If you have an iPhone, I warmly recommend Luma AI for this sort of scan. Their NeRF-based (neural radiance field, not the toy) photogrammetry pipeline is close to magic.
Actually, I haven't tried prints! Very good point. The visual quality, though, is really good. The thing it gets right out of the box is specular surfaces, which are really hard for traditional photogrammetry. This makes capturing even shiny objects trivial (which might be impossible with traditional photogrammetry).
I just scanned in a bunch of photos of my relatives... this article got me thinking: could I use those photos to build a 3D model of them? (They're all deceased, many of them from before I was born.)
Wonder if anyone has experience with RealityCapture (https://www.capturingreality.com/realitycapture). They also have an iOS app, but it seems the original service was for generating 3D models from photos like the ones the author took.
Apple’s Object Capture looks impressive too: https://developer.apple.com/augmented-reality/object-capture...