
It uses the Object Capture API introduced in macOS Monterey.

The API allows you to create 3D models from images with just a few lines of code.

https://developer.apple.com/augmented-reality/object-capture
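For a sense of what "a few lines of code" means in practice, here's a minimal sketch driving RealityKit's PhotogrammetrySession (the paths are hypothetical, and the output loop needs an async context):

  import Foundation
  import RealityKit

  // Point the session at a folder of photos of the object.
  let inputFolder = URL(fileURLWithPath: "/path/to/images", isDirectory: true)
  let session = try PhotogrammetrySession(input: inputFolder)

  // Request a medium-detail USDZ; other detail levels include
  // .preview, .reduced, .full, and .raw.
  let outputURL = URL(fileURLWithPath: "/path/to/model.usdz")
  try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])

  // Results arrive as an async stream of progress and completion messages.
  for try await output in session.outputs {
      switch output {
      case .requestProgress(_, let fraction):
          print("Progress: \(Int(fraction * 100))%")
      case .requestComplete(_, let result):
          print("Finished: \(result)")
      case .requestError(_, let error):
          print("Failed: \(error)")
      default:
          break
      }
  }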




If we're talking Swift frameworks, Model I/O (underrated) and Object Capture are certainly part of things.


He asked for the “blackbox that takes set of images as an input and produces a 3D model as an output.”

That black box is the Object Capture API, which takes a folder of images and outputs a USDZ (or OBJ) file. Model I/O isn't necessary.

Kudos to Apple for releasing a photogrammetry pipeline that allows web and app developers to build tools like this, with zero knowledge of 3D reconstruction required.


I've just been trying out the photogrammetry session; it's super simple to use. I think Apple's photogrammetry software isn't as advanced as others', but it's free and works well at small scale.

I work in the same field, making a capture app for photogrammetry software.
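For the curious, the session only exposes a handful of tuning knobs. A rough sketch of the Configuration (paths hypothetical):

  import Foundation
  import RealityKit

  // The main knobs PhotogrammetrySession.Configuration exposes.
  var config = PhotogrammetrySession.Configuration()
  config.sampleOrdering = .sequential    // photos taken in a walk-around order
  config.featureSensitivity = .high      // spend more compute on low-texture surfaces
  config.isObjectMaskingEnabled = true   // try to separate the object from the background

  let session = try PhotogrammetrySession(
      input: URL(fileURLWithPath: "/path/to/images", isDirectory: true),
      configuration: config
  )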


Yeah, what Apple has is a clear lead in the time required to build objects.

I think the photogrammetry side of things, the "give me an object now" use case, will shift toward mobile device camera apps.

It's only server-side now due to power requirements.
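If I remember right, there's a runtime gate for exactly this; something like the following (from memory, so treat PhotogrammetrySession.isSupported as an assumption and check the docs for your SDK version):

  import RealityKit

  // Assumption: isSupported reports whether this hardware can run the
  // reconstruction pipeline at all.
  if PhotogrammetrySession.isSupported {
      // run reconstruction on-device
  } else {
      // fall back to shipping images to a beefier machine
  }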


If they make the API available on iOS it will be a game changer. I'm hoping they'll make it available for M1-equipped devices.


The rumours say they're putting M1-level chips in the headset. Again, the power budget is the only issue!


You're not wrong (although we do use Model I/O a lot).

I just saw that you're from https://www.aboundlabs.com and metascan.ai. Your scans look great! So how does your black box work?



