@braddwyer is me :)

It gives you about 15 fps of depth data




Haha, cool! Hi :)

Given the small size of the laser projector, I imagine the natural movement of a hand-held phone would cause significant displacement of the projected dots over a 1 s interval? Have you tried integrating the 15 frames to see what it looks like?


I haven't yet.

We submitted a game about 3 weeks ago that uses front-facing ARKit as its core game mechanic, and it hasn't been approved by Apple yet.

I'm waiting to see if they're going to allow us to use the new technology in novel ways or not before I invest a lot more time in it.


Minute, subpixel movements can ironically give you MORE resolution if you process them over time, though you'd probably need some sort of "anchor" points. Something roughly like the sketch below.
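
This is essentially the shift-and-add idea from multi-frame super-resolution. A minimal numpy sketch, where shift_and_add and its inputs are hypothetical names and the per-frame sub-pixel shifts are assumed to be estimated elsewhere (e.g. from tracked anchor points):

    import numpy as np

    def shift_and_add(frames, shifts, scale=4):
        # frames: list of low-res 2D depth arrays, all shape (h, w)
        # shifts: per-frame (dy, dx) sub-pixel offsets in low-res pixels,
        #         assumed estimated elsewhere (e.g. from anchor points)
        # scale:  upsampling factor of the high-res output grid
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        hits = np.zeros_like(acc)
        for frame, (dy, dx) in zip(frames, shifts):
            # Place each low-res sample at its shifted position on the
            # high-res grid (nearest-neighbour for simplicity).
            ys = np.round((np.arange(h)[:, None] + dy) * scale).astype(int)
            xs = np.round((np.arange(w)[None, :] + dx) * scale).astype(int)
            ys = np.clip(ys, 0, h * scale - 1)
            xs = np.clip(xs, 0, w * scale - 1)
            np.add.at(acc, (ys, xs), frame)
            np.add.at(hits, (ys, xs), 1)
        # Average overlapping samples; cells no frame ever hit stay zero.
        return np.where(hits > 0, acc / np.maximum(hits, 1), 0.0)

The different sub-pixel offsets mean each frame samples the scene at slightly different positions, so the accumulated grid ends up denser than any single frame. A real implementation would also need to inpaint the holes and use better interpolation than nearest-neighbour.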


That doesn’t seem ironic to me.


I think the irony being implied is that normally, when you're shooting video and your camera is jittering, you're effectively losing resolution compared to a static camera because of motion blur, whereas this depth mapping benefits from minute movements. Looking at individual frames of video is different from combining them into a single sharper image, but I get the counterintuitive feeling they were driving at.


Could you stabilize this before integrating? Using feature points and matching them up, perhaps?


I imagine something like that would be necessary. The techniques would probably be related to those used in SLAM [1].

[1] https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping
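
For the simpler feature-matching version (short of full SLAM), aligning each frame to a reference before integrating could look roughly like this with OpenCV; the function name is just illustrative, and it assumes 8-bit grayscale inputs:

    import cv2
    import numpy as np

    def align_to_reference(ref, frame):
        # Match ORB features between the two frames, then estimate a
        # similarity transform (rotation + translation + scale).
        orb = cv2.ORB_create(1000)
        kp_ref, des_ref = orb.detectAndCompute(ref, None)
        kp_f, des_f = orb.detectAndCompute(frame, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_f, des_ref),
                         key=lambda m: m.distance)
        src = np.float32([kp_f[m.queryIdx].pt for m in matches[:200]])
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches[:200]])
        # RANSAC discards outlier matches (e.g. on moving objects).
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        h, w = ref.shape[:2]
        return cv2.warpAffine(frame, M, (w, h))

The same transform could then presumably be applied to the corresponding depth frame before averaging, since the depth map is registered to the camera image.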



