
Show HN: VoxelCamera – 3D scanning with a mobile phone (YC Fellowship 2015) - pondruska
http://voxelcamera.co/
======
clay_to_n
Reminds me of the iOS app Seene, which is like Instagram for 3D photos made on
your phone: [http://seene.co/](http://seene.co/)

Ultimately not much of a value prop for me, but was very fun to play with for
a few days. I don't think the app could be used to actually export a model
(for printing, say), but their website today looks like they have other
applications besides their Instagram-like sharing app.

~~~
pondruska
We like Seene, but unlike their 2.5D pictures we aim for full 3D models one
can further use for any purpose (such as 3D printing or creating digital
content).

------
pontifier
The problem I see with all of these scanners is the lack of quality... Are
they all based on the same feature tracking core? Why isn't anyone doing more
innovative reconstruction like tracking edge contours, or something else...
even with a depth sensor most of the scanned models I've seen are crap.

~~~
KaiserPro
'Cause it's hard, that's why.

You're assuming that CCDs are perfect image sensors that don't have noise to
cause feature trackers to jitter.

You're also assuming that sparsely featured objects only need simple
back/belief propagation to make a good model. Finally, doing it in less than
ten hours on an iPhone at anything beyond 320^3 voxels is pretty much
impossible.

Even decent commercial laser scanners only have limited resolution at this
scale. Your best bet is either lightfield capture or
[http://web.media.mit.edu/~achoo/polar3D/camready/manuscript_...](http://web.media.mit.edu/~achoo/polar3D/camready/manuscript_iccv.pdf)
(which I've not read fully yet, however it looks pretty sexy, even if it's
not very general.)

------
asadlionpk
Good work!

Microsoft was/is working on something similar.[1]

I tried the app on a chair and an object on a table. It's far from what
Kinect can do. I am optimistic about it though.

[1]
[http://blogs.technet.com/b/inside_microsoft_research/archive...](http://blogs.technet.com/b/inside_microsoft_research/archive/2015/08/24/3d-scans-with-mobile-phones-mobilefusion-research-project.aspx)

~~~
pondruska
Right now the scanning works well for a limited number of scenarios (such as
the ones shown on the page). We are working on improving the quality and
expect it will grow over time to cover more and more cases.

Unlike Kinect this is a passive sensor technology which has its own
advantages (it works outside) and disadvantages (it does not work on
completely textureless areas).

------
fgd
A few years back (then) coworkers made a version of this called Mementify [0]
which was derived from an EU research project called PHOV [1].

Quite interesting to see the idea pop up every few years.

[0]: [https://itunes.apple.com/us/app/mementify-your-finest-moment...](https://itunes.apple.com/us/app/mementify-your-finest-moments/id553460965?mt=8)

[1]: [http://www.phov.eu/gallery/](http://www.phov.eu/gallery/)

~~~
pondruska
Good job to your coworkers with Mementify! There are a number of apps
nowadays which work in a similar way (the user takes pictures which are then
uploaded and processed in the cloud). Our aim is to provide live feedback
while scanning, which is possible only with on-device processing. Without it,
one simply never knows whether all parts of the model have been captured
while taking the photos. Moreover, cloud processing usually takes several
hours, so if the results are not satisfying it might even be impossible to
retake the pictures.

Offline methods do have the advantage of somewhat higher quality, though. We
can add cloud post-processing later to refine the final models - especially
for the purposes of 3D printing.

------
mandeepj
It is not that hard. You take multiple photos, add depth information, and you
have 3D versions of the pictures that you took. I know I oversimplified it,
but in a nutshell this is it.

~~~
pondruska
Yes, in a nutshell, this is it :-) But knowing the depth information is not
enough - you must also know very precisely the position of the camera each
picture was taken from. Computing both the depth information and the camera
position is in fact a very challenging task.
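To make the "depth plus camera position" point concrete: each depth pixel is lifted into 3D through the camera intrinsics and then moved into the world by the camera pose. A minimal numpy sketch (the intrinsics and pose values below are made-up placeholders, not VoxelCamera's actual parameters):

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal lengths fx, fy and principal point cx, cy.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

# Hypothetical camera pose for one picture: rotation R and translation t
# mapping camera coordinates into world coordinates.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])

def backproject(u, v, depth):
    """Lift pixel (u, v) with measured depth into a 3D world point."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in camera frame
    point_cam = ray * depth                         # scale the ray by the depth
    return R @ point_cam + t                        # move into the world frame

# Points from different pictures only line up into one model
# if R and t are known precisely for every picture.
p = backproject(320, 240, 1.5)
```

Fusing points from many viewpoints this way is exactly where a small error in R or t smears the model, which is why estimating the camera position is the hard part.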

------
zpr
Any plans for an android version?

~~~
lakySK
Unfortunately, we don't have enough time to add Android support at the moment,
as we're focusing on improving the quality and stability of the existing
app. In the longer term we're definitely planning to release an Android
version too.

------
Sir_Substance
That's a sweet trick, but I don't have an iphone, and apparently it's not
compatible with ipad 2's.

Anyone know if there's an equivalent piece of software for a desktop computer,
preferably linux compatible?

~~~
lakySK
Thanks! There are some libraries like PCL
[http://pointclouds.org/](http://pointclouds.org/) for use with Kinect cameras
or things like LSD SLAM and OpenDTAM available for desktop for real time
processing and some other structure from motion solutions to reconstruct
models from images. They are all really cool, but can be a bit clunky to use
for people without computing background, so we're trying to create a new
solution that's a bit more mobile and easier to use, hence the iPhone app.
Give the sites I mentioned a try though, maybe you can find something that
fits your needs.

~~~
Sir_Substance
Yeah, I'm at the point of running Agisoft Photoscan under wine, no results
yet. But of course I won't get the heat-map quality indicator, which is the
bit I really like about your app!

~~~
lakySK
Good luck, hope you manage to make it work! Thanks, that's the advantage of
having it run real-time. Unfortunately, I don't think there is any other out-
of-the-box solution right now that can give you this sort of feedback with a
normal camera.

------
bcks
Ooh, please make it easy to export to a format I can load into a 3D printer.

~~~
lakySK
Hi bcks, we're working on it. We'll be releasing a new version that allows you
to export to ply. We're also working on adding external 3D printing and
delivery straight from the app.
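For anyone curious what a ply export involves: PLY is a simple format, so a minimal ASCII point-cloud writer fits in a few lines (an illustrative sketch, not VoxelCamera's actual exporter; meshes would additionally carry a face element):

```python
def write_ply(path, points):
    """Write an iterable of (x, y, z) tuples as an ASCII PLY point cloud."""
    points = list(points)
    with open(path, "w") as f:
        # Header: format declaration and a vertex element with x/y/z properties.
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        # Body: one vertex per line.
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

write_ply("scan.ply", [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
```

Most 3D printing slicers and tools like MeshLab read this format directly.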

------
joshvm
Cool to see it working at speed!

The site mentions this is a new technology, how is this different from
traditional SFM type algorithms which have been around for almost a decade
now?

~~~
pondruska
Thanks! Structure from Motion is a broad term covering a number of different
methods developed over the years, and our method is also a form of SfM. The
main challenge here is to make it work well on a mobile phone in real time
instead of the hours many SfM algorithms require.

------
rememberlenny
Tried to use this twice today, but struggled to get it to 100% progress long
enough. Video instructions would be good.

~~~
pondruska
Thanks, we will make one. Try to put the object on a newspaper or other well-
textured surface. It usually helps.

------
3dfan
"VoxelCamera is not compatible with this iPad"

That's what I get on my iPad 2. Why?

~~~
lakySK
All the processing is done on the device in real time and the app requires
enough processing power to work well, so we had to restrict it to newer iPhone
and iPad models only.

------
rememberlenny
Rudimentary Kinect with your iphone, without any additional hardware. Pretty
cool

~~~
lakySK
Thanks a lot! We're glad you like it :)

------
JorgeGT
Can the mesh be exported?

~~~
lakySK
Not yet, unfortunately :( We'll be adding export to ply in the coming
versions.

------
mandeepj
How do you handle long hair in a picture?

~~~
lakySK
We don't have any special case processing, everything is handled the same way
in the volumetric model, which makes the app really versatile. If you're
expecting a scan detailed enough to recognise individual hairs, I'm afraid
I'll have to disappoint you so far :) We're currently working on improving the
quality and soon we should be able to scan people with good enough quality for
3D printing. Right now though the human scans may lack details.

