
Scientific Breakthrough Lets SnappyCam App Take 20 Full-Res Photos Per Second - Osiris
http://techcrunch.com/2013/07/31/fastest-iphone-camera/
======
revelation
DCT is already lossy [1], so the statements around 8 megapixels are completely
pointless, and worst of all, it's 1990s lossy technology. Wavelet
transformations completely destroy any DCT.

That said, if their emphasis is on producing pictures with minimal time delta
at highest resolution, algorithms used for still pictures are out of place.
Video compression algorithms still use DCT and wavelets, but they do so only
after they have reduced redundancies between _series_ of pictures, a process
that tends to work significantly better than anything you can get out of these
lossy transformations when you want to preserve quality.

Of course, eliminating redundancy in a series of pictures might have tipped
them off to the fact that the image sensor isn't actually producing fresh
pictures at the rate they want.

1: as used in JPEG. The transformation itself is perfectly invertible,
assuming infinite precision arithmetic.

~~~
jpap
You are right on the loss: it's purposefully introduced as a quantization step
after performing the DCT, and before losslessly compressing the resulting
coefficients with Huffman and encoding to the final JPEG bitstream.

Despite all of that, JPEG has now become computationally tractable. I remember
the days when it took tens of seconds to encode a JPEG on a commodity
machine. Now, with the help of SIMD, we can encode a high quality image in
milliseconds on a mobile device.

Fortunately you can choose the quantization matrix that determines the amount
of loss. Even if you were to choose an all-ones matrix, no human, not even
Superman with his laser eyes, could "detect" the quantization noise.
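
To make that concrete, the quantization step itself is tiny; a minimal sketch in C (not the SnappyCam code, and with the table supplied by the caller rather than a standard one):

    #include <math.h>
    #include <stdint.h>

    /* Quantize one 8x8 block of DCT coefficients (64 values, row-major).
     * Larger table entries throw away more precision; a table of all 1s
     * leaves only rounding error, which is the all-ones case above. */
    void quantize_block(const float dct[64], const uint8_t table[64],
                        int16_t out[64])
    {
        for (int i = 0; i < 64; i++)
            out[i] = (int16_t)lrintf(dct[i] / table[i]);
    }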

For SnappyCam, I chose to invest in JPEG a little more because it's a
ubiquitous standard for still image compression.... and with the right
hardware and algorithms, quite tractable.

I'll consider adding a JPEG "quality setting" so you can choose the amount of
loss introduced... sounds like a great idea to me.

The idea behind SnappyCam was also to code each picture independently, and not
rely on motion prediction or video codecs. If you try to pull a single frame
from an HD video you might be disappointed: they compress the YUV dynamic range
(studio swing) and it looks washed out, even if you land on an I-frame.
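
(For the curious, "studio swing" is just the 8-bit video-levels convention that keeps luma in 16-235 instead of 0-255; a minimal sketch of expanding it back to full range, assuming BT.601-style levels and nothing from the app itself:)

    #include <stdint.h>

    /* Studio swing puts 8-bit luma in 16..235 rather than 0..255. Expanding
     * it back to full range; frames pulled from video that skip this step
     * look washed out. */
    static uint8_t video_to_full_range_luma(uint8_t y)
    {
        int v = ((int)y - 16) * 255 / 219;
        if (v < 0)   v = 0;
        if (v > 255) v = 255;
        return (uint8_t)v;
    }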

Lastly, as far as I can tell, the image sensor is yielding complete scans with
each frame. I'd hazard a guess that any motion prediction or frame
deltas might actually slow the whole chain down.

~~~
revelation
It's just bizarre that you would be doing the complete JPEG process at the
instant you get the image from the sensor. As you note, there is a plethora
of steps that JPEG performs, from color space conversion, to the DCT
(essentially a gigantic matrix multiplication), to quantization, entropy
coding (Huffman or arithmetic), and encoding as a JPEG bitstream.

The only reason would be that you are pressed for memory or bandwidth, but
certainly you have the resources to store one full frame and produce deltas,
or just apply part of the JPEG chain, enough to remedy memory pressure. You
can always encode it to an actual JPEG after the process.

And yes, pulling single frames from a completely encoded video isn't helpful,
because they can get away with more compression. But there are very
sophisticated algorithms for eliminating the redundancy between frames, which
would have been my first avenue in attempting to do something like this.

~~~
gruseom
_which would have been my first avenue in attempting to do something like
this._

 _Have_ you attempted to do something like this? Because not only has he
attempted it, he's done it. Therefore I think you should stop talking down to him
("completely pointless", "it's just bizarre", " _my_ first avenue"). It comes
across as wanting to prove how smart you are instead of seeking to learn from
someone who has done incredible work and—lucky for us—is bursting at the seams
with enthusiasm to share it.

Oh and congratulations jpap on what's looking like the most successful and
technically solidest HN launch in quite some time! I hope your hard work pays
off.

~~~
zimpenfish
_wild applause_

~~~
tptacek
This whole thread should be framed and hung in the HN lobby.

------
nwh
Looks like they've got an Instagram-type site set up too.

[http://snappyc.am/2LdRMF28U0](http://snappyc.am/2LdRMF28U0)

[http://snappyc.am/4HHxyCad7D](http://snappyc.am/4HHxyCad7D)

[http://snappyc.am/3G3i6QCJUk](http://snappyc.am/3G3i6QCJUk)

~~~
marcamillion
Do these work for you? When I load the site, I see the photo scrub on the
right hand side of the screen - and the slider moves up and down...but I don't
see the big image loaded in the center of the screen where I expect the image
to be.

Unless it takes a while to load - in which case I was just being uber
impatient.

~~~
jpap
I saw this a couple of days ago. Are you using Chrome?

It might be yet another Chrome canvas bug. :-(

Try Safari and let me know if you can? :-)

~~~
dlsym
Does not work in FF 22.0, nor in Chrome 28.0.1500.71 or Opera. (OS: Linux
Mint)

~~~
jpap
Ouch. Looks like some work for me ahead.

The problem with Chrome 28.0.1500.x (.95 here) is troubling me. It seems to be
a more recent problem that I'm convinced is another browser bug.

Thanks for the detailed version report, that's really going to help. :-)

~~~
groby_b
Hm. Works on Chrome 30.0.1582.0 (Canary) - after repeated page refreshes. Was
stuck on 98% several times. Disabling the cache doesn't allow me to repro it,
emptying the cache doesn't either.

Works with 28.0.1500.95, too.

However, looking at the console, I see _occasional_ instances of Resource
interpreted as <blah> but transferred as MIME type <foo>. Not a big deal, but
maybe a pointer.

------
seldo
This is neat tech and works pretty much as advertised, but man, this UI is
pretty rough. The blue background and curvy borders are strangely superfluous;
tapping the left-bottom corner controls pops up an intermediate selector but
the right-bottom controls work in-place; taking a shot produces a big
"infinity" symbol that fades in and out of view -- I don't know what it means.

Good work on tech, please hire a UX specialist :-)

~~~
jpap
Fair comments, and much appreciated.

I did all of the graphics design myself, in the app and on the web. :-)

The infinity sign you see does require an explanation. I'll take your advice
and think about how it can be done more simply.

It's basically telling you, the user, that the capture buffer has filled, and
you're now dropping (some) shots.

~~~
josh2600
Have you thought about displaying a semi-transparent bar to show the buffer?
Or maybe a one or 2 px white mark creeping up the side of the screen (turning
to red as it get towards the top)?

Just some thoughts. If you have a buffer, and I'm gonna get fubarr'd if I hit
the limit, you should probably show me the buffer (not just a warning that
it's too late).

~~~
jpap
Versions 1.x.x of SnappyCam had a linear buffer [1] but I felt it was
distracting.

I generally can see the "end" of the circular buffer around the shutter
button, so it doesn't seem to be an issue for me. Perhaps I tend to touch it
on the lower-right instead of dead-center.

I made an effort to support lefties in the UI (see Advanced Settings), but the
buffer doesn't spin the other way just yet. (To be honest, I've had to
deprioritise that in favour of other features.)

Are you left handed?

[1] Yes, that's me jumping near the GG bridge. I'm quite good at it now, as
you can imagine:
[http://a3.mzstatic.com/us/r1000/085/Purple/v4/c5/06/d5/c506d...](http://a3.mzstatic.com/us/r1000/085/Purple/v4/c5/06/d5/c506d546-6a02-4d86-6ac3-d38935e82f4b/mzl.mzwfhcsq.320x480-75.jpg)

~~~
josh2600
No, not left handed, but I didn't get the infinity visual cue, or the border,
until you explained it here.

The red bar on the bottom would probably be fine if you could make it like 70%
transparent until it gets toward the end, then vacillate it between 0% and 50%
so it looks like it's flashing. Some visual indicator that I should be paying
more attention to it.

------
andrewf
I think it's fantastic that you've managed to turn a long, hard optimisation
slog into a real product win. Add me to the list of Australians willing to buy
you a beer - but not back home, I live in SF at the moment :)

I'm curious about the low-quality preview you get when scrolling through all
the shots. Are you storing low-quality data separately or do you also have a
fast, low-qual JPEG decoder? (Is the Huffman encoding between blocks
independent?)

~~~
jpap
Hey Andrew, would love to catch up over a beer. :-) Drop me a note via email:
jpap {at} snappylabs.com

You've got a good eye: as part of the JPEG image compression, I also generate a
low-resolution thumbnail that's embedded into each file as Exif metadata
(along with geotagging, and other camera settings that define the shot, like
exposure).

They are used as a "first-in" placeholder for an image.

The full image is then downsampled and decompressed simultaneously [1]
exploiting the fact that the (Retina) screen resolution is often much lower
than the full JPEG resolution.

As soon as you start zooming, the image is decompressed yet again at the full
resolution and replaced in-place as quickly as possible so hopefully you won't
see it. :-)

[1] As outlined in [http://jpegclub.org/djpeg/](http://jpegclub.org/djpeg/)
the technique relies on the fact that the top NxN block of the MxM DCT
coefficients, N < M, can be inverted to form an NxN-pixel lower-resolution
image of the original MxM block. When N is {1, 2, 4} a fast inverse DCT
algorithm can be used with great success.

In fact, N == 1 is a trivial inversion and it might be tempting to use it as
the low-resolution image instead of a thumbnail, but you still have to unpack
all of the DCT coefficients to get to it, which can be expensive (Huffman).
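
To make the N == 1 case concrete, here's a minimal sketch (not the app's actual code) that builds a 1/8-scale image from already-dequantized 8x8 blocks using only each block's DC coefficient:

    #include <stdint.h>

    /* With JPEG's DCT normalization, inverting a block that has only its DC
     * coefficient gives a constant block equal to DC/8 (plus the 128 level
     * shift), i.e. the block average. One pixel per block = 1/8-scale image. */
    static uint8_t dc_to_pixel(int16_t dc)
    {
        int v = dc / 8 + 128;
        if (v < 0)   v = 0;
        if (v > 255) v = 255;
        return (uint8_t)v;
    }

    /* blocks: dequantized 8x8 blocks (64 coefficients each) in raster order;
     * out: an image of blocks_per_row x block_rows pixels. */
    void downsample_dc(const int16_t *blocks, int blocks_per_row,
                       int block_rows, uint8_t *out)
    {
        for (int by = 0; by < block_rows; by++)
            for (int bx = 0; bx < blocks_per_row; bx++)
                out[by * blocks_per_row + bx] =
                    dc_to_pixel(blocks[(by * blocks_per_row + bx) * 64]);
    }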

------
Oculus
I have a feeling that soon SnappyLabs is going to have Apple knocking on their
door with a very nice offer.

Kudos to them, sounds like they deserve it.

~~~
jpap
Thanks!

I just hope Apple's engineers don't get pissed off by the press. SnappyCam is
built on their hardware, which can do remarkable things.

Though we as app developers don't get access to a lot of their smarts, e.g.
hardware JPEG codecs, I'm sure there's even more innovation in their work that
often goes unacknowledged.

~~~
ahknight
You'd think so, but they take 2-10s to recover from a photo and you ... don't.
Clearly there's some room for optimization there. :)

------
cendrillon
Nice to see Jpap continuing to push the boundaries of what's possible.

Aussie maths whiz supercharges net
[http://www.smh.com.au/articles/2007/11/05/1194117915862.html](http://www.smh.com.au/articles/2007/11/05/1194117915862.html)

~~~
gandalfu
@jpap, are the results shown in the article being used today?

~~~
jpap
You'd have to ask Ericsson. ;-) I certainly hope so!

~~~
rbourke
I vaguely remember reading that they (the Ericsson patents) were included in
the latest VDSL2 specs.

Could be rolled out as part of the NBN pending Australia's election result.

------
Marat_Dukhan
Wow, amazing performance tuning, so rare these days!

However, you should be careful with this online ARM simulator. It simulates a
Cortex-A8, while the iPhone 5 runs on Apple Swift, two generations ahead, which
very likely has different instruction timings compared to Cortex-A8. I didn't have
a chance to test Swift, but here is a list of what _might_ be different,
judging by Qualcomm Krait and ARM Cortex-A15, which are in the same
generation:

\- Instead of the 2-cycle latency on Cortex-A8, simple ALU instructions might
have 3-cycle latency on Swift (this is the case on Krait and Cortex-A15).

\- Cortex-A8 can issue only a 64-bit SIMD multiplication per cycle; Swift can
probably do a 128-bit VMUL.Ix each cycle (Krait does).

\- Cortex-A8 can issue only one SIMD ALU instruction per cycle; Swift can
probably do more (Cortex-A15 can issue 3 128-bit VADD/VAND/etc in 2 cycles).

\- Cortex-A8 could issue one SIMD ALU + one SIMD LOAD/SHUFFLE per cycle; Swift
could be less restrictive (and probably can even issue 3 NEON instructions per
cycle, like Cortex-A15).

~~~
jpap
That's really cool, Marat. Thanks for the additional info on the A15 and
Swift.

It's a lot of work to optimize the assembly code for each ARM variant, but I'm
glad to know that Swift will generally run the same code at the same speed as,
or faster than, the Cortex-A8.

The 3-cycle latency on simple ALU instructions is a bummer, but fortunately I
use them sparingly for computation as compared to NEON. (They're great for
pointer arithmetic and computing image row strides.)

The multiple issue of an ALU + LOAD is awesome. That would definitely help
some of my routines.

~~~
Marat_Dukhan
The 3-cycle latency refers to simple NEON ALU instructions (VADD.Ix, VORR,
VAND, etc). Scalar ALU instructions are still single-cycle. Note that these
numbers are from Cortex-A15 and Krait which are expected to be similar to
Swift, but I didn't measure Swift itself to know for sure.

------
gosu
This looks fantastic. Watching people's reactions in that example image was
really interesting, and it occupied me for a good few minutes. "Why can't you
do the same thing with video?" Because rewinding video is really painful,
especially online video.

Criticism:

I use my thinkpad's pointer stick to move the mouse cursor. It's impossible to
keep the cursor inside the "control strip" while moving it up and down and
also looking away from the strip (and at the image). Too much accidental x
motion is introduced.

It would be better for me if you were to enable the scroll wheel (which I can
simulate on my pointer) as an alternative time control, or perhaps let me
click on the control strip and then hold down mouse1 for as long as I want my
y motion to control the position in time.

~~~
jpap
@gosu, despite what Josh wrote, you can traverse your pointer across any part
of the living photo online. :-)

Love that you picked up on the expressions! It wasn't until I got the photos
out of the app that I was fascinated by the same thing. I really can't wait to
enable this functionality for everyone soon. :-)

More elaborate mouse movements are possible, but only in HTML5 full-screen
mode, which is required to "capture" the mouse (think of a game).

The problem with that, too, is that instructions or a tutorial are required.
(I'd try to make things as intuitive as possible, despite the failure in the
other thread RE UX and the infinite shutter.)

~~~
gosu
_facepalm_

What was happening is that I was trying to keep the mouse in the control
strip, and it would go off the right side of the image.

Thanks a lot, Josh.

Edit: By the way, the fullscreen functionality isn't launching. But I do have
a weird browser (conkeror on xulrunner 22.0).

~~~
jpap
haha, no worries.

In the app, you need to start with your finger near the thumbnail strip. (But
can move it away for fine-grained scrubbing if you wish.)

It's no surprise that the learned behavior is transferring to the web viewer.

------
9oliYQjP
jpap, I don't quite fully understand the implementation (though I'd love to
one day be proficient enough to). But maybe you can explain how the format
compares to motion JPEG. Or maybe it's very similar? About 15 years ago I
dabbled in live video recording on old Pentium II hardware with an old BT878
video input card. Motion JPEG was the only feasible option to obtain
relatively high quality (for the time) results albeit at the cost of disk
space.

~~~
jpap
There are a lot of similarities actually.

In SnappyCam, each photo is compressed to a separate JPEG file. There's no
inter-frame compression, no motion vectors, etc. The same as mJPEG.

The main differences are:

* Each photo is stored in an individual file. This makes seeking through the living photo blindingly fast. (I guess you could do this with motion JPEG by utilizing an index.)

* Each photo also has full metadata. Try rotating the camera as you shoot. It will follow you. :-) Same goes for the geo-tagging: included are a bunch of timings that aren't normally included, so you can know the "precise" usec when you took the photo.

* Each photo has its own thumbnail. That allows me to cheat a little bit in the photo viewer: you will see a flash from blurry to clear as you scroll around.

(There are more cheats in the viewer for decoding and downsampling at the same
time before you zoom, to make the photo load faster as well. One of the
handful of reasons why I rolled my own decoder as well.)

~~~
ramanujan
This is amazing work. Could you explain why you decided to go with many
individual stills rather than filling in the gaps in a video codec? It's a
really counterintuitive approach.

~~~
jpap
Good question!

Several reasons:

* Video codecs are much more complex.

* Random access seek is a lot slower, unless you're using all I-frames. (That's now a codec option on iOS, but not when I started.)

* "Studio swing" reduces the dynamic range of the YCbCr components so the quality suffers.

* Each frame lacks its own thumbnail, unless you maintain an adjunct "thumbnail video".

* Each frame might(?) not be able to have attached separate metadata, like geotagging, sensor settings at time of capture, etc.

* Deleting one frame causes a "hole" and headache.

* Standards compliant JPEG means export is super easy.

* Anything above full HD video is difficult to deal with in 3rd party software.

------
Dylan16807
Wow, I never thought I'd see a software optimization be talked about in such
breathless amazement.

~~~
wmf
Yeah, I'm kinda skeptical of the "science" here.

Edit: A new algorithm counts as science, but the TechCrunch article really
gave no justification for the claim.

~~~
jpap
I've given a bit more background to the fast JPEG codec on my engineering
blog:
[http://www.snappylabs.com/blog/snappycam/2013/07/31/iphone-k...](http://www.snappylabs.com/blog/snappycam/2013/07/31/iphone-
king-of-speed/)

If you like signal processing, fixed point arithmetic, SIMD cores, and
assembly, then this is for you. :-)

~~~
0x09
So the summary is "JPEG encoder written in assembly with NEON instructions
saves images faster than Apple's encoder."

That's a cool feat and is a little damning for Accelerate.framework, although
the way TechCrunch writes it I expected a new kind of fast cosine transform.

~~~
jpap
Don't forget that SnappyCam pumps both CPU cores when available.

The actual DCT algorithm created and used in the app is different to the
typical AAN (Arai, Agui, Nakajima) DCT algorithm that's used in JPEG codecs,
at least all the ones I've seen.

It's all about doing as little work as possible to achieve the end result.
That's why there's so much asm implementation, with carefully chosen NEON
instructions for each step.

Think of it as a cross-layer optimization between algorithm and
implementation... done by hand. :-)

~~~
midnightclubbed
Really interested in the nuts and bolts - are you optimizing specifically for
one quality setting (in which case I'm guessing you could probably do the
quantization as part of the DCT and throw away some calculations)? I played
with a real-time JPEG compression implementation back in college on transputers
(yes, I'm that old). Fun stuff; nice to see there are still places where going
right down to the metal can make a real impact on a product...

~~~
jpap
Oh that's awesome and a lot of fun!

While SnappyCam has been the most difficult, complex piece of software I've
written since I started coding in my early teens, it's also been one of the
most satisfying technically.

I'd love to disclose the many, many optimizations baked in, but as this is a
commercial app I must keep much of it as a trade secret.

I will say though that a lot of precomputation was involved, both for the
encoder and decoder. I jumped at the chance to avoid computation, memory reads,
etc., as much as possible. :-)

------
ygra
This looks similar to what Microsoft Research's BLINK [42] does on Windows
Phone. Alas, I wasn't able to find any publications on what they are doing
(which is strange for MSR). As I don't have my phone currently, I can't even
check whether they are doing full resolution too or whether they are dropping
down to smaller sizes.

[42] [http://research.microsoft.com/en-
us/um/redmond/projects/blin...](http://research.microsoft.com/en-
us/um/redmond/projects/blink/)

------
peter_l_downs
Any chance of this coming to Android soonish? This is seriously cool!

~~~
jpap
The fast JPEG codec was written for the ARM NEON SIMD coprocessor found in the
iPhone. Most Android devices also sport the same architecture, so it is indeed
possible.

The code for the codec is written in mixed C and assembly, so it can be
"easily" ported to Android by making use the JNI.

While the R&D for the fast JPEG codec took about a year to perfect, the iOS
app took just about the same time to get polished (including the NodeJS
backend work, the HTML5 website and embeddable widgets in AngularJS).

Writing the rest of the app would take a few months of full-time work, and
it's not yet clear if that might pay off at this stage.

We'll see... and glad to hear there's interest! :D

~~~
fluidcruft
Don't overlook the fact that the source for the stock Android camera is
available under a commercial-use-friendly open source license and has a quite
nice native Android UI. You don't have to reinvent all the wheels unless
you're stubborn.

[https://android.googlesource.com/platform/packages/apps/Came...](https://android.googlesource.com/platform/packages/apps/Camera.git)

I would buy that in a heartbeat.

Unrelated, how quickly can you alter exposure settings? Can you get 30
pictures per second with three interleaved exposure brackets? (i.e. burst of
10 HDR photos / second) That would be very, very, very, very cool.

~~~
jpap
That's really interesting. I wasn't aware of that. I'll have a look at it once
social sharing is out the door.

I did consider getting into other aspects of iPhoneography, like HDR, etc. The
trouble with HDR in particular is that there's no API access to direct the
sensor into each of the bracketing modes.

In the case of HDR, it might be more fruitful to attempt some kind of image
signal processing, similar to "Clarity" on Camera+.

I looked into that for a while, and I figured that Camera+ might be using some
version of the Contrast Limited Adaptive Histogram Equalization (CLAHE)
algorithm. In any case, what they've done is really neat from a DSP
perspective. :D

~~~
est
Hi,

There's also a cool technology that allows you to save nearly the same JPEG
with a much, much smaller file size.

[https://news.ycombinator.com/item?id=2940505](https://news.ycombinator.com/item?id=2940505)

------
dvt
Why not have a deferred compressor? I assume that just straight-up saving the
raw data in memory would be much faster than compressing every frame as you
get it.

Couldn't you get significant FPS increases (given that you still had free
space/memory available)?

~~~
jpap
Actually, I do both on dual-core devices.

One core is dedicated to hosting the capture buffer; the other encodes shots
in the background.

When you see the big circle percent animation, both cores are dedicated to
compression to clear the encoder queue so you can take back-to-back living
photos quickly.
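
Roughly speaking, the split looks like the libdispatch sketch below; this is only an illustration of the pattern, not the app's code, and capture_frame/encode_jpeg are hypothetical stand-ins for the real capture and encode paths:

    #include <dispatch/dispatch.h>

    void *capture_frame(void);       /* hypothetical: grab a frame from the sensor */
    void  encode_jpeg(void *frame);  /* hypothetical: compress and write one JPEG  */

    int main(void)
    {
        /* One serial queue keeps up with the sensor; the other drains the
         * backlog of captured frames by encoding them in the background. */
        dispatch_queue_t capture_q =
            dispatch_queue_create("capture", DISPATCH_QUEUE_SERIAL);
        dispatch_queue_t encode_q =
            dispatch_queue_create("encode", DISPATCH_QUEUE_SERIAL);

        for (int i = 0; i < 20; i++) {
            dispatch_async(capture_q, ^{
                void *frame = capture_frame();
                dispatch_async(encode_q, ^{ encode_jpeg(frame); });
            });
        }

        dispatch_main();  /* never returns; fine for a sketch */
    }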

~~~
Andrenid
I've just gotta say:

This is one of the main reasons I keep coming back to HN. A story gets posted
about some cool new tech, and the creator is in the comments answering
questions. Simply awesome.

~~~
jpap
haha, cool. :-)

To be honest, I don't often post here because I'm busy working, but am
enjoying the discussion on a baby I've nursed for two years now. :-) Thanks
for your post!

------
comatose_kid
Science vs engineering distinctions aside, it is pretty cool to see the
attention to detail + effort put into solving this problem.

------
rabino
This is quite remarkable. I just tested it and it works even better than
advertised. I hope you become rich and famous for this. And I _really_ hope
there's not a hidden gotcha I haven't seen yet.

~~~
jpap
Thanks! :D I'm just very happy to have more people try the app.

It's been a hard slog working 7 day weeks for just over two years now. Feels
great to receive some kind of recognition for the work.

~~~
axman6
I too have purchased the app, and initial testing on a 4S seems to show it
works exactly as advertised. This is a really great app, and an astoundingly
low price. You should be very proud, I think; I'll definitely be using the app
as my go-to in the future.

(Hmm, after writing that, I somehow feel it sounds like it should have a
reference to my rural, folksy, respected job that clearly makes me qualified
to discuss such things. Unlike those Amazon reviews I'm referring to though, I
mean every word.)

~~~
jpap
That's abs awesome to hear! Thanks for the wonderful compliment.

I played with price. Until now, most of my sales were word of mouth, and the
$1.99 price hindered "growth".

It's been an interesting game. Many people to whom I demo the app in person
love it; then, when they reach into their pocket to download it and realize it's
a paid app, they put their phone back into their pocket.

Still have lessons to learn in sales and marketing... but am enjoying the
schooling.

~~~
bobbles
Well as a point of reference, I will never pay more than $0.99 for a camera
app unless a friend has specifically shown me how it works. I have been burnt
on too many photography style apps that end up either not doing what I
expected from the pics and description, or just sucking in general.

For me $0.99 just breaks that psychological barrier into 'who cares if it
sucks'.

I gotta say though, after playing with SnappyCam it's definitely worth it. I
bet being cheaper will end up with easily more than twice the sales.

~~~
jpap
I understand where you're coming from; social proof removes a massive barrier
to conversion, even in my own experience.

I found that having it at $1.99 most definitely improved sales after it had
been at $0.99 for about a week; after another week, it started to degrade
again.

You're spot on in saying that $0.99 is a good price to get "disconnected"
users who might experiment. If they like the app, they might make a personal
recommendation to their "connections", where price sensitivity is lower.

After a while, that social proof and networking effect wears off and it's time
to reset the price down to the "discovery amount" of $0.99.

To be honest, I'd love to flip SnappyCam over to freemium; but I feel that
can't happen until the social sharing is bolted in and the app has a chance to
sell itself organically.

------
mgerals
The TechCrunch title sounds like it was taken from an infomercial. Or "one
weird trick..."

------
jlebar
To be clear, using SIMD for JPEG encoding is not new. I'd be curious how this
JPEG encoder compares to libjpeg-turbo's NEON encoder.

[http://libjpeg-turbo.virtualgl.org/](http://libjpeg-turbo.virtualgl.org/)

~~~
jpap
Hey @jlebar, you're right--it's existed on the desktop for some time (MMX,
SSE). When I first started, libjpeg-turbo didn't have an ARM port, which was
part of the motivation to do it myself.

See my post in another thread here on the same topic.

------
ianb
I take a fair number of casual action shots – mostly of the kids. To get
something to come out I often take a handful of pictures in a row; even that's
often not enough, or the "right" scene happens in between these slowish
frames. This could be cool for those cases.

Except... I also get annoyed sorting through those pictures afterwards. It
would be interesting if with some post-processing it could sort through the
pictures some for me, identifying distinct pictures, or filtering out ones
that are clearly bad (mostly too blurry), or if fancier maybe doing eye or
smile detection. I want to capture the moment a person looks up, before they
think about the camera.

Another cool case would be taking photos of movement. If I can track the
movement with the camera the picture can come out surprisingly well. But
tracking movement is hard. If I had several seconds of pictures, over the
course of that time probably I'd track the movement well enough for a few of
the photos to come out.

~~~
Martijn
If I remember correctly, automatically sorting through your pictures and
picking the best is exactly what Google announced for Google+ at their last
I/O keynote.

~~~
jpap
That's a cool feature, and not easy to implement. It generally ends up being a
machine vision problem. (Google has both great talent and a lot more resources
than a single-founder self-funded engineer like me.)

------
i4software
Hi Guys. This is Fast Camera. I'm callin' out SnappyCam!

Are you up for an old fashioned DUEL to see which app can shoot the most
"native camera quality" 8MP images per second in 60 seconds without crashing?

On an iPhone 5 with all apps closed, SnappyCam manages to save only about
eight 8MP images per second over 10 seconds on average and loses the other 12
per second. And these are not 8MP images, at least as far as comparing resolution
against the native camera app or Fast Camera. All of this technical discussion
sounds great, but is anyone actually testing this like I am? Just download a
stopwatch app with hundredths of seconds and burst for 10 seconds. You'll see.
Then shoot something with a LOT of detail at 8MP in both SnappyCam and Fast
Camera.

Fast Camera is capable of 10-12 native quality 8MP images per second (more
than SnappyCam). We throttle it back on purpose.

And what's with camera-shutter.caf John? ;)

Michael Zaletel, Founder, i4software (Fast Camera, Vizzywig, Video Filters)

~~~
jpap
Michael, thanks for making contact by e-mail, outside of these public forums.

As discussed over e-mail, I've created an in-depth report showing that
SnappyCam indeed takes full quality 8 Mpx shots on the iPhone 5.

With the amazing discussion and interest here on HN, I thought I'd share it
with the community here as well:

[http://www.snappylabs.com/blog/snappycam/2013/08/03/snappyca...](http://www.snappylabs.com/blog/snappycam/2013/08/03/snappycam-
imaging-in-depth/)

I'm off the grid on a hiking vacation for the next 2.5 weeks, back in late
August and look forward to the discussion then.

jpap

------
huhtenberg
Bug report -

On the first launch, if I quickly press the Setting button (bottom-right) it
starts the flip animation _and_ still shows the handwritten overlay explaining
where to tap for manual focus and all whatnot. After the animation is
complete, the overlay is still shown, so it looks like a mess. And it's also
not obvious how to get the overlay back, because I haven't seen what it
actually said.

Congrats on the TC cover and a very nice app. Get rich! :)

(edit) A nitpick - "Warm-up", not "Warmup"

(edit) Report Usage = On. Seriously? Who on Earth in their sane mind would
actually want this, except for you? Next thing you tell me is that you have
some "app analytics" library linked in and it's always on. Please don't be
evil.

(edit) The same goes for "Send Crash Reports = Always". It should be "Ask".
Respect your users and they _will_ help.

~~~
jpap
Thanks for the suggestions:

1\. Looks like a race condition for the settings button tap. Does it happen if
you wait a second before pressing the settings button?

2\. You can re-enable the tutorial (overlay) screens from the bottom of the
settings menu.

3\. On the usage/reports, I hear you. I won't give you bullshit on "standard
industry practices" here, but I will say that I had to hack a well-known
closed-source library to give you that opt-out from usage reporting. I really
do value your privacy. (I've already requested the library developer fix it,
and will try and write a blog post on how other developers can provide a kill
switch, too.)

4\. The default is there because many people don't like to configure apps,
they just use them as-is. In that light, the default configuration is the one
I felt was best for general use.

~~~
huhtenberg
"Ask" should be the default, I have to insist.

Just tried the actual functionality and it gives the machine gun sound effect,
showing a counter going up to 50-60; then I release the button, the blue
stripe around the button shrinks back, and it adds a photo to the bottom-left
area, but when I tap it, there are just 3 frames. What am I missing? Is it
adaptively trimming bad frames (I _am_ shooting in low light conditions)?

(edit) Just tried again and this time, after I release the shot button, it
showed a big circle overlay with "JPEG" in the middle that counted up to
100%, and the resulting photo had the right number of frames. It didn't do that
on the first try. It's either a bug ... or you are missing a helpful hint that
explains what's going on :)

~~~
jpap
Yes, that does require some explanation:

1\. The receding circle is the capture buffer being processed. When you tap
on the thumbnail, SnappyCam sees the start of the living photo becoming
available and shows it. It does not, unfortunately, refresh the thumbnail list
as more shots complete processing.

This is a (feature) bug and I'll work to address it.

2\. The circle with percent progress is what I call "turbo rewind", where the
camera is shut down so that all CPU cores can be applied to compression so
that you can take back-to-back living photos quickly.

You can select the buffer "threshold" for when this kicks in under the
advanced settings: look for Turbo Rewind.

------
gandalfu
It takes time and lots of effort, and I'll argue it is easier on a
quasi-standard platform (processor-wise), but apps like this show how much
juice can be squeezed out of the existing hardware by handcrafting the code.

Kudos, I just bought the app!

~~~
jpap
Thanks for the download! Let me know if you've got any feedback, I can be
easily contacted through the app. :D

I'd just like to add that in addition to handcrafted code, choosing the right
algorithm and always trying to "do less work" (fewer cycles, less data IO,
better use of registers) makes a big difference.

~~~
gandalfu
I always say a good programmer has to be "lazy"!

Some feedback: the default exposure settings showed my room as pitch black (I
have it very dim now), while the native iPhone 5 camera adjusted automatically.
I was able to snap a shot by pointing at the light. Personally I prefer not to crank
the gain on the sensor.

~~~
jpap
haha, yes, if only it pays to be lazy. :) Sometimes doing "less work" means
more up-front planning and thinking. Not a bad thing necessarily.

Interesting on the native camera adjustment. SnappyCam will use the "low light
boost" high ISO capabilities of the camera. I'll have a play around with it.

Otherwise, does the continuous flash help you much?

~~~
gandalfu
The continuous flash didn't fire. I'm running the stock settings.

~~~
jpap
Oh, it's a manual flash.

Enabling that automatically is an interesting problem in itself: I'd have to
estimate the light level based on the camera preview... or perhaps from the
preview metadata.

Will think about how that might be done. Thanks for the thought. :-)

------
polskibus
Just adding my vote for android version! Great job !

------
chacham15
It looks great but I have a few questions/comments.

1\. What is the difference in quality between using this and the video capture
mode? I.e. if what I really want is a high-quality video, would this get me a
better result than the built-in programs?

2\. Seeing as how you've done all this work (and how Android apps can be
compiled from C) how difficult is it to port this to Android so that the rest
of us can get in on it?

3\. Is it just me, or can anyone else not change the settings / look at the
other demos on the samples page?

~~~
jpap
1\. It really depends on what you're after: are you looking for a video
sequence that plays back, or an individual still? Video is better for the
former, SnappyCam for the latter.

2\. It's a lot of work, hinted at in another thread here on HN. The entire
"app" build on top of the JPEG codec needs to be built from scratch; new
artwork is required, etc.

3\. I just tried it from another machine and it works for me.

My backend API is being hammered at the moment, which is awesome, but it
doesn't appear to be overloaded. (Gotta love NodeJS!)

~~~
sgustard
I have the same issue (3), on Safari and Chrome: mouse clicks in the menus
after they're opened are ignored, but the keyboard works to select a video.

~~~
jpap
Weird. Could be a bug in the dropdown component I wrote in AngularJS. :( Glad
the keyboard still works.

Will look into it...

------
Myrth
> To put the speed in perspective, SnappyCam is about 4X faster than the
> normal iPhone 5 Camera app, and more than twice as quick as the Samsung
> Galaxy S4's 7.5 shots per second.

Does that mean the S4's hardware is faster than the iPhone 5's, given they're
using similar algorithms, and that if you made the same app for Android it
could get even better results?

~~~
jpap
It's unclear to me, as there's a lot more going on when taking a photo than
you might think. :) (I originally thought I could knock together a basic
SnappyCam app on top of the JPEG codec within a week or two; it took months.)

If SnappyCam can do it on hardware that is older than the S4, then I can't see
why technically Samsung can't lift their game.

And judging by how quickly they've been chasing Apple, and sometimes stepping
ahead, I wouldn't be surprised to see a bit of leap-frogging for some time to
come.

Let's see what the 5S/C brings in a few months! I'm excited.

------
sytelus
Looks like the most interesting part here is the "living photo" that instantly
responds to interactions. Can this be standardized as a new video format? It
would be very cool to have all cameras be able to save video in this format.
@jpap should consider formalizing this format, producing viewers for different
platforms, and licensing this tech to manufacturers of point-and-shoot cameras,
GoPro, webcams, camcorders etc. This feature could make a camera an instant
hit. It is a real value add for customers. I can also envision movies getting
recorded in this format and made available on Blu-ray so people can instantly
interact with the cool fast action videos in HD. I think the great insight
here is the awesome coolness of instantly interactive video that is ready to
be unlocked inside current camera hardware.

~~~
jpap
I'm really glad to read this! :-)

I had similar thoughts myself, and this forms a part of what I have in mind for
the next major SnappyCam release (a taste is what you see on SnappyCam.com
today). My thoughts are perhaps more web-focussed than what you describe, but
the thought is really encouraging!

------
marze
Some questions:

Instead of doing full resolution at 20 fps, can you do a smaller resolution
at, say, 160 fps?

If the next generation iPhone processor is faster (a safe bet), do you think
your software would allow at least 24 fps, and you could use the iPhone to
shoot a 10+ megapixel movie?

Shouldn't Apple have hired you already?

~~~
jpap
It all comes down to what the hardware supports, ultimately.

I'm not performing any true miracles here: I'm just making best use of the
hardware resources available, with some clever software tricks and algorithms.

The iPhone 5 actually supports 60 pictures/sec capture, for example, but Apple
has decided, for whatever reason, to disable it on iOS 6. If the iPhone 5 ran
on iOS 5 (surprise?!) then it would likely run at 60 pictures/sec.

On iOS 7 that all changes: so you'll soon be able to capture at 60
pictures/sec, which is rad.

The rollerblader shown on the TC article was shot at Sunday Streets in the SF
Mission District on my iPhone 4S at 60 pictures/sec. The photo quality is
somewhat degraded for the web, but it still looks awesome full screen (from
the SnappyCam website; the TC embed is in a restricted iframe and can't go
full-screen).

I know a couple of great engineers who work at Apple, but haven't spoken with
them for a year or more. Sounds like a cool place to work, but so is working
for yourself.

It's been a hard slog--I quit my last full-time job in March 2011--but I'd
love to see SnappyCam through and bring to life another startup idea I have in
mind. (Some of the YC partners have already seen me pitch it; SnappyCam has
been a rather good distraction of late.)

------
MikeTLive
At 20fps, could you make a 3D camera app by having the user move their camera
in space, then correcting for stabilization with the accelerometers etc.
telling you the point in space, and using the multiple viewpoints as individual
cameras?

~~~
jpap
That's a really interesting machine vision problem and a _lot_ more complex
than a JPEG codec. :-)

I wonder how long before we start to see Kinect-like infrared cameras mounted
on phones to make the depth problem easier to solve. That would be cool!

~~~
NamTaf
I want to see if you could use all the rapid frames, plus a variant of that
cool Adobe image de-blurring tech [1] that was shown a while ago, to produce a
clearer, sharper image during motion.

[1]: [http://prodesigntools.com/photoshop-cs7-image-
deblurring.htm...](http://prodesigntools.com/photoshop-cs7-image-
deblurring.html)

~~~
jpap
There's still much innovation left in image signal processing... and
fortunately much interest in taking good photos!

This reminds me of research into superresolution, an area that's "super
interesting" :-) as well.

The guys who started Occipital (360 Panorama), I believe, tried to dabble in
that with ClearCam many years ago... but I honestly don't know much about it.
Anyone from Occipital here on HN?

------
_quasimodo
You should port it to several platforms and license it as a library. I would
think there are many companies interested in a fast JPEG encoder that is not
embedded in an iPhone app :)

------
egypturnash
This is pretty cool. You got my buck!

I was kinda hoping I could also turn the speed down to multiple seconds per
photo, since it talks about doing time-lapse shots. One of my major uses for
my phone's camera is selfies for art reference, currently done with Genius -
which annoyingly won't do repeated shots at anything less than 10 seconds.
Being able to take one shot every 1-3 seconds would be pretty damn cool for
me.

~~~
jpap
Thanks!

You can reduce the capture rate in the app settings, down to 1 photo per {1,
5, 10, 30, ... } seconds.

Move the slider toward the turtle under "Camera Lens".

jpap

~~~
egypturnash
Oh durf, I fail at exploring UIs. Thanks!

------
zeroDivisible
I must say that this is one of the most interesting apps I have found in the
last few weeks. You should get yourself a beer, as this is a neat feat to
accomplish. :)

Also, some people were saying that the webapp wasn't working for them on some
Chrome versions. As for me - I've got 28.0.1500.95 - the culprit was the
Disconnect extension, which, when disabled, allowed the whole application to
behave as expected.

~~~
jpap
That really helps; thanks for letting me know about the Disconnect extension.
I've never used it, will check it out.

------
mappu
That's fantastic, and a very cool demo.

How does the encoder performance compare to libjpeg-turbo? That also has some
SIMD work for NEON.

~~~
jpap
Yes, Nokia contributed the NEON code for the DCT in libjpeg-turbo.

I haven't had a chance to do a side-by-side comparison as yet, but I suspect
the SnappyCam encoder is faster for many reasons, including the choice of
algorithm, the way they use two multiplies (low, high) at times, and their
row-by-row image processing with function call overhead in favour of code
maintainability.

~~~
mansr
I was involved in some NEON work on libjpeg-turbo, and I can confirm that the
image buffer management there is hell, as are some other aspects of the
design. A from-scratch implementation with performance in mind should easily
be quite a bit faster.

------
bobbles
Looking forward to taking these pics and testing out
[http://research.microsoft.com/en-
us/downloads/69699e5a-5c91-...](http://research.microsoft.com/en-
us/downloads/69699e5a-5c91-4b01-898c-ef012cbb07f7/) Image composite editor
with things like photosynth

------
Hopka
It crashes for me every time I take somewhere between 60 and 75 frames with
the main camera. With the front-facing camera, I can shoot forever. In the
iPhone Settings (somewhere under Diagnostics & Usage), I have a bunch of
LowMemory warnings. I'm using an iPhone 4S.

~~~
jpap
Thanks for reporting it in!

It seems I enthusiastically chose a larger buffer size that appears to be
causing issues on some devices under a lot of memory pressure.

If you reboot your phone, as awful as that sounds, it will likely fix the
issue.

EDIT: I've just submitted an update to Apple that uses a more conservative
buffer size.

This aspect is hard to get right: I once used an adaptive buffer size that
heeded memory warnings, but that meant that the buffer filled to _lower_
levels than a conservatively sized buffer.

If only iOS had an opt-in for *alloc returning 0 instead of these warnings,
or at least notified us of how much space is left before we're SIGKILL'ed.

~~~
Hopka
Thank you, I'll try rebooting the phone.

I recently read an article here on HN that briefly touched on memory
management under iOS and especially the problem of apps getting killed, maybe
it is interesting for you: [http://sealedabstract.com/rants/why-mobile-web-
apps-are-slow...](http://sealedabstract.com/rants/why-mobile-web-apps-are-
slow/) (scroll down to "How much memory is available on iOS?")

------
javajosh
Beautiful app, jpap. Well-done! I can't wait to do some side-by-side
comparisons between this and video stills, and see what kind of image quality
differences there are. My overall impression of the app itself is that it's
incredibly solid. Keep building apps!

~~~
jpap
Thanks!! :D

Would love to see some real world comparison examples. Drop me a line when
you've got something, would love to check it out. :)

------
damian2000
I'm interested to know how their method compares to how dedicated digital
cameras and DSLRs do it. Are cameras running dedicated hardware/firmware to
achieve the same result? Or have they optimised their software in the same way
that SnappyCam has done it?

~~~
jpap
I can't say, as SnappyCam is my first foray into image signal processing.
(Though DSP isn't new to me.)

I'd guess that DSLRs use a combination of hardware acceleration on the
"tricky" bits (like DCT) with firmware to control the compute hardware.

Huffman is a particularly difficult beast, as it can't be parallelized. The
JPEG bitstream is inherently serial, though there have been some proposals to
improve that.

If you run a SnappyCam JPEG that you pluck from iTunes File Sharing through
djpeg (from libjpeg) you will notice that the YCbCr planes are not
interleaved.

I once experimented with a parallel JPEG encoder, encoding the Y, Cb, and Cr
planes in parallel, but the threading overhead was more than just queuing up
each JPEG encode separately in a multithreaded queue.

Bonus points if you notice another marker in the JPEG. That's intended for
parallel JPEG decoding but hasn't been implemented in SnappyCam as yet.
(The existing decoder is fast enough for 8Mpx shots.)

~~~
damian2000
Interesting stuff, thanks for the info. Much respect for doing so much
optimisation in assembly (my limit is C, even for embedded work).

------
epaga
Love the "we'll iMessage you a download link" feature on the web page. Are you
using a service for this? Note it doesn't seem to work for me in Germany, it
doesn't change the country code, it leaves it at +1 (instead of +49)...

~~~
jpap
It's a webservice I hacked together that sends iMessages from my old MacBook
Pro. :-)

It was my understanding that German mobile numbers are written locally
starting with 01? [1]

e.g. in Australia, my mobile number would be 040x-xxx-xxx. The international
version is +61 40x-xxx-xxx. When you select Australia, it will show 04.

(OK, I now see how this could be confusing; my apologies.)

[1]
[http://en.wikipedia.org/wiki/Telephone_numbers_in_Germany#No...](http://en.wikipedia.org/wiki/Telephone_numbers_in_Germany#Non-
geographic_numbering)

~~~
ygra
Phone numbers in Germany work the same way as you describe for Australia.
Internationally you'd have +49 (area code without leading zero) (number) and
within Germany you can use (area code with leading zero) (number).

~~~
jpap
Awesome. If you type in your number as if you were local, do you get the
iMessage?

(Internally I add the international prefix. As you can imagine it took a while
to find all of the local prefixes and create number masks!)
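
(Illustratively, the basic rewrite is just dropping the trunk "0" and prepending the country code; a toy sketch, nothing like the real per-country masking:)

    #include <stdio.h>

    /* Toy example: "0412345678" with country code "+61" becomes
     * "+61412345678"; real numbering plans need per-country masks. */
    void to_international(const char *local, const char *cc,
                          char *out, size_t out_len)
    {
        if (local[0] == '0')
            local++;                      /* drop the trunk prefix */
        snprintf(out, out_len, "%s%s", cc, local);
    }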

------
uladzislau
More technical details on SnappyLabs blog:
[http://www.snappylabs.com/blog/snappycam/2013/07/31/iphone-k...](http://www.snappylabs.com/blog/snappycam/2013/07/31/iphone-
king-of-speed/)

~~~
cfrss
There is a lot said about the assembly code. I wonder whether it would make
sense to code it in LLVM IR?

~~~
jpap
There are definitely improvements being made to LLVM to automatically
parallelize code (esp unrolling loops) to SIMD.

I haven't personally tried it, but would love for it to match the code quality
of hand-cranked assembly... writing it is tedious and error-prone, but you do get
control over when you preload the cache, the stack, and you can do really cool
things with the CPP and macros to "manually inline" things. :-)

And who doesn't like writing a good ol' fashioned jump table?!
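
As a toy illustration of that macro trick (nothing from the actual SnappyCam source), a DCT-style add/subtract butterfly can be expanded in place by the preprocessor instead of paying a function call:

    /* Expand an add/subtract butterfly in place; the preprocessor guarantees
     * no call overhead, at the cost of chunkier call sites. */
    #define BUTTERFLY(a, b, tmp) \
        do { (tmp) = (a); (a) = (tmp) + (b); (b) = (tmp) - (b); } while (0)

    static void butterfly_stage(int v[8])
    {
        int t;
        BUTTERFLY(v[0], v[7], t);   /* the classic first stage of a fast DCT: */
        BUTTERFLY(v[1], v[6], t);   /* pair up mirrored samples into sums     */
        BUTTERFLY(v[2], v[5], t);   /* and differences                        */
        BUTTERFLY(v[3], v[4], t);
    }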

~~~
cfrss
Thanks, btw there was a thread on this topic recently:
[https://news.ycombinator.com/item?id=6096743](https://news.ycombinator.com/item?id=6096743)

------
tambourine_man
Amazing work, and the living photo thing could be a hit.

Out of curiosity and a bit unrelated, I've been craving real raw capture
on the iPhone (before Bayer interpolation, white balance, noise removal). Is
it possible?

~~~
jpap
It may not be possible, even for Apple.

If you have a look at the data sheets, for example [1], you'll see YCbCr or
RGB output formats being listed.

I guess it would make sense for as much signal processing as possible to be
done as early in the chain as possible, in the interest of lowering power
consumption. (Less data to
transfer over a serial bus into more circuitry, at the very least.)

[1]
[http://www.ovt.com/products/sensor.php?id=134](http://www.ovt.com/products/sensor.php?id=134)

(Sony also make the sensors for Apple, apparently.)

~~~
wmf
In general sensors output raw and the ISP does the image pipeline including
demosaicing, so a phone could support raw with different ISP firmware.
(Reportedly Nokia wrote custom ISP firmware for their fancy camera, for
example.) I wouldn't be surprised if one of the upcoming Samsung or Sony
phone-cameras supports raw.

------
archagon
Wait, how does this work with the Apple frameworks? I assume you can't go
faster than what Apple gives you. If you were to discard every photo, how fast
could you theoretically go?

~~~
ygra
They simply are not using what Apple gives them. They wrote their own JPEG
encoder which side-steps any limitations that Apple's own implementation has.

------
ajpocus
jpap, this is the best HN thread I've seen in a while. I never comment, but
I'm compelled to now, because it's not often I see a hack this mesmerizing and
exciting. For a moment, I almost wanted to drop everything and dive into JPEG
myself, something I don't think I've felt since reading about John Carmack and
his game engine hacks. Even though I understand <10% of the details being
discussed, I'm compelled to learn more. Thanks, jpap. :)

~~~
jpap
I'm so very happy to read your post! :-)

I'm not usually one to post publicly either, but with practice I'm finding it
comes more naturally.

I do hope you dive in: the devilish details of image signal processing are
really interesting.

------
nazri1
Please tell me there's an iPad version in the pipeline. This is the second
non-free app I have on my iPad - it doesn't disappoint at all. Great work!

~~~
jpap
I have it in mind at a lower priority. To be honest, my main impetus is for
better discoverability on the App Store from an iPad device.

They really like to hide those iPhone apps! ;-)

It will be nice to play with the interactive living photos full-screen, though
iOS 7 makes iPhone apps look amazing on the iPads in any case.

------
tosic
I do not agree with the use of the word "scientific" in this context.
Especially since it appears to be a shameless plug for a product.

------
sergj
This app is a lot of fun! Thanks for making it.

------
pdog
Any papers on the subject? I'd love to dig deep into some of the technical
details behind this.

~~~
jpap
I'd love to disclose the implementation details, and even release it on
GitHub, but unfortunately I have to keep it as a trade secret for obvious
reasons.

Perhaps one day! :D It was a tonne of work that I'd love for fellow engineers
to take a look at. I learned a massive amount from reading the likes of
libjpeg-turbo and other OSS implementations; though none of them are using the
same DCT as the one I developed for SnappyCam. (They weren't a good fit for
the ARM NEON ISA.)

------
runn1ng
WHY DONT YOU CURE CANCER INSTEAD

~~~
bulte-rs
Actually, cancer research can benefit (admittedly by a long shot, but still)
from these kinds of improvements in image processing. Don't forget that medical
imaging and - thereby indirectly - the recognition/detection of cancer is one
of the first steps in curing said disease.

~~~
jpap
Very well said.

I suspect Clarity on Camera+ is some form of Contrast Limited Adaptive
Histogram Equalization (CLAHE).

CLAHE came about from digital image cleanup of medical scans for human
analysis.

~~~
bulte-rs
Now, a CLAHE implementation on top of SnappyCam .....

~~~
jpap
I considered doing this over a year ago. I was really intrigued by the
algorithm but declared it a distraction from getting the basic app solid
first.

I'd love to revisit it. It really works wonders. I've played with the CLAHE
implementation in Fiji/ImageJ and while it can produce really good results, it
does require some tuning.

I really admire the Camera+ guys for creating an auto-tuning algorithm that is
quite similar. (I'm unsure if they're using CLAHE or a variation.)

------
retube
What's the diff between this and video shooting? Isn't that 25fps?

~~~
sjwright
Video gives you still images with roughly a quarter of the pixels (1920 x
1080 = 2.07 megapixels) and presumably more highly compressed.

------
jostmey
This may be a really cool technical achievement, but the title is misleading.
It is not scientific - that is, the scientific method was not applied to
increase our understanding of the Universe. No, it is just a really cool
testament to how cool engineering really is.

------
rdouble
This is a great app. What you need to do is market it to skateboarders.

~~~
jpap
Agreed! I've got a good friend who is a skater and he reminds me of this
often.

Any good skating sites I might want to contact?

------
jgh
Good work, jpap. I wish this were posted during the day though ;)

------
dschleef
This kind of functionality is standard on OMAP4 devices.

~~~
jpap
ARM actually publishes reference code under the library name "OpenMAX" for
their mobile processors.

I've read ARM's source code and found that they too use the AAN algorithm for
the DCT. (They provide a tonne of code for other multimedia related stuff
too.)

I learned a lot from their code, even though my implementation is completely
different and original.

I would also dare to say that my asm source is maintainable. I had a very hard
time understanding their code as it wasn't very well documented or laid out...
but it was nevertheless a valuable learning tool.

------
voltagex_
jpap, if you ever find yourself stranded in Canberra, I'll buy you a beer (a
proper Australian one).

~~~
jpap
Cheers mate, I'm from Melbourne and might just take you up on that! (I live
here in SFO at the moment, but try to get back as much as I can.)

~~~
voltagex_
I'm over in the States in a month-ish but not over your way (Seattle, NYC).

------
bobbles
jpap,

Could you take the 'trimmed' section and create a looping GIF from that? (Can
I do that already?)

~~~
jpap
It's now on the list. Had a few requests for it, and agreed, it'd be cool. :)

~~~
bobbles
Great, I'm not actually sure how the iPhone photo library handles GIFs.. but
I'd much rather be able to choose 'export as GIF' & 'export this frame' than
to save all 100 photos.

great app man, thanks

~~~
jpap
The Camera Roll will apparently host them, but not show them as animated.

Unless you roll your own viewer, many devs suggest using an embedded UIWebView
to animate it.

Otherwise it kinda gets treated as pass-through for most of iOS. The Messages
app apparently animates them nicely, and for some is a real motivation to
include the feature. (So they can send animated GIFs to their friends.)

Glad you like the app! :D

------
jrockway
What's the breakthrough? My GoPro can take 120 photos per second.

Furthermore, what happens if you point this at a device that can affect each
pixel on the phone's image sensor 20 times a second? Is all the information
preserved? If so, this is an interesting hardware hack. If not, this is an
interesting shell game. But I don't see how it's a scientific breakthrough.

(It sure is good for sales when TechCrunch prints your press release verbatim,
though!)

~~~
wmf
A GoPro can do 720p (1 megapixel) at 120 FPS; this app can do 8 MP at 20 FPS.
Also, it's using JPEG instead of a video codec so when you want to pick out a
single frame it's already in the format you want. Likewise there are no motion
artifacts because it doesn't use a video codec, so every frame should be of
equal quality.

