
GPUImage 2 - davidbarker
https://github.com/BradLarson/GPUImage2
======
Veratyr
A lower level but more portable alternative: [http://halide-lang.org/](http://halide-lang.org/)

Takes a bit of learning but it's __really__ fast. The tutorial series is good: [http://halide-lang.org/tutorials/tutorial_introduction.html](http://halide-lang.org/tutorials/tutorial_introduction.html)

It'd be interesting to see a comparison between the two for something non-
trivial, both in terms of code complexity and performance. My guess is that
Halide takes a lot more code but ends up being 2-3x faster due to the ease of
low level optimization.

~~~
ipsum2
Off topic, do you know if Halide supports RAW image format?

~~~
vanderZwan
Well, RAW image formats vary per camera, so it probably won't "support" any
particular format out of the box since that is not really the responsibility
of the language.

But given a known format I'm pretty sure you can convert any RAW image format
into something Halide understands without loss (at the moment most RAW image
data uses at most 14 bits IIRC, so a 16 bit RGB array should be enough), after
which you can implement, say, debayering algorithms yourself.

In fact, that's the first example you see in the first publication (see top
left) about Halide:

[http://people.csail.mit.edu/jrk/halide12/](http://people.csail.mit.edu/jrk/halide12/)
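
A hedged sketch of the idea above, in Swift (illustrative only, not Halide code; the RGGB layout and the trivial nearest-neighbor demosaic are assumptions): 14-bit RAW samples fit losslessly in a 16-bit buffer, after which debayering is just index arithmetic over the Bayer grid.

```swift
// Illustrative sketch: 14-bit RAW samples packed into UInt16, then a trivial
// nearest-neighbor demosaic for an assumed RGGB Bayer layout (row-major:
// R G R G ... / G B G B ...). Real demosaicing uses interpolation.

struct RGB16 { var r, g, b: UInt16 }

// width and height are assumed even.
func demosaicRGGB(_ bayer: [UInt16], width: Int, height: Int) -> [RGB16] {
    var out = [RGB16](repeating: RGB16(r: 0, g: 0, b: 0), count: width * height)
    for y in stride(from: 0, to: height, by: 2) {
        for x in stride(from: 0, to: width, by: 2) {
            let r = bayer[y * width + x]
            let g = bayer[y * width + x + 1]          // one of the two greens
            let b = bayer[(y + 1) * width + x + 1]
            let px = RGB16(r: r, g: g, b: b)
            // Nearest neighbor: the whole 2x2 cell gets the same color.
            out[y * width + x] = px
            out[y * width + x + 1] = px
            out[(y + 1) * width + x] = px
            out[(y + 1) * width + x + 1] = px
        }
    }
    return out
}
```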

------
emdowling
I've shipped two apps with GPUImage and have found it to be very fast (when
used properly) and incredibly capable. The biggest issue in GPUImage 1 was
correctly setting up the filter chain; an incorrectly configured chain seemed
to be the primary cause of issues around performance and output (or lack
thereof).

The new API is way more concise and super clear. I particularly love the use of
blocks for configuration groups, as it makes preset behaviour more portable
and easily defined (say, for an Instagram-style app).
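
The configuration-group idea can be sketched generically; this is illustrative Swift modeling the pattern, not the actual GPUImage 2 API. A preset is just a block that builds an operation chain, so the same preset can be reused across any source:

```swift
// Illustrative pattern sketch (hypothetical types, not GPUImage 2's API):
// a "preset" is a block that constructs a chain of operations.

protocol ImageOperation {
    func apply(_ pixel: Double) -> Double   // stand-in for a real shader pass
}

struct Brightness: ImageOperation {
    var amount: Double
    func apply(_ pixel: Double) -> Double { min(1.0, pixel + amount) }
}

struct Contrast: ImageOperation {
    var factor: Double
    func apply(_ pixel: Double) -> Double {
        min(1.0, max(0.0, (pixel - 0.5) * factor + 0.5))
    }
}

// The preset is portable: it carries its whole configuration in one block.
typealias Preset = () -> [ImageOperation]

let vintage: Preset = {
    [Brightness(amount: 0.1), Contrast(factor: 0.8)]
}

func run(_ preset: Preset, on pixel: Double) -> Double {
    preset().reduce(pixel) { $1.apply($0) }
}
```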

Great work Brad, this is a much-loved framework and it's great to see the
changes in v2.

------
err4nt
This sounds really cool, but I'm not 100% sure what it means. Is this
something that would be included in desktop software for rendering images on
the screen, or something more useful for file conversion and running
operations on images to produce new image files?

~~~
andrewbarba
I think it's important to read this article
([http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpu...](http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework))
before fully diving into this project. GPUImage (the first version), at a high
level, gave everyone the ability to create the Instagram effect (realtime
video capture and image processing) with little to no effort. You won't find
"Instagram" in that blog post, but I've messed around with almost all of the
sample projects in the original codebase and it's hard not to try and create
some similar effects; plus, I think it's an easy use case to explain to
others. That said, this project goes far above and beyond Instagram (honestly
they shouldn't be compared, but again, it's an easy way to explain it
quickly); the use cases are really only limited by what you can imagine in
image processing and rendering.

Edit: Forget my Instagram comparison; Brad gives much better examples in the
blog post announcing 2.0:
[http://www.sunsetlakesoftware.com/2016/04/16/introducing-gpu...](http://www.sunsetlakesoftware.com/2016/04/16/introducing-gpuimage-2-redesigned-swift)

~~~
coldtea
"Instagram effects"? We've had video (and image) filters for decades before
Instagram was even conceived...

~~~
andrewbarba
You missed the point. Try applying those effects in realtime, on an iPhone 4s,
to a live video stream coming directly from the camera. It's quite difficult,
even with everything that CoreImage provides. GPUImage? It's cake.

~~~
coldtea
Not sure about the 4s specifically, but there are all kinds of apps that do it
on the iPhone using the GPU (so, if GPUImage works on 4s, they should too). In
fact Apple also provides a built-in one...

------
melling
Here's the announcement:

[http://www.sunsetlakesoftware.com/2016/04/16/introducing-gpu...](http://www.sunsetlakesoftware.com/2016/04/16/introducing-gpuimage-2-redesigned-swift)

The size difference between the Objective C and new Swift version is
significant:

| GPUImage Version | Files | Lines of Code |
|---|---|---|
| Objective-C (without shaders) | 359 | 20107 |
| Swift (without shaders) | 157 | 4549 |

~~~
GuiA
> "That reduction in size is due to a radical internal reorganization which
> makes it far easier to build and define custom filters and other processing
> operations."

Swift code is clearly more concise than Objective-C, as everyone who's worked
with both knows, but this size reduction shouldn't be taken on its own as a
point for Swift over Objective-C.

------
rsp1984
With GPU frameworks like GPUImage or Arrayfire.com I always wonder how bus
transfers impact the overall performance. The best GPU code is useless if most
of the time is spent transferring data from CPU to GPU and back again. That's
why it's typically a bad idea to just "outsource" a particular computation to
the GPU "because it's faster". At least that was a big concern in the old days
when I did GPGPU (about 5 years ago), which is why I always thought GPGPU
libraries were a bad idea.

Can anyone shed some light on this matter? Has anything changed in the last
years? Are bus transfers still a concern? If yes, how does GPUImage handle
them?

~~~
14113
I would expect a developer to look carefully into the bottlenecks in their
application before applying something like this. For example, if they quite
often have complex, multi-stage image processing pipelines, then offloading
the entire pipeline to the GPU might result in quite a significant speedup.

In addition, (IIRC) CPU-GPU buses have gotten quite a bit faster in the last 5
years. They're still a large bottleneck, yes, but for expensive, highly
parallel computations on small pieces of data they don't completely dominate
the computation cost.

EDIT:

I've also noticed that this framework uses OpenGL(ES) for its offloading.
Given that, the computation could easily be offloaded to an embedded (i.e.
non-discrete) GPU, eliminating the data movement cost.
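
The tradeoff rsp1984 asks about can be modeled back-of-envelope; the numbers below are illustrative assumptions, not measurements. Offloading only pays off when the GPU speedup exceeds the round-trip transfer cost, which is why a whole pipeline (one round trip, many operations) wins where a single cheap filter may not:

```swift
// Rough cost model: is offloading worth it once you count upload + readback?
// All bandwidth and timing numbers below are illustrative assumptions.

func offloadWorthwhile(bytes: Double, busBytesPerSec: Double,
                       cpuSeconds: Double, gpuSeconds: Double) -> Bool {
    let transferSeconds = 2 * bytes / busBytesPerSec   // upload + readback
    return gpuSeconds + transferSeconds < cpuSeconds
}

// One 1080p RGBA frame over an assumed ~8 GB/s bus:
let frameBytes = 1920.0 * 1080.0 * 4

// A single cheap filter: the GPU is 10x faster per-op, but the round trip
// can still dominate.
let cheapFilter = offloadWorthwhile(bytes: frameBytes, busBytesPerSec: 8e9,
                                    cpuSeconds: 0.001, gpuSeconds: 0.0001)

// A heavy multi-stage pipeline amortizes one round trip across many stages.
let heavyPipeline = offloadWorthwhile(bytes: frameBytes, busBytesPerSec: 8e9,
                                      cpuSeconds: 0.05, gpuSeconds: 0.001)
```

On an integrated GPU with shared memory (the iOS case 14113 mentions), `transferSeconds` largely disappears, which shifts the balance toward offloading even small operations.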

------
frozenport
How does it compare to OpenCV?

------
trendnet
I have never used GPUImage on iOS. Are we limited in resolution to the
Metal/OpenGL ES texture size limit of a device with it? Can it work with extra
large images?

~~~
rbritton
Depends on your definition of extra large, I suppose. I've gone as big as
about 10 megapixels in my implementations with it and it's held up.

The big problem with iOS image handling has always been the pitiful amount of
RAM on the devices. It wasn't until relatively recently (iPad 4) that the
iPads even had enough to reliably handle images in the 10-15 megapixel range
without being killed by the memory watchdog.

~~~
cageface
Core Image on iOS is supposed to handle tiling of larger images
automatically. This is one of the reasons I'm planning on porting my iOS video
effects app from GPUImage to Core Image, although GPUImage has served me very
well.
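
The tiling arithmetic behind this is simple to sketch (illustrative Swift; 4096 is a common maximum texture size on older iOS GPUs, but query the device's actual limit at runtime rather than assuming it):

```swift
// How many texture tiles does a large image need under a device's maximum
// texture size? Images beyond the limit must be split, which is the tiling
// work a framework would otherwise have to do for you.

func tileCount(imageWidth: Int, imageHeight: Int, maxTextureSize: Int) -> Int {
    let cols = (imageWidth + maxTextureSize - 1) / maxTextureSize   // ceiling division
    let rows = (imageHeight + maxTextureSize - 1) / maxTextureSize
    return cols * rows
}

// A 12-megapixel photo (4000x3000) just fits in one 4096 tile;
// an 8000x6000 panorama needs four.
```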

------
mwexler
In case anyone was wondering: "GPUImage 2 is a BSD-licensed Swift framework
for GPU-accelerated video and image processing."

