
Differentiable Programming for Image Processing and Deep Learning in Halide [pdf] - kevlar1818
https://people.csail.mit.edu/tzumao/gradient_halide/gradient_halide.pdf
======
vanderZwan
Very glad to see that Halide keeps being developed, despite remaining a bit of
a niche language (for now).

Anything that lets us squeeze more performance out of our computers while
writing more elegant code at the same time deserves more attention.

I wonder if any of the techniques used in Halide can be (somewhat) generalized
to domains outside of image processing?

EDIT: Ah, I see that the paper basically answers my question, as it sees
applications in ML contexts. Also, that new demosaicking algorithm looks
great; I wonder if it will make its way to DarkTable any time soon.

The paper's page[0] also has presentation slides[1] which (to me at
least) feel a bit more accessible. I hope the accompanying presentation will
be on YouTube soon.

[0] [http://gradient.halide.ai/](http://gradient.halide.ai/)

[1]
[https://people.csail.mit.edu/tzumao/gradient_halide/gradient...](https://people.csail.mit.edu/tzumao/gradient_halide/gradient_halide_slides.pdf)

------
gammaradiation
Note that TVM ([https://github.com/dmlc/tvm](https://github.com/dmlc/tvm)) is
a similar system (the IR is a fork of the Halide IR, but substantially
improved), and is much more suited to deep learning, with tools like:

    
    
       - TOPI, a library of optimized routines for common deep learning operations
       - Importers for TensorFlow, ONNX (PyTorch/Caffe2), Keras, MXNet, DarkNet, and other frameworks
       - AutoTVM, for automatically finding fast schedules for arbitrary devices, much faster than random search or the automatic schedules generated by e.g. Halide
       - A wider range of mobile GPU runtimes (OpenGL, OpenCL, Vulkan, etc.) than Halide offers
       - A large community contributing optimized runtimes (e.g. Intel and Amazon contributing CPU improvements, improved schedules/declarations, etc.)
    

Highly recommend checking it out.
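
For anyone curious what that looks like in practice, here's a rough sketch of
importing and compiling an ONNX model with TVM's Python frontend. It's only an
illustration: the model file, input name, and input shape are made up, and the
exact module paths (the Relay frontend, graph_executor, etc.) have shifted
between TVM versions.

    import onnx
    import numpy as np
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor
    
    # Load a model exported to ONNX (hypothetical file, input name, and shape).
    onnx_model = onnx.load("model.onnx")
    shape_dict = {"input": (1, 3, 224, 224)}
    
    # Import into TVM's IR; returns the module plus the trained weights.
    mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
    
    # Compile for a target ("llvm" = local CPU; "cuda", "opencl", etc. also work).
    target = "llvm"
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)
    
    # Run the compiled module on some dummy input.
    dev = tvm.device(target, 0)
    module = graph_executor.GraphModule(lib["default"](dev))
    module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
    module.run()
    out = module.get_output(0).numpy()
    print(out.shape)

AutoTVM would then let you tune the schedules of the generated operators for
your particular device, but that's a longer example.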

~~~
__abadams__
I find it in poor taste to be plugging TVM on a post about Halide, given TVM's
history of lifting code from Halide without attribution (well beyond the
"improved" fork of the IR).

~~~
vanderZwan
I have not heard about that before; could you elaborate, or point me to other
discussions on the internet that do?

(BTW, based on your user-name: do you happen to be Andrew Adams?)

~~~
__abadams__
I am indeed Andrew. I'd rather not elaborate - it wasn't a huge amount of
code, they added attribution when we complained about it, and they've been
very careful to credit us appropriately since then. I shouldn't have even
mentioned it but the TVM shilling in a Halide post struck a nerve.

