Fuzzy Metaballs: Approximate differentiable rendering with algebraic surfaces (leonidk.github.io)
137 points by andersource on Nov 1, 2022 | 13 comments



sweet! if you ask me, differentiable rendering is the next big thing (tm).


> if you ask me, differentiable rendering is the next big thing

Do you have a canonical reference for your usage of the adjective "differentiable"? If people look at the definition, they invariably get to this:

https://en.m.wikipedia.org/wiki/Differentiable_function

Which is clearly not what you mean. I'm not asking what it means, but for a reference. I teach calculus for a living, and I make a big deal of the two meanings of this word, but I have never found a definitive reference to cite.


Actually, that is the correct definition! It might sound unintuitive at first, but I think this paper[1] describes it really well (p.1):

> The last years have clearly shown that neural networks are effective for 2D and 3D reasoning. However, most 3D estimation methods rely on supervised training regimes and costly annotations, which makes the collection of all properties of 3D observations challenging. Hence, there have been recent efforts towards leveraging easier-to-obtain 2D information and differing levels of supervision for 3D scene understanding. One of the approaches is integrating graphical rendering processes into neural network pipelines. This allows transforming and incorporating 3D estimates into 2D image level evidence.

> Rendering in computer graphics is the process of generating images of 3D scenes defined by geometry, materials, scene lights and camera properties. Rendering is a complex process and its differentiation is not uniquely defined, which prevents straightforward integration into neural networks.

> Differentiable rendering (DR) constitutes a family of techniques that tackle such an integration for end-to-end optimization by obtaining useful gradients of the rendering process. By differentiating the rendering, DR bridges the gap between 2D and 3D processing methods, allowing neural networks to optimize 3D entities while operating on 2D projections.

[1] https://arxiv.org/abs/2006.12057


> Actually, that is the correct definition!

Hmmm; no, it isn't?

Your quotation is vague stuff, not a definition at all. The notion of differentiability has nothing to do with deep learning, rendering, or neural networks; none of these terms should appear in a definition of the concept.

The classical mathematical definition of differentiability concerns the function itself. For example, the function f(x)=x^2 is differentiable while the function g(x)=abs(x) isn't. On the other hand, the "modern" definition of differentiability concerns the computer implementation of the function. Given the same function f(x)=x^2, you can have a differentiable implementation of it (e.g., using a high-level language) and a non-differentiable implementation (e.g., calling a library written in assembler that evaluates this function). At the same time, you can have a differentiable implementation of g(x)=abs(x) (whose derivative, where it exists, is the sign function), and a non-differentiable implementation as well. Thus, the two concepts are really independent!

I'm still longing for a formal, canonical definition of the "modern" notion of differentiability.
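
To illustrate the implementation-level distinction I mean, here's a rough sketch using JAX (any autodiff framework would do; this is just my own illustration, not a definition):

  import jax
  import jax.numpy as jnp

  def f(x):
      return x ** 2            # traced by the framework, so gradients flow through

  print(jax.grad(f)(3.0))      # 6.0 - a "differentiable implementation" of f

  def g(x):
      return jnp.abs(x)        # classically non-differentiable at 0

  print(jax.grad(g)(-2.0))     # -1.0 - autodiff returns the sign function away from 0

  # An opaque implementation of the same f: the framework can call it but can't
  # trace it, so jax.grad(f_opaque) raises even though f itself is smooth.
  def f_opaque(x):
      return jax.pure_callback(lambda v: v ** 2,
                               jax.ShapeDtypeStruct((), jnp.float32), x)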


Parent is correct that the definition you gave is indeed broadly the right one, i.e. there's no principled difference between the calculus sense of "differentiable" and "differentiable" as in differentiable rendering, except that the modern sense is maybe a bit more relaxed. The differentiability of the rendering process doesn't depend on the implementation - incidentally, look at the first line of this paper's abstract:

> Differentiable renderers provide a direct mathematical link between an object's 3D representation and images of that object

When we call a process differentiable (in the "modern" sense you're referring to) we simply mean that we have a formulation of the process as a differentiable function. In rendering this means the function's domain is the set of tuples (3D object, camera parameters) and the function's range is a set of 2D images of some sort. These are very high-dimensional functions, but mathematical functions nonetheless, and calling them "differentiable" means the same as in the definition you linked to.

Your examples re: implementation make me think you may be conflating differentiability with "autodiff"[0] - languages/libraries that let you write a function and automatically differentiate it for you.

[0] https://en.wikipedia.org/wiki/Automatic_differentiation
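
To make the function view concrete, here's a toy "renderer" in JAX - not the paper's method, just a minimal sketch of my own - mapping scene parameters (one Gaussian blob's center and radius) to a 2D image, with gradients of an image-space loss flowing back to those parameters:

  import jax
  import jax.numpy as jnp

  H, W = 32, 32
  ys, xs = jnp.meshgrid(jnp.arange(H, dtype=jnp.float32),
                        jnp.arange(W, dtype=jnp.float32), indexing="ij")

  def render(center, radius):
      # Soft occupancy instead of a hard inside/outside test keeps this smooth.
      d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
      return jnp.exp(-d2 / (2.0 * radius ** 2))    # (H, W) image in [0, 1]

  target = render(jnp.array([20.0, 12.0]), 4.0)

  def loss(center, radius):
      return jnp.mean((render(center, radius) - target) ** 2)

  # Gradients of image-space error w.r.t. scene parameters - the "direct
  # mathematical link" the abstract describes.
  g_center, g_radius = jax.grad(loss, argnums=(0, 1))(jnp.array([10.0, 10.0]), 4.0)

Running gradient descent on (center, radius) from there would recover the target blob.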


> this means the function's domain is the set of tuples (3D object, camera parameters) and the function's range is a set of 2D images of some sort

What would be an example of non-differentiable rendering according to this?


Most rendering pipelines, since they involve discrete operations. For example, in mesh rendering the polygon rasterization step is typically non-differentiable, because every pixel either is inside some polygon or isn't.
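
A one-pixel caricature (my own sketch) of where the gradient dies, and of the usual fix of softening the coverage test:

  import jax
  import jax.numpy as jnp

  def hard_pixel(edge, x):
      # 1.0 if the pixel at x falls inside the half-plane, else 0.0
      return jnp.where(x < edge, 1.0, 0.0)

  def soft_pixel(edge, x, sharpness=4.0):
      # Sigmoid relaxation: same limit as sharpness grows, but with a gradient.
      return jax.nn.sigmoid(sharpness * (edge - x))

  print(jax.grad(hard_pixel)(5.2, 5.0))  # 0.0 - nudging the edge changes nothing
  print(jax.grad(soft_pixel)(5.2, 5.0))  # > 0 - the edge position gets a gradient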


It would seem that you mean "continuous" here, instead of "differentiable".


No, I mean (non-)differentiable, though a non-continuous rendering function wouldn't be useful either. If you scanned every pixel for every polygon and discretely determined whether it was inside or outside, that would be as you describe, but that's rarely the case. Take a look at the typical line-drawing algorithm[0] - it procedurally determines which pixels to draw as part of the line. Without modification, such algorithms don't give you derivatives describing how each pixel depends on the shape parameters.

[0] https://en.m.wikipedia.org/wiki/Bresenham%27s_line_algorithm
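
For reference, the core of Bresenham in plain Python (my transcription of the standard algorithm): every pixel choice comes from an integer comparison, so there's nothing to differentiate with respect to the endpoints:

  def bresenham(x0, y0, x1, y1):
      pixels = []
      dx, dy = abs(x1 - x0), -abs(y1 - y0)
      sx = 1 if x0 < x1 else -1
      sy = 1 if y0 < y1 else -1
      err = dx + dy
      while True:
          pixels.append((x0, y0))
          if (x0, y0) == (x1, y1):
              break
          e2 = 2 * err
          if e2 >= dy:       # discrete branch: no derivative w.r.t. endpoints
              err += dy
              x0 += sx
          if e2 <= dx:       # likewise
              err += dx
              y0 += sy
      return pixels

  print(bresenham(0, 0, 6, 4))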


You’re confusing “everywhere differentiable” with “differentiable at the points we care about.” g(x) = abs(x) is differentiable almost everywhere (not in the Cantor’s Staircase sense, but in a practical sense), and that’s enough for gradient descent to work in practice. And fitting models by gradient-descent-like methods is the single unifying feature of deep learning.

I don’t understand the implementation thing you’re worrying about. Assuming it takes the same inputs to the same outputs, how you implement a function has no bearing on its limit properties, which is what differentiability is about.
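
Concretely (a toy sketch in JAX): gradient descent on g(x) = |x| behaves fine even though the kink at 0 has no derivative - the iterates essentially never land exactly on it:

  import jax
  import jax.numpy as jnp

  x, lr = jnp.float32(3.3), 0.25
  for _ in range(40):
      x = x - lr * jax.grad(jnp.abs)(x)
  print(x)  # bounces around 0 within one step size, which is all we need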



What about differential equations?

Should we stop calling them that if we can't find the slope of an equation?


More generally, differentiable "controllable, well-understood thing" is the next big thing, so we can start combining the successes of deep learning with the grokkability of more specific solutions.



