
Band-Limiting Procedural Textures - todsacerdoti
https://iquilezles.org/www/articles/bandlimiting/bandlimiting.htm
======
CyberDildonics
This was called frequency clamping in the book 'Advanced RenderMan', which
talks a lot about procedural textures.

A simple way to think about it is to imagine a pattern of thin black and white
stripes. If you go far enough away from the pattern, there will be multiple
black and white stripes in the same pixel. When they are smaller than a pixel
the average color will be grey. Knowing this, you can fade to grey as the
stripes get tiny instead of arriving at grey from heavy sampling.
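
That fade can be sketched numerically (the function names and brute-force averaging below are my own illustration, not from the book): averaging an ideal 50% black/white stripe pattern over a pixel footprint that spans many stripes converges to mid-grey.

```python
def stripes(x):
    # ideal stripe pattern with wavelength 1: white (1.0) for the first
    # half of each period, black (0.0) for the second
    return 1.0 if (x % 1.0) < 0.5 else 0.0

def pixel_average(x, w, n=1000):
    # brute-force average of the pattern over a pixel footprint of width w
    return sum(stripes(x + w * (i / n - 0.5)) for i in range(n)) / n

# a footprint spanning ~50 stripes averages out to mid-grey (~0.5),
# so a renderer can fade to grey directly instead of supersampling
```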

~~~
djmips
The author directly addresses this with a nod to Advanced RenderMan here:
[https://iquilezles.org/www/articles/checkerfiltering/checker...](https://iquilezles.org/www/articles/checkerfiltering/checkerfiltering.htm)

------
nwhitehead
This is awesome! This reminds me of MinBLEP audio synthesis of discontinuous
functions
([https://www.cs.cmu.edu/~eli/papers/icmc01-hardsync.pdf](https://www.cs.cmu.edu/~eli/papers/icmc01-hardsync.pdf)).
Instead of doing things at high sampling rate and explicitly filtering,
generate the band-limited version directly.

In the article, talking about smoothstep approximation of sinc: "I'd argue the
smoothstep version looks better" Why would this be? I would have thought the
theoretically correct sinc version would look nicer.

~~~
gnramires
> In the article, talking about smoothstep approximation of sinc: "I'd argue
> the smoothstep version looks better" Why would this be? I would have thought
> the theoretically correct sinc version would look nicer.

In this case we are sort of mimicking the eye. The eye doesn't do sinc-
bandlimiting (it does a sort of angular integration -- it sums the photons
received in a region).

I say "sort of", because we're really doing two steps: first we are projecting
a scene into a screen, and then the eye is viewing the screen. We want (in
most cases) that what the eye sees in the screen corresponds to what it would
see directly (if seeing the scene in reality).

The naive rendering approach simply samples an exact point for each pixel.
When there's high pixel variation (higher spatial frequency than the pixel
frequency), as you move the camera the samples will alternate rapidly which
wouldn't correspond to the desired eye reconstruction. The eye would see
approximately an averaged (integrated) color over a small smooth angular
window.

Note we never really get the perfect eye reconstruction unless the resolution
of the display is much higher than the resolution your eye can perceive[1].
But through anti-aliasing at least this sampling artifact disappears.

This window integration is not an ideal sinc filtering! In fact it's not
bandlimiting at all, since it is a finite-support convolution -- bandlimiting
is just a convenient theoretical (approximate/incorrect) description.

In the frequency domain this convolution is not a box (ideal sinc filtering),
it's smooth with ripples. In the spatial domain (that's really used here), it
probably does look something like a smoothstep (a smooth window)[2]. The
details don't matter if the resolution is large[3].

[1] Plus we would actually need to model other optical effects of the eye
(like focus and aberration) that I won't go into detail :) But you can ask if
interested.

[2] It looks something like this:
[https://foundationsofvision.stanford.edu/wp-content/uploads/2012/02/cg.linespread.png](https://foundationsofvision.stanford.edu/wp-content/uploads/2012/02/cg.linespread.png)
found here:
[https://foundationsofvision.stanford.edu/chapter-2-image-formation/](https://foundationsofvision.stanford.edu/chapter-2-image-formation/).
This describes only the optical behavior of the eye; there's also the sampling
behavior of the retina.

[3] Because our own eye integrates the pixels anyway. Again this does ignore
other optical effects of the eye (such as "focus" and aberration) that vary
with distance to the focal plane, and more.

TL;DR: The correct function looks something like this:
[https://foundationsofvision.stanford.edu/wp-content/uploads/2012/02/cg.linespread.png](https://foundationsofvision.stanford.edu/wp-content/uploads/2012/02/cg.linespread.png),
which seems close to a smoothstep.

~~~
gnramires
Just fixing a mistake: what I described is the window function, not the
integrated (in this case, cosine) function that was used in the article. There
would still be ripples when applying the shown window function (in cosine
integration). I do think ripple-free functions (or ones with faster-decaying
ripples) are probably better, because limited floating-point precision
generates artifacts (which can be seen in the center of the second demo).

Experimentally, playing around a little, I've found

fcos = cos(x) * sin(0.5 * w)/(0.5 * w) * smoothstep(6.28, 0.0, 0.38 * w)

to be a good compromise between eliminating high-frequency ripple and
maintaining good definition.
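
For anyone who wants to try it outside a shader, here is that one-liner transcribed into plain Python (my transcription, not the commenter's code; smoothstep is the GLSL clamp-and-cubic definition, and with edge0 > edge1 it acts as a reversed ramp that fades the result to zero as w grows):

```python
import math

def smoothstep(edge0, edge1, x):
    # GLSL-style smoothstep; with edge0 > edge1 the ramp runs in reverse
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def fcos(x, w):
    # sinc-attenuated cosine with an extra smoothstep cutoff that forces
    # the result to exactly zero once 0.38 * w exceeds 6.28
    if w < 1e-9:
        return math.cos(x)
    return (math.cos(x) * math.sin(0.5 * w) / (0.5 * w)
            * smoothstep(6.28, 0.0, 0.38 * w))
```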

------
pixelpoet
Once more: escape your TeX functions, folks! (In this case, use e.g. "\cos"
not "cos")

~~~
spekcular
There should also be a "\," before the dt in the integral. See the second
mistake on this list:
[https://www.johndcook.com/blog/2010/02/15/top-latex-mistakes/](https://www.johndcook.com/blog/2010/02/15/top-latex-mistakes/).

------
tgb
Delightful post.

> in theory, once per half-pixel, according to Nyquist

I'd have thought this should be once per two pixels instead. Nyquist says
there's no aliasing between functions with wavelength L if you sample at
intervals of L/2. So sampling once per pixel should imply a 2-pixel minimum
wavelength without aliasing. Assuming the author is right, what am I messing
up?

~~~
andreareina
If L is 1 pixel, then L/2 would be 0.5 pixels.

~~~
tgb
But L/2 should be the _sampling interval_, which is fixed at 1 pixel. For
example, a signal with a wavelength of 1 pixel (or 1/2 pixel) would be
identical to a constant signal.
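
This is easy to check numerically (a quick sketch of my own, not from the thread): a cosine whose wavelength equals the sampling interval produces the same samples as a constant signal.

```python
import math

# a cosine with a wavelength of exactly 1 pixel, sampled once per pixel...
samples = [math.cos(2 * math.pi * n) for n in range(8)]
# ...is indistinguishable at those samples from the constant signal 1.0
```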

------
debacle
Would someone mind breaking this down a bit? It seems like we are pre-dithering
the textures so that, when rendered, the noise is less visible. Is that right?

~~~
thatcherc
Here's what I think is going on -

- In raytracing, you're evaluating some complicated equation at each pixel
location. In this case, there are some cosine components that have a really
high spatial frequency, so you get that aliased TV-static-looking effect in
some parts of the image.

- One way to avoid that would be to take many samples in a small region around
each pixel location (at sub-pixel distances), which the author refers to as
'supersampling'. This would work, except you'd need to raytrace a _lot_ more
points, which would slow down rendering.

- What you could do instead (and this is what the post is mostly about) is to
replace the cosine(x) function with a function that is "the average value of
cosine(x) from x-w/2 to x+w/2" -- that's the big integral in the post. This
function is effectively just cosine(x) when the filter width w is small, but
it averages away the cosine components whose period is comparable to or
smaller than w.

- The neat effect is that you can get the same smooth, alias-free image as you
would with the expensive supersampling operation, just by using a modified
version of cosine instead!
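
That windowed average has a closed form, obtained by integrating the cosine over the window. A small Python sketch (the function names are mine), checked against a brute-force average:

```python
import math

def fcos(x, w):
    # box average of cos over [x - w/2, x + w/2]:
    # (1/w) * (sin(x + w/2) - sin(x - w/2)) = cos(x) * sin(w/2) / (w/2)
    if w < 1e-9:
        return math.cos(x)
    return math.cos(x) * math.sin(0.5 * w) / (0.5 * w)

def brute_average(x, w, n=100001):
    # numerically average cos over the same window for comparison
    return sum(math.cos(x - 0.5 * w + w * i / (n - 1)) for i in range(n)) / n

# the closed form matches the brute-force average, and shrinks toward
# zero as the window w grows past a full period
```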

~~~
0-_-0
In short: as the frequency of the cosine gets too high you gradually turn it
off so you don't get aliasing from the high frequencies.

------
TheRealPomax
It would be super great if those super short animations ran a simple forward-
reverse loop instead of forward-forward. Would make it a lot easier to read
for people with attention disabilities, while only making things more pleasant
for all the normal folk out there.

~~~
djmips
If you are able, you can interact with it directly. If you go to
[https://www.shadertoy.com/view/3tScWd](https://www.shadertoy.com/view/3tScWd)
it's possible to use the mouse to move the vertical bar back and forth
interactively.

~~~
TheRealPomax
I'm sure, but that's not related to whether or not folks can even read the
article because of constant attention-demanding sudden motion.

------
sxp
Another interesting related article is
[https://iquilezles.org/www/articles/checkerfiltering/checker...](https://iquilezles.org/www/articles/checkerfiltering/checkerfiltering.htm)
on how to nicely render a high res checker pattern.

------
inetsee
The video at the bottom isn't working for me in Firefox. It does work in
Chromium, and it is quite pretty.

~~~
kostadin
The opposite for me: working in Firefox and not Chrome. It's a WebGL2 shader,
I believe.

~~~
inetsee
I'm running Firefox 79.0 and Chromium Version 83.0.4103.61 (Official Build)
built on Ubuntu, running on Ubuntu 18.04 (64-bit).

~~~
Adam1775
Works with Chrome; make sure you don't have uBlock Origin, Privacy Badger, or
similar plugins running, and it should hopefully display properly.

~~~
kostadin
This. It was Privacy Badger blocking connect.soundcloud.

