> I would be interested to know the comparison of the magic kernel to cardinal B-Splines
Well, the observation in the paper
> the rectangular window function can itself be thought of as the nearest neighbor kernel, with Fourier transform sinc f; and the convolution of the rectangular window function with itself—the "tent" function—is simply the linear interpolation kernel, with Fourier transform sinc² f. The first of these has discontinuities at its endpoints, and significant spectral energy outside the first Nyquist zone. The second is continuous everywhere, and has significantly less spectral energy outside the first Nyquist zone, but it still has discontinuous first derivative at its endpoints and midpoint. The third kernel in this fundamental sequence, the Magic Kernel, is continuous and has continuous first derivative everywhere, and has even less spectral energy outside the first Nyquist zone.
corresponds quite precisely to the construction of the uniform cardinal B-splines by repeated convolution of a boxcar filter. The "succession of kernels, each obtained by convolving rect x with itself the corresponding number of times" that Costella describes in his paper is exactly the sequence of uniform cardinal B-splines. (The terminology around B-splines is unfortunately very confused, with different authors using "spline", "B-spline", and "cardinal B-spline" with different meanings, so I'm doing the best I can here.)
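As a quick numerical sanity check (my own sketch, not from the paper; the `conv` helper and grid step `dx` are made up for illustration), convolving a discretized unit boxcar with itself twice reproduces the quadratic B-spline, which peaks at 3/4:

```python
def conv(f, g, dx):
    # Riemann-sum approximation of continuous convolution on a grid of step dx.
    out = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b * dx
    return out

dx = 0.01
rect = [1.0] * int(1 / dx)      # unit-width, unit-height boxcar (nearest neighbor)
tent = conv(rect, rect, dx)     # triangle: the linear interpolation kernel
magic = conv(tent, rect, dx)    # quadratic B-spline: the Magic Kernel

print(max(magic))               # ≈ 0.75, the peak of the quadratic B-spline
```

The area under `magic` also comes out to 1, as it must for an interpolation kernel that preserves DC.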
This construction also gives an efficient way to implement the Magic Filter in practice when addition is cheaper than multiplication: cascade three two-sample boxcar filters. Each addition operation computes a two-sample boxcar on a sequence of input samples, and the resulting sequence is pipelined into the following addition operator, so the final sequence you get is the input convolved with the Magic Kernel.
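In Python, that pipeline of additions might look like this (a sketch; `boxcar2` and `magic_filter` are names I made up): an impulse pushed through three cascaded two-sample boxcar stages comes out as the 1 3 3 1 weights of the unnormalized Magic Kernel.

```python
def boxcar2(xs):
    # Two-sample boxcar: each output is the sum of two adjacent inputs,
    # one addition per output sample and no multiplications.
    return [a + b for a, b in zip([0] + xs, xs + [0])]

def magic_filter(xs):
    # Three pipelined boxcar stages = convolution with rect * rect * rect,
    # i.e. the (unnormalized) Magic Kernel.
    return boxcar2(boxcar2(boxcar2(xs)))

print(magic_filter([1]))        # [1, 3, 3, 1]
```

Normalization (dividing by 8) can then be a single shift per output sample.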
If you want to do this at a different scale, what you have is a Hogenauer filter, or CIC filter, which requires one addition and one subtraction per sample per order (six operations in this case, for an order-three filter). CIC filters are commonly paired with a FIR sharpening filter, though that is usually implemented in direct form.
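The integrator/comb trick behind a CIC filter can be sketched as follows (a single stage only; `moving_sum` is a hypothetical helper): a running-sum integrator plus a delayed subtraction computes an R-sample moving sum with one addition and one subtraction per sample, independent of R, and cascading three such stages gives the order-three case.

```python
def moving_sum(xs, R):
    # CIC/Hogenauer identity: an R-sample moving sum costs one addition
    # (integrator) and one subtraction (comb) per sample, independent of R.
    s, out, history = 0, [], [0] * R   # history holds the running sum delayed by R
    for x in xs:
        s += x                         # integrator: running sum
        out.append(s - history[0])     # comb: subtract the sum from R samples ago
        history = history[1:] + [s]
    return out

print(moving_sum([1, 2, 3, 4], 2))     # [1, 3, 5, 7]
```

In fixed-point hardware the integrator wraps modulo the word size, which is why the integrator and comb sections can be reordered around the rate change in a real CIC resampler.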
The multidimensional splines you get by doing this kind of interpolation successively in more than one dimension are commonly called "box splines", because you get them by convolving "boxes" instead of the one-dimensional boxcar function. This is the normal way to do Gaussian blur in image processing, for example.
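As a one-file illustration (my own sketch; the edge handling is naive and all the names are made up for the example), three passes of a separable box blur over an impulse image approximate a Gaussian point-spread function, by the central limit theorem:

```python
def blur_rows(img, R):
    # Centered R-sample moving average along each row (R odd).
    # Edge pixels just see fewer samples, so the borders attenuate.
    h = R // 2
    return [[sum(row[max(0, j - h): j + h + 1]) / R for j in range(len(row))]
            for row in img]

def transpose(img):
    return [list(col) for col in zip(*img)]

def box_blur(img, R):
    # One separable box-blur pass: rows, then columns.
    return transpose(blur_rows(transpose(blur_rows(img, R)), R))

# Three box-blur passes over a centered impulse: the result is the
# (tensor-product) cubic-ish bump [1,3,6,7,6,3,1]/27 in each dimension.
img = [[0.0] * 9 for _ in range(9)]
img[4][4] = 1.0
g = box_blur(box_blur(box_blur(img, 3), 3), 3)
```

Since the impulse support never reaches the image edges here, the total mass stays exactly 1.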
If you want to use B-splines of a given degree to approximate ideal interpolation with a sinc kernel as closely as possible with a given support, avoiding any blurring, that's a solvable problem; https://www.cs.tut.fi/~gotchev/DIPII/lecture3.pdf is a set of lecture notes on the topic.
If you're interested in this kind of thing you might enjoy my notes from a couple years ago: https://dercuano.github.io/notes/sparse-kernel-cascade-gabor... https://nbviewer.jupyter.org/url/canonical.org/~kragen/sw/de... https://dercuano.github.io/topics/sparse-filters.html.