> what you did is still the best out there
It still is.
I worked with a printing company before. The company imparts an anti-forgery pattern to packaging material by tuning the halftoning angle of the printer. The printing company then offers a seemingly transparent film, with a special pattern printed on it, to companies requiring brand protection. By overlaying the film on top of the packaging material, a specifically designed moiré pattern appears. If you squint your mind enough, it is like public-private key encryption in the physical world. Whenever the brand receives a dispute over the authenticity of purchased items, a special team, one holding that transparent film, is summoned to verify the authenticity of the item in question. It is one of the many ways the brand in question protects its goods. The printing company was looking for a mobile solution instead of delivering the transparent film; that's how I got to learn more about the printing industry.
It is a relatively well-defined problem: removing a periodic pattern caused by the physical arrangement of the printing device. This is where an algorithmic approach shines over an ML approach. I think a lot of ML nowadays is an attempt to outsource the problem-understanding part. These are hot dogs, these are not hot dogs. I don't know what defines a hot dog, but is this picture a hot dog?
Hyperbole, of course.
On second thought, I think the author shouldn't remove the periodic noise at all. He was preserving "a copy of history", not the first occurrence of history. It is a property worth preserving, a beauty in itself imo.
It’d be a bit like taking one of those game emulators that displays the game with a “scanline”-effect filter… and then hooking the computer running the emulator up to an old TV that has a natural built-in scanline effect. You won’t get the pretty scanlines from the emulator, nor will you get the pretty scanlines from the TV; you’ll just get a mess.
(To put it in technical terms: to accurately show the halftone “information” from the original photo in the book, the print resolution of the book would need to exceed twice the highest frequency of said halftone information, per the Nyquist criterion. But it very likely won’t.)
But it's unlikely that you can do that: unless you're printing in a four-layer pure-CMYK process (and why would you?), even the best photo scanners won't capture enough information to tease apart the original offset layers into their original colors where halftone dots have overlapped to produce new mixed colors.
Also, you're unlikely to be able to reverse the color-bleed effect of the paper to recover the original halftone dot pitch, in order to print new dots that bleed out to the same size the originals did on new, different paper with new, different inks.
Basically, this is as hard as any art forgery project (i.e. needing to match historical materials and processes and environmental conditions), but before you can even start the forging process, you need to destroy even more information by passing the image through the grid-sampling process of a scanner, and only then try to reconstruct layered vector circles from the resulting sample grid.
If you are talking about preserving the halftone in an original scan, then yes, I agree that it should be preserved. But you must also accept that the periodic pattern will have to be removed prior to publishing the image, since it will be scaled at some point.
If you ask your printer, they should be able to tell you what lpi/dpi they can resolve on the media you're printing onto (you may be limited due to dot gain on uncoated stock).
Anyway, here is another method that also targets peak regions in the frequency domain for removal, but is based on a median filter instead:
 Removal of Moiré Patterns in Frequency Domain, https://ijournals.in/wp-content/uploads/2017/07/5.3106-Khanj...
 Periodic Noise Removing Filter, https://docs.opencv.org/3.4.15/d2/d0b/tutorial_periodic_nois...
 Adaptive Gaussian notch filter for removing periodic noise from digital images, https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-...
 Adaptive Optimum Notch Filter for Periodic Noise Reduction in Digital Images, https://eej.aut.ac.ir/article_94_befd8a642325852c3a0d41ece10...
 A New Method for Removing the Moire' Pattern from
This is because lenses effectively do a Fourier transform at the focal point. With the right setup, you can apply filters at the focal point and get pretty much exactly what you would expect. An example of such a setup is the 4F Correlator. 
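As a toy numerical analogue of that setup (this is not optics-simulation code, and the function and variable names are my own): the first FFT plays the role of the first lens, a mask multiplies the spectrum at the "focal plane", and the inverse FFT stands in for the second lens.

```python
import numpy as np

# Toy numerical analogue of a 4F correlator: FFT ~ first lens, a mask
# multiplies the spectrum at the "focal plane", inverse FFT ~ second lens.
def four_f_filter(image, mask):
    spectrum = np.fft.fftshift(np.fft.fft2(image))   # frequency plane
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pinhole = (x**2 + y**2 <= 8**2).astype(float)        # small on-axis aperture

rng = np.random.default_rng(0)
image = rng.random((n, n))
smoothed = four_f_filter(image, pinhole)

# Blocking everything but the low frequencies acts as a low-pass filter,
# so pixel-to-pixel variation drops:
assert np.std(np.diff(smoothed, axis=0)) < np.std(np.diff(image, axis=0))
```

One detail this sketch glosses over: a real 4F system applies two forward transforms, so the physical output image comes out flipped; the inverse FFT here skips that.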
Fourier optics is a whole subfield within optics, and it really is rather fascinating.
Exposure to Fourier optics really helped develop my intuition around the Fourier transform.
This is how Electron Crystallography works. You can choose to get half the Fourier transform (aka the diffraction pattern) with the phase information lost, or use a secondary lens to get the full picture back after correction. It's quite magical.
Then you can do an FT of that final image on a computer and modify that pattern in reciprocal space to fix flaws in the image, like astigmatism and noise.
http://www.calidris-em.com is the software for this.
The article paints on the Fourier image and sees the effect on the original image. Well, blurring an image is equivalent to painting everything outside a centered disc black (a low-pass filter).
The generated image does not have the same global problem with moiré patterns. The dot patterns remain in the image, randomly dithered or converted into lines. The FFT solution worked better than that particular ML model, although I presume an ML model could be trained specifically to remove printing dots.
Edited: added link to output image.
Literally painting over frequency peaks in the FFT with black circles, I imagine, would be pretty lossy, and wouldn't entirely rid you of the pattern (since you're making a new pattern with your dots). Indeed, in the animation, the image does get darker as circles are added, and some of the pattern is still visible.
Perhaps using a blur tool to blur out the peaks in the FFT would serve to maintain original image tone, and further reduce patterning?
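A minimal sketch of that suggestion, using a smooth Gaussian notch over each peak instead of a hard black circle (the peak coordinates are assumed known; all names here are made up):

```python
import numpy as np

# Smooth Gaussian notches over known peak locations in the shifted FFT,
# instead of hard black circles. Peak coordinates are assumed given.
def gaussian_notch(shape, peaks, sigma=2.0):
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    filt = np.ones(shape)
    for py, px in peaks:
        filt *= 1.0 - np.exp(-((y - py)**2 + (x - px)**2) / (2 * sigma**2))
    return filt

def remove_peaks(image, peaks, sigma=2.0):
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    spectrum *= gaussian_notch(image.shape, peaks, sigma)
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

# A pure vertical-stripe "moire": 8 cycles across 64 pixels puts two peaks
# 8 bins left and right of the spectrum's center.
n = 64
stripes = np.tile(0.5 + 0.4 * np.cos(2 * np.pi * 8 * np.arange(n) / n), (n, 1))
cleaned = remove_peaks(stripes, peaks=[(n // 2, n // 2 - 8), (n // 2, n // 2 + 8)])

assert np.ptp(cleaned) < 0.05 * np.ptp(stripes)   # stripes almost fully gone
```

Because the notch falls off smoothly, it barely touches the DC term and the neighboring image content, which is the tone-preservation benefit suggested above.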
There are even more sophisticated wavelet denoisers out there that effectively do the black-circle-over-peaks trick, but automatically and more precisely.
Trying it out now, I'm able to get a good result (to my eye) by deleting the top two bands, but it looks nearly identical to the article's blurring example.
Upscaling the image first by 141% and deleting the same two bands, the dots start peeking through, but the result looks closer to the article's inverse FFT result -- minus the artificial edge contrast enhancement that came from the author's use of black (rather than grey) circles.
It's a bit cumbersome since you can't preview what you're doing, but quite workable. Was trying it out on his reference image this evening and got (IMO nicer) results fiddling with magic wand/fuzzy select/grow/shrink and various fills.
Yeah, I suppose you'd need a tool with a continuous wavelet transform, and potentially a 2D one at that.
A halftone dot whose size/shape changes gradually across the image acts like slow PWM in a pulse wave, changing the relative amplitudes of the harmonics (but not their locations). However, steep discontinuous changes can have nastier effects (which aren't handled well by either a convolution kernel or FFT).
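The PWM claim is easy to check numerically in 1D, treating a row of dots as a pulse train (a rough sketch with arbitrary sizes):

```python
import numpy as np

# Changing a pulse wave's duty cycle (like a halftone dot slowly changing
# size) moves energy between harmonics, but the harmonics stay at the same
# multiples of the fundamental frequency.
n, period = 1024, 64            # 16 full periods in the signal
t = np.arange(n)

def pulse(duty):
    """Pulse train with the given duty cycle."""
    return ((t % period) < duty * period).astype(float)

def harmonic_bins(signal):
    """Indices of non-negligible rFFT bins, DC removed."""
    mags = np.abs(np.fft.rfft(signal - signal.mean()))
    return set(np.nonzero(mags > 1e-6)[0])

# For every duty cycle, the nonzero bins sit at multiples of n/period = 16:
bins_a = harmonic_bins(pulse(0.25))
bins_b = harmonic_bins(pulse(0.40))
assert all(b % 16 == 0 for b in bins_a | bins_b)
```

The relative bin magnitudes differ between the two duty cycles (some harmonics even vanish entirely), but no energy appears between the harmonic locations, matching the slow-PWM intuition.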
I suspect it's possible to handle edges better than a FFT using a specialized algorithm, but I don't know if it's possible without inordinate runtimes, and if the end result is significantly better than a FFT or not.
(Also, FFTs won't work as well for non-uniform hand-drawn halftones, like the charming https://upload.wikimedia.org/wikipedia/commons/a/ac/Julemoti....)
For a fixed kernel size and sufficiently large images, a full-image FFT is slower than naive convolution, because the FFT scales as O(n log n) in the image size while naive convolution scales as O(n). This can be avoided by using a blocked FFT instead of a full-image FFT:
> If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not the most computationally efficient method available. Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as the overlap–save method and overlap–add method. A hybrid convolution method that combines block and FIR algorithms allows for a zero input-output latency that is useful for real-time convolution computations.
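A bare-bones overlap-add sketch in 1D NumPy (illustrative only; production implementations such as scipy.signal.oaconvolve choose the block size automatically):

```python
import numpy as np

# Bare-bones overlap-add: FFT-convolve fixed-size blocks of the long signal
# with the short kernel, then sum the overlapping tails.
def overlap_add(signal, kernel, block=256):
    m = len(kernel)
    fft_len = block + m - 1                      # linear-conv length per block
    kernel_f = np.fft.rfft(kernel, fft_len)      # kernel transformed once
    out = np.zeros(len(signal) + m - 1)
    for start in range(0, len(signal), block):
        chunk = signal[start:start + block]
        conv = np.fft.irfft(np.fft.rfft(chunk, fft_len) * kernel_f, fft_len)
        out[start:start + len(chunk) + m - 1] += conv[:len(chunk) + m - 1]
    return out

rng = np.random.default_rng(1)
x, h = rng.random(1000), rng.random(31)
assert np.allclose(overlap_add(x, h), np.convolve(x, h))
```

Each block's FFT has a fixed size, so the total cost grows linearly with the signal length rather than as n log n, which is the point the quote above is making.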
As for the darkening in the restored image, it depends on how the software interprets the band-stopping black holes. By painting black circles in the frequency domain, energy is taken out of the image. Re-normalizing the image might be desired, but then it would turn dark areas brighter.
FFTs are amazing. In x-ray crystallography, you can use them to recapture the original image of a crystallized protein from the scatter of dots left by the x-rays passing through it, essentially playing the role of a lens. They never cease to amaze me with their usefulness!
This is a surprisingly good description of an FFT.
What a fantastic analogy. I wanted to stand up and applaud the author.
Interestingly, filtering out high frequency components from the Fourier transform of the image is exactly how JPEG lossy compression works, so compressing the image as a jpeg would likely have a similar result.
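A rough sketch of that idea in miniature (JPEG actually uses a blockwise discrete cosine transform and quantizes the coefficients rather than zeroing them outright, so treat this as a simplified stand-in, not the real codec):

```python
import numpy as np

# JPEG-style idea in miniature: DCT an 8x8 block, discard high-frequency
# coefficients, invert. (Real JPEG quantizes coefficients instead of
# zeroing them, and adds entropy coding on top.)
N = 8
kk, ii = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * ii + 1) * kk / (2 * N))
C[0] /= np.sqrt(2.0)                      # orthonormal DCT-II matrix

def dct2(block):
    return C @ block @ C.T

def idct2(coef):
    return C.T @ coef @ C

def compress_block(block, keep=4):
    mask = (kk + ii) < keep               # keep the low-frequency corner only
    return idct2(dct2(block) * mask)

smooth = np.outer(C[0], C[0]) + 0.3 * np.outer(C[1], C[0])  # low-freq content
sharp = np.outer(C[7], C[7])              # highest-frequency checkerboard
assert np.allclose(idct2(dct2(smooth)), smooth)   # transform is lossless
assert np.allclose(compress_block(smooth), smooth)  # smooth content survives
assert np.allclose(compress_block(sharp), 0)        # high frequencies wiped
```

This is why JPEG artifacts show up most around sharp edges: that is exactly where the discarded high-frequency coefficients carried the information.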
What must Gauss think when he hears this?!
The divide-and-conquer Cooley–Tukey FFT "algorithm (and the general idea of an FFT) was popularized by a publication of Cooley and Tukey in 1965, but it was later discovered that those two authors had independently re-invented an algorithm known to Carl Friedrich Gauss around 1805 (and subsequently rediscovered several times in limited forms)."
> FFT – Fast Fourier Transform – was invented in 1805 and then, once again, in 1965.
If the image was created using a digital sensor, then it won't work as well, of course, because the sensor itself is subject to a grid. However, the layout of the color filter array used in the sensor can help tackle the effect at the source. This is what Fuji's X-Trans sensor claims to do. (I am a Fuji user, but I have no data points to offer in either direction.)
For anyone interested, this is an excellent introduction to some of the concepts: https://web.archive.org/web/20060210112754/http://cns-alumni...
Fourier analysis is currently used in "Image processing to remove periodic or anisotropic artifacts such as jaggies from interlaced video, strip artifacts from strip aerial photography, or wave patterns from radio frequency interference in a digital camera."
Also, "JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated entirely, so that the remaining components can be stored very compactly. In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image."
Source for both quotes: https://en.wikipedia.org/wiki/Fourier_analysis
Also, using the inverse Fourier transform to descreen is already the basis of lots of popular commercial denoise plugins (for Photoshop etc.). Most of them will automatically measure the angle and resolution of the halftone matrix too.
Reading this I actually got excited about something I already knew and had used now and then, when I never felt about it that way before.
This is great for other types of regular patterns, not just moire. For instance, if you scan old 60s/70s photos on textured/fiber paper, you can use this to smooth out the texture if desired.
Blotting out "bright points" in a 2D transform is basically creating a band-stop filter to get rid of those frequencies. That pictures have frequencies is cool in itself, looking at each vector through the image as a series of samples.
> with a user interface only a parent could love
> “You don’t need ML,” Bryan said. “What you need is inverse FFT.”
Space Cadet Keyboard:
Oh, he's writing a book about keyboards. That makes total sense, then.
Then just counter the rotation of the matrix to get the best possible right-side-up scan.
turns out skimage can do this, not just the 'special piece of software' mentioned in the article.
I am sure there are more tools for it. Good search terms seem to be "FFT Moire".
Fourier transforms can be easily computed in realtime at full HD resolution these days since GPUs are very fast at that kind of math.
And all the harmonics that make up the circle shape.
I'm pretty sure that making the circle repeat horizontally and vertically is a convolution with a grid of impulses, and that has some nifty properties under the Fourier transform.
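That intuition checks out numerically: the transform of an impulse grid is itself a grid, so the tiled image's spectrum is the single dot's spectrum sampled on a lattice. A sketch with made-up sizes:

```python
import numpy as np

# Tiling one dot across the image = convolving it with a grid of impulses.
# The FFT of an impulse grid is itself a grid, so the tiled image's spectrum
# should be nonzero only at multiples of the tiling frequency.
n, period = 64, 8
y, x = np.mgrid[0:n, 0:n]
dots = ((y % period - period // 2) ** 2 +
        (x % period - period // 2) ** 2 <= 4).astype(float)

spectrum = np.abs(np.fft.fft2(dots))
ky, kx = np.nonzero(spectrum > 1e-8)
step = n // period                     # 8 bins between lattice points
assert all(k % step == 0 for k in ky)
assert all(k % step == 0 for k in kx)
```

Those isolated lattice peaks are exactly what the black-circle trick in the article is hunting for.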
EDIT: This fork allows drawing over FFT https://geeklogan.github.io/JS-Fourier-Image-Analysis/
Also, this would be a good example of "just because you can doesn't mean you should", especially with stripes.
I think you could take the blur method quite a bit further for a fair comparison, e.g. by adding noise.
HN please, make an "FFT of the Day" tag so we don't miss these based on the title alone.