After doing some more poking it appears as if Avisynth (and thus NNEDI3) is Windows-only. Do you happen to know if there are ways to run it in Linux or OSX? Or if there's a comparable set of software for those platforms?
Avisynth should run in Wine, but there is also Vapoursynth (which works natively on OSX & Linux) and an NNEDI3 port for it. After getting both of them up and running, a script like this, run with the vspipe program that comes with Vapoursynth, should do the trick. It's a bit cumbersome, since Avisynth and Vapoursynth are primarily intended for processing video, not images, but it gets the job done in the absence of a dedicated NNEDI3 resizing tool. I'm actually using this exact setup at work myself when I need to do any image upscaling.
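As a rough illustration, a minimal Vapoursynth script might look something like the sketch below. The filenames and nnedi3 parameters are placeholder assumptions, and it presumes the nnedi3 plugin and the bundled imwri image reader/writer are available:

```python
# upscale.vpy -- a minimal sketch, assuming the nnedi3 plugin is installed.
# Filenames and parameter values are placeholders, not a tested recipe.
import vapoursynth as vs

core = vs.core

# Load the image as a single-frame clip (imwri ships with Vapoursynth).
clip = core.imwri.Read('input.png')

# nnedi3 doubles the image in one dimension per call (dh=True doubles
# height), so apply it, transpose, apply it again, and transpose back
# to double both dimensions.
clip = core.nnedi3.nnedi3(clip, field=1, dh=True)
clip = core.std.Transpose(clip)
clip = core.nnedi3.nnedi3(clip, field=1, dh=True)
clip = core.std.Transpose(clip)

core.imwri.Write(clip, 'PNG', 'output%d.png').set_output()
```

Driving it with vspipe (e.g. pointing the output at /dev/null) requests the frame and lets imwri write the upscaled PNG as a side effect.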
NNEDI3 is fantastic - thank you for providing a link and some samples!
You're absolutely right that I shouldn't have said "normal". I've updated the post to clarify that this was using OSX Preview. I did some hunting but didn't find any obvious pointers as to which algorithm it uses. If anyone knows offhand, I'll be happy to include it!
The 'hardware effort' is to get dramatically improved processing time by using the GPU since they're trying to do it on a much larger scale.
I have used (and continue to use) ImageMagick and similar software-based solutions, and they're pretty slow for multi-MB images (but most servers don't have good GPUs, so it's the only option unless you're building custom racks as imgix does).
Yeah, I'm not so sure about the dramatically improved processing time, especially compared to a SIMD-optimized scaler. You also have to spend time sending the image to the GPU and reading it back.
Especially if you set imagemagick to use the much worse scaler that imgix uses, I imagine it'd be pretty fast.
On the other hand, if you replaced imgix's stack with the high quality scalers from mpv (written as OpenGL pixel shaders), and then compared to expensive CPU scalers, I would expect a GPU solution to be a win.
Note that imgix also has to recompress the image as PNG or JPEG at the end. This has to be done on the CPU and is probably more resource intensive than any of the scaling.
You can upload hundreds of MBs of texture data to a GPU in milliseconds. Sending to and receiving from the GPU doesn't actually take that long compared to the time it takes to process a multi-MB file in software.
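A quick back-of-envelope check supports the "milliseconds" claim. The 12 GB/s figure below is an assumption (roughly the sustained practical throughput of a PCIe 3.0 x16 link), not a measured number:

```python
# Back-of-envelope PCIe transfer time. The bandwidth figure is an
# assumed practical PCIe 3.0 x16 throughput, not a benchmark result.
pcie_bytes_per_sec = 12e9   # assumed sustained throughput
image_bytes = 200e6         # a 200 MB texture upload

transfer_ms = image_bytes / pcie_bytes_per_sec * 1000
print(round(transfer_ms, 1))  # ~16.7 ms for 200 MB, one direction
```

So even a 200 MB image costs on the order of tens of milliseconds each way, which is small next to seconds of CPU processing.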
Yeah, on second thought, after seeing the low noise reduction result again, I suspect that may be an even better result for what I'm looking to achieve. Many of the details in his rope are preserved and the calligraphy appears to be in better shape.
One thing I should note is that when looking at prints (at least for when it comes to technical analysis) being able to see accurate representations of the lines is far more important than the uniformity of colored regions. Color is almost always at the whim of the printer on any given day, whereas the black lines (from the keyblock) should always remain the same. Granted you're going to have issues either way (using this tool or doing normal scaling) as the source material is inherently compromised.
Although it's not clear exactly what scenes the upscaler was trained on, I suspect that it's currently best suited to scenes with large, bold lines rather than lots of tiny detail.
I imagine it was trained on 'typical' anime-style art of black/bold character outlines and mostly flat colors.
A common solution to resizing anime characters is to create a colored vector trace of the image. The differences between these vectors and the original stills are minimal and usually 'satisfactory'. There is an entire scene of people who create these vectors, and another scene of people who use them to create wallpapers and other graphics. Waifu2x can help replace the need to vector these images by upscaling them at higher quality.
This has been the prevalent 'style' for anime, at least for the past 8-10 years or so. There are a few outliers, and I imagine Waifu2x would work poorly on them. For example, I do not see it working well on a still from "The Garden of Words".
Yeah - the background looking like crumpled-up wrapping paper is definitely not ideal. I suspect that it's having trouble with mostly-uniform areas of color that have slight variations. It appears to be extrapolating and creating these larger effects.
Have you tried running the original through a high pass filter (to get the textures) and applying it over the vectorized version? It might work for the background texture, though it would probably suck for the text.
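The high-pass-then-overlay idea can be sketched in a few lines. This is a toy illustration on a grayscale grid, with a simple 3x3 box blur standing in for a proper low-pass filter (real tooling would use ImageMagick or an image library):

```python
# Sketch of "high-pass + overlay": high-pass = original - blurred(original),
# then add that detail layer back onto another image (e.g. the vectorized
# version). A 3x3 box blur stands in for a proper low-pass filter.

def box_blur(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def high_pass(img):
    # Subtracting the blur leaves only the fine texture/detail.
    blurred = box_blur(img)
    return [[p - b for p, b in zip(prow, brow)]
            for prow, brow in zip(img, blurred)]

def overlay(base, detail):
    # Add the detail layer back and clamp to the usual 0-255 range.
    return [[min(255, max(0, b + d)) for b, d in zip(brow, drow)]
            for brow, drow in zip(base, detail)]

# Toy example: lift texture from "original", paste it onto "vectorized".
original = [[100, 100, 100], [100, 180, 100], [100, 100, 100]]
vectorized = [[120] * 3 for _ in range(3)]
textured = overlay(vectorized, high_pass(original))
```

The bright spot in the original survives as detail on top of the flat vectorized base, which is the effect you'd want for the paper texture (and, as noted, it would probably do the text no favors).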
I have not - although that's an interesting idea, thank you! Relative to my other projects this is a very low-priority exploration. I was very interested to see if there could be a "cheap win" for this particular sub-problem that I will be dealing with, should I get around to digitizing these books.
Great point. FWIW, I've updated the post to include some of the cartouches, along with a cartouche at the "low noise reduction" level. The two lines in the fourth character appear to still be relatively distinct in this case.
I completely agree. I tried to make a bunch of notes on the (poor) code quality, inline. This is mostly an attempt at archaeology, if you will. Naturally the current release (or even 1.0 release) of jQuery is substantially better code in every respect.
I don't have any links off-hand, but I know there was a Nintendo.com sub-site using a pre-1.0 version of jQuery. Even though the overall code quality is rather low, it did have decent browser support. Unless browsers actively begin breaking their APIs (unlikely), those old versions of jQuery will probably keep working forever!