It points to the problem with these upscaled versions: they aren't any good. In the end, the artist demonstrates how to get good results - draw it again: http://www.dinofarmgames.com/wp-content/uploads/2015/04/yosh...
Of course, the algorithmic approach lets you upscale any number of preexisting games. However, I have concluded that I like the original art better.
There's a ton of graphics research that has the same basic issue, I feel. The premise behind a lot of it seems to be how to do impressive art without an artist, how to automate our way to compelling imagery.
I'm guilty of this too: as an occasional graphics researcher and full-time tool maker, I'm implicating myself here. And I'm with you - the algorithms speak to me as a programmer, and they're really fun to think about and code.
But I suspect we're missing out on some opportunities to do amazing things with an artist in the loop, that we're not yet achieving what we could if we built tools that try to enhance creative people rather than replace them.
Adobe is pretty good about taking Siggraph papers and implementing them with more artist-friendly controls than the paper had, but I can't help but wonder what we might have if Siggraph papers on average aimed to be more controllable and less automatic in the first place.
This is the point of these algorithms, for sure. Hand-crafted graphics will always beat algorithmic upscaling of lower-quality hand-crafted graphics, given the same amount of time for each - but redrawing by hand isn't feasible for most games with pixel art.
Mario looks horrid.
But you can use literally any of the pixel-art images from your link - they all look truly fantastic, and they're ready to ship right now!
Here it is! https://www.youtube.com/watch?v=icruGcSsPp0
> What happens if you make a copy of a copy of a copy (and so on) of a VHS tape? This experiment shows how the quality degrades with every generation.
> The copying was done using two PAL VCRs in SP mode.
> (The video is Fading like a flower by Roxette)
This may sound crazy, but I wonder if there's a way to "automate" this deconstruction with ffmpeg...
ffmpeg -i input.mp4 -strict -2 -crf 51 output.mp4 && rm input.mp4 && mv output.mp4 input.mp4
For better results, run the above command ten times (each pass re-encodes at -crf 51, x264's lowest-quality setting, so the artifacts compound much like VHS generation loss):
for ((n=0;n<10;n++)); do ffmpeg -i input.mp4 -strict -2 -crf 51 output.mp4 && rm input.mp4 && mv output.mp4 input.mp4; done
> Bizarrely, the pixelated stuff in the images you linked look way better.
> But then I remembered I look way better in the morning light.
In one step. Lovely!
This sounds like poetry.
This exact game features in the plot of Philip K. Dick’s 1969 novel, Galactic Pot-Healer.
"Content-Adaptive Image Downscaling" http://johanneskopf.de/publications/downscaling/
I always thought it was sad that it didn't get as much attention. I think it could also be interesting for thumbnail generation from regular photos, for example.
Going to ask in that reddit thread whether the author is already aware of this algorithm. They might be able to take some cool new ideas from it if they aren't! :)
My better half enjoys a bit of cross stitch. For a cheapo little gift I've played with outputting custom cross-stitch patterns by pixellating images and reducing the number of colors to something more manageable, but it's often a little janky in GIMP and requires a bunch of manual tweaking - for anything remotely detailed it quickly becomes more hassle than it's worth. (There's a rough code sketch of the process at the end of this comment.)
This, on the other hand, might just do the trick!
Playing with this, it's very effective for solid-color graphics as it quickly settles in on a nice limited palette. The line-work also does a reasonable job, limiting the manual fixing to areas where lines converge and so forth.
I'm working on writing up a post for /r/crossstitch comparing some different methods, so check over there in a while if you're interested.
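For anyone curious, the pixellate-and-quantize process I described boils down to something like this - a minimal sketch using Pillow, where the 50x50 stitch grid and 16-colour palette are placeholder values I've been using, not anything canonical:

    # Rough code version of my GIMP workflow (Pillow only).
    # Grid size and palette count are placeholders - tune per pattern.
    from PIL import Image

    def cross_stitch_chart(path, grid=(50, 50), colors=16):
        img = Image.open(path).convert("RGB")
        # Downscale so each pixel becomes one stitch.
        small = img.resize(grid, Image.LANCZOS)
        # Cut the palette down to a manageable number of thread colours.
        small = small.quantize(colors=colors)
        # Blow it back up with nearest-neighbour so the chart is readable.
        return small.resize((grid[0] * 10, grid[1] * 10), Image.NEAREST)

    cross_stitch_chart("photo.jpg").save("chart.png")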
I wonder why Microsoft paid him to develop this kind of algorithm.
Very curious to see how this progresses, however.
It's not perfect, but compared to the alternatives it's a huge step in the right direction.
As a matter of fact, some of the most recent algorithms available for video deinterlacing, like nnedi3, are perfect for that, and of course they also work for pixel art. I think they blow away the examples shown in their gallery - https://www.youtube.com/watch?v=0691zsXWbhA
I wonder if now, in the TensorFlow era, a better automatic approach using generative neural networks could be devised to reach beyond 8x.
How about application icons?
I am wondering whether in the future we may forget about writing algorithms altogether and instead rely on ANNs for tasks that could have been done better with hand-written algorithms.
Image super-resolution is an ill-posed inverse problem: many possible high-resolution images would be reduced to the same low-resolution image. However, some of the possibilities are more realistic than others. Consider a grey pixel in a photograph from real life. It's more likely that it was downsampled from a 2x2 block of grey pixels than from a 2x2 black-and-white checkerboard pattern. We apply such knowledge by adding regularization to the problem, or using a prior distribution in a Bayesian formulation.
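To put that in symbols (standard MAP notation, my addition rather than the parent's): with y the observed low-res image, D the downsampling operator, and R(x) a regularizer encoding the prior, and assuming Gaussian observation noise,

    \hat{x} = \arg\max_x \; p(y \mid x)\, p(x) = \arg\min_x \; \lVert Dx - y \rVert^2 + \lambda\, R(x)

The first term keeps candidates consistent with the observation; R(x) scores how realistic each candidate high-res image x is.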
Deep learning is very good at memorizing those priors by looking at real world data. I think, in problems where the prior is important and complicated, data-driven approaches have a big advantage over hand-engineered approaches. That does not mean they will take over every kind of problem.
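As a hypothetical sketch of what "memorizing the prior from data" looks like in practice, here's an SRCNN-style network in PyTorch (layer sizes follow Dong et al.'s original SRCNN paper; the training loop is elided):

    # Learn the prior from data instead of hand-engineering it.
    import torch.nn as nn

    class SRCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 64, kernel_size=9, padding=4),  # patch extraction
                nn.ReLU(),
                nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
                nn.ReLU(),
                nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
            )

        def forward(self, x):
            return self.net(x)

    # Training (elided): minimize MSE between model(bicubic_upscaled_lowres)
    # and the true high-res image. The learned weights effectively encode
    # which high-res explanations of a low-res input are realistic.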
hq4x and the xBRZ family of algorithms are amazing and fast, and they look nearly identical to these results.
This method still wins out, though - but it's interesting nevertheless.
Reminds me of some old Newgrounds Flash animations and games.
When comparing only the 4x results, hq4x wins half the time. In the other half, hq4x is a close second and generally looks alright. In some cases, though, the vectorized ones are pretty bad.
Boo and Bowser look great. Mario and Yoshi look terrible. The detailed pixel art and gradients just don't carry over. The shapes of the outlines do, however.
Scanlines IMO look the best for video games, because they also try to simulate the medium the games were originally presented on, not just the graphics data behind the images.
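The core of a scanline filter is trivial - here's a toy sketch (nearest-neighbour upscale, then dim one row per source pixel row; real CRT shaders also model phosphor masks, bloom, etc., so treat this as the bare idea):

    # Toy scanline effect: NN-upscale, then darken every Nth row.
    import numpy as np
    from PIL import Image

    def scanlines(path, scale=3, dim=0.5):
        img = Image.open(path).convert("RGB")
        big = img.resize((img.width * scale, img.height * scale), Image.NEAREST)
        a = np.asarray(big).astype(np.float32)
        a[scale - 1 :: scale] *= dim  # one darkened row per source pixel row
        return Image.fromarray(a.astype(np.uint8))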
ColorDMD first had artists create color overlays for the images and animations, and then emulated the display with colored dots instead of the red/orange color from the original DMD.
A later firmware release included upscaling, with some impressive results. This thread on Pinside (a pinball discussion site) shows some examples from the game "The Simpsons Pinball Party". I've linked directly to a comment that shows an original rendered DMD frame along with colorized versions with and without upscaling.
Though one lesson I've learned (as a game developer myself) is that enhancing may introduce new visual problems. On a port of a game I worked on (PlayStation 1 -> PC), we hired an artist to enhance the eye textures, but once the team behind the original game saw this, they told us that since there was no eye animation in the game, such a high-fidelity texture (for its time, back in 2000) made all the characters look like toys. If we left the eye textures blurry (and we did), your own eyes can't quite focus on them, and you accept the lack of animation better. I'm not an artist (just a coder), but this taught me that less is more.
By removing these artifacts, you essentially defeat that purpose. Animating vectors would take a lot more time to produce something that looks professional, mostly because you have to add far more detail - analogous to moving from 2D to 3D. I personally don't see the use cases for this in gaming.
I don't know about that.
In my opinion the images that aren't game characters look best with their vectoriser/upscaler. Cursor 1 and Setup, for instance.
can't vouch for it yet, but will be giving it a try later.
A shame - I'm really keen to spruce up some old 50x50px forum icons from where I used to hang around back in the early noughties.
So many classic games have been destroyed in this process.
Clever algorithms get you some of the way, but you will still need someone with good sense to edit the result to get something usable, which is going to cost you in either time or money.
Downsizing isn't so bad now because of higher-DPI displays (e.g. font hinting is obsolete now), but previously designers would need to make individual icons in a number of sizes, removing elements as necessary.
Even logically it's an easy leap, as they (the software/filters) are most likely choosing the most common color in a given area and then converting that area to a square of that color.
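If that guess is right, the filter is only a few lines - a sketch of the idea (pure Pillow; the block size is arbitrary):

    # Guess at what the pixelation filters do: most common colour per block.
    from collections import Counter
    from PIL import Image

    def mode_pixelate(path, block=8):
        img = Image.open(path).convert("RGB")
        w, h = img.size
        px = img.load()
        out = Image.new("RGB", (w, h))
        for by in range(0, h, block):
            for bx in range(0, w, block):
                cells = [px[x, y]
                         for y in range(by, min(by + block, h))
                         for x in range(bx, min(bx + block, w))]
                winner = Counter(cells).most_common(1)[0][0]
                out.paste(winner, (bx, by, min(bx + block, w), min(by + block, h)))
        return out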
hq4x is included, I guess, but unfortunately when you get your games from e.g. GOG, you have to tune DOSBox manually every time.
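For reference, the tuning in question lives in the [render] section of dosbox.conf (hq3x shown; as far as I know, stock DOSBox tops out at hq3x rather than hq4x):

    [render]
    scaler=hq3x
    aspect=true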
Or, consider that this algorithm wasn't intended for real-time processing: it creates an adjacency graph of similar pixels, then tries to fit/optimize splines over the regions. Emulator filters like hqx instead determine each pixel's "upscaled shape" from the colours of its neighbours, usually indexed with a table.
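To illustrate the table-lookup idea (a toy sketch, not actual hqx code): classify each pixel by which of its 8 neighbours differ from it, and use the resulting 8-bit pattern as a table index.

    # Toy version of the hqx indexing step: build an 8-bit neighbour
    # pattern per pixel; real hqx uses it to select among 256 blend rules.
    def neighbour_pattern(img, x, y):
        offsets = [(-1, -1), (0, -1), (1, -1),
                   (-1,  0),          (1,  0),
                   (-1,  1), (0,  1), (1,  1)]
        centre = img[y][x]
        pattern = 0
        for i, (dx, dy) in enumerate(offsets):
            ny, nx = y + dy, x + dx
            inside = 0 <= ny < len(img) and 0 <= nx < len(img[0])
            if inside and img[ny][nx] != centre:
                pattern |= 1 << i
        return pattern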