A problem with this algorithm is that it's pretty slow. It would be hard to use it for real time scaling.
Still, I think it's pretty cool for any precalculated conversion to vector format. It would also be neat if there were software that let you create a pixel art image with a live preview in this format and the ability to export to a vector format.
One of the benefits of pixel art, IMO, is that it has a pretty distinct style, and its simplicity and restrictions make it easy to stay aware of composition, whereas the freedom of an unrestricted canvas can make it hard to know where to start. If you are given an 8x8 pixel grid to indicate "happy", there are only so many ways to do it: a symbolic heart, a little smiley face, etc. Given a whole sheet of paper, you'll have more difficulty.
With high res displays, there's more to consider than just the upscaling of old programs. The fact is, people (especially indie game devs) still want to work with pixel art; just look at the number of titles with the term "8-bit" in their name that have come out recently. Part of that's because it's in vogue now, but a big part of it is simply that a poor artist can make a passable 24x24 sprite.
But it's still a shame that on high res displays they end up looking like a blocky mess. If you had a program that let you author your sprites under the same conditions, but let you preview an upscaled version and left you with a vector image (or high res texture with mipmaps), you could make use of the fidelity while maintaining the simplicity. I think that would be cool.
Even if it's too slow for realtime on today's hardware, I'd love to see some prerendered sample videos.
Just wondering, because the samples in the paper are all lone objects against a clear background. If the same object were drawn in front of a detailed background and the upscaling then applied, that would affect the patterning. It's a little tricky to tell from the video, since the graphics (especially the backgrounds) use large blocks of plain colours.
Worst case, you could imagine Mario walking in front of the background and his shape shifting madly with every step he took?
(Edit: From freeze-framing the video, I'm guessing that the algorithm is being applied to the whole screen. The scaling works great when Mario is in front of the pale background. There are small artifacts when he crosses the black outline of the green bushes. Very minor in this video, I wonder how it would affect more intricate backgrounds or those that aren't such contrasting colours to the foreground?)
This brings up something interesting: for media containing sprites and minimal layering (e.g. NES-like games), wouldn't it be computationally cheaper to perform the scaling on the sprites and textures independently of one another, instead of as a post-process? I wonder if there is any emulator that does this. I'm thinking not, as the NES wouldn't be capable of layering such detail (large sprites + textures). Upsampling emulators... Hmm...
The computational price would be much, much cheaper, though: swap in an upsampled sprite (36 tiles for every 1 in the old format, say) rather than post-processing every single frame in real time with a smoothing/upsampling algorithm.
It would also keep the pieces distinct from each other, rather than having objects/characters/backgrounds morph in and out of one another.
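A rough sketch of the cached-sprite approach (pure Python, with nearest-neighbour standing in for the real filter; the tile format and the 6x factor are my own assumptions):

```python
SCALE = 6  # a 6x filter: 36 output pixels for every input pixel

def upscale_nn(tile, scale=SCALE):
    """Nearest-neighbour upscale of a 2D grid of palette indices.
    Stands in for whatever smart filter you'd actually run."""
    return [[px for px in row for _ in range(scale)]
            for row in tile for _ in range(scale)]

# Done once, at asset load time:
tile = ((0, 1),
        (1, 0))
cache = {tile: upscale_nn(tile)}

# Per frame: just a dictionary lookup and a blit, no filtering cost.
big = cache[tile]
assert len(big) == 2 * SCALE and len(big[0]) == 2 * SCALE
```

The trade is memory for time: the filter runs once per unique tile instead of once per output pixel per frame.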
They also tend to cause a very subtle distortion of the outlines of foreground sprites as they move over the background.
That said, those algorithms are impressive, and I can see why someone could prefer them over simple nearest-neighbour scaling.
The art-direction of games from this era wasn't done with the intention that these games would be played on pixel-perfect computer monitors, though. The art was designed and tested for CRT TVs, which would generally give you something like http://i385.photobucket.com/albums/oo299/muddi900/LTTP3xNTSC..., or even http://4.bp.blogspot.com/_Kzdww8T9fUA/TRplyGIj6PI/AAAAAAAAAt...
edit: proof: https://www.google.com/search?q=nes+box+back&tbm=isch
I'm surprised none of these algorithms are made to keep state between frames, so they can keep things like this consistent.
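One cheap approximation (a sketch of my own, not anything a shipping filter does): memoize the filter on the exact input pixel block, so a region that doesn't change between frames always resolves to the same output.

```python
_memo = {}

def upscale_region(block, upscaler):
    """block: 2D list of pixels; upscaler: the stateless filter."""
    key = tuple(map(tuple, block))  # hashable snapshot of the pixels
    if key not in _memo:
        _memo[key] = upscaler(block)
    return _memo[key]  # identical input -> identical output, every frame

calls = []
def dummy_filter(block):
    calls.append(1)  # count real filter invocations
    return [[px * 2 for px in row] for row in block]

frame = [[1, 2], [3, 4]]
upscale_region(frame, dummy_filter)
upscale_region(frame, dummy_filter)  # next frame, same pixels
assert len(calls) == 1               # the filter only ran once
```

That gives per-region consistency for free, though real temporal coherence (keeping a shape stable as it moves by one pixel) would need actual state in the filter itself.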
Combine Atwood's law with a proposed law of mentioning ideas on HN being implemented shortly thereafter and there may be a HTML5+JS implementation popping up as a "Show HN" soon.
Here's an example of UI scaled with potrace (a similar algorithm): http://i.imgur.com/jDq4M7e.png (left side is the potrace conversion, right side is nearest-neighbor; top is 3x, bottom is 2x)
That's with the highest number of colors/depth that potrace allows. On an i7 processor, it takes a long time to compute all of that: each icon is no longer just a couple hundred pixels, it's a couple hundred vector paths. Even rendering it takes several seconds to refresh the screen in Inkscape.
We just need vector-based icons already.
OTOH, for icons, you just need to compute that once and then cache the png.
In Windows 7 and 8, WPF apps are already supposed to be resolution independent (vector-based).
Old-style GDI+ apps which aren't marked as "resolution aware" (i.e. almost everything) are poorly scaled by the window manager and hideously ugly.
But yes, apps are also broken, even very popular ones like Chrome and Steam.
Google almost had support in Chrome 34 after a few false starts, but it looks to have been disabled yet again in current releases.
Now that Windows users are trying to use Retina MacBooks via BootCamp and 2x-resolution Windows 8 tablets like the IdeaPad Yoga2 are out, the DPI scaling issue is finally starting to get mainstream exposure. Most reviews of the IdeaPad called out Chrome specifically and once 4K displays come down in price the gamer crowd will probably jump on the bandwagon as well.
It isn't continuous, but there are more than two scaling levels on OS X.
More details: https://blog.qt.digia.com/blog/2013/04/25/retina-display-sup...
On Windows the app is exposed to the scale factor (for better or worse): https://blog.qt.digia.com/blog/2009/06/26/improving-support-...
They didn't end up using this approach because the 2x mode is so simple, and will likely be the last scale factor really needed (especially with the additional scaling modes).
The same was true for even longer (IIRC) on the Mac. You could enable it with a hidden preference, but lack of app support, compatibility issues with existing stuff, and the lack of hidpi displays at the time made it take a back seat. And vector-drawn apps only go so far; you need pixel assets at some point or another.
The 2x scale was just the solution finally used by Apple because of the same kind of issues as you describe for Windows.
What is needed for this to work is a user setting to override the program's own preference. There is just too much legacy cruft that accidentally marked itself as resolution aware even though it isn't. Or drop the old flag and create a new one called something like "really really resolution aware".
I haven't played with this stuff in a few years - my housemates and I tried to use a 46" TV in the basement as our web browsing machine for a while and ended up frustrated every time we tried to scale the UI on any platform (OSX, Ubuntu with both Unity and GNOME2, Windows).
You'd think Microsoft of all companies would work in compatibility fixes for at least the major apps, like they worked in so many fixes to keep apps compatible from 98 through XP.
TF2, for example, plays beautifully at 3840x2160@60Hz. Scaling up the graphics for an 800x600 visual novel that plays at 1 frame per 30 seconds is computationally possible.
(The good news is that someone has written an emulator for one VN that I'm working through, so I should be able to play with scaling in that way. If I can get it to compile.)
backup on archive.org: https://web.archive.org/web/20120630044334/http://www.hiend3...
Maybe the results of "ours" aren't exactly faithful in style in some cases, but they're globally consistent, and I find the shapes' semantics more truthful. As for hq4x by itself, I actually prefer the original sprites overall.
For Bowser, I think hq4x is fine (the mouth need not have a smooth contour like a human's - compare with a dinosaur or a crocodile), but you are giving more weight to the fact that the shaded part on the bottom right of the mouth is broken into several pieces, instead of being a single piece like in ours.
For Fake Sage, I think hq4x does better. A pointy moustache is fine. The "ours" version is practically melted, and hardly recognizable as an old man - it looks more like some sort of fish shaman.
For Axe Battler 2, the shaded part of the sword is jagged in hq4x, but that doesn't bother me. Same thing for the broken black outline in the leg. I am much more bothered by how "ours" messes up his face.
In general, I would say that "ours" is better at putting together some parts that are supposed to be a single line, while hq4x separates them (especially with thin shadows). But graphically, hq4x's result is much more faithful to what the sprite is supposed to represent. With "ours", everything looks like melted ink, or like it's been seen through a wet pane of glass in the rain.
It would be nice to see if it's possible to combine "ours"'s better detection of continuous shades with hq4x's better preservation of straight lines and details.
There was a video posted here, with them rendered at the same resolution side by side.
It's really a matter of taste in this video.
A big difference between the techniques is that hq4x is a fixed-rate upscaler, whereas Microsoft's can handle arbitrary target resolutions. At the same resolution they're pretty equivalent, but MS has the option to go higher: HQ8x has not yet been developed, and I doubt that an HQ(arbitrary)x is possible without completely changing the algorithm.
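The difference shows up in how the two are formulated: hq4x maps each input pixel to a fixed 4x4 output pattern via lookup tables, while an arbitrary-factor scaler has to resample some continuous representation. A toy nearest-neighbour resampler, just to show the interface difference (my own illustration, not Microsoft's algorithm):

```python
def resample_nn(img, out_w, out_h):
    """Nearest-neighbour resampling to any target size. A real
    vector-based scaler would rasterize fitted curves instead, but
    the interface is the same: target size is a free parameter."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]
```

Calling `resample_nn(img, 640, 480)` or `resample_nn(img, 1000, 700)` is the same code path; there's no per-factor pattern table to author, which is why a table-driven HQ8x would have to be built from scratch.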
As for the video, the dent in Mario's cap outline and the similar angles in the background outlines ruin hq4x for me. I could live with the "200"/"400" lettering (which looks better in hq4x) because it disappears quickly, though that doesn't bode well for other cases where text appears.
Not to mention that muscles have a smoothly-graded tone under the 'ours' algorithm, whereas hq4x has an unlifelike blocky effect.
For simple shapes, like the background, their algorithm works really well, but for complex objects it fails, because it distorts details that were placed with very careful thought and that completely depend on the resolution. Such small sprites rely heavily on being viewed by someone who can identify semantically what they're looking at; any really successful depixelization solution will need to understand what basic shapes the sprite is made of, based on what it's supposed to represent.
It does a great job of scaling pixel art, too. I'm kind of surprised it hasn't been implemented in any emulator. With the newer OpenCL implementation it is finally able to run in real time.
But there's a lot of space to explore there in terms of choosing and designing the neural net, choosing the right training set, and figuring out an initial transform for data that is input to the neural net.
It will be released in the next Inkscape major version, expected soon :)
Dani Lischinski's page: http://www.cs.huji.ac.il/~danix/
Johannes Kopf page: http://johanneskopf.de/
> This paper introduces a novel content-adaptive image downscaling method. The key idea is to optimize the shape and locations of the downsampling kernels to better align with local image features. [...] Besides natural images, our algorithm is also effective for creating pixel art images from vector graphics inputs, due to its ability to keep linear features sharp and connected.
IIRC, this is one of the major reasons that vector graphics based icons have not taken off.
(I also loathe most forms of antialiasing, including ClearType, so that might have something to do with it... I like sharp, clearly contrasted edges, including pixel edges.)
See? Even he doesn't know what to call it!
In general, I think the "ours" method could benefit from moving the boundaries between colours in and out until the average brightness matches the corresponding area in the nearest-neighbour version. A lot of the "ours" images have a higher proportion of black.
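A minimal sketch of that check (my interpretation of the idea, not anything from the paper): compare mean brightness over corresponding regions, and use the sign of the difference to decide which way to nudge the boundary.

```python
def mean_brightness(region):
    """Average value over a 2D list of grayscale pixels (0-255)."""
    flat = [px for row in region for px in row]
    return sum(flat) / len(flat)

def boundary_bias(nn_region, ours_region):
    """Positive -> 'ours' is darker than the nearest-neighbour
    reference, so its colour boundaries should be pushed outward
    to shrink the black; negative -> pull them inward."""
    return mean_brightness(nn_region) - mean_brightness(ours_region)
```

You'd iterate the boundary offset until the bias falls under some tolerance, which would directly target the "too much black" problem.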
I think the problem is that those sprites use multiple strategies for handling diagonal outlines. Those look fine when the image is blurred together, but when you try to detect the outlines and upscale them, the differences between techniques are exaggerated.
The Vector Magic results reminded me a lot of this:
I imagine it would end up with pretty weird results, though, when a sprite was on top of a background with colors similar enough to trigger the shading detection at its edges.
However, if you're looking at the case of an SNES emulator or something, then hq4x seems like the best choice. It stays true to the original pixel art while looking significantly better.
Video of Ours vs. HQ4X. Ours seems to have a slight advantage to my eyes, though admittedly it's a personal preference. Text is distinctly better in HQ4X, but Mario and Yoshi look better in Ours, even if less "accurate".
and links to +1000 day old threads:
yet those space invaders still haunt me.
So far it supports HQX, XBRZ, and Scale2x.
The paper does note that it is designed for hand-edited pixel art, where every different coloured pixel is treated as significant. So I wonder how it behaves when scaling photos?
(Edit: the paper does note that it doesn't do well on anti-aliased graphics...)
And Microsoft is not really being fair to Vector Magic here; there are a ton of settings there to improve that rendering, depending on what you want...