UPDATE (Friday 14 March): Maybe they’re reading Pocket Tactics in Cupertino? Andrew Mulholland just wrote in to say that Apple re-reviewed the game and have reversed their decision without Hunted Cow Studios having to make any changes. Common sense prevails. Tank Battle: East Front 1942 will be on the App Store tonight at midnight.
I like to see good articles reposted — even though the site's name contains the word "News", I feel like the site would clearly be less good if it only took submissions of articles that were less than a week old. But, yes, the title of this article is awful and distracts from its (imo worthwhile) content.
Is the video based on applying the algorithm to the entire screen, or is it applied to the sprites individually before they are plotted to the screen?
Just wondering, because the samples in the paper are all lone objects against a clear background. If the same object is drawn in front of a detailed background and then the upscaling is applied, this would affect the patterning. It's a little tricky to tell from the video, since the graphics (especially the backgrounds) use large blocks of plain colours.
Worst case, you could imagine Mario walking in front of the background and his shape shifting madly with every step he took?
(Edit: From freeze-framing the video, I'm guessing that the algorithm is being applied to the whole screen. The scaling works great when Mario is in front of the pale background, but there are small artifacts when he crosses the black outline of the green bushes. They're very minor in this video; I wonder how it would affect more intricate backgrounds, or ones whose colours don't contrast as strongly with the foreground.)
> Worst case, you could imagine Mario walking in front of the background and his shape shifting madly with every step he took?
This brings up something interesting: In terms of media containing sprites and minimal layering (e.g. NES-like games), wouldn't it be computationally cheaper to perform the scaling on the sprites and textures independently of one another, instead of post-processing? I wonder if there is any emulator that does this. I'm thinking not, as the NES wouldn't be capable of layering such detail (large sprites + textures). Upsampling emulators.. Hmm...
It'd be kind of hard, as what makes up a "sprite" lives in two different places - one that stores all the tile data, and another that basically describes which tiles make up which sprites. The latter is likely to change every frame, and potentially the former, too, so you'd still be rescaling some stuff every frame. You'd also have to rescale some things that are affected by palette changes, which sometimes change every frame (popular way to animate water, for example). Would you end up saving time this way? I'm not really sure. It's definitely an interesting idea.
The thing is, there's no "right format" for sprites, as they're used in the game. See http://benfry.com/deconstructulator/ for an example of how sprites are handled by the NES for Super Mario Bros. Mario is split into 8x8 pieces that are swapped out as needed: how do you determine which pieces are "Mario's sprite" and which are "that coin's sprite"? Remember that some pieces will change while others won't.
Yeah, I don't doubt it would be complex, and to build an upsampling sprite rendering engine you'd have to understand all of this stuff. With old games like this it would probably involve some manual work, because you'd have to recompose the images, smooth each one as a unit, then decompose it somehow back to the original tiling. I mean... I'm not gonna do it and wouldn't bother trying.
The computational cost would be much, much lower, though: swap in an upsampled sprite (36 tiles for every 1 in the old format, say, i.e. a 6x6 upscale) rather than post-processing every single frame in real time with a smoothing/upsampling algorithm.
It would also keep the pieces distinct from each other, rather than having objects/characters/backgrounds morph in and out of one another.
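Roughly what I have in mind (a hypothetical sketch, not from any real emulator; the names and hooks are made up, and the actual smoothing step would be whatever algorithm you plug in, whether hq*x or the one from the paper): cache each 8x8 tile's upscaled version keyed by tile index plus palette, so the expensive smoothing only runs the first time a combination is seen, and palette-swap animations just force a re-upscale of those entries.

```cpp
// Hypothetical sketch: per-tile upscale cache for an emulator renderer.
// Nothing here is a real emulator's API; it's just the shape of the idea.
#include <array>
#include <cstddef>
#include <cstdint>
#include <unordered_map>

constexpr int TILE  = 8;  // NES tiles are 8x8 pixels
constexpr int SCALE = 6;  // 6x upscale -> 48x48 output, i.e. 36 output tiles per input tile

struct UpscaledTile {
    std::array<uint32_t, TILE * SCALE * TILE * SCALE> rgba;  // 48x48 RGBA pixels
};

// Stand-in for the real smoothing algorithm (hq*x, the depixelizing-pixel-art
// algorithm, ...). Nearest-neighbour here just so the sketch compiles.
UpscaledTile upscale_tile(const std::array<uint8_t, TILE * TILE>& pixels,
                          const std::array<uint32_t, 4>& palette) {
    UpscaledTile out{};
    for (int y = 0; y < TILE * SCALE; ++y)
        for (int x = 0; x < TILE * SCALE; ++x)
            out.rgba[y * TILE * SCALE + x] = palette[pixels[(y / SCALE) * TILE + x / SCALE] & 3];
    return out;
}

struct TileKey {
    uint16_t tile_index;  // which entry in the pattern table
    uint8_t  palette_id;  // palette swaps (animated water, etc.) need their own entry
    bool operator==(const TileKey& o) const {
        return tile_index == o.tile_index && palette_id == o.palette_id;
    }
};

struct TileKeyHash {
    std::size_t operator()(const TileKey& k) const {
        return (static_cast<std::size_t>(k.tile_index) << 8) | k.palette_id;
    }
};

class TileCache {
public:
    // Pay the smoothing cost only the first time a (tile, palette) pair shows up;
    // after that, composing a frame is just copying pre-upscaled 48x48 blocks.
    const UpscaledTile& get(const TileKey& key,
                            const std::array<uint8_t, TILE * TILE>& pixels,
                            const std::array<uint32_t, 4>& palette) {
        auto it = cache_.find(key);
        if (it == cache_.end())
            it = cache_.emplace(key, upscale_tile(pixels, palette)).first;
        return it->second;
    }

    // If the game rewrites tile data (CHR-RAM), the affected entries go stale.
    void invalidate(uint16_t tile_index) {
        for (auto it = cache_.begin(); it != cache_.end();)
            it = (it->first.tile_index == tile_index) ? cache_.erase(it) : ++it;
    }

private:
    std::unordered_map<TileKey, UpscaledTile, TileKeyHash> cache_;
};
```

The catch, as pointed out above, is that smoothing each 8x8 tile on its own throws away context across tile boundaries, so Mario's separate pieces wouldn't blend into each other the way the whole-screen version does.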
This reminds me of staring at those tiny screenshots on the back of NES boxes with sweaty palms as a six-year-old... lol. Pixel art was not yet embraced as an art form, I guess (well, not by the adults).
As interesting as these smoothing algorithms are, they can't add any information to the sprites that wasn't already there. Why make everything look smooth and plasticky when the original game was carefully hand-drawn, pixel by pixel?
I think there is a purpose for this in video, but I'm not sure it's an all-or-nothing game. Especially here. For starters, the gameplay looks smoother in nearest neighbor because your eyes defocus while playing; you need sharp pixel edges to delineate the shapes, and your eyes do the rest. In a still shot, that's not the case at all: you want focus and smoothness.
I never want to see a Microsoft monopoly over "desktop computing" (hah, what an aged term!) again, but I'd definitely like to see their talent get to assert itself further than it currently is able to. Microsoft started as a development tools company and they're still pretty damn good at it. I hope the next decade allows that group to flourish at the very least.
> Microsoft started as a development tools company and they're still pretty damn good at it.
Why are they so behind in C++11 support in their tools, then? I think they are already losing their tooling advantage. With the emergence of Valve's OpenGL debugger, that will only accelerate (at least on the gaming scene).
That's ironic, considering that Microsoft was ahead of the curve on C++03. They were the first, IIUC, to fully implement the standard, and were way ahead of the competition during the whole process.
It wouldn't be unfair to call me a Microsoft "hater", but I've always had respect for their development tools, and especially their compilers. I'm doing a little Windows development these days, but it's on an ancient version of Visual Studio (2003) that can't be upgraded at this time, so I'm not familiar with the latest and greatest. I'm surprised to hear they are behind the curve here, although I'm not surprised to hear that anyone is having trouble implementing the amazingly complex stuff that was introduced in C++11.
They are behind the competition (gcc and clang). I doubt they can excuse it with a lack of resources, so I'm not sure why it is so. I don't really use their compilers, so it doesn't bother me; I'm just pointing out the fact.
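For a concrete example of the gap (from memory, so treat the exact version boundaries as approximate): something like this built fine on the gcc/clang releases of the time, while, as far as I recall, constexpr and noexcept still weren't in VS2013 and variadic templates had only just arrived there.

```cpp
// Small C++11 sample; gcc 4.7+/clang 3.1+ of that era handled all of it.
// As far as I recall, VS2013 still rejected constexpr and noexcept,
// and had only just gained variadic templates.
#include <cstdio>

constexpr int factorial(int n) {      // constexpr functions
    return n <= 1 ? 1 : n * factorial(n - 1);
}

template <typename... Args>           // variadic templates
int count_args(Args...) noexcept {    // noexcept specifier
    return sizeof...(Args);
}

int main() {
    static_assert(factorial(5) == 120, "evaluated at compile time");
    std::printf("%d\n", count_args(1, 'a', 3.0));  // prints 3
}
```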
It's good to keep some perspective. C++ standards are incorporated into tools so much faster than they used to be that it's not even funny. I'm not on the inside, so I don't know why things are so much better, but the clang, gcc, and Visual Studio people are all doing a pretty great job.
I bought a 13" MBP to replace my 11" MBA because I was tired of waiting for retina and the size/weight difference seemed negligible. Regretted it within a couple of days of just using it around the house.
On a purely numbers basis, the difference doesn't seem significant (hence how I talked myself into the purchase), but there's a massive "human scale" threshold crossed in that difference. IMO, an 11" MBA feels more "like an iPad" in those respects, whereas a 13" MBP very much feels "like a computer".
It would be a better comparison in some ways, you're right. Still, when I've picked up a 13" MBA, it's felt more similar to an 11" MBA than a 13" MBP to me, in large part I think because of the difference in thickness: both MBAs are "0.3-1.7 cm" while the 13" MBP is 1.8 cm all the way through. The weight also puts the 13" Air halfway between the 11" MBA and 13" MBP.
Not knowing more about the specs of this hypothetical 12" machine, I'm theorizing it'd be approximately halfway between the 11" and 13" Air in size/weight. That of course may not be right, but it seems unlikely to be way off. So, while my comparison is hypothetical and imperfect, I don't feel it's quite apples and oranges, either.