
Preface: I have no experience in 3D graphics.

This is a very cool technique. I have been thinking about how to generate normal maps for traditionally drawn 2D images ever since seeing the normal mapped canvas demo[1]. This seems to be an answer.

Drawing coherent lighting profiles, however, does not seem simple. One of the proposed uses of this technique, "Depth maps for stereoscopic 3D", appears much more complicated to me than drawing a depth map by hand in the first place. I drew a depth map for my drawing Starry Venice[2] as a step in making it into a wigglegif. Drawing multiple correct lighting profiles to generate the depth map for a scene such as Starry Venice seems almost impossible to me. This is far from the base use case, but still.

It will be interesting to see how forgiving the creation of the normal map will be on imperfect light profile inputs. Also, it will be interesting to see if any artists who are masters of this technique will emerge.

[1] http://29a.ch/2010/3/24/normal-mapping-with-javascript-and-c...
[2] http://fooladder.tumblr.com/post/61216111704/starry-venice
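
From a bit of reading, the general flavour of algorithm here seems to be photometric stereo: each lighting profile approximates how the surface looks when lit from a known direction, and the per-pixel normal falls out of a small least-squares solve. A rough numpy sketch of that idea (not necessarily what Sprite Lamp actually does), assuming four greyscale profiles lit from the left, right, top and bottom, a Lambertian surface, and ignoring albedo; the function name is just for illustration:

    import numpy as np

    def normals_from_profiles(profiles, light_dirs):
        """Recover a per-pixel normal map from hand-drawn lighting profiles.

        profiles   : list of HxW float arrays in [0, 1], one per light
        light_dirs : matching list of 3-vectors pointing toward each light
        """
        L = np.array([d / np.linalg.norm(d) for d in light_dirs])  # k x 3
        I = np.stack([p.reshape(-1) for p in profiles], axis=1)    # (H*W) x k

        # Lambertian assumption: intensity = L . n, so solve L n = I per pixel
        n, *_ = np.linalg.lstsq(L, I.T, rcond=None)                # 3 x (H*W)
        n = n.T.reshape(profiles[0].shape + (3,))

        # Normalise and pack into the usual 0..255 normal-map encoding
        n /= np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), 1e-8)
        return ((n * 0.5 + 0.5) * 255).astype(np.uint8)

    # e.g. profiles drawn as if lit from left/right/top/bottom, with the
    # lights tilted toward the viewer (hypothetical inputs):
    # normal_map = normals_from_profiles(
    #     [lit_left, lit_right, lit_top, lit_bottom],
    #     [(-1, 0, 1), (1, 0, 1), (0, 1, 1), (0, -1, 1)])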




Thanks! And that JavaScript canvas demo is relevant to my interests - thanks for the link!

You're right in your suspicion that depth map generation in Sprite Lamp is not a silver bullet for stuff like that. Images with big discontinuities in depth (especially open scenes, like the one you linked) will likely get you some pretty dubious results in Sprite Lamp. On the other hand, if you look closely at the self-shadowing on the brick gif from the website, I think you'll agree that the results are pretty accurate (note that the little notches and scrapes in the surface of some bricks get picked up accurately too) - while you could paint that map by hand, I suspect that getting results that nice would take some time, and Sprite Lamp does it in a second or two (pre-optimisation).

Stuff like character artwork (like the zombie or the plague doctor) falls somewhere in between - you get results that are good enough for self-shadowing, and with some tweaking you can generate a nice stereogram, but it's not necessarily 'physically accurate' (which in this case is another way of saying "I can't guarantee the results are what the artist pictured").

I'm reluctant to promise features that I haven't tried yet, but I'm planning on some experimentation with a combination of painting depth values and using Sprite Lamp - this will take the form of some tools for messing with the depth map from within Sprite Lamp after it's been generated, with an eye to intelligently detecting potential edges and letting you move whole bits of the scene around at once (and then an integrated means of actually looking at the depth map you've created - wigglegifs might be a good option there, actually).
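
To make the discontinuity problem a bit more concrete: recovering depth means integrating the surface slopes implied by the normals, and an occlusion edge (a genuine step in depth) has no slope to integrate. A deliberately naive numpy sketch of that kind of integration - not the actual Sprite Lamp code, just the general idea:

    import numpy as np

    def depth_from_normals(normal_map):
        """Naive depth recovery: integrate the slopes implied by the normals.

        normal_map : HxWx3 float array of unit normals (nx, ny, nz), nz > 0
        Returns a relative depth map; absolute scale and offset are lost.
        """
        nx, ny, nz = normal_map[..., 0], normal_map[..., 1], normal_map[..., 2]
        nz = np.maximum(nz, 1e-3)

        # Surface gradients dz/dx and dz/dy implied by the normal direction
        p = -nx / nz
        q = -ny / nz

        # Integrate along rows and columns and average the two estimates.
        # A true depth step (an occlusion edge) contributes no gradient, so
        # it just gets smeared out - hence the dubious results on open scenes.
        depth_x = np.cumsum(p, axis=1)
        depth_y = np.cumsum(q, axis=0)
        return (depth_x + depth_y) / 2.0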


As an artist myself, I actually think making these assets would be quite easy. I already use layers in Photoshop to first draw my "diffuse map" image, then draw the shading as layers on top of it. It would essentially be the same exact process I already go through to draw cartoons, just with the added steps of drawing a couple more shading layers.


On a second look, it may not be as hard as it first appeared. When shading, the artist will just have to be very conscious of the angle of incidence from the light source. We will see how difficult this is once alpha is released and we get a chance to compute some normal maps and debug their lighting profiles.


> When shading, the artist will just have to be very conscious of the angle of incidence from the light source.

The artist should be doing this already!


They are, or the result would look like crap.


Wouldn't that still require 4x the amount of work though? (Assuming four lighting profiles.)

I could see this getting prohibitive when creating animations for example.


Shading isn't terribly hard, and you could probably afford to be a little sloppy in this case. I would suspect that you'll end up spending about twice as long on each animation as you would have before.

BUT, with that 2x effort, you're getting a significant improvement in visual quality. The alternative would be the Donkey Kong Country option: model the character in 3D (easily 10x more effort than flat 2D animation, with a much more expensive work force and software), bake in the lighting, and generate gigantic animation sheets. Your asset library will explode in size. The games that have done this have tended to employ significant compression on the images, which can negatively impact visual quality.


Besides, the 3D rendering technique is apples compared with the oranges of pixel art. There's really nothing like artisanally crafted, locally sourced pixels made with love ;)


free-range, organic pixels. Yum.


You forgot small-batch.


And artisanal bokeh.


So free range, organic pixels was okay, but artisanal bokeh was not. Glad I understand HN's boundaries now.


Yeah, as a technical guy I would tend to go with the full 3D route. It might be 10x the upfront work but having a fully automated pipeline might save you a lot of work down the line. For example just changing the color of a character could be as few as two clicks in the full 3D solution, but you might have to manually go through each sprite sheet with the other route.

And technically you could export the sprite sheet with however many frames you want (and lower or raise that number easily) while still getting the exact same results as the Sprite Lamp solution. And of course artists could go in and manually make any changes they want.

It's interesting hearing the perspective of the artists. Thanks.


If you're hand-crafting sprites there's a good chance you're using a palettized paint program, which makes recolouring a character a matter of two or three clicks.
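
For anyone who hasn't used indexed colour: the sprite stores palette indices rather than colours, so recolouring means overwriting a few palette entries and every frame in the sheet updates at once. A tiny Pillow sketch of the same operation done in code (the file name and palette indices are made up):

    from PIL import Image

    # Load an indexed-colour (mode "P") sprite sheet; all frames share one palette
    sheet = Image.open("hero_sheet.png").convert("P")
    palette = sheet.getpalette()  # flat list: [r0, g0, b0, r1, g1, b1, ...]

    # Suppose entries 4-6 hold the character's shirt colours (made-up indices).
    # Overwriting them recolours the shirt in every frame of the sheet at once.
    for index, rgb in {4: (200, 30, 30), 5: (150, 20, 20), 6: (100, 10, 10)}.items():
        palette[index * 3 : index * 3 + 3] = list(rgb)

    sheet.putpalette(palette)
    sheet.save("hero_sheet_red.png")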

Also there are major stylistic advantages to drawing it by hand. Check out the baked-in motion blur on Sonic's feet in this sprite rip of Sonic 1: http://www.spriters-resource.com/genesis_32x_scd/sonicth1/sh... A while back there was a 2.5D Sonic game, and its motion had a lot less impact because no attempt was made to replicate the motion blur.

Plus of course if you're just drawing it you don't have to worry whether or not it actually makes sense - a lot of the more stylized cartoon characters are VERY hard to build spot-on 3d models of, because they're full of weird abstractions that only make sense in the 2d plane.

And finally, some people just don't like modeling stuff in 3d.

(I'm an artist and ex-animator.)


You could easily combine both worlds. You don't have to bake in the lighting. Just export the light maps, and then use them with a tool like Sprite Lamp to dynamically merge them in at runtime.
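
The runtime half of that merge is just standard per-pixel diffuse lighting against the normal map, regardless of whether the normals were painted, generated by Sprite Lamp, or baked out of a 3D package. A minimal CPU-side sketch of the idea (a real game would do this in a fragment shader; the function name is just for illustration):

    import numpy as np

    def light_sprite(diffuse, normal_map, light_dir, ambient=0.2):
        """Per-pixel diffuse lighting: decode the normal map and apply N . L.

        diffuse    : HxWx3 uint8 colour image
        normal_map : HxWx3 uint8 normal map (0..255 encodes -1..1)
        light_dir  : 3-vector pointing from the surface toward the light
        """
        n = normal_map.astype(np.float32) / 255.0 * 2.0 - 1.0
        n /= np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), 1e-8)

        l = np.asarray(light_dir, dtype=np.float32)
        l /= np.linalg.norm(l)

        # Lambert term, clamped at zero, plus a small ambient floor
        lambert = np.clip(np.einsum("hwc,c->hw", n, l), 0.0, None)
        shade = ambient + (1.0 - ambient) * lambert

        lit = diffuse.astype(np.float32) * shade[..., None]
        return lit.clip(0, 255).astype(np.uint8)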


For line art (like the zombie example) it may be possible to use one of several available semi-automated algorithms to generate a depth map. Here's a page that shows a few:

http://parter.kaist.ac.kr/jyhahn76/project13.html



