Sprite Lamp (spritelamp.com)
1323 points by cocoflunchy on Nov 8, 2013 | 126 comments



I have absolutely no reason to ever use this but I'm absolutely going to throw money at the author because I'm a sucker for pixel art and this looks incredible. (I spent like five minutes staring at the zombie .gif, slackjawed.)


My girlfriend (who is responsible for most of the art in the examples) and I pretty much reacted the same way when we first got the algorithm working. It was one of those things that went from 'not working at all' to 'suddenly working perfectly when you fix one broken thing'. Good times.

Oh and thanks for the offer of thrown money!


Is my understanding of the technology correct that in the most basic sense what you are doing is creating normal maps, diffuse maps etc for a 3D plane?

This tool looks amazing and this will be my first time supporting a project via Kickstarter!


That is pretty much correct, except we are creating normal maps, depth maps, and ambient occlusion maps (and anisotropy maps but I haven't talked much about that yet). The diffuse map is something the artist will have to paint.

And, thanks!


Cool, but wouldn't it be better/easier to have a tool where you actually draw a depth map by hand but the tool shows you illumination from various directions in real time? At least for the pixel art case. I'm not an artist but to me that seems more intuitive.


The shadows on the bricks of the wall were what did it for me. All of it was impressive to me.


I need your expert advice on business ethics, HN.

I was impressed when I saw this story yesterday and really liked the idea behind Sprite Lamp, so I figured out the algorithm and wrote a program that replicates its basic functionality of generating normal maps for 2D sprites lit from four directions: http://i.imgur.com/H1H0R8k.png. The program will need some more work before it can be practically used but it's the same basic idea.
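For the curious, the core of the four-direction version fits in a few lines. This is just an illustration of the idea, not the program itself; the difference weighting and the fixed z component are assumptions:

    # Illustrative sketch only: estimate per-pixel normals from four greyscale
    # lighting profiles (lit from the left, right, top and bottom).
    import numpy as np

    def normals_from_profiles(left, right, top, bottom, z_strength=1.0):
        # Each input: float array in [0, 1], shape (height, width).
        nx = right - left                  # brighter on the right => normal leans right
        ny = top - bottom                  # brighter on top => normal leans up
        nz = np.full_like(nx, z_strength)  # constant "out of the screen" component
        length = np.sqrt(nx * nx + ny * ny + nz * nz)
        return np.stack([nx, ny, nz], axis=-1) / length[..., None]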

I intend to release it as a command line tool under a free license; I do not intend to compete with Sprite Lamp by building an artist-friendly GUI or implementing lots of fine-tuning. The idea has been used in games before and seemingly isn't patented but Snake Hill Games are, as far I can tell, the first to offer this functionality in a stand-alone tool. I would like to see them succeed but I also feel that having a FOSS implementation of this algorithm (mine or otherwise) would be a benefit to the community.

Should I wait until the end of the Kickstarter campaign (edit: or longer) to release it?


This algorithm has been written a thousand times, and probably already exists somewhere online, hidden deep within the internet.

And as you say, it targets a completely different set of users. Sprite Lamp is building something for artists to use, while the people who want a command line tool would likely try to build it themselves rather than try to fit the .NET GUI into their pipeline.

Additionally, it's likely that your release won't spread through the same channels at all, and may not even show up on the radar of many potential backers of Sprite Lamp.

My point is, the ethics of the decision only really matter if you're going to have any effect on the Kickstarter campaign (and even then, I think you'd be well within your rights to release your code). But you're unlikely to affect the Kickstarter at all. So, IMO, I think you should release now, and ride the wave a little for people who like the idea of Sprite Lamp but don't like the idea of the interface.


Update: I've emailed the developer of Sprite Lamp. Depending on his response I will release my implementation of the normal map generator to the public either as soon as it's ready (which might take a few days) or after the end of his Kickstarter campaign.

At least one HN user has shown interest in this tool, so as a compromise in case Snake Hill Games does ask me to wait until the end of the Kickstarter campaign I've set up a Google Group [1] for those who might want to test it before the public release. Depending on the circumstances I'll post a private download link or a GitHub link in the group once it's ready.

[1] https://groups.google.com/forum/#!forum/unflatterer


Cool! I'll be interested in seeing how you get on with Snake Hill Games. My guess is most artists want a UI friendly app where they can quickly make assets.


I'd personally prefer a simple CLI for the tool over a GUI, so I'd certainly be interested. Anyway, as long as you're not doing a competing KS or something, I can't see any fundamental ethical problems with releasing it. The sprite lamp idea itself is not that original, although the presentation is very neat. In fact, I'll probably support it in any case, because it brought the idea up.


I think there's a CLI intended for SpriteLamp too.

If Sprite Lamp can fit a similar niche to Texture Packer then it can be successful. For example, if it implements multiple normal-determination algorithms, has loads of parameters to tweak, accepts and outputs many different formats, and has a stable and great looking GUI.


Open source it. You independently did all the work, you're not making any money from it, and you're not using any of his trademarks. There is no "business" here.


If sprite lamp is successful someone will write a FOSS version of it anyway. Commercializing a product takes a lot more than writing a commandline version of the algorithm.

I'd say release it. There is nothing wrong with competition, capitalism works because of it. Sprite Lamp will have to provide a better product if they want to earn money from it.

Better to be confronted with a FOSS version now than when they have invested a lot of time and money into their own product.


It'd be unethical to try and piggy-back on Sprite Lamp's popularity to advance your project. For example, submitting your project later on to HN under a "my take on Sprite Lamp" title.


I don't think it's unethical to ride on the coattails of another's campaign, provided you link back and give credit so the audience can follow a trail back to the original source.


Preface: I have no experience in 3D graphics.

This is a very cool technique. I have been thinking about how to generate normal maps for traditionally drawn 2D images ever since seeing the normal mapped canvas demo[1]. This seems to be an answer.

Drawing coherent lighting profiles, however, does not seem simple. One of the uses of this technique, "Depth maps for stereoscopic 3D", appears much more complicated to me than drawing a depth map by hand in the first place. I drew a depth map for my drawing Starry Venice[2] as a step in making it into a wigglegif. Drawing multiple correct lighting profiles to generate the depth map for a scene such as Starry Venice seems almost impossible to me. This is far from the base use-case, but still.

It will be interesting to see how forgiving the creation of the normal map will be on imperfect light profile inputs. Also, it will be interesting to see if any artists who are masters of this technique will emerge.

[1] http://29a.ch/2010/3/24/normal-mapping-with-javascript-and-c... [2] http://fooladder.tumblr.com/post/61216111704/starry-venice


Thanks! And, that javascript canvas demo is relevant to my interests - thanks for the link!

You're right in your suspicion that depth map generation in Sprite Lamp is not a silver bullet for stuff like that. Images with big discontinuities in depth (especially open scenes, like the one you linked) will likely get you some pretty dubious results in Sprite Lamp. On the other hand, if you look closely at the self-shadowing on the brick gif from the website, I think you'll agree that the results are pretty accurate (note that the little notches and scrapes in the surface of some bricks get picked up accurately too) - while you could paint that map by hand, I suspect that getting results that nice would take some time, and Sprite Lamp does it in a second or two (pre-optimisation). Stuff like character artwork (like the zombie or the plague doctor) falls somewhere in between - you get results that are good enough for self-shadowing, and with some tweaking you can generate a nice stereogram, but it's not necessarily 'physically accurate' (which in this case is another way of saying "I can't guarantee the results are what the artist pictured").

I'm reluctant to promise features that I haven't tried yet, but I'm planning on some experimentation with a combination of painting depth values and using Sprite Lamp - this will take the form of some tools for messing with the depth map from within Sprite Lamp after it's been generated, with an eye to intelligently detecting potential edges and letting you move whole bits of the scene around at once (and then an integrated means of actually looking at the depth map you've created - wigglegifs might be a good option there, actually).


As an artist myself, I actually think making these assets would be quite easy. I already use layers in Photoshop to first draw my "diffuse map" image, then draw the shading as layers on top of it. It would essentially be the same exact process I already go through to draw cartoons, just with the added steps of drawing a couple more shading layers.


On a second look, it may not be as hard as it first appeared. When shading, the artist will just have to be very conscious of the angle of incidence from the light source. We will see how difficult this is once the alpha is released and we get a chance to compute some normal maps and debug their lighting profiles.


> When shading, the artist will just have to be very conscious of the angle of incidence from the light source.

The artist should be doing this already!


They are, or the result would look like crap.


Wouldn't that still require 4x the amount of work though? (Assuming four lighting profiles.)

I could see this getting prohibitive when creating animations for example.


Shading isn't terribly hard and you could probably afford to be a little sloppy in this case. I would suspect that you'll end up spending about 2 times as long on each animation than you would have before.

BUT, with that 2x effort, you're getting a significant improvement in visual quality. The alternative would be the Donkey Kong Country option: model the character in 3D (easily 10x more effort than flat 2D animation, with a much more expensive work force and software), bake in the lighting, and generate gigantic animation sheets. Your asset library will explode in size. The games that have done this have tended to employ significant compression on the images, which can negatively impact visual quality.


Besides, the 3D rendering technique is apples compared with the oranges of pixel art. There's really nothing like artisanally crafted, locally sourced pixels made with love ;)


free-range, organic pixels. Yum.


u forgot small-batch


And artisinal bokeh.


So free range, organic pixels was okay, but artisanal bokeh was not. Glad I understand HN's boundaries now.


Yeah, as a technical guy I would tend to go with the full 3D route. It might be 10x the upfront work but having a fully automated pipeline might save you a lot of work down the line. For example just changing the color of a character could be as few as two clicks in the full 3D solution, but you might have to manually go through each sprite sheet with the other route.

And technically you could export the sprite sheet with however many frames you want (and be able to lower and increase the number easily) while still getting the exact same results as the Sprite Lamp solution. And of course artists could go in and manually make any changes they want.

It's interesting hearing the perspective of the artists. Thanks.


If you're hand-crafting sprites there's a good chance you're using a palettized paint program, which makes changing a character a matter of two or three clicks.

Also there are major stylistic advantages to drawing it by hand. Check out the baked-in motion blur on Sonic's feet in this sprite rip of Sonic 1: http://www.spriters-resource.com/genesis_32x_scd/sonicth1/sh... A while back there was a 2.5D Sonic game, and its motion had a lot less impact because no attempt was made to replicate the motion blur.

Plus of course if you're just drawing it you don't have to worry whether or not it actually makes sense - a lot of the more stylized cartoon characters are VERY hard to build spot-on 3d models of, because they're full of weird abstractions that only make sense in the 2d plane.

And finally, some people just don't like modeling stuff in 3d.

(I'm an artist and ex-animator.)


You could easily combine both worlds. You don't have to bake in the lighting. Just export the light maps, and then use them with a tool like Sprite Lamp to dynamically merge them in at runtime.


For line art (like the zombie example) it may be possible to use one of several available semi-automated algorithms to generate a depth map. Here's a page that shows a few:

http://parter.kaist.ac.kr/jyhahn76/project13.html


Whoa, cool. This will work well. I implemented a similar dynamic lighting solution for sprites back in 1997, though I used 3D models: they were rendered, and the pure colors and Z-buffer were captured for dynamic lighting effects. Mentioned here as "Software Prototype: Real-time 2D Sprite Shader":

http://www.exocortex.org/ben/#High_School-Era_and_Earlier_Pr...


Very cool. I think some MMOs are using that technique in conjunction with server-side rendering for displaying good quality distant objects without having to push as much geometry to the client. Also nice work on Clara, I'm particularly looking forward to V-Ray support.


Do you have any examples of actual games, even if you're not sure that's what they're doing? I don't know of any MMO that has the server side resources to render meshes and send them to clients. Of course, most MMOs aren't "pushing geometry" to clients either, with the exception of Second Life style virtual worlds.


Super cool! Confederate Express[1] is using a similar workflow, with great results. It's a really cool idea, and I'm surprised it hasn't been used in more games up to this point. Seeing projects like this, along with Spriter[2] and Spine[3], really makes me want to get back into game development.

[1]: http://www.kickstarter.com/projects/829559023/confederate-ex...

[2]: http://www.brashmonkey.com/spriter.htm

[3]: http://esotericsoftware.com/


I actually thought this was from the Confederate Express team, the example at https://s3.amazonaws.com/ksr/assets/001/232/470/76a69a703ac8... seemed so similar to the examples on the Sprite Lamp page.

Kudos to both, dynamic lighting on 2D art looks great.


Holy moly. I'm the guy that's working on Sprite Lamp, and I just woke up to this thread (and a giant spike in my web traffic). Thanks everyone for the enthusiasm and support! I guess I don't have anything general to say - there are probably too many words on the website as it is - but I'll do my best to answer people's questions here.


Here, read this article about how you can achieve the same effect for real life objects using only 4 photographs - http://zarria.net/nrmphoto/nrmphoto.html


Don't you have to be incredibly skilled to draw lighting from different directions? I'm thinking "yes" - but artists are incredibly skilled.

  The free version will do everything the hobbyist version can do, but without the
  ability to export assets [...] needed for game use – however, the user will be able
  to export (watermarked) animated gifs showing off their artwork.
Great pricing scheme! I hope it works because I'd like to use this scheme too.


> Don't you have to be incredibly skilled to draw lighting from different directions?

Artists have to do that anyway. :) This way, after they draw a few angles, software can automate the rest.


My thought was that you'll often light from a typical angle, and use familiar (or even stylized) techniques for that specific angle. To put it in extreme terms, you might only know how to draw with that lighting.

For unusual lighting (e.g. uplit), even an excellent artist would have to give it more attention. Similar to drawing a character from an unusual perspective. At any rate, drawing several unusual lighting angles will exercise one's talents more than using the same standard one.

It's literally looking at it in a new light.


Since you only need, say, four angles, those four angles will soon become usual for you.


True. I think this is what's bugging me: drawing a lit version implies information about 3D shape; in particular, the artist has to reason/see in 3D. So, in a sense, it is a new method of entering 3D information, that works especially well for traditional 2D art styles.

Why not have the artist enter height directly, instead of shading from several different angles? It seems like less work (because there's less information to input); though possibly doesn't mesh as well with how 2D artists work... whereas shading is part of the tradition.

There's something I'm not getting here (that might lead to a better way of doing it, or not).


As a (somewhat rusty) pixel artist I'd say it is easier to draw lighting from, say, straight above, or from the right, than from some arbitrary angle. But it still takes practice and a good eye, certainly.


This technique is fairly common in 2d games now. I wrote a blog post on this technique along with a form of ray tracing to also cast shadows in a 2d game: http://mattgreer.org/post/4dynamicLightingShadows
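The shadow part can be sketched roughly like this (a generic occluder-map ray march, not necessarily what the post describes; assumes both endpoints lie inside the map):

    # Generic 2D shadow test: march from a pixel towards the light and see
    # whether an occluder map blocks the path. Purely illustrative.
    import numpy as np

    def lit_by(occluders, x, y, light_x, light_y, steps=64):
        # occluders: (h, w) bool array, True where light is blocked.
        for t in np.linspace(0.0, 1.0, steps):
            px = int(round(x + (light_x - x) * t))
            py = int(round(y + (light_y - y) * t))
            if (px, py) != (x, y) and occluders[py, px]:
                return False               # something between the pixel and the light
        return True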


http://www.kickstarter.com/projects/1661802484/hyper-light-d...

This game totally needs to implement this. It has some similar ideas though. Take a look at this closely: https://s3.amazonaws.com/ksr/assets/000/945/686/c415709a876b...

Notice how the wind blowing right to left affects the sprite: his cloak moves with the wind. Now that's attention to detail!


Should have started his Kickstarter today while he's #1 on HN.


If only! I'm Australian, and Kickstarter only opens to our projects on November 13th (which, assuming it gets approved in a timely fashion, is when Sprite Lamp is going live).


We can just push him there again when he launches it ;)


I'm new here so I don't really know the etiquette. Would I be out of line posting a link to the Kickstarter campaign here when it happens? It's in four days. I don't yet have a feel for how Hacker News feels about self-promotion.


That would qualify as a "Show HN" kind of post, sure. I imagine the Kickstarter will have information to add in addition to the website we're all seeing now, too.


Definitely not against the etiquette. When you post the Kickstarter link, it would also be nice if you included a link to the comments here in a comment on the new post.


This is very impressive, very nice post. However, I don't really understand the idea of using Kickstarter to build a paid product. If the community pays for it, then it should be free software. I don't mind it being proprietary, but then I don't see why people would put up money to kickstart it, especially since it seems to be working already.


Pretty cool. I hope all goes well for the author.

But I can't help feeling that generating normal maps from full 3D models would be more robust overall.

For instance, you could easily make things like a walking animation or calculate ambient occlusion (pretty standard 3ds Max work). Render it into a sprite sheet and you have a pipeline that dramatically reduces the amount of work artists would have to do.

I guess Sprite Lamp would be best geared toward indies/studios that don't have 3D modelers and can't invest in the aforementioned tech.

This is cool nonetheless.


Almost all the interesting games being made nowadays are indie games, and as you say most indie game makers don't have the budget for 3D modelers. Not all of them necessarily want this effect, but I'd still say that this could have a pretty big impact.


I don't think it's really a budget issue. The type of 3d modeling & animation software that is necessary to create character spritesheets can be had at pretty commodity prices (sub-$200).

Personally, I would never try to do this kind of work by hand in 2D (not a talented-enough draftsman). But the grandparent is right...even in my clumsy hands, I can do a decent job working in 3D. It's just an easier workflow.

Now, style-wise, nice hand-drawn 2D animation is in a class of its own. If indie studios are sticking with 2D, I'd suggest it's frequently more a matter of style than price.


To create a rich 3D world, you need to model and texture a LOT of objects. Usually, this means hiring an army of 3D modelers and texture artists and having them do that all day for three years. Paying them needs a big budget.

To create a rich 2D world, you probably can get by with one or two talented artist/animators. Yes, they may be in a class of their own, but you don't need nearly so many. Paying them does not need a big budget.


> could have a pretty big impact.

You might even say it could be a game changer.


Drawing lighting profiles seems much simpler than drawing normal maps, and more flexible than drawing height-maps. I had access to the Sprite Lamp alpha, and was able to come up with these images [1][2] in a short amount of time. I am enthusiastic about the future of pixel art games that use sprites that have more information encoded in them (lighting, material, physics).

1. http://bp.io/wp/wp-content/uploads/2013/11/hospital2.gif 2. http://bp.io/wp/wp-content/uploads/2013/11/img_Preview.gif


Do you happen to have the intermediate steps for these two examples? I'd love to see your normal maps etc.



Awesome, thank you for sharing! :)


Nice. I seem to recall the SNES game Yoshi's Island (1995) doing something like this, but only with the backgrounds. This is the only relevant screenshot I could find: http://playingwithsuperpower.com/super-mario-world-2-yoshis-...


I'm not sure just from that screenshot that there is any real dynamic lighting/normal mapping. It's surprisingly difficult to tell, but I think they might have just blended a radial gradient over the top of a darkened level background. That it is so difficult to tell is probably a testament to how well they've applied that effect!


Notice the color shift from plain red to yellow/purple on the bricks where they're better-lit.

As far as I can tell, it is indeed a simple radial gradient--but then the results of the layer-multiplication are put through some sort of palette-lookup function. I imagine it'd work sort of like palette-based animations (http://www.effectgames.com/demos/canvascycle/), but with the luminosity of the "light" at that pixel-position serving as the "frame number" of the palette.
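To make that concrete, here's a rough sketch of the kind of lookup I mean (the actual SNES implementation is a guess):

    # Rough sketch: each palette entry has a ramp of colours, and the light's
    # luminosity at a pixel picks which step of the ramp to use.
    import numpy as np

    def shade_with_ramps(index_image, ramps, luminosity):
        # index_image: (h, w) ints, original palette index per pixel
        # ramps: (num_entries, num_steps, 3) RGB ramp per palette entry
        # luminosity: (h, w) floats in [0, 1], e.g. from a radial gradient
        num_steps = ramps.shape[1]
        step = np.clip((luminosity * (num_steps - 1)).astype(int), 0, num_steps - 1)
        return ramps[index_image, step]    # (h, w, 3) shaded image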


Completely off-topic, so first off: Sprite Lamp looks really awesome, great job and fingers crossed for a successful Kickstarter!

But, does anybody else use "upvotes vs # of comments" as a useful metric around here? I'm not actually running a tally but I believe this story takes the cake, at least over the last few weeks (months?) with currently 750 upvotes vs only 70 comments.

In short, I find the number of upvotes divided by the number of comments (or perhaps minus the number of comments) a better metric for determining the "interestingness" of a story than the number of upvotes alone. It's not a hard-and-fast rule, but I've noticed myself paying more attention to this quotient/difference recently than to the points alone.

I've pondered the thought a little bit over the past few months and of course there are several factors at play here, so there's not one single explanation for the phenomenon. But here is one shot:

When somebody comments on a story, there's an incentive to also upvote it because of the karma system: the more upvotes a story gets that you've commented on, the higher the likelihood that your comment gets upvoted. Perhaps not everybody thinks like that, but I think it's fair to say that a good percentage of commenters might upvote a story for their comment's sake and not necessarily for the story's sake. (Of course that's simplifying things: when you comment on a story, you probably also find the story interesting so you might have upvoted it anyway, karma or not.)

Anyway, from that perspective, the difference between number of upvotes and number of comments could be termed "genuine upvotes": people who really just thought it was a cool story, without any second thoughts regarding their karma balance and without necessarily having a strong opinion on the subject matter.

There are of course other, and perhaps simpler, factors/explanations: the fewer comments a story gets, the less controversial it is, so if a story gets many upvotes and few comments, then perhaps it simply is uncontroversially good.

I've played with the idea of writing an alternate HN interface that uses this metric to weigh stories, but it never got anywhere. And there certainly isn't a simple solution: how to combine age, upvotes and number of comments into a useful ranking is black magic at best.
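(For what it's worth, the kind of score I have in mind is trivial to sketch; the weights and decay below are made up:)

    # Toy "interestingness" score; the 1.5 gravity and the +2 offset are made up.
    def interestingness(points, num_comments, age_hours):
        genuine = max(points - num_comments, 1)    # "genuine upvotes" idea from above
        return genuine / ((age_hours + 2) ** 1.5)  # HN-style time decay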

Again sorry for this offtopic blurb. It's just something I found myself thinking about a bit, and this story seems like a particularly good example.

And now back to 3d-inspired 2d awesomeness. How come I've never thought of this!


I find the opposite case: I prefer articles that have a parity between upvotes and comments. To me, an upvote is a drive-by action, where a comment indicates real engagement. If an article has tons of upvotes, but not many comments, it means most people aren't really paying close attention to it, they're just thumbs-upping geek cred or something.


I am not an expert on 2D art or dynamic lighting, but I recognize that this technology is potentially really awesome. So I would upvote this story as something I find really interesting, but I would hold off on commenting as I have nothing specific to add to the discussion.


I also find that articles with many upvotes and few comments tend to be the better ones. Some further speculation from several years ago:

https://news.ycombinator.com/item?id=2386658


It would be interesting if a website like Hacker News hid the number of upvotes, to see how people would vote. I suspect some kind of herd mentality when it comes to upvotes in general, to be honest.


Problem is that number of comments can be a negative signal too, i.e. flame wars.


How is that a problem? It fits in perfectly with the GP's thesis.


Good point. I would agree even more if you replaced 'number of comments' with 'number of 1st-level comments', or maybe 1st and 2nd, because in many cases when a story has a lot of comments, the majority of them are around the top comments and are 4th+ level, which is usually a discussion between a couple of people.


Not all topics instill a desire to comment even if they are cool. This may be something like that - nice work but don't have much to add...

Now I'm off to upvote :)


So all the examples only use one light... how many lights can you have in a scene? What kinds of lights are supported (omni, spot, etc)? Can you use different colors of lights? If I wanted to use hundreds of lit particles in a scene, how performant would it be?


It seems like the main use case is exporting the normal/depth/ambient occlusion maps, which you'd probably end up dropping into your engine of choice rather than using Sprite Lamp itself in your game (Sprite Lamp would generate the assets, but the game engine would display them).

If you have the maps, there's no hard limit to the number of lights you can have in your scene, or colors, etc; it just depends how many lights your engine can support with decent performance.

Think of Sprite Lamp more like Blender than like Unity — Sprite Lamp is a way to create assets.


As far as lighting goes, I think it would be as performant as it gets. It's the same idea as deferred shading; you only have to shade the pixels that the light affects.

As for types of lights and colors, yes, I think that would all be possible.


I'm no graphics guy (not even by a long shot), but my understanding of this is that it's generating the same maps that you'd otherwise have to generate through some other process. The images used to generate them have a single point source, but I have to assume that, once generated, the maps would work just as well for multiple or different types of light sources. As for performance, that's entirely up to whatever engine you're using to render the damn thing; once you have the maps, you're done with SpriteKit.


And of course I meant to say Sprite Lamp in that last sentence.


Wow. I had already started the (first, tiny) steps in creating something like this for my game. This is amazing. Take my money! :)


Legend of Dungeon was released a few months ago after a successful Kickstarter using a similar technique for lighting its 2d sprites.

http://robotloveskitty.tumblr.com/post/33164532086/legend-of...

Unity just put up a story about the game yesterday as well: http://unity3d.com/gallery/made-with-unity/profiles/legend-o...


Does anyone have a paper (paywalled is fine, I have institutional access) on the tech behind this? Fascinating!


Which part? For the main lighting technique, most of the magic comes from the artist providing the surface normal components. Then to produce the image under some specified lighting direction, you can do the normal 3D graphics thing: dot product the light vector with the normal vector and scale that by the diffuse color. http://en.wikipedia.org/wiki/Lambertian_reflectance
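That shading step is tiny; a sketch of it (ignoring specular, attenuation and gamma):

    # Per-pixel Lambertian shading sketch (no specular, attenuation or gamma).
    import numpy as np

    def lambert_shade(diffuse, normals, light_dir, light_color=(1.0, 1.0, 1.0)):
        # diffuse: (h, w, 3) albedo; normals: (h, w, 3) unit normals;
        # light_dir: 3-vector pointing from the surface towards the light.
        l = np.asarray(light_dir, dtype=float)
        l = l / np.linalg.norm(l)
        n_dot_l = np.clip(normals @ l, 0.0, None)      # (h, w) cosine term
        return diffuse * np.asarray(light_color) * n_dot_l[..., None]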

In case the artist doesn't draw the X,Y, and Z surface normal components directly but instead chooses some other set of lighting profiles, you could use photometric stereo to recover the surface normals. (If this is the approach used, then applying such a technique to specially-crafted pixel art is indeed novel).

Here's a factorization technique for photometric stereo that could be applied to the artist inputs: http://www.wisdom.weizmann.ac.il/mathusers/vision/courses/20...
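In the Lambertian case the recovery step is a per-pixel least-squares solve. A sketch, assuming the light direction for each profile is known:

    # Photometric-stereo sketch: recover per-pixel normals (and albedo) from k
    # lighting profiles with known light directions, assuming Lambertian shading.
    import numpy as np

    def recover_normals(profiles, light_dirs):
        # profiles: (k, h, w) greyscale intensities; light_dirs: (k, 3) unit vectors.
        k, h, w = profiles.shape
        L = np.asarray(light_dirs, dtype=float)        # (k, 3)
        I = profiles.reshape(k, -1)                    # (k, h*w)
        G, *_ = np.linalg.lstsq(L, I, rcond=None)      # G = albedo * normal, (3, h*w)
        albedo = np.linalg.norm(G, axis=0)
        normals = G / np.maximum(albedo, 1e-8)
        return normals.T.reshape(h, w, 3), albedo.reshape(h, w)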


Yeah it's the second case I was interested in. It looks like the algorithm generates a best fit normal map based on the various lighting profiles - presumably it must be told which direction the light is coming from.

I do a lot of research work with stereo, so thanks for the link! Have to give that a go sometime :)


My guess is that it works by blending the hard-light-case sprites trigonometrically, w.r.t. the vector between the position of the light and the origin/sprite.



I understand how normal mapping works, what I was curious about is how the normal maps are generated automatically from different lighting views.


Can this be used on real world (e.g. still life) objects? Outstanding.


Yes, though then the hard part is actually getting a photo of a completely un-shaded object. Alternatively, you could draw the lighting profiles and then specify the true lighting direction for the given photograph -- this is potentially enough for a separate algorithm to remove the lighting from the initial photo, but that doesn't exist yet in this software (and can get complicated -- for some recent related research see http://www.cs.berkeley.edu/~barron/BarronMalikCVPR2012.pdf).


This reminds me of one of my favorite papers http://graphics.cs.ucf.edu/ekhan/project_ibme.htm


I have no use for this, but it's awesome.


Please make this free software (open source). It looks really great!


Couldn't it encode the normal map in the RGB channels of a single extra image, instead of 4 extra images?


That's what it does; the grayscale images are just used to produce it.
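(The usual packing, for reference; just a sketch:)

    # Standard normal-map packing: map each unit-normal component from [-1, 1]
    # into an 8-bit colour channel.
    import numpy as np

    def pack_normals(normals):             # normals: (h, w, 3) in [-1, 1]
        return np.round((normals * 0.5 + 0.5) * 255).astype(np.uint8)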


I guess the 4 source images make it easier to see and predict the output.


Is this the technique that I glimpsed on TIGSource Screenshot Saturday the other week?

http://forums.tigsource.com/index.php?topic=24094.msg958013#...


And one could use Teddy and the myriad followup papers to automatically get the 3D profiles:

http://www-ui.is.s.u-tokyo.ac.jp/~takeo/teddy/teddy.htm


We used an automated version of this to get interesting lighting on Skullgirls, and it's a neat effect. It's really cool to see more people exploring all the cool ways you can leverage your 3D card for better 2D art!


This is awesome. BTW, there are ways to automatically extract normals from 2D [1]. I wonder if it would be possible to use it for this purpose.

[1] http://make3d.cs.cornell.edu


This is awesome and kind of makes me want to get back into game programming. Unfortunately, I have no artistic ability whatsoever, and don't really have the time if I did.


There are also tools like CrazyBump (http://www.crazybump.com/) that generate normal maps from photos without human intervention.

What Sprite Lamp needs is integration with a 2D game engine. I'm not totally familiar with 2D engines, but they would have to support dynamic lighting with normal maps to make this work, right?


This is really cool.

My first thought was it is probably just a clever little convolution kernel trick but it seems harder after reading about it.


I was thinking about the same thing myself. If Sprite Lamp can unpack the 3D data from 2D assets in a web browser, you could seriously increase the art quality in webapps without hugely increasing download size.


I'm not really sure how. Most web pages don't have dynamic lighting. I suspect that the amount of information to encode the flat image plus the normal map is going to be more than the amount of information to encode the pre-rendered effect.

Now, if you were planning on doing some kind of dynamic lighting in a web page, then sure, this would help. But I can't think of much use for dynamic lighting outside of games or other physical simulations, which is likely overkill for most webapps.


Any plans on designing algorithms for automatically generating normal maps, or maybe including tools that make it easier to 'paint' them (along the lines of Z-Brush et al)?


I thought Kickstarter was supposed to be used for raising funds to develop products? He clearly already has a working product; Kickstarter will just be pay-out day.


This is one of the cooler things I've seen in a while.


It comes out looking a lot like voxel stuff, but it can only be rendered from the front. One cool advantage, though, is that the artist can bake in shadows.


Everything that leads to more 2D games is great.


This is badass. Anything to encourage talented designers and developers to build more old-school platformers. Miss them so much.


Instead of painting several "lighting profiles", how about painting a single best effort heightmap/normal map?


Wow, this is great work. Long live pixel art!


Does depth map = bump map here? Are you treating each pixel as a depth, or each pixel as a quad with its boundary points recorded in a depth map? It seems like the latter might be better because you get the normal of the pixel for stronger lighting calculations.


I suppose for animations you'd need to recreate the light profiles for every frame?


Wow this is awesome. Could potentially save hours & hours of time. Count me in.


Wow. I'm no artist but I'm already looking for any excuse to use this.


I stared, completely amazed, for like 20+ loops at the first GIF. Incredible job.


This looks like precomputed radiance transfer but for pixel art, cool.


I have nothing valuable to say, but my jaw dropped when I saw this.


How does this work with GLSL?


Is it game-engine-agnostic?


It looks like it just generates maps as textures. You should be able to use it pretty much anywhere provided you write the shaders.


Can't wait.


I remember doing stuff like this in VB/asm about 20 years ago. I miss VGA.



