Maybe you could do the polar transforms in a pixel shader to make up the difference? The transform also seems to introduce a not-insignificant amount of error, which is unfortunate if you want to use visibility data for AI (in which case you want full precision at whatever your target resolution is).
The artifacts you mention are sort of inherent in shadow maps, and depend on the resolution of the shadow map you are working with. The demonstration I did had a lot of error, but you can see at the bottom some examples of higher-resolution test cases which have significantly less error.
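For anyone trying to picture where that error comes from: the polar trick boils down to a 1D shadow map, i.e. a per-angle nearest-occluder distance, and the angle bucketing is exactly what quantizes the edges. A rough sketch (Python; the function names and the point-sampled occluders are my simplification, not the article's code):

```python
import math

# 1D shadow map: for each angle bucket around the light, store the distance
# to the nearest occluder sample. A point is lit iff it is nearer than that.

def build_shadow_map(occluders, light, buckets=360):
    smap = [float("inf")] * buckets
    for (ox, oy) in occluders:  # occluders as sample points (my assumption)
        dx, dy = ox - light[0], oy - light[1]
        a = int(math.degrees(math.atan2(dy, dx)) % 360) * buckets // 360
        smap[a] = min(smap[a], math.hypot(dx, dy))
    return smap

def lit(point, light, smap):
    dx, dy = point[0] - light[0], point[1] - light[1]
    a = int(math.degrees(math.atan2(dy, dx)) % 360) * len(smap) // 360
    return math.hypot(dx, dy) <= smap[a]
```

Raising `buckets` (and sampling occluders more densely) is the 1D analogue of raising the shadow map resolution, which is why the higher-resolution test cases show less stair-stepping.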
My old game Chimaera ( http://www.luminance.org/chimaera.html ) did this - lightmap rendered at full screen resolution (1-1 mapping) in full precision, with an on-demand raycasting solution for offscreen pixels. A raycasting method also allows you to cache the shadow data for static geometry and static light sources, which can be a big win for complex scenes. (I think maybe you could cache the polar data, but it wouldn't be as easy since you can't just translate by x&y like you can with a normal 2D perspective?)
(Edit: I guess you'll lose the nice shades-of-grey effect that the article's bitmap processing technique yields)
(the second example, "raycasting"). You'd work at the native tile resolution.
I.e. you just "sweep" lines starting at the player position out to the edge of the map, in a circle; as soon as you hit an obstacle tile, you stop marking the tiles ("pixels") on the line as lit. No need to do any per-object iteration (as long as the tile map contains object references in tile positions).
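Something like this, in Python (the names are mine; `line_tiles` is plain Bresenham, and the "circle" is approximated by sweeping a line to every border tile):

```python
def line_tiles(x0, y0, x1, y1):
    """Integer tiles along a line (Bresenham)."""
    tiles = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        tiles.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return tiles

def field_of_view(grid, px, py):
    """grid[y][x] == 1 means obstacle. Returns the set of lit tiles."""
    h, w = len(grid), len(grid[0])
    lit = set()
    # Sweep lines from the player to every border tile ("in a circle").
    border = ([(x, 0) for x in range(w)] + [(x, h - 1) for x in range(w)] +
              [(0, y) for y in range(h)] + [(w - 1, y) for y in range(h)])
    for bx, by in border:
        for x, y in line_tiles(px, py, bx, by):
            lit.add((x, y))         # the obstacle tile itself is visible
            if grid[y][x] == 1:     # ...but it blocks the rest of the line
                break
    return lit
```

No per-object iteration, just per-tile lookups along each swept line.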
I was thinking about whether there could be a way to apply your technique, without having to do the polar + cartesian conversions.
That's the trick isn't it? How do you determine the amount? That is more or less what I set out to figure out when this started. Sure you can do a calculus/math-based approach, but it is slow and done entirely on the CPU.
Here's where I'd start:
1. See what resolution you need to get the number of greys you want. Your example shows 29x29 pixels per cell before it's resampled down (2900%), so 841 possible shades of grey, minus the artifacts from converting to polar and back (the stair-stepped edges).
But if you only need a dozen or so shades of grey, render at 400% (4x4 = 16 samples per cell, so 17 possible levels) and add up the results for each cell. (With shadow maps, though, you need a larger render to reduce the effects of the polar conversion.)
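The "add up the results for each cell" step is just a box downsample of the high-res visibility bitmap. A minimal sketch (Python, assuming a 0/1 bitmap rendered at n times the tile resolution; the names are mine):

```python
def downsample_visibility(vis, n):
    """vis: 2D list of 0/1 values at n-times tile resolution.
    Returns per-cell grey in [0, 1] by averaging each n x n block."""
    h, w = len(vis) // n, len(vis[0]) // n
    out = [[0.0] * w for _ in range(h)]
    for cy in range(h):
        for cx in range(w):
            total = sum(vis[cy * n + j][cx * n + i]
                        for j in range(n) for i in range(n))
            out[cy][cx] = total / (n * n)  # n*n + 1 possible shades
    return out
```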
2. See if your shadow map approach or ray casting (or this http://www.redblobgames.com/articles/visibility/ ?) is faster to get this type of result:
With your shadow mapping you are processing every pixel several times. With ray casting you can optimize and only process what the player sees, and you don't have the polar conversion artifacts.
I'm betting on ray casting / this http://www.redblobgames.com/articles/visibility/ , and I bet there's a quick test for partially occluded cells; then chop in half and get the percentage exposed. Yeah, it may be more work to figure out, and not worth it. But you can still supersample only the partially exposed cells (4x4 for 17 possible shades, 5x5 for 26, etc.) and just render what you see instead of everything.
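One way to read the "quick test, then supersample only the partial cells" idea (Python sketch; `visible` stands in for whatever point-visibility query you use: a raycast, the polygon method from the Red Blob article, etc.):

```python
def cell_shade(visible, cx, cy, n=4):
    """Shade of the unit cell with corner (cx, cy), in [0, 1].

    Quick test: check the 4 corners. All visible -> fully lit; none
    visible -> fully dark; mixed -> n x n supersample for the fraction.
    (Caveat: the corner test can miss a thin occluder that crosses the
    cell without covering any corner.)
    """
    corners = [visible(cx + dx, cy + dy) for dx in (0, 1) for dy in (0, 1)]
    if all(corners):
        return 1.0            # fully exposed: skip the supersample
    if not any(corners):
        return 0.0            # fully occluded: skip the supersample
    # Partially occluded: n x n supersample -> n*n + 1 possible shades.
    hits = sum(visible(cx + (i + 0.5) / n, cy + (j + 0.5) / n)
               for i in range(n) for j in range(n))
    return hits / (n * n)
```

The expensive supersample only ever runs on the boundary cells, which is the "just render what you see" win.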
I think the polar coordinate transformation trick is really cool, but I feel like this is the wrong use for it. This is essentially a solved problem using polygons in 2D lighting engines already; to make it work with tiles, you simply scale down and back up to map to the tile grid.
I have to admit I'm sorely tempted to implement this in WebGL just to see what happens.
Probably a combination of 2D portals and raycasting would do the same trick, but more efficiently. I also think it would be easier to implement, and more flexible if you want moving obstacles. Most of this stuff is standard fare in even the simplest of 3D engines and well documented; the 2D case would actually be a simplification of those algorithms.
Nonetheless I think your solution is pretty clever and interesting, so kudos for that.