( Thread: https://news.ycombinator.com/item?id=7739599 )
My (admittedly n00bish and embarrassing) attempt at doing the same thing is here: https://github.com/pervycreeper/game1/blob/master/main.cpp
 in the sense that lines map to lines, as in most photography, Renaissance and later painting, and most computer graphics
I'd really enjoy seeing your write-up on your implementation. If there's going to be an article we link new people to in order to teach them ray-casting, I'd hope we could teach them the best techniques we can.
However, I would love to see your proposed solution demonstrated. Would you care to fork the raycaster and compute the results with an alternative method?
...appears to be describing exactly how this example works, right down to the diagram:
I'd be happy to update this article with an improved method once I understand it.
I'd be interested in further explanation about why you feel the underlying math could be improved, though.
A correct formula is on line 485 of my implementation linked above, found the old-fashioned way using basic geometry.
On my old laptop, the demo looked fine. But after switching over to a widescreen (and higher FPS), you can tell that it's slightly wrong up close, and noticeably wrong (jittery) in the distance.
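For readers without that source handy, here is a minimal sketch of the usual fisheye fix (an assumption on my part, not necessarily the line-485 formula referenced above): scale each ray's hit distance by the cosine of its angle from the view direction, so wall heights are based on perpendicular distance to the camera plane rather than straight-line distance. The jitter at range is often a separate issue, typically from stepping rays in fixed increments instead of computing exact grid intersections.

    // Sketch of the standard fisheye correction (assumed, not the commenter's
    // exact formula): size the wall slice by the perpendicular distance to the
    // camera plane, i.e. the hit distance scaled by cos(rayAngle - viewAngle).
    function wallColumnHeight(hitDistance, rayAngle, viewAngle, screenHeight) {
      var perpendicular = hitDistance * Math.cos(rayAngle - viewAngle);
      // The projected wall height is inversely proportional to that distance.
      return screenHeight / perpendicular;
    }

    // Example: a hit 5 units away on a ray 30 degrees off-center, 400px-tall screen.
    console.log(wallColumnHeight(5, Math.PI / 6, 0, 400)); // ~92.4 px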
I wonder what's causing the jitter...
To illustrate the question, see https://en.wikipedia.org/wiki/File:Panotools5618.jpg – why do we see the world as in the bottom image instead of in the top one? After all, our eye really is at different distances from different parts of a straight wall, so it sounds logical that we would see the fisheye effect described in the article. Is the rectilinearity of the image we see caused by the shape of the lens in our eye, or by post-processing in our brain?
Your eye doesn't see it as straight, your brain does. Your eye doesn't actually gather a single coherent image, in fact. It's constantly moving, bringing the higher-precision fovea at the center of your retina to bear on interesting parts of the scene.
Your brain then accumulates all of that data into a single coherent "image", which is what you consciously perceive. This is why, for example, you are unaware of the blind spot where your optic nerve passes through your retina.
A straight line in the world isn't projected onto a straight line on your eye: after all, the back of your eye is curved. (Even if it was, so what, it's not like your rods and cones are aligned in a nice neat grid.) It's just that you've learned that a line of stimulus curved just so across your visual field corresponds to a straight line in 3D space.
There were situations in traffic, while I was driving a car, where I simply did not see cyclists or pedestrians coming. At the time I was aware that one of my eyes was obstructed by the rear-view mirror on the windshield, but that did not explain why I did not see them coming with the other eye.
I wonder why people are not alerted about this when taking the test for their driver's license.
At any given moment we're only looking at a very small slice of that larger spherical panorama. Our brains are constantly constructing a coherent 3d model, with the help of various schematic constraints, such as expectations about straight lines being straight.
We perceive straight lines, but that's not what we see. The curvature, though, is usually so slight that it is very difficult to notice.
It does a lot of pre-processing, so to speak -- which is why we get such awesome illusions as the Poggendorff illusion, tricks like the Pulfrich effect, etc.
I don't know enough about the mechanics of how light enters and is interpreted by the eye, but I wouldn't be surprised at ALL if we "should" be seeing more like a fisheye lens but our brain is saying no, no, those lines are straight, based on the countless other stimuli it's receiving.
Not always. Next time the Sun and the Moon happen to be in view at the same time, look at the dividing line X on the Moon between the parts illuminated and in shadow. Then draw a straight line Y from the Moon to the Sun.
Common sense tells us that the two lines are perpendicular, but often this is not the case. I don't understand it completely myself, but it is something to do with us residing on a curved surface.
It's not the distance to the points in the scene that determines where they appear in a perspective projection; it's the angle. Any single point on the screen/retina/projective plane can actually correspond to any distance from the camera/eye (i.e., a ray of possible points).
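A tiny sketch to make that concrete (hypothetical names, just the standard perspective divide): every point along the same ray from the eye lands on the same screen coordinate, so only the direction matters.

    // Standard pinhole projection: screen x depends only on the direction x/z,
    // not on how far away the point is.
    function projectToScreen(x, z, focalLength) {
      return focalLength * (x / z);
    }

    var focal = 1.0;
    console.log(projectToScreen(1, 2, focal));   // 0.5
    console.log(projectToScreen(2, 4, focal));   // 0.5 (twice as far, same direction, same pixel)
    console.log(projectToScreen(10, 20, focal)); // 0.5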
Another way to visualize that calculation is to see it as adding a curved lens on top of the viewing pane.
I have no idea what I'm talking about, though.
The sensor on the back of a camera is a flat plane, which is why (with most lenses) you get an image where straight lines appear straight. The sensor on the back of your eye (the retina) is curved.
On the topic, the demoscene stuff is mind-blowing, too.
1. No fullscreen
2. I update the screen only if a key is pressed. (No rain; that would have to be rendered all the time.)
3. I don't use any costly canvas functions. I render everything myself in a pixelbuffer.
4. Typed arrays
I calculate as fast as the machine can, but with a window.setTimeout(Update, 0); in order to stay responsive (see the sketch below).
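A minimal sketch of what points 2-4 and that timing loop could look like together (assumed structure on my part, not the author's actual code): redraw only when input arrived, write the frame into a typed-array pixel buffer, and blit it with a single putImageData call.

    // Plain canvas 2d, no WebGL.
    var canvas = document.createElement('canvas');
    canvas.width = 320;
    canvas.height = 200;
    document.body.appendChild(canvas);

    var ctx = canvas.getContext('2d');
    var image = ctx.createImageData(canvas.width, canvas.height);
    var pixels = new Uint32Array(image.data.buffer); // one 32-bit value per pixel

    var dirty = true;
    document.addEventListener('keydown', function () { dirty = true; });

    function Update() {
      if (dirty) {
        // Fill the buffer directly; no costly per-pixel canvas calls.
        for (var i = 0; i < pixels.length; i++) {
          pixels[i] = 0xff000000 | (i & 0xff); // cheap test pattern
        }
        ctx.putImageData(image, 0, 0);
        dirty = false;
      }
      window.setTimeout(Update, 0); // keep the loop responsive
    }
    Update();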
Whoa! That was 2008; time sure passes by.
This made me laugh. I would have never thought of that way of doing it, but before I knew how it was implemented, I didn't even notice! That's a pretty good approximation!
I've seen some people suggest that voxels are like sprites for 3D programming (as far as sheer simplicity goes), but this strikes me as even more so. How does this compare to using actual 3D/voxels? Can you still have interesting physics, or do you miss out on a lot?
For this reason, you can generally build a raycaster faster and more simply than either a voxel or mesh engine, but there will be things you can't do. Rotation on the x axis is tricky, for example.
It's probably best to think of physics in a raycaster as the same sort of physics you could apply in a top-down 2D game.
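To make that concrete, here's a rough sketch of what top-down 2D physics looks like in a grid world (invented map format, not from the article): movement and collision are resolved purely in 2D, exactly as in an overhead tile game.

    // The world is just a 2D grid; 1 = wall, 0 = floor.
    var map = [
      [1, 1, 1, 1, 1],
      [1, 0, 0, 0, 1],
      [1, 0, 1, 0, 1],
      [1, 0, 0, 0, 1],
      [1, 1, 1, 1, 1],
    ];

    // Move the player only if the destination cell is open floor.
    function tryMove(player, dx, dy) {
      var nx = player.x + dx;
      var ny = player.y + dy;
      if (map[Math.floor(ny)][Math.floor(nx)] === 0) {
        player.x = nx;
        player.y = ny;
      }
    }

    var player = { x: 1.5, y: 1.5 };
    tryMove(player, 1.2, 0); // moves to (2.7, 1.5): open floor
    tryMove(player, 0, 1.0); // blocked: (2.7, 2.5) is inside the center wall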
Note that it is using canvas 2d rather than canvas 3d (aka WebGL).
But if you take ray casting to the extreme, you end up with ray tracing, which is kinda expensive. That's why we had such an effusive conversation a few months ago about hardware-accelerated ray tracing.
A sample ray traced scene from AlteredQualia (2nd largest contributor to the famous Three.js Library): http://alteredqualia.com/three/examples/raytracer_sandbox_re...
Just compare with this pure fragment shader demo: https://www.shadertoy.com/view/MsS3W3
DN3D uses sectors (convex polygons) to store a room's lines (or walls), and draws those lines using the player's FOV (field of view).
When a sector is connected to another sector, the shared wall is called a portal. Portals are used to consider only the sectors that are inside the player's FOV.
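A schematic sketch of that sector/portal traversal (invented data layout on my part, nothing like the actual Build engine code): start in the player's sector and recurse into neighbouring sectors only through portal walls that pass a view test.

    // Two convex sectors sharing one wall; that shared wall is a portal.
    var sectors = {
      hallway: {
        walls: [
          { from: [0, 0], to: [4, 0], portalTo: null },
          { from: [4, 0], to: [4, 4], portalTo: 'room' }, // shared edge = portal
          { from: [4, 4], to: [0, 4], portalTo: null },
          { from: [0, 4], to: [0, 0], portalTo: null },
        ],
      },
      room: {
        walls: [
          { from: [4, 0], to: [8, 0], portalTo: null },
          { from: [8, 0], to: [8, 4], portalTo: null },
          { from: [8, 4], to: [4, 4], portalTo: null },
          { from: [4, 4], to: [4, 0], portalTo: 'hallway' },
        ],
      },
    };

    // Walk the sector graph, but only through portals the view test accepts.
    function collectVisibleSectors(name, wallInView, visited) {
      visited = visited || new Set();
      if (visited.has(name)) return visited;
      visited.add(name);
      sectors[name].walls.forEach(function (wall) {
        if (wall.portalTo && wallInView(wall)) {
          collectVisibleSectors(wall.portalTo, wallInView, visited);
        }
      });
      return visited;
    }

    // With a trivially permissive FOV test, both sectors are reached via the portal.
    console.log(collectVisibleSectors('hallway', function () { return true; }));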
After a little reading, it sounds like DN3D uses a cousin of a raycaster, as it's still rendering independent columns. Instead of casting rays to find them, it's projecting their vertices. Neat!
Besides, we all start somewhere, mate.
Thanks for the great work, I can't wait to play around with this!