A first-person engine in 265 lines of JS (playfuljs.com)
528 points by hunterloftis on June 3, 2014 | hide | past | favorite | 97 comments



Heh, reminds me of kkrieger, a first-person shooter in 100 KB made as a scene demo.

https://www.scene.org/file.php?file=%2Fparties%2F2004%2Fbrea...


If you missed it a couple of weeks ago, this article about its creation was really interesting: http://fgiesen.wordpress.com/2012/04/08/metaprogramming-for-...

(Thread: https://news.ycombinator.com/item?id=7739599)


And here is Farbrausch's repo, which includes the .kkrieger code along with a lot of other interesting stuff.

https://github.com/farbrausch/fr_public


It's really interesting, but would someone mind explaining why they have to keep this code under some size limit?


Because they can. Because it's fun and challenging to do so. Because bragging rights.



Cool, thanks, looks like that's what I was missing.


Think haikus but in code.


Thank you, the title did not interest me back then. Sometimes I wish HN was more like MetaFilter.


That's great! I did miss it.


This demo unfortunately uses an incorrect perspective transformation. There is no reason to resort to trig if you represent the camera plane as a vector, step along it one pixel at a time, and let the wall height vary linearly with the distance to the camera plane (taking lines to lines). In addition to being correct[1], this has the added benefit of being faster if implemented well.

My (admittedly n00bish and embarrassing) attempt at doing the same thing is here: https://github.com/pervycreeper/game1/blob/master/main.cpp

[1] in the sense that lines map to lines, as in most photography, Renaissance and later painting, and most computer graphics
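
To make the approach concrete, here is a rough JS sketch of the camera-plane method (in the style of lodev's tutorial); castRay, drawColumn, and the perpDist field are made-up names for illustration, not the article's actual code:

    // Sketch of the camera-plane approach: no trig per column.
    function render(screenW, screenH, pos, dir, plane) {
      for (let x = 0; x < screenW; x++) {
        // Sweep the camera plane linearly: -1 at the left edge, +1 at the right.
        const cameraX = 2 * x / screenW - 1;
        const rayDirX = dir.x + plane.x * cameraX;
        const rayDirY = dir.y + plane.y * cameraX;

        // Grid walk (DDA) that reports the perpendicular distance to the
        // camera plane rather than the euclidean distance to the eye.
        const hit = castRay(pos, rayDirX, rayDirY);

        // Wall height varies linearly with 1 / perpendicular distance, so
        // straight wall edges stay straight and no cosine fix is needed.
        const columnHeight = Math.floor(screenH / hit.perpDist);
        drawColumn(x, columnHeight, hit);
      }
    }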


As an aside, the division by a square root being in the main loop of the algorithm is related to why http://en.wikipedia.org/wiki/Fast_inverse_square_root was developed.
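
For the curious, the trick translates to JS with typed-array bit views; this is purely illustrative (a modern engine's 1 / Math.sqrt(x) is usually faster):

    // JS transcription of the classic fast inverse square root (illustration only).
    const buf = new ArrayBuffer(4);
    const f32 = new Float32Array(buf);
    const u32 = new Uint32Array(buf);

    function fastInvSqrt(x) {
      const halfX = 0.5 * x;
      f32[0] = x;
      u32[0] = 0x5f3759df - (u32[0] >>> 1); // reinterpret bits, magic constant
      let y = f32[0];
      y = y * (1.5 - halfX * y * y);        // one Newton-Raphson refinement step
      return y;
    }

    // fastInvSqrt(4) is roughly 0.499, vs. the exact 1 / Math.sqrt(4) === 0.5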


I think what's most interesting about the original article is the write-up and not necessarily the implementation itself. I find reading and following more than ~500 LOC burdensome and difficult, so keeping it short is certainly an advantage for this style of article.

I'd really enjoy seeing your write-up on your implementation. If there's going to be an article we link new people to in order to teach them ray-casting, I'd hope we could teach them the best techniques we can.


I'm not sure I'm convinced, as multiplying by the cosine does in fact give the perpendicular distance from the camera plane to the intersection. This is, as far as I have seen, the standard implementation, e.g.: http://www.permadi.com/tutorial/raycast/rayc8.html#FINDING DISTANCE TO WALLS

However, I would love to see your proposed solution demonstrated. Would you care to fork the raycaster and compute the results with an alternative method?
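
For reference, the correction being defended here boils down to something like this (variable names are illustrative, not taken from the article's source):

    // Standard fisheye correction in a grid raycaster: convert the ray's
    // euclidean length into the perpendicular distance to the camera plane,
    // then size the wall column from that.
    function wallColumnHeight(rayLength, rayAngle, viewAngle, screenHeight) {
      const perpendicular = rayLength * Math.cos(rayAngle - viewAngle);
      return Math.floor(screenHeight / perpendicular);
    }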


My implementation above compiles, but I can send you binaries if you need them. Be forewarned that I neglected to implement atmosphere. Here's a screenshot: http://i.imgur.com/7LJocjS.jpg


I'm less interested in seeing binary output and more interested in understanding the principles behind what you're saying. The Wikipedia page you mentioned: http://en.wikipedia.org/wiki/3D_projection#Diagram

...appears to be describing exactly how this example works, right down to the diagram:

http://www.playfuljs.com/images/raycaster-distance.png

I'd be happy to update this article with an improved method once I understand it.


I'd love to see an updated version and a 'before and after' shot


Why is it incorrect? From looking at the demo, it doesn't seem incorrect at all. If it's incorrect, then the incorrectness doesn't affect the end product, which is all that matters. Lines seem to map to lines, so it seems mistaken to call it incorrect.

I'd be interested in further explanation about why you feel the underlying math could be improved, though.


He ought to be using what Wikipedia calls a "perspective projection", or, sufficiently in this case, a restriction thereof. Notice how the edges of walls do not appear straight on the screen, even though he mentioned correcting "a fisheye effect" (tellingly alluding to a popular tutorial on this topic at lodev.org).

A correct formula is on line 485 of my implementation linked above, found the old fashioned way using basic geometry.


Indeed, you're right: http://i.imgur.com/oQ7S3Jo.jpg

On my old laptop, the demo looked fine. But after switching over to a widescreen (and higher FPS) you can tell that it's slightly wrong up close, and noticeably wrong (jittery) in the distance.

I wonder what's causing the jitter...


It is very interesting to me that you need to multiply the distance-to-wall by the cosine of the angle to transform the image from fisheye to normal. It makes me wonder, why is it that our eye in real life sees straight lines as straight, the way this demo renders the image?

To illustrate the question, see https://en.wikipedia.org/wiki/File:Panotools5618.jpg – why do we see the world as in the bottom image instead of in the top one? After all, our eye really is at different distances from different parts of a straight wall, so it sounds logical that we would see the fisheye effect described in the article. Is the rectilinearity of the image we see caused by the shape of the lens in our eye, or by post-processing in our brain?


> It makes me wonder, why is it that our eye in real life sees straight lines as straight, the way this demo renders the image?

Your eye doesn't see it as straight, your brain does. Your eye doesn't actually gather a single coherent image, in fact. It's constantly moving[1], bringing the higher-precision fovea at the center of your retina to bear on interesting parts of the scene.

Your brain then accumulates all of that data into a single coherent "image", which is what you consciously perceive. This is why, for example, you are unaware of the blind spot where your optic nerve passes through your retina[2].

A straight line in the world isn't projected onto a straight line on your eye: after all, the back of your eye is curved. (Even if it was, so what, it's not like your rods and cones are aligned in a nice neat grid.) It's just that you've learned that a line of stimulus curved just so across your visual field corresponds to a straight line in 3D space.

[1]: http://en.wikipedia.org/wiki/Saccade [2]: http://en.wikipedia.org/wiki/Blind_spot_(vision)


Wow, the blind-spot test on that Wikipedia page just blew my mind.

There were situations in traffic when I was driving a car and simply did not see cyclists or pedestrians coming. At the time I was aware that one of my eyes was obstructed by the rear-view mirror on the windshield, but that did not explain why I did not see them coming with the other eye.

I wonder why people are not alerted to this when taking the test for their driver's license.


Blind spots are crazy. Your brain will continue some patterns and straight lines right through the blind spot, so it doesn't look blank. http://www.swarthmore.edu/SocSci/fdurgin1/publications/Blind...


I was trying to crack exactly the same problem a decade ago - I was very frustrated by "classical" perspective distortions (for example, when looking up and down at a very tall pole, you notice that in a 3D projection the projected width at a given height changes depending on the pitch angle, which is counter-intuitive because your eye doesn't do that). I searched for answers, read Denis Zorin's dissertation from Caltech, looked at various camera models, etc. In a conversation with a neurosurgeon, he mentioned that such "perspective distortions" are often perceived by patients after brain surgery, and as the brain heals the normal undistorted picture re-emerges. I am still curious whether there is a more realistic mathematical formula that would give us exactly the same perspective projection in 3D graphics as we have with our eyes. Does anyone know of any recent research on this subject?


This is the perspective projection that we have with our eyes: spherical projection. Details here: http://www.treeshark.com/treeblog/?p=301

At any given moment we're only looking at a very small slice of that larger spherical panorama. Our brains are constantly constructing a coherent 3d model, with the help of various schematic constraints, such as expectations about straight lines being straight.

We perceive straight lines, but that's not what we see. The curvature is usually so slight that it is very difficult to notice.


You need a curved display. A 240 degree sphere segment stretching all around you will allow you to display a fully realistic-looking perspective, as long as the viewer's head is exactly in the centre. Or a head-mounted display like the Oculus could accomplish the same thing.


I would hazard a guess that it's a result of the brain interpreting the electrical impulses before providing the "image" that we "see."

It does a lot of pre-processing, so to speak -- which is why we get such awesome illusions as the Poggendorff illusion [0], tricks like the Pulfrich effect [1], etc.

I don't know enough about the mechanics of how light enters and is interpreted by the eye, but I wouldn't be surprised at ALL if we "should" be seeing more like a fisheye lens but our brain is saying no, no, those lines are straight, based on the countless other stimuli it's receiving.

[0] http://dragon.uml.edu/psych/poggendo.html

[1] http://en.wikipedia.org/wiki/Pulfrich_effect


The brain adjusts the images after learning about the physical world using other senses. If you start wearing glasses that turn everything upside down, the brain adapts within days.

https://en.wikipedia.org/wiki/Perceptual_adaptation#Experime...


I've always figured the latter explanation - that the picture from our eyes is distorted like that, but the brain corrects the image (it also doesn't hurt that we have two eyes, whereas cameras typically don't; I wonder if multi-lens cameras are also subject to that fisheye effect?).


I think it's simpler than some of the explanations here. You're not rendering onto the curved eye, you're rendering onto a flat monitor. If you imagine the game view as a window, a straight line outside always projects toward your eye onto a straight line on the window.


Similarly, I used to wonder why an image constructed with 3-point perspective [1] looks perfectly normal, while one with 4-point perspective [2] does not.

[1] http://www.craftsy.com/blog/wp-content/uploads/2013/07/cropp... [2] http://www.termespheres.com/images/perspective/fourpointpers...


> It makes me wonder, why is it that our eye in real life sees straight lines as straight, the way this demo renders the image?

Not always. Next time the Sun and the Moon happen to be in view at the same time, look at the dividing line X on the Moon between the parts illuminated and in shadow. Then draw a straight line Y from the Moon to the Sun.

Common sense tells us that the two lines are perpendicular, but often this is not the case. I don't understand it completely myself, but it is something to do with us residing on a curved surface.


> After all, our eye really is at different distances from different parts of a straight wall, so it sounds logical that we would see the fisheye effect describe in the article.

It's not the distance to the points in the scene that determine where they appear in a perspective projection, it's the angle. For any single point on the screen/retina/projective plane, it can actually correspond to any distance from the camera/eye (i.e., a ray of possible points).
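
A tiny illustration of that (not from the article): in a planar perspective projection, screen position comes from dividing by depth, so every point along a ray lands on the same pixel.

    // Planar perspective projection: divide by depth, not by distance to the eye.
    function project(x, y, z, focalLength) {
      return { screenX: focalLength * x / z, screenY: focalLength * y / z };
    }

    // Scaling a point away from the eye along its ray changes nothing:
    // project(1, 2, 4, f) and project(2, 4, 8, f) give identical results.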


A dumb question/explanation, but wouldn't that be because the screen/viewing pane is flat, while your eye is a ball?

Another way to visualize that calculation is to see it as adding a curved lens on top of the viewing pane.

I have no idea what I'm talking about, though.


Sometimes after getting new glasses, I would see the world this way until my eyes adjusted.


It's gotta be the shape of our eye, for the same reason the effect works in a mechanical DSLR camera, where light is only refracted through the lens.


> for the same reason the effect works in a mechanical DSLR camera

The sensor on the back of a camera is a flat plane, which is why (with most lenses) you get an image where straight lines appear straight. The sensor on the back of your eye (the retina) is curved.


Straight lines are straight in a photograph because of precision-manufactured optics shaped specifically to achieve that effect and to reduce "chromatic aberrations" and "barrel distortion".


Would it not be because the retina is curved?


As someone with no experience in computer graphics it's always insane to see demos like this. Especially using just around 250 lines of javascript. Really impressive.

Also on the topic, the demo scene stuff is mind blowing, too. [0]

[0]: http://awards.scene.org/awards.php


The nice thing about this implementation is that the code is very clear and easy to follow. You could easily make it much shorter if that were the goal.


Exactly. I think that's what's wonderful about this example: it's not just 250 lines of heavily compressed code, it's readable and clean.


Me three. It covers the 're-read in 6 months' time' criterion. I must confess I do change from reading to scanning as soon as the heavy math kicks in.


There's a niche for a website that does a bunch of sub-500 LOC javascript examples of tiny bits of computer graphics, to give a very brief introduction and demonstration of the math, and which then chains them together for some other software.



Scripting languages are good for achieving low line count results.


My raycast engine in JavaScript needs a few more lines and works a little bit differently, but it also gives impressive results, I think :).

http://simulationcorner.net/index.php?page=comanche


Love that you referenced Comanche ... my favorite helicopter game ever. Nice work on the JS side also!


The graphics were impressive, but the gameplay never grabbed me. Comanche 3 was much better.


I really like how your code uses much less CPU than that of the article. What's the trick?


Don't know. But here are some possibilities:

1. No fullscreen

2. I update the screen only if a key is pressed. (No rain, which has to be rendered all the time.)

3. I don't use any costly canvas functions. I render everything myself in a pixelbuffer.

4. Typed arrays


It's #2. Redrawing at 60 Hz (or as close as the machine can come) is what hits the CPU so hard.


No, I don't use requestAnimationFrame right now. I think when I wrote it, it was not supported by IE. But can't remember.

I calculate as fast as the machine can, but via window.setTimeout(Update, 0); in order to stay responsive.
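
Roughly this pattern, as a sketch (renderFrame is a stand-in for the actual pixel-buffer drawing):

    // Redraw only when input changes; poll with setTimeout(Update, 0) so the
    // page stays responsive without requestAnimationFrame.
    let dirty = true;

    document.addEventListener('keydown', function () { dirty = true; });

    function Update() {
      if (dirty) {
        renderFrame(); // repaint the pixel buffer only after a key press
        dirty = false;
      }
      window.setTimeout(Update, 0);
    }
    Update();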


Very cool. Comanche was an excellent game too.


Love it! Thanks, just linked from the article.


Hey thanks, that was fast.


Cool. It reminded me of similar work published by Opera some years ago:

http://dev.opera.com/articles/3d-games-with-canvas-and-rayca...

Whoa, it was 2008! Time sure does pass by.


> Rain is simulated with a bunch of very short walls in random places.

This made me laugh. I would have never thought of that way of doing it, but before I knew how it was implemented, I didn't even notice! That's a pretty good approximation!


Great work. I had a huge amount of fun converting your last article (a terrain renderer in a tiny amount of JS) to WebGL, and that was only fun because of your clean, easy-to-understand code. Thanks for sharing.

--Callum


I'm really surprised that it's 265 and not 256... I mean, there's got to be a good reason not to make it a round number, right? I'm mind blown anyways...


Was shooting for 256 but went over to implement touch events ;)


Well I guess that's the downfall of the touch screen era


The downfall is that you can add a complete secondary interface in 9 lines of code?


That's really impressive.

I've seen some people suggest that voxels are like sprites for 3D programming (as far as sheer simplicity goes), but this strikes me as even more so. How does this compare to using actual 3D/voxels? Can you still have interesting physics, or do you miss out on a lot?


Thanks! As with most software decisions, raycasters involve a tradeoff. They are related to voxels in that they impose constraints on what can be rendered. Voxels render grids in 3 dimensions while raycasters (typically) render them in 2.

For this reason, you can generally build a raycaster faster and simpler than either voxels or meshes, but there will be things you can't do. Rotation on the x axis is tricky, for example.

It's probably best to think of physics in a raycaster as the same sort of physics you could apply in a top-down 2D game.
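
For example, collision against walls is just a 2D grid lookup; here is a quick sketch, where map.get(x, y) is a hypothetical helper returning 0 for empty cells:

    // Top-down 2D collision: test each axis independently so the player
    // slides along walls instead of sticking to them.
    function tryMove(player, dx, dy, map) {
      if (map.get(Math.floor(player.x + dx), Math.floor(player.y)) === 0) player.x += dx;
      if (map.get(Math.floor(player.x), Math.floor(player.y + dy)) === 0) player.y += dy;
    }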


It runs really slow for me, like 1 FPS. Firefox 29 on Linux.


Same here (Iceweasel 30). It works fine on chromium (same PC) and Firefox 30 on Android.


Interesting. Does FF 29 on Linux perhaps have purely software graphics? It runs much faster than 1 fps on iPhones and most Androids, 60 fps on modern desktops in Safari/Chrome, and 30 fps on FF / OS X.


I get terrible FPS with Firefox on Windows as well (Intel Haswell graphics).


What graphics card/driver combination do you have? It runs decently for me with Firefox 29 on Linux, intel graphics (Sandy Bridge).


It is not using the GPU to do the drawing: https://github.com/hunterloftis/playfuljs/blob/master/conten...

Note that it is using canvas 2d rather than canvas 3d (aka WebGL[1]).

But if you take ray casting to the extreme, you end up with ray tracing[2], which is kinda expensive, which is why we had such an effusive conversation a few months ago about hardware-accelerated ray tracing[3].

A sample ray traced scene from AlteredQualia[4] (2nd largest[5] contributor to the famous Three.js Library): http://alteredqualia.com/three/examples/raytracer_sandbox_re...

[1] https://developer.mozilla.org/en-US/docs/Web/WebGL/Getting_s...

[2] http://en.wikipedia.org/wiki/Ray_tracing_%28graphics%29

[3] https://news.ycombinator.com/item?id=7425303

[4] https://github.com/alteredq

[5] https://github.com/mrdocob/three.js/graphs/contributors


2D canvas should be hardware-accelerated too on most browsers.


Yes, however the raycasting is drawing pixel by pixel. That part is not hardware accelerated, and that is the expensive part. That is the part that shaders speed up a lot, by delegating much of the work to the GPU.

Just compare with this pure fragment shader demo: https://www.shadertoy.com/view/MsS3W3
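
For context, the CPU-side per-pixel path with a 2D canvas typically looks like this (illustrative, assuming an existing canvas element; not the linked code):

    // Every RGBA byte is written by JavaScript; only the final putImageData
    // upload can benefit from hardware acceleration.
    const ctx = canvas.getContext('2d');
    const image = ctx.createImageData(canvas.width, canvas.height);
    const pixels = image.data; // flat Uint8ClampedArray of RGBA values

    function drawFrame() {
      // ...raycasting loop writes pixels[i], pixels[i + 1], etc. here...
      ctx.putImageData(image, 0, 0); // one upload of the whole frame
    }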


That page froze my browser (Firefox 29) and started leaking oodles of memory until I killed it (along with a bunch of other shit I was working on), so I guess I'll count that as a point in favor of software raycasting.


This is so great. Very straightforward explanation of raycasting. Looking forward to playing with it!



Raycasting brings back memories! I remember poring through raycasting techniques to make my Wolf and Doom clones as a teenager. When I was almost done I demoed this for a group of friends at our school's computer club, and boy were they impressed, up until the point the clipping algorithm failed and objects stopped disappearing when they fell out of the player's field of view. Was teased for months after that.


Nice demo! But one note: I don't know about Daggerfall, but Duke Nukem 3D does NOT use raycasting to draw walls.

DN3D uses sectors (convex polygons) to store a room's lines (or walls), and draws those lines using the player's FOV (field of view).

When a sector is connected to another sector, the connection is called a portal. Portals are used to sort only the sectors that are inside the player's FOV.


Didn't games like Duke Nukem 3D, post-Doom, and the Wolf games use what was called a scanline algorithm? Drawing the lines in the player's FOV as you just stated, and then using a clipping algorithm to keep unwanted information out. I think raycasting would be a type of scanline algorithm, but the technique is less primitive since you're not using a proper 3D engine. It's been a long time since I programmed games like this, so I'm now a bit fuzzy on the specifics.


Thanks for letting me know! It's referenced as a raycaster in several places, and of course the output looks very much like a raycaster's, so I checked up on this.

After a little reading, it sounds like DN3D uses a cousin of a raycaster, as it's still rendering independent columns. Instead of casting rays to find them, it's projecting their vertices. Neat!


I feel very pathetic from this. You could have given me 265K lines and I wouldn't have figured it out.


Pretty sure the author followed this tutorial. http://lodev.org/cgtutor/raycasting.html


Don't feel pathetic! I certainly didn't invent anything here, I just distilled an old technique as much as I could in JS.

Besides, we all start somewhere mate.


I remember being amazed how simple raycasting was when I wrote a similar (though much simpler) engine in Java for a high school project. The "engine" itself was like ~200 lines of code in just 2 or 3 functions. Raycasting is a really clever technology. Cool demo!


I know that the arrow keys have actual arrows on them, but so many modern games have trained us to use WASD to navigate, that if you're going to insist upon the arrow keys for navigation, you should probably mention this somewhere before the link to the demo.


done!



This is incredible. I remember doing something in C with OpenGL, with triple the line count and not nearly as impressive results, during an undergraduate computer graphics class. This would have been a much more interesting lab.

Thanks for the great work, I can't wait to play around with this!


What's the difference between ray casting and ray tracing?



Why do you need to use Uint8Array and not just a normal array?


I (mistakenly) thought it would access faster. In looking for benchmarks to link you to, I found out that's a false assumption. So - no reason to at all :)


And it even manages to include Google Analytics!


Wow.


Awesome!


<rant about no. of lines being a completely useless metric> <jquery in 1 line>



