Hacker News new | past | comments | ask | show | jobs | submit login
Build your own old-school 3D shooter in a weekend (github.com)
376 points by lobo42 6 months ago | hide | past | web | favorite | 100 comments



I think leaving the "old-school" out of the title doesn't do this justice.

My first thought was: with modern engines, or even raw OpenGL, that isn't that hard... I did it in a weekend for my "Advanced Computer Graphics" elective in college using OpenGL, and that was 2008. But this is kind of cool... it's more of a tiny ray tracer and... well, to borrow the title... an old-school arcade game.


> leaving the ‘old-school’ out

He added it, but now it unfortunately looks like it says “build your own school shooter in a weekend...”


> in college using OpenGL and that was 2008

Wolf3D was 1992, 16 years before. SGI released OpenGL a few months later. IIRC, the first gfx cards were thousands of dollars. The first successful consumer gfx was 3DFX Voodoo2 in 1996. We take so much for granted nowadays.


It was just the 3dfx Voodoo; the 2 came later, in 1998 :)


I'm not sure what your point is here. My point was that it was easy in 2008, and I'm sure it is as easy or easier now, so a first-person game isn't that impressive. I have massive respect for the people who do it from scratch, especially the ones who were pioneers in the '90s and earlier.

Programming a game from scratch, even a simple-looking one, without the aid of modern APIs is certainly much more difficult. Which is why I was saying this posting was amiss in leaving out of the title the fact that this GitHub project uses older techniques.


One big advantage this has over a simple OpenGL engine is that it doesn't obscure how the rendering occurs.


Ok, let's put the old school back in there.


Title was edited later to include "old school" and my original comment now makes no sense.


> it's more of a tiny ray tracer

It's a ray caster, where the rays are sent out from the camera to intersect the map. With ray tracers, the rays are sent out from the light source, IIRC.

Your point on OpenGL is valid, but that just removes all the learning from it. OpenGL does so much of the grunt work for you. This kind of old-school game engine is a great learning experience.


> It's a ray caster, where the rays are sent out from the camera to intersect the map. With ray tracers, the rays are sent out from the light source, IIRC.

Not correct. I see this pedantry often, but it's factually wrong. If you look at the Wikipedia articles for both ray casting [0] and ray tracing [1], they BOTH describe the methods as sending rays from the camera/eye.

From what I can tell, the difference between ray casting and ray tracing is that in ray casting, only the primary ray is traced. The ray does not get reflected, refracted, or even traced to light sources for checking shadows. At most, surface normals are dealt with for lighting and texture mapping is applied.

[0] https://en.wikipedia.org/wiki/Ray_casting

[1] https://en.wikipedia.org/wiki/Ray_tracing_(graphics)


It's all a mess.

However, typically in video game rendering, "ray casting" refers to the specific technique where you trace rays against a 2D scene only, one per screen column, and then draw the entire column of pixels based on that result, as is done in Wolfenstein.

Ray tracing really can refer to anything that intersects rays against a scene, but it's typically used for the set of techniques that at least involve sending out primary rays from the camera. With "ray-tracing" GPUs, though, we are seeing it used more specifically for the secondary rays (reflections, shadows) on top of a rasterized primary pass.

"Path tracing" is generally the set of techniques that end up with a full path from the camera to a light.

Now, there isn't really much in rendering that sends out rays only from the light source; it's still just computationally impractical.

However, there is "bidirectional path tracing", which generally sends out rays from both the camera and the light source, then tries to join them in the middle. It's a bit more complicated but generally converges more quickly than other Monte Carlo renderers.

Anyways, as I said, it's a big mess, partially because it's a big continuum, and there are generally renderers that exhibit properties from multiple of these categories.


> With ray tracers, the rays are sent out from the light source, IIRC.

Are there any implementations which send rays from the light source (a.k.a. forward ray tracing)? This is astoundingly inefficient, as most rays will not intersect the camera.

I've never seen one, other than in brief academic discussions. What you can use forward ray tracing for is to compute shadows.


I had considered making a ray tracer that worked that way, but with the slight difference that rays didn't have to hit the camera, but would just have to hit a point within line of sight of the camera. Obviously there would be massive gaps between pixels, but I would fill it in with a Voronoi diagram [0], or perhaps shaded with Delaunay Triangulation [1]. This renderer would be nothing more than a toy or proof-of-concept, and not intended for real usage.

The classic FOSS ray tracer, POV-Ray, can actually do this. You can define a light and define an object, and it will shoot rays from the light to the object and trace each ray through refraction and reflection. With this, you can simulate the way ripples in a pool concentrate light on the floor of the pool [2] or bending and refracting [3], without manually calculating it and adding extra light sources.

[0] https://en.wikipedia.org/wiki/Voronoi_diagram

[1] https://en.wikipedia.org/wiki/Delaunay_triangulation

[2] http://www.antoniosiber.org/bruno_pauns_caustic_en.html

[3] http://www.povray.org/documentation/view/3.6.2/424/


Yeah, there are multiple techniques that do this sort of thing. Photon Mapping and bidirectional path tracing are on ends of the spectrum


There are two-pass approaches that do this, such as photon mapping. In the first stage, light is emitted from light sources and allowed to scatter throughout the scene, and each "hit" of a photon on a medium is stored in a large data structure. Then, in the second pass, rays are traced out from the camera, and for each medium the ray intersects, nearby "hits" are used to estimate the light coming from that spot. This produces effects like caustics efficiently.


It's not really ray tracing, but some engines do something analogous to render shadows. Rendering the scene from the point of view of the main light source, and just recording the depth value at each pixel, yields a shadow map. Now for the main render, for each pixel rendered, re-project its coordinates into the light source's camera and compare the distance to the shadow map. If it's further, it's in shadow.


Sure, you could compute shadows with forward ray-tracing, but it's still generally more efficient to compute shadows with rays originating from the camera end of the path.

Of course, this all starts to get a little more complicated when trying to compute global illumination.


Oh apparently ray casting is considered a form of ray tracing: "Ray casting is the most basic of many computer graphics rendering algorithms that use the geometric algorithm of ray tracing." [1]

1: https://en.wikipedia.org/wiki/Ray_casting


What you mean is forward ray tracing. But "ray tracing" usually refers to the variant sending rays from the camera as forward ray tracing is almost never used.

Ray casting on the other hand is basically ray tracing without any reflections or shadows, in other words each ray stops with the first collision.


Ray-tracing does not necessarily mean rays are cast from the light source.


Thanks everyone for the corrections, obviously my understanding was flawed there!


The readme mentions it but I think it's worth pointing out that this is a very particular (and quite dated) method for sort-of-3D rendering. It's the same type of engine used in the original Wolfenstein.

It's definitely a lot of fun to implement (and I recommend trying if you've never done it before) but it's very different and much more limited than what we expect from a 3D engine nowadays.


> the original Wolfenstein.

Well, the original Wolfenstein 3D, the third game in the series.

https://en.wikipedia.org/wiki/Castle_Wolfenstein

https://en.wikipedia.org/wiki/Beyond_Castle_Wolfenstein


Brilliant! Much of my early teens was "wasted" playing Castle Wolfenstein on a green screen Apple II. Great memories.


Kuma tzee here !


Uh, I had never heard about these games before. I always thought Id created the franchise. Thank you for that bit of videogame history.


Castle Wolfenstein was a noteworthy and revolutionary game in its own right: https://www.filfre.net/2012/04/castle-wolfenstein/


The first two games were top-down 2D stealth games. The third was a pseudo-3D FPS. Most people picked up the series at the third installment, and the third installment is what made it into history books as the First Popular FPS, ignoring earlier FPSes which were Not Popular.


To be clear, Castle Wolfenstein and its sequel were written by Silas Warner and published by Muse Software, which went out of business in 1987 after going through Chapter 7 bankruptcy. Because the trademark had lapsed, Id was able to use the name for their game. But it wasn't a continuation of the original games; it was more like a reboot by another team and publisher.


Random fun fact: the only reason they could use the name is that the original creator let the trademark expire, thinking it had no commercial future.


I read "Id" as "I'd" and thought this was the most bizarre sarcastic comment. Time for more coffee.


True, but what's great about it is that it is such a simple method to achieve something that seems 3D. It's very easy to explain, the math is straight-forward, and it hardly takes any code, making it a great starting point for people interested in this sort of thing. Its simplicity makes it easy to extend the functionality too. For instance, you can add Y-shearing pretty trivially and get a limited range of motion on the view's vertical axis, or add floor and ceiling casting to get textures there, a little more manipulation can add jumping and crouching, etc.


People are saying this is a "quite dated" method for creating a 3D game. Does anyone have a tutorial on creating a 3D game in one weekend (or even 1 day?) using modern tooling? It'd be cool to make a token 3D game with the kids. I'm curious myself.


This kind of engine is great as a starting place for programmers, I think, but Unity is probably what you want for the kind of thing you're doing. There's also Godot if you want to stay in Open Source land, but it probably isn't as good and certainly doesn't have the same level of coverage in tutorials and documentation.

Or there's the time-tested path to gamedev: modding existing games.


Disclaimer: I've never used Unity but have played around with Godot quite a bit.

Godot's 3D capabilities, while almost certainly not as full-featured as Unity's, are plenty for making an old-school FPS, and a lot more.

The Godot online docs are quite good: well organized, clean layout, easy to find stuff. But for tutorials and how-tos, I've had more success with YouTube screencasts than with the Godot docs.


> Or there's the time-tested path to gamedev: modding existing games.

Definitely for kids, modding an existing game is the right way to go. It will be way more fun, and it'll be a lot more motivating to share things with friends than to do stuff because dad said so.

I think you're also right that Unity is better but it's still pretty challenging. Programming is really crazy hard, especially for young kids.

That's despite great things like Code.org and Scratch. Those activities are substantiated by good observational (qualitative) evidence measuring creativity, not programming ability.

The only hard quantitative data people have is engagement time: kids spend more time in apps than in equivalent regular instruction when learning idiosyncratic turtle graphics. Pretty unsurprising, in my opinion.

As far as I know, there is no evidence that these pre-high school programming experiences retain acceptably-performing students in high school programming like AP Computer Science better than forcing them to take the class. Scratch's educational mission, which I am most familiar with, has for now not made such test-based outcomes an investigative priority.

Honestly an old-school 3D shooter is just about the worst thing you can coerce a disinterested kid into doing. If they're pre-puberty and still care what dad thinks, this sort of activity is exactly what kids drop when they get older. If they're in high school and don't care what dad thinks: it's not a multiplayer game, it doesn't inhabit some social space/it isn't a "third place," it's not taking advantage of the greatest power of the classroom in high school, peer pressure.

This is based on my experience making a game that kids 12+ regularly mod, being the only kid in high school that programmed regularly before 18, and interacting with (but not conducting research with) the wonderful people at Scratch, who really do know how to make algorithmic thinking and creativity work for kids 12 and younger.

Maybe at Stuyvesant parents are making the kids learn C++, but you gotta understand that coercion is the placebo, not the treatment. If you're going to coerce your kids into doing something incredibly boring, it might as well be whatever narrow testing regime is hot these days and not what we fantasize vicariously they should be interested in doing.


Serious question: what games would you fine folks recommend to get my kids started with modding? The oldest loves Minecraft, Zelda BotW, and Smash Bros on the Switch.

I’ve built a Raspberry Pi RetroPie NES clone, and they’re all over Super Bomberman and old-school Pokémon.

The last game I modded was OpenTTD before my kids were born.


Minecraft is the easiest to mod of those by far.


Speaking from experience, StarCraft 1 and Warcraft 3 ship with easy-to-use world editors.


I wanted to like Unity, but not having low-level access to the rendering and physics internals is intolerable.


1. What aspect of the rendering did you feel you didn't have control over with your own vertex, fragment, and compute shaders?

2. You can hook into most if not all of the physics internals by overriding Update() and FixedUpdate() functions.

Unity is definitely less flexible and powerful than rolling an AAA 3D engine and editor. But that takes years, and 95% of the time there's a way to solve the problem in Unity.


Yes, but given what we're talking about, a basic old-school first person shooter, you could do it in far fewer than "years" with C++, SDL2 and OpenGL.


1. The primary issue is that you can't properly optimize rendering, and if what you want to do doesn't fit neatly into Unity's architecture, it's going to be dog slow and you're going to spend forever writing hacks and workarounds to get it to work.

You don't have to roll your own everything for a custom 3D engine; there are a lot of libraries out there. The idea that everyone should use tooling that reminds me of Visual Basic to make games is weird to me.

2. I don't know about that. My experience is that you don't have easy access to the actual physics code.


This is the kind of misguided mindset that effectively discourages people from making games. It's as if someone urged people to look into lumber and papermaking en route to making their own card game.

This fellow's mindset is a common one, and likely he's passing it on from his own experience... Telling that engineering culture discourages the shortest routes to creativity.


Engine development is one of the more interesting aspects of game dev to me and many other programmers; having so much focus on clunky, monolithic, closed-source platforms like Unity is lame. Just providing a counterpoint to the Unity hype; one rando comment is not gonna turn the tide on its popularity, lol.


FWIW, the latest versions of Unity are moving in the direction of exposing more of the render pipeline. Whether or not you agree with doing it through C# is a different question, but they are pretty invested in the language. From my perspective, this approach is a decent compromise for someone who wants to build a render pipeline (basically, define how objects are drawn to the screen) on top of an abstraction of the different graphics APIs for all the platforms they support, while still keeping all the other functionality Unity provides for gameplay and tooling.

https://blogs.unity3d.com/2018/01/31/srp-overview/


Check out this Minecraft clone: https://github.com/fogleman/Craft

It is not a tutorial, but the code is very clean and readable, and it uses modern techniques (compared to Wolf3D/Doom-style raycasters). The codebase could be extended easily.


There is no such thing as a dated method for creating a 3d game. There are many kinds of games and many pros and cons for creating them different ways. It is popular these days to tell people to use Unity3D, and it is an amazing tool but there are also downsides to using it vs rolling your own thing from scratch too.

Royalties, the risk of your game ending up unplayable in 10 years, running into limitations you can't fix because you don't have the source, performance ceilings you can't change, bugs you can't fix.

On the other hand if you do things from scratch you can spend years and years and years.


Use a modern engine like Unreal or Unity, watch a few videos and I'm sure you can get something basic running fairly quickly.


Well, yeah, you can get an FPS game made in Unity in an hour or two. But it won't really tell you much about what is happening in the background.


I believe the sprites used in the video are taken directly from Andre LaMothe's original Tricks of the Game Programming Gurus. I still have a copy.


That was an awesome book


Only programming book I ever read cover-to-cover.


Oh yes! Good times! I still have my copy somewhere.


I need to dig up the code, but over the course of a week on the train rides to and from work, I built a very simple ray-caster in Pico-8 on the Pocket Chip. It was especially fun since I had figured out the math for it on my own (which, in fairness, isn't terribly complex with even a high-school understanding of Trigonometry).

I had enough fun with it that I think it might become my new "trying out a new programming language" test, since it was easy enough to get done quickly, but difficult enough to where I had to actually learn the language.


I'd kill for a dozen comments, esp. if a goal is to educate students. 498 lines would still be impressive.


It's a neat trick of projection, and really easy to hack on and extend yourself once you know the trick. Using your map of the space, cast a ray from the camera position through every column of the display rectangle; using the distance the ray travelled before intersecting the wall, you can calculate the apparent height of the wall seen in that column of the display; so, in that column, draw the wall that high. You can get fancier with textures for the walls, varying the floor height, walls that can appear and disappear (i.e. "doors"), and so on.

Better than my 10-second explanation: https://lodev.org/cgtutor/raycasting.html


Note that the site has several more entries with extensions to that base raycaster, which you can find in the index.

Interesting note about this technique: you get perspective correct texturing for free. "Real" 3D rendering usually had to cope with affine texturing due to performance costs until dedicated hardware for it came along.


Like this? https://github.com/ssloy/tinyraycaster/wiki (the table of contents is in the upper right)


Nice!


I see the link to the wiki now in the README.md. The first time, I think the doom graphics distracted me, along with the "cat | wc" metrics. Thanks for the pointer.


Ok, now I see why the headline was changed before. Having the words "school" and "shooter" both in a headline really changes the mood.


This reaching for controversy is absurd.


Here's an HTML/JavaScript version of something similar:

https://htmlgames.github.io/htmlgames/differences/gallery/in...

Type something like "lego" into the search query and it renders pictures of Lego in the maze as well.


Wow, this reminds me of writing a very similar engine in the mid 90s after learning about ray-casting (and playing Doom of course). There was a very good book which covered the whole process called "Tricks of the Game Programming Gurus", which I would have killed for at the time!


I did most of one of these on an FPGA in Verilog for fun: https://github.com/dormando/verilog-raycaster - it took a lot longer than a weekend :)


This is a great ray casting tutorial to get started:

https://permadi.com/1996/05/ray-casting-tutorial-table-of-co...


I started something similar 2 years ago and I'm now actively pushing for a release date 1-2 years from now.

https://github.com/glouw/andvaranaut


Here is another Wolf3D-like ray casting tutorial: https://github.com/permadi-com/ray-cast/tree/master/

It has real legacy: it was created in 1996! https://permadi.com/1996/05/ray-casting-tutorial-table-of-co...


I remember writing this in C for my GCSE computing project. It didn't take me just a weekend back then, though. This is a Wolf3D-style game. It would be interesting to illustrate the Doom engine that followed, with its diagonal walls and lighting effects.

I looked at this after doing my GCSE project, found it too hard, and skipped to a real 3D engine like Quake because it was conceptually easier.


Maybe it's old hat for everyone else, but this is my first time looking at Gitpod. (Link at the bottom of the project Readme.)


There was a great 3D framework called Cocos3D that had a bunch of PowerVR tools for working with 3D mobile development.

They had an example app that was an FPS made from scratch. The HUD was there, enemies, weapons, environment.

There might be a Metal FPS base game available on GitHub, I can't remember what it was called.


I've been wanting to write a raycaster, as I've never done one before. I might first try it in a "slow" language like Ruby that I'm better at, and then try it in C. I kinda suck at C these days, but it would be a good transition back into it.


I'm way out of practice with C++. (It's been since, like, 2006.) The framebuffer clear function appears to allocate a new vector.

Does std::vector automatically handle the memory releases? This is called every frame, so I wanted to check here to see if I'm crazy or not.


> Does std::vector automatically handle the memory releases?

Yes. What is going on here is that "img" is being assigned. The "old vector" will have its destructor called, which will call the destructor of all of its elements by default.

-------

I assume you're talking about this code: https://github.com/ssloy/tinyraycaster/blob/master/framebuff...

    void FrameBuffer::clear(const uint32_t color) {
        img = std::vector<uint32_t>(w*h, color);
    }
Which seems inefficient to me. But it is correct code, even if it is inefficient. There's no memory leak here, but the code would probably be more efficient if it reused the existing storage. A memset works, but note that memset fills individual bytes, not uint32_t values, so it only produces the right pattern when all four bytes of the color are equal; in practice that makes it good only for clearing to black or white:

    memset(&img[0], 0, w*h*sizeof(uint32_t)); // clear to black

It doesn't rely on destructor / constructor / assignment C++ magic, and it's more efficient to boot. Vectors are guaranteed to be contiguous in memory, which is why they're compatible with memset at all.

EDIT: Hmmm... with the memset version, img is never sized. You'd also have to do "img = std::vector<uint32_t>(w*h, color);" (or a resize) somewhere first.

----------

The rectangle is also inefficient.

    for (size_t i=0; i<rect_w; i++) {
        for (size_t j=0; j<rect_h; j++) {
If i and j were reversed, the draw rectangle function would be more efficient. As it is, "j" iterates in the wrong direction with regards to cache lines...

    // This would be faster!
    for (size_t j=0; j<rect_h; j++) {
        for (size_t i=0; i<rect_w; i++) {
Overall though, efficiency isn't a goal at all. It's designed to be a quickie project, to get things done as soon as possible. Thinking about all of these details slows down development, so there's something to be said for "just getting it done".


Instead of doing a memset, I'd recommend using std::fill (which will likely call memset or another fast implementation behind the scenes): it makes it clear what you are doing, and preempts questions of "is this legal to do" (which it is, but only for POD data types).


I'm far more inclined toward

  img.assign(w*h, color);
myself.


Normally I choose the methods in <algorithm> because they take an ExecutionPolicy parameter, allowing me to make this code parallel if I wish, but both are valid.


Note for readers looking to hack on a raycaster: If you were to clear the first and second half of the buffer to different colors, you've got a ceiling and floor.


>As it is, "j" iterates in the wrong direction with regards to cache lines

I always wondered "what if" someone had released a Doom-like FPS in 1993 that required you to flip the monitor on its side ;-). Both ray casting and Mode X work best in the vertical plane, while video memory access is only efficient when done linearly. Some quick back-of-a-napkin calculations lead me to believe you could deliver 400x320 at the speed of Doom's 320x200.


Yes, std::vector owns the underlying contiguous array of memory and will free it when the vector's destructor runs.

" img = std::vector<uint32_t>(w*h, color); "

What is more interesting, if you have been away from C++, is that the assignment is not a copy! The vector on the right-hand side is a temporary that dies at the end of the assignment, so img just takes its guts.


Move semantics only entered the language with C++11's rvalue references; under C++98, that same assignment would have been a copy.

Back then you could express the same intent explicitly with std::swap (and std::auto_ptr for single objects). Today it is far easier with rvalues, std::move, and std::unique_ptr.

But yeah, with any modern compiler, assigning a temporary to a std::vector is a move and therefore efficient.


That title does not sound right. Could "old-school" be removed? I feel like "school" and "shooter" are somewhat misleading and potentially alarming together. Could it be something like "build your own 3D shooter game"? Or does this refer to a first-person shooter? I did not read the entire article, so take this with a grain of salt.


I love his "tiny" repositories. So cool.


Especially his Tiny Renderer: https://github.com/ssloy/tinyrenderer


I know, right? ssloy is on fire, one of the few non-organization accounts I follow on GitHub. Does anyone else know some similar users that should get some more love?



That's really cool. I can't tell you how many times I started to make one and stopped halfway...


Thousands of French millennials getting hit by the nostalgia feels as they read through the README.


I know there is a performance issue, but does anyone know of any resources for getting a basic 3D shooter working in Python? It's just that my kids already know some Python...


Had to reread that headline a couple times...


I almost read that title wrong, with the words "school" and "shooter" in it.


While surely unintentional this headline is in incredibly poor taste.


Announcement oddly timed with the anniversary of the Parkland shooting.


I hate to be morbid but it's not exactly difficult for any announcement to coincide with the anniversary of a shooting.


Not really oddly timed, as the link between video games and real world violence has been debunked countless times.


Unity gives away a complete 6DOF FPS with multiplayer.

All source and assets included

https://unity.com/fps-sample


Sometimes you want to make chili from scratch instead of going out to eat Peking duck.


Yeah, but the recipe isn't the same as when the conquistadores discovered America.


game engine != game.

Learning how to raycast, the maths involved, how to make your own game engine: none of those things are covered in that repo. It's a different thing. This kind of negativity is damaging.



