Ray casting is close to my heart as it's easy to understand and has a very high "effort to reward" ratio, especially to someone who is new to graphics programming. I built a game + engine around ray casting portals [1] (think the game Portal). It was a lot of fun trying to figure out how to bounce rays around a scene and intersect with different objects in the environment and immensely satisfying to have built the whole engine from the ground up. Though I'd probably not do it again. Your top-down ray debug view is very similar to one I came up with!
You mention GC'd languages being a bad choice for games in your README. But these days Java has GCs with < 1ms pauses. I wonder if that's still true?
1ms in a frame that lasts 16ms is already huge in gamedev. We often optimize processes so that they last less than that. I will always prefer to spend 1ms on rendering, physics or audio over memory management. And the fact that it's not 1ms every frame makes things worse, because it might make us miss a frame irregularly, which will make the framerate jittery and the game feel very unpleasant.
Foolish 60hz peasant, why aren't you aiming for 240hz? Get your crap done in 4ms!
I play a lot of games, from PC to console to mobile to handheld to VR, and I really don't get these kinds of takes. Until games from indie to AAA fix their existing loading, stuttering, microstuttering, and various other issues, clutching pearls over GC non-determinism seems silly. There's no GC to blame for why you can stare at a static scene with a can in a console AAA game, under development for 6 years in a mature engine, and see its shadow flicker on and off.

I find it hard to believe we'd be in a much worse situation if GC languages were even more prevalent. Part of this is that there are already a lot of sources of non-determinism in a C++ engine that can cause issues, including frame slowdowns. Of course, with effort you can eliminate or bound a lot of it, though the same is true for GC'd languages. Real-time GCs have been a thing for decades, but sure, you probably aren't using one; nevertheless, you can often tweak a non-real-time GC in useful ways if you need to, or, as is mentioned every time this topic comes up, do the same thing you'd do in C++ by allocating big pools ahead of time and reusing things (i.e. not making garbage).

Memory management is not the hard part of making a game, though the more complex and demanding the game, or the more constraining the hardware, the more you'll need to consider it regardless of language, and change up what's idiomatic when programming in that language.
Not that it matters, because any positive pause time is going to mysteriously always be too much, but the 1ms number is really underselling ZGC. A maximum of 1ms pause time was the goal; in practice they saw a max of around 0.5ms, an average of 0.05ms (50us), and a p99 in the above-linked article of 0.1ms (100us). A more interesting complaint would be the amount of memory overhead it needs. Tradeoffs are everywhere.
Recently ran a benchmark pitting Rust against Java for a particular aspect of our code. The Rust version completed in less time, using one thread. Java required twice the time, twice the memory, and thirty-five threads. Even if your GC isn't pausing, it's still using as many CPUs as it can, trashing your d-cache looking for objects to collect.
On that count alone I would never, ever consider it for a hobby/games project of any sort.
It doesn't matter what features a programming language has, if it takes seconds if not minutes to compile even relatively small projects (less than 50-100k LoC).
That is genuinely one of our dilemmas. The product makes enough to spend an extra 20x on machines (and it scales horizontally). So does Rust's hit to developer productivity offset the 20x cost savings? New features mean new $$$, and more than enough to offset the extra runtime cost of Java.
Personally, I'm now doing hobby projects in Rust because it just feels right. But I've done entire systems in assembly, so I am a bit of a control freak. YMMV.
Why are you doing a clean build? You can't complain about the difference against binary dependencies if you're manually flushing the cache of built dependencies.
And if you're not doing that, you're either wrong, or have a terrible project structure (comparable to including literally every header in every source file in C++)
Isn't compilation time in a roughly similar ballpark for C++? Which is kinda... "quite often" used in games projects? Not sure how much in hobby ones though, if that's what is being discussed here, but I'm somewhat confused (hobby/games sounds like speaking about an either-or alternative?)
The only thing your benchmark proves is that your Java code was not optimized as well as your Rust code. Java has overhead, but certainly not two orders of magnitude.
Our use case is definitely a pathologically bad problem for Java's GC. Nevertheless, it is a real use case, and CPU and GC are the primary impacts to our service.
You mean you're actually seeing 90% GC overhead and are not looking to improve on that? (Like by tuning the GC or changing your implementation.) GC impact on normally behaving applications should be less than a few percent, so you're not comparing Rust and Java, but Rust and badly written/tuned Java, which doesn't say anything about the maximum attainable performance.
Despite all marketing talk, the real reason why Swift went with ARC was easier interoperability with Objective-C, otherwise Apple would need to build something like .NET's RCW/CCW for COM interop on Windows, which follows the same concept as Cocoa's retain/release calls.
to be fair Minecraft was ported to C++ for anything but PC, and IIRC as of last year the PC version was ported as well, the only thing standing in the way being mod support. The reason Notch wrote it in Java was because that's what he knew, not exactly because it was a fantastic choice for a 3d game.
Bedrock edition is available basically everywhere and unified the non-Java platforms, yeah, though the Java edition still gets feature updates in tandem. There are still other differences between the two that will likely never be rectified (not least of which are bugs becoming relied-upon features). But all that's kind of beside the point. Clearly Java wasn't a bad choice.
Lua with LÖVE [0] or LÖVR [1] is fine for many types of games though, even though Lua is a language with GC. Most of the heavy lifting is still done in low level languages like C or C++. And it should be easy to write performance critical parts in C/C++ anyways, if needed.
I suspect LÖVE / LÖVR would perform better than Java for games, but haven't tested or verified myself.
Idk why people are so worried about GC in general: just keep an object pool around (and byte arrays, if strings are immutable) and you'll never see a GC pause. Sure, it'll look a lot like C, but that's beside the point.
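A minimal sketch of the pool idea, in Rust to match the project under discussion, though the same preallocate-and-reuse pattern is what you'd write in Java or C#; all the names here are made up:

```rust
// Fixed-size object pool: all allocation happens up front, and per-frame
// code only reuses slots, so nothing is freed during gameplay (or, in a
// GC'd language, nothing ever becomes garbage for the collector to find).
struct Particle {
    x: f32,
    y: f32,
    alive: bool,
}

struct ParticlePool {
    slots: Vec<Particle>, // allocated once at startup, never resized
}

impl ParticlePool {
    fn with_capacity(n: usize) -> Self {
        let slots = (0..n)
            .map(|_| Particle { x: 0.0, y: 0.0, alive: false })
            .collect();
        Self { slots }
    }

    // Reuse a dead slot instead of allocating; returns None when the pool
    // is exhausted, which a game typically handles by dropping the effect.
    fn spawn(&mut self, x: f32, y: f32) -> Option<&mut Particle> {
        let slot = self.slots.iter_mut().find(|p| !p.alive)?;
        *slot = Particle { x, y, alive: true };
        Some(slot)
    }
}
```

After startup, gameplay code only calls spawn() and flips alive back to false when a particle dies; nothing is allocated or freed per frame.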
At that point, you're not benefiting from the GC'ed language. Just use C/C++/Rust/etc, and you'll end up with something faster and more reliable for a small fraction of the effort.
Agree, however a large majority uses C++ as what I call C+, which is basically "C with classes" with a bit of C++11.
To the point that we have the Orthodox C++ movement, and educating people to move beyond "C with classes" that keeps being taught around the world is a common discussion subject at C++ conferences, and ISO C++ papers.
I would certainly have agreed with this characterization of the majority 10 years ago. But - do you really believe this is still the case today? After 10 years of C++11 making headway, and a lot of effort to educate people differently? I wonder...
And don’t forget about all the other ways such corruption could happen, use after free etc.
On top of all that, managed languages generally have stronger runtime type information, which won't let an arbitrary memory address be implicitly read as executable code. Even explicit casts from Object to a more specific type will fail if the object is not of the expected type. Code must be defined as function objects in the language to begin with.
It depends. If you're not doing anything that pushes the boundaries then GC isn't a huge deal. After all, there are plenty of games out there written and running on the god-awful web stack.
But historically games have been boundary pushers, and a GC's overhead isn't really desirable in that space.
That looks great. I actually wanted to do a port of an id game in Rust for the longest time but never managed to find spare cycles.
I would add to the list of features to tackle next:
- Convert from the 320x200 aspect ratio to the 320x240 aspect ratio. You can do that by converting from 320x200 to 1600x1200. This is easily done with a x5/x6 integer scale, which gives you the same 4:3 aspect ratio as 320x240 with no pixel selection artifacts; see the sketch below.
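A hedged sketch of that integer upscale, assuming a 320x200 8-bit indexed framebuffer stored row-major:

```rust
// Upscale a 320x200 indexed framebuffer to 1600x1200 by nearest neighbour.
// 320 * 5 = 1600 and 200 * 6 = 1200, so every source pixel maps to an exact
// 5x6 block: no fractional sampling, no pixel selection artifacts, and the
// output has the 4:3 shape the original was displayed at.
const SRC_W: usize = 320;
const SRC_H: usize = 200;
const SX: usize = 5; // horizontal scale factor
const SY: usize = 6; // vertical scale factor

fn upscale(src: &[u8]) -> Vec<u8> {
    assert_eq!(src.len(), SRC_W * SRC_H);
    let (dst_w, dst_h) = (SRC_W * SX, SRC_H * SY);
    let mut dst = vec![0u8; dst_w * dst_h];
    for y in 0..dst_h {
        for x in 0..dst_w {
            // Integer division picks the one source pixel covering this block.
            dst[y * dst_w + x] = src[(y / SY) * SRC_W + (x / SX)];
        }
    }
    dst
}
```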
We haven't got to reading the Doom book yet, maybe once we do! Although for now we're likely going to focus on developing this one some more, if we manage to get the time.
I'm wondering if anyone would develop a spinoff exactly the way it was done back in 1992.
You know, it would be interesting to follow Carmack's route: start with Apple ][ programming for a couple of Ultima and Wizardry spinoffs, then port them to PC. Then move to the 80286 to make a scrolling engine for a double trilogy, and up to the 80386 to make a Wolfie clone, and continue from there. The point is to use real-world machines or emulated environments for development. One can probably learn a LOT of programming that way, although much of it is irrelevant in modern gamedev...
Bonus points for using the same tooling and languages from then too. 6502 assembly, then a bit of 16 bit x86 assembly, and finally some C with Borland's 1992 tooling.
On second thought, if one wants to develop DOOM as Carmack did, they would need a NeXT workstation, which might not be easy to come by TBH, but maybe there is an emulator good enough.
Yeah definitely! Are there any modern tools that can 1) run on the native platform and 2) significantly improve the dev experience? I guess the guys who developed Nox Archaist used some modern tools, and on DOS we might have something new because of the large retro community.
The more I think about it, the more I believe someone can pull this off. Although many of the skills learnt are useless in modern programming (like 6502 asm, 80286 asm, or whatever tricks get games running smoothly on retro platforms), the effort would definitely pay off.
I don't think one needs to walk the full Carmack road; whoever goes from Shadowforge to Quake is impressive enough. Plus, one does not need to implement all of these games, since many of them share similar engines.
Date | Title | Developer | Publisher | Role
June 22, 1996 | Quake | id Software | GT Interactive | Programming
May 31, 1996 | Final Doom | id Software | GT Interactive | Programming
October 30, 1995 | Hexen: Beyond Heretic | Raven Software | id Software | 3D engine
December 23, 1994 | Heretic | Raven Software | id Software | Engine programmer
September 30, 1994 | Doom II: Hell on Earth | id Software | GT Interactive | Programming
December 10, 1993 | Doom | id Software | id Software | Programming
1993 | Shadowcaster | Raven Software | Origin Systems | 3D engine
September 18, 1992 | Spear of Destiny | id Software | FormGen | Software engineer
May 5, 1992 | Wolfenstein 3D | id Software | Apogee Software | Programming
1991 | Catacomb 3-D | id Software | Softdisk | Programming
1991 | Commander Keen in Aliens Ate My Babysitter! | id Software | FormGen | Programming
December 15, 1991 | Commander Keen in Goodbye, Galaxy! | id Software | Apogee Software | Programming
1991 | Commander Keen in Keen Dreams | id Software | Softdisk | Programming
1991 | Shadow Knights | id Software | Softdisk | Design/programming
1991 | Rescue Rover 2 | id Software | Softdisk | Programmer
1991 | Rescue Rover | id Software | Softdisk | Programmer
1991 | Hovertank 3D | id Software | Softdisk | Programming
1991 | Dangerous Dave in the Haunted Mansion | id Software | Softdisk | Programming
1991 | Dark Designs III: Retribution | Softdisk | Softdisk | Programmer/designer
December 14, 1990 | Commander Keen in Invasion of the Vorticons | id Software | Apogee Software | Programming
1990 | Slordax: The Unknown Enemy | Softdisk | Softdisk | Programming
1990 | Catacomb II | Softdisk | Softdisk | Developer
1990 | Catacomb | Softdisk | Softdisk | Programmer
1990 | Dark Designs II: Closing the Gate | Softdisk | Softdisk | Programmer/designer
1990 | Dark Designs: Grelminar's Staff | John Carmack | Softdisk | Developer
1990 | Tennis | John Carmack | Softdisk | Developer
1990 | Wraith: The Devil's Demise | John Carmack | Nite Owl Productions | Developer
1989 | Shadowforge | John Carmack | Nite Owl Productions | Developer
But from what I know, once you grind out a good enough game engine, you can essentially reuse the same code base again and again. The same goes for tools: for example, Romero's TEd tile editor served all the Softdisk games plus Wolfenstein. The only new code is some new functionality and a lot of new game logic. Nowadays, you need 6 months just for the concept...
I bet nowadays if a two-programmer team (one for the engine and one for tools, both doing some design) with some C++ and game dev experience goes into jam mode, with proper asset support (say, assets downloaded or contracted beforehand), they could pop out games similar to the Softdisk ones in an even shorter timespan. After all, computers are much faster nowadays and we don't need assembly any more.
The tricky part is to stay in jam mode, and I believe the Softdisk team did wonders back then. Everyone was highly motivated and probably worked 80+ hours per week. They believed they could change the world, and they did.
Agreed. Not sure what the programming environment looked like on the Apple ][ back then. Probably not great. I don't even know whether there were debuggers. Needs some Googling.
I only tried Basic on Apple ][. While it was a fascinating machine, I later greatly preferred the pampered experience of Basic, and then assembly, on MSX2.
Patience... A couple of hours to render a simple scene at 800x600, and displayed using dithering due to a lack of colors in the palette. But it worked.
I'm thinking: a lot of people, even developers, are scared of assembly, right? Or a more irrefutable objection: it doesn't have much use. That is definitely true in the sense that even system programmers don't write full assembly programs.
But then I thought about the kids in the 80s, the Romeros and Carmacks, and a little before that the Garriotts and Gateses. Those kids started with BASIC but immediately picked up assembly given the chance, because that was how high-performance code could be written, and they just learned it along the way. No Internet, very few books even for rich kids (usually a [Machine] programming guide was good enough). Now that I think about it, some of them were barely teenagers.
I bet the real difference is 1) Much simpler machine arch and 2) They had to learn it for writing games and other high performance software.
I really don't get how people are scared of low level programming, computer languages don't bite, don't attack on dark street corners, don't whatever.
Yes, bad programs might corrupt some data, but pulling the plug while the computer is on might do the same, and no one thinks twice about that.
Back in the day we didn't have tooling like Compiler Explorer to understand how BASIC would map to Assembly; only trial and error, alongside pen and paper.
Anyone that wants to learn Assembly nowadays can jump there, click on the high level constructs and see how they map to Assembly instructions for their favourite CPU.
On the other hand, architecture back then was pretty simple. Plus there is a big OS in the middle of everything; you cannot really talk to the hardware directly nowadays. Back in the Apple ][ and DOS days it was mandatory.
That's why I've always thought people should still start on an Apple ][/C64/80286/NES/SNES/GB these days, or maybe just an embedded system, for cheaper introductory tuition/quicker employment. You ignore all the bells and whistles of hardware accumulated over the last 30 or so years and start from bare metal and real mode, gradually crawling up from there. But without ignoring the bells and whistles of the software tools people have built for these platforms. I bet we have better tools for developing on these platforms nowadays. As another commenter said, we can spin up a VM or emulator and look directly at memory and registers.
If anything virtual machines have made this so much easier than before, it's like having a $100K CPU emulator on every desktop. You can look straight into the CPU, see all of the registers change in lockstep and the result on memory. I'd have happily killed for that in the 80's.
Assembly was pretty much the only way to break the speed barrier that BASIC imposed. It started out as 'the' language, then after a while you'd realize there was the computer underneath BASIC that could do more and was faster, but also a bit scary. Then, after you mastered peek and poke and bit by bit learned more about what made the computer tick one day you'd wake up and realize that all you still used BASIC for was to load machine language programs.
It was the most stupid and naive implementation possible, classic reverse raytracing starting from a distance from the center of the virtual camera plane and branching a few times to get the basics of reflection and a light model to work, then adding other features bit by bit. I don't really remember all the details, but I do remember that it was agonizingly slow on the CPU, the DSP made all the difference and allowed me to add a couple of features. The main loop was on every pixel of the virtual camera plane. I no longer have that code (or the DSP, for that matter) but it was a fun exercise and it taught me a lot of matrix/vector math.
The dithering code was also interesting: because my graphics card could not do true color, I put 3x64 RGB shades in the palette and then cranked up the brightness for Tetris-like sets of pixels to simulate a true color display. Worked pretty well.
I noticed it in the examples too, I’m speculating wildly that the ray cast is even angular steps rather than pinhole projection. Totally reasonable to not correct it, IMO, I’ve long thought we should have more non-linear cameras in games.
If you generate the rays the linear way instead, you don't even need any correction.
Generate two points which represent the left and right edges of the screen - you'd put them in at say 45 degrees left and right of the forward vector of the player. Then to generate the direction vector for each column of the screen, just interpolate linearly between those two points, and find the vector from the player to that intermediate point.
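A rough sketch of that column-to-ray interpolation, assuming a normalized player direction and the 45-degrees-each-side setup described above (a 90-degree FOV); the names are mine:

```rust
// Generate one ray direction per screen column by linearly interpolating
// between the left and right edges of the camera plane, then normalizing
// the vector from the player through that point. No fisheye correction is
// needed: the columns sample a flat plane, not equal angle steps.
fn column_ray(player_dir: (f32, f32), col: usize, screen_w: usize) -> (f32, f32) {
    let (dx, dy) = player_dir; // assumed normalized
    // Plane vector perpendicular to the view direction; equal lengths put
    // the screen edges at 45 degrees left/right of forward (90-degree FOV).
    let (px, py) = (-dy, dx);
    // t runs from -1 at the leftmost column to +1 at the rightmost.
    let t = 2.0 * (col as f32 + 0.5) / screen_w as f32 - 1.0;
    let (rx, ry) = (dx + px * t, dy + py * t);
    let len = (rx * rx + ry * ry).sqrt();
    (rx / len, ry / len)
}
```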
The venerable lodev tutorial uses this method, which I also used for most of my engines. I learned an interesting tidbit while comparing the two methods though:
The old-school original methods used pretty small cos/sin/atan lookup tables to do the ray and then the correction calc. Using the linear method you end up with a couple divisions per ray that aren't there in the lookup method. Divisions were (and are, depending on the platform) pretty slow. Linear method still works with lookup tables but they're relatively huge.
Also, IIRC, with the linear method door indents need a workaround.
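For contrast, a sketch of the old-school angular correction described above: step the ray angle per column, then undo the fisheye by multiplying by a cosine from the same small lookup table. The table size and names are assumptions for illustration, not Wolfenstein's actual values:

```rust
use std::f32::consts::PI;

// Granularity of the angle table; the old engines used small tables, and
// this size is just an assumed value for the sketch.
const FINE_ANGLES: usize = 3600;

// Built once at startup, like the old precomputed sin/cos tables.
fn build_cos_table() -> Vec<f32> {
    (0..FINE_ANGLES)
        .map(|i| (i as f32 * 2.0 * PI / FINE_ANGLES as f32).cos())
        .collect()
}

// Fisheye correction: wall height uses the perpendicular distance, i.e.
// the raw ray length times the cosine of the ray's angular offset from
// the view direction. With a table that's one multiply per column and
// no division at all.
fn corrected_dist(raw_dist: f32, offset_idx: usize, cos_tab: &[f32]) -> f32 {
    raw_dist * cos_tab[offset_idx % FINE_ANGLES]
}
```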
Is there a limit when looking at retro games where the retro-ness becomes a burden rather than "old games are simpler"? Obviously if you move too far back in history you end up with basically game-specific hardware and long-dead assembler code. In the original Wolfenstein there seems to be a bunch of code managing weirdness we no longer care about (paging, legacy graphics handling) which obscures the actual game.
Was there a "peak" in simplicity when games were at their simplest for a reader from 2022? That is, they are modern enough to not be obscured by historical weirdness yet still simple enough to be approachable? Perhaps Doom is simpler to understand than Wolfenstein for this reason?
The trick isn't to look at the complete game code, which will always have platform-specific code, but rather at the conceptual tricks used by the game engines. In other words, look at the design rather than the implementation.
This is true even for modern games, because you might implement something differently for a mobile game than you might for a high end PC.
"dos-like is a programming library/framework, kind of like a tiny game engine, for writing games and programs with a similar feel to MS-DOS productions from the early 90s."
I'm thinking about exactly the same thing, but a broader one I call Carmack's road: from Shadowforge on the Apple ][ to maybe Quake, all developed on the original platforms (or emulators). Of course not as a tutorial series, but as a learning path for myself for the next maybe 5-8 years.
This sounds very interesting! I don't have anything more productive to add, other than that I would love to read/see/watch/play any output from a project like this.
I had the same idea while working on this project. Either play directly from the map view (the prototype sort of already does this) or even use a roguelike ASCII interface (which, yes, would probably be similar to the Silas Warner game).
in color? luxury! we had black and white and we were grateful for it!
kids: get off my lawn
the grenade explosion was the most satisfying ever. "Take that Nazi scum!"
the 1st Rogue-like I ever made was something I called WolfenHack. my personal cross between Castle Wolfenstein and NetHack. I later genericized my engine and built a zombie apocalypse on top of it.
"a “Carmack” compression, which is John Carmack’s variant of the LZ (Lempel-Ziv) method. According to the Black Book, without much access to the literature, Carmack would “invent” an algorithm to later find out that someone else had done it before.". Wow, he really is amazing.
Fast inverse square root was part of the lighting calculations in Quake III IIRC. Nothing so obtuse is required for the simple ray-casting demonstrated here.
It's a shame they included the original shareware resource files without any mention that they are NOT covered by the MIT licence that the repository is under.
In fact Turbo Pascal (and later Turbo C) was so fast that I initially thought there was something wrong with the compiler; that's how quickly it compiled a program. CTRL-F9 and off to the races.
Turbo Pascal was famous for compiling projects quickly. The current rustc compiler is not. The output of the rustc compiler can be better optimized for run time than the output of Turbo Pascal, though.
Some of the interesting bits of the engine are open source: https://github.com/gh123man/Portal-Raycaster
1. https://blog.sb1.io/gateescape/