Hacker News
Rustenstein 3D: Game programming like it's 1992 (nextroll.com)
351 points by facundo_olano on Feb 2, 2022 | 113 comments



Ray casting is close to my heart as it's easy to understand and has a very high "effort to reward" ratio, especially to someone who is new to graphics programming. I built a game + engine around ray casting portals [1] (think the game Portal). It was a lot of fun trying to figure out how to bounce rays around a scene and intersect with different objects in the environment and immensely satisfying to have built the whole engine from the ground up. Though I'd probably not do it again. Your top-down ray debug view is very similar to one I came up with!

Some of the interesting bits of the engine are open source: https://github.com/gh123man/Portal-Raycaster

1. https://blog.sb1.io/gateescape/


> a very high "effort to reward" ratio,

Sorry to be pedantic, but I think you mean the opposite? Big reward for modest effort?


It may be seen as a transmission ratio: reward = ratio * effort.

Then the ratio is indeed high!


You mention GC'd languages being a bad choice for games in your README. But these days Java has GCs with < 1ms pauses. I wonder if that's still true?

https://kstefanj.github.io/2021/11/24/gc-progress-8-17.html


1ms in a frame that lasts 16ms is already huge in gamedev. We often optimize processes so that they last less than that. I will always prefer 1ms in rendering, physics or audio over memory management. And the fact that it's not 1ms every frame makes things worse, because it might make you miss a frame irregularly, which will make the framerate jittery and the game feel very unpleasant.


Foolish 60hz peasant, why aren't you aiming for 240hz? Get your crap done in 4ms!

I play a lot of games, from PC to console to mobile to handheld to VR, and I really don't get these kinds of takes. Until games from indie to AAA fix their existing loading and stuttering and microstuttering and various other issues, clutching pearls over GC non-determinism seems so silly. There's no GC to blame for why you can stare at a static scene with a can in a console AAA game under development for 6 years in a mature engine and see its shadow flicker on and off. I find it hard to believe we'd be in a much worse situation if GC languages were even more prevalent. Part of this is that there are already a lot of sources of non-determinism in a C++ engine that can cause issues, including frame slowdowns. Of course with effort you can eliminate or bound a lot of it, though the same is true for GC'd languages. Real-time GCs have been a thing for decades but sure, you probably aren't using one; nevertheless you can often tweak a non-real-time GC in useful ways if you need to -- or, as is mentioned every time this topic comes up, do the same thing you'd do in C++ by allocating big pools ahead of time and reusing things (i.e. not making garbage). Memory management is not the hard part of making a game, though the more complex and demanding the game or constraining the hardware, the more you'll need to consider it regardless of language, and change up what's idiomatic when programming in that language.

Not that it matters, because any positive t is going to mysteriously always be too much, but the 1ms number is really underselling ZGC. Maximum of 1ms pause times was the goal, in practice they saw a max of around 0.5ms, an average of 0.05ms (50us), and p99 in the above-linked article of 0.1ms (100us). A more interesting complaint would be the amount of memory overhead it needs. Tradeoffs are everywhere.


In 1992 we still used joysticks plugged into OG gameports, the ones using NE558 timers and manual polling of IO until all the capacitors are charged http://www.minuszerodegrees.net/oa/OA%20-%20IBM%20Game%20Con...

Reading such a gameport took >1ms, and you wanted more than one reading per second :)


The linked page shows 0.1 ms though.


Recently ran a benchmark against Rust and Java for a particular aspect of our code. The Rust completed in less time, using one thread. Java required twice the time, twice the memory, and thirty five threads. Even if your GC isn't pausing, it's still using as many CPUs as it can to trash your d-cache looking for objects to GC.


Rust takes an absolute eternity to compile code.

On that count alone I would never-ever consider it for a hobby/games project of any sort.

It doesn't matter what features a programming language has, if it takes seconds if not minutes to compile even relatively small projects (less than 50-100k LoC).


That is genuinely one of our dilemmas. The product makes enough to spend an extra 20x on machines (and it scales horizontally). So does the impact to developer productivity of Rust offset the 20x cost savings? New features mean new $$$, and more than enough to offset the extra runtime cost of Java.

Personally, I'm now doing hobby projects in Rust because it just feels right. But I've done entire systems in assembly, so I am a bit of a control freak. YMMV.


Why are you doing a clean build? You can't complain about the difference against binary dependencies if you're manually flushing the cache of built dependencies.

And if you're not doing that, you're either wrong, or have a terrible project structure (comparable to including literally every header in every source file in C++)


Isn't compilation time in a roughly similar ballpark for C++? Which is kinda... "quite often" used in games projects? Not sure how much in hobby ones though, if that's what is being discussed here, but I'm somewhat confused (hobby/games sounds like speaking about an either-or alternative?)


Kind of. C++ benefits from an ecosystem that embraces binary libraries, so usually you only have to care about compiling your own code.

Then if you are using modern C++ tooling like Visual Studio, you can make use of hot code reload.

Here are examples in Visual Studio, and Unreal.

https://www.youtube.com/watch?v=x_gr6DNrJuM

https://www.youtube.com/watch?v=XN1c1V9wtCk


The only thing your benchmark proves is that your Java code was not optimized as well as your Rust code. Java has overhead, but certainly not two orders of magnitude.


Our use case is definitely a pathologically bad problem for Java's GC. Nevertheless, it is a real use case, and CPU and GC are the primary impacts to our service.


You mean you're actually seeing 90% GC overhead and are not looking to improve on that? (Like by tuning the GC or changing your implementation.) GC impact on normally behaving applications should be less than a few percent, so you're not comparing Rust and Java, but Rust and badly written/tuned Java - which doesn't say anything about the maximum attainable performance.


Because you were comparing Apples to Oranges.

Now try that against D, any .NET language, Nim, Swift, Go, or any other GC language that supports value types and unsafe programming.


Swift is technically not a language with GC, I believe. It's probably using reference counting, like Objective-C did before it.


Reference counting is a GC algorithm, despite the folk wisdom that describes it otherwise.

See chapter 5 of "The Garbage Collection Handbook":

https://gchandbook.org/contents.html

Or, if you prefer, chapter 2 of "Uniprocessor Garbage Collection Techniques": https://www.cs.cmu.edu/~fp/courses/15411-f08/misc/wilson94-g...

Plenty of SIGPLAN papers about the subject.

Despite all the marketing talk, the real reason why Swift went with ARC was easier interoperability with Objective-C; otherwise Apple would have needed to build something like .NET's RCW/CCW for COM interop on Windows, which follows the same concept as Cocoa's retain/release calls.


Even before Java advanced its GC even more, a little-known 3D game called Minecraft was written in it.


to be fair Minecraft was ported to C++ for anything but PC, and IIRC as of last year the PC version was ported as well, the only thing standing in the way being mod support. The reason Notch wrote it in Java was because that's what he knew, not exactly because it was a fantastic choice for a 3d game.


Bedrock edition is available basically everywhere and unified the non-Java platforms, yeah, though the Java edition still gets feature updates in tandem. There are still other differences between the two that will likely never be rectified (not least of which are bugs becoming relied-upon features). But all that's kind of beside the point. Clearly Java wasn't a bad choice.


As proven by his bank account, being written in Java wasn't a blocker for commercial success.


And still today, with all the advances, Minecraft is a stuttery mess, even with optimisation mods like Optifine


Lua with LÖVE [0] or LÖVR [1] is fine for many types of games though, even though Lua is a language with GC. Most of the heavy lifting is still done in low level languages like C or C++. And it should be easy to write performance critical parts in C/C++ anyways, if needed.

I suspect LÖVE / LÖVR would perform better than Java for games, but haven't tested or verified myself.

---

[0]: https://love2d.org

[1]: https://lovr.org


Idk why people are so worried about GC in general; just keep an object pool around (and byte arrays, since strings are immutable) and never see a GC pause. Sure it'll look a lot like C, but that's beside the point.
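For concreteness, a minimal sketch of the pooling pattern being described, in Rust to match the article (so no GC is actually involved here, but it's the same shape you'd use in Java or C# to avoid producing garbage); all names are made up for illustration:

    // Hypothetical pool, not from the Rustenstein code: allocate every slot up
    // front and reuse dead slots, so nothing is allocated (or, in a GC'd
    // language, nothing becomes garbage) during the frame.
    #[derive(Default)]
    struct Bullet { x: f32, y: f32, alive: bool }

    struct BulletPool {
        bullets: Vec<Bullet>, // allocated once, up front
        free: Vec<usize>,     // indices of currently unused slots
    }

    impl BulletPool {
        fn with_capacity(n: usize) -> Self {
            BulletPool {
                bullets: (0..n).map(|_| Bullet::default()).collect(),
                free: (0..n).rev().collect(),
            }
        }

        // Reuse a dead slot instead of allocating a new object.
        fn spawn(&mut self, x: f32, y: f32) -> Option<usize> {
            let i = self.free.pop()?;
            self.bullets[i] = Bullet { x, y, alive: true };
            Some(i)
        }

        fn despawn(&mut self, i: usize) {
            self.bullets[i].alive = false;
            self.free.push(i);
        }
    }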


At that point, you're not benefiting from the GC'ed language. Just use C/C++/Rust/etc, and you'll end up with something faster and more reliable for a small fraction of the effort.


Most modern GC'd languages offer more than just a garbage collector. Off the top of my head:

1. Prevents memory corruption

2. Way better compilation times than C++ and Rust (both of which have absolutely horrid compile times)

3. Have all sorts of developer-ergonomics features (like being able to write functions in arbitrary order, unlike C/C++)

4. Have built-in reflection (none in C/C++)

5. IDEs/autocomplete/refactoring tools etc, just generally work better for C#/Java/etc.

Garbage collection is probably close to the bottom of the list of reasons why I would use a GC'd language when making an indie or hobby scale game.

I would never consider using Rust simply because it takes too long to compile, even for relatively small projects.

Especially not for hobby or indie scale game projects.


Why downgrade my developer experience, when my GC language also offers C++ class features if I really need them?

People should learn more about what is in the toolbox.


Different languages are not downgrades/upgrades - they're different balance points between constraints.

Also, classes are merely one feature of C++. The language is a far cry from 40 years ago, when it was "C with classes".


Agreed, however a large majority uses C++ as what I call C+, which is basically "C with classes" with a bit of C++11.

To the point that we have the Orthodox C++ movement, and educating people to move beyond the "C with classes" that keeps being taught around the world is a common discussion subject at C++ conferences and in ISO C++ papers.


I would certainly have agreed with this characterization of the majority 10 years ago. But - do you really believe this is still the case today? After 10 years of C++11 making headway, and a lot of effort to educate people differently? I wonder...


Object pools don't turn off-by-one errors into complete remote code execution exploits.


How does a GC help with that? That's just boundary checking.


Just? :)

And don’t forget about all the other ways such corruption could happen, use after free etc.

On top of all that, in managed languages you generally have stronger runtime type information, which doesn't let an arbitrary memory address be implicitly read as executable code. Even explicit static casts from Object to a more defined type will fail if the object is not of the expected type. Code must be defined as function objects in the language to begin with.


Object pooling without GC has the same use-after-free problem as with it.


Coding imperfectly in safe languages leads to performance issues. Coding imperfectly in unsafe languages leads to correctness issues.

It’s a trade-off; performance in games is more important than correctness anyways so I tend to agree with you here.


A nitpick: correctness and type safety are very different concepts. Type-safe languages don't provide correctness (and vice versa).


They didn’t mention type safety though (Unless they’ve edited their post?)


One of the biggest mainstream engines (Unreal Engine) is C++ and utilizes garbage collection:

https://unrealcommunity.wiki/memory-management-6rlf3v4i

https://docs.unrealengine.com/4.27/en-US/ProgrammingAndScrip...


It depends. If you're not doing anything that pushes the boundaries then GC isn't a huge deal. After all, there are plenty of games out there written and running on the god-awful web stack.

But historically games have been boundary pushers, and a GC's overhead isn't really desirable in that space.


That looks great. I actually wanted to do a port of an id game in Rust for the longest time but never managed to find spare cycles.

I would add to the list of features to tackle next:

- Convert from the 320x200 aspect ratio to the 320x240 aspect ratio. You can do that by converting from 320x200 to 1600x1200. This is easily done with x5 horizontal / x6 vertical integer scaling, which gives you the same aspect ratio as 320x240 with no pixel selection artifacts.
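A rough sketch of that scaling step, assuming an 8-bit palette-indexed framebuffer; the function name and layout are hypothetical, not the Rustenstein code:

    // Each 320x200 source pixel becomes a 5-wide by 6-tall block, giving
    // 1600x1200, which has the same 4:3 aspect as 320x240 with square pixels.
    fn upscale_320x200_to_1600x1200(src: &[u8]) -> Vec<u8> {
        const SW: usize = 320;
        const SH: usize = 200;
        const SX: usize = 5; // horizontal scale factor
        const SY: usize = 6; // vertical scale factor
        let mut dst = vec![0u8; SW * SX * SH * SY];
        for y in 0..SH {
            for x in 0..SW {
                let pixel = src[y * SW + x];
                for dy in 0..SY {
                    for dx in 0..SX {
                        dst[(y * SY + dy) * (SW * SX) + (x * SX + dx)] = pixel;
                    }
                }
            }
        }
        dst
    }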


Thanks for the suggestion, and for your book and your blog posts, by the way, they are the main reason we thought about tackling this project!


Any plans for a full DOOM (not render-only)?


We haven't got to reading the Doom book yet, maybe once we do! Although for now we're likely going to focus on developing this one some more, if we manage to get the time.


I'm wondering if anyone would develop a spinoff exactly as it was done back in 1992.

You know, it would be interesting to follow Carmack's route: start from Apple ][ programming for a couple of Ultima and Wizardry spinoffs, port them to PC. Then move to the 80286 to make a scrolling engine for a double trilogy, and move up to the 80386 to make a Wolfie clone, and continue from there. The point is to use real-world machines or emulated environments for development. One can probably learn a LOT about programming that way, although much of it is irrelevant in modern gamedev...


Bonus points for using the same tooling and languages from then too. 6502 assembly, then a bit of 16 bit x86 assembly, and finally some C with Borland's 1992 tooling.


On second thought, if one wants to develop DOOM as Carmack did, he would need a NeXT workstation, which might not be easy to come by TBH, but maybe there is an emulator that's good enough.


Basically, a Mac :)


You have a point!


and hard-to-press keys. uppercase letters only. and you are ONLY allowed to backup manually, and to 1.4Mb (or worse) floppies

This is The Way

kids: off my lawn


Yeah definitely! Are there any modern tools that can 1) run on the native platform and 2) significantly improve the dev experience? I guess the guys who developed Nox Archaist used some modern tools, and on DOS we might have something new because of the large retro community.

The more I think about it, the more I believe someone could pull this off. Although many of the skills learned are useless in modern programming (like 6502 asm or 80286 asm, or whatever tricks get games running smoothly on retro platforms), the amount of effort would definitely pay off.

I don't think one needs to walk the full Carmack road. I think anyone who goes from Shadowforge to Quake is impressive enough. Plus one does not need to implement all these games; many of them share similar engines.

(Date | Title | Developer | Publisher | Role)

June 22, 1996 | Quake | id Software | GT Interactive | Programming
May 31, 1996 | Final Doom | id Software | GT Interactive | Programming
October 30, 1995 | Hexen: Beyond Heretic | Raven Software | id Software | 3D engine
December 23, 1994 | Heretic | Raven Software | id Software | Engine programmer
September 30, 1994 | Doom II: Hell on Earth | id Software | GT Interactive | Programming
December 10, 1993 | Doom | id Software | id Software | Programming
1993 | Shadowcaster | Raven Software | Origin Systems | 3D engine
September 18, 1992 | Spear of Destiny | id Software | FormGen | Software engineer
May 5, 1992 | Wolfenstein 3D | id Software | Apogee Software | Programming
1991 | Catacomb 3-D | id Software | Softdisk | Programming
1991 | Commander Keen in Aliens Ate My Babysitter! | id Software | FormGen | Programming
December 15, 1991 | Commander Keen in Goodbye, Galaxy! | id Software | Apogee Software | Programming
1991 | Commander Keen in Keen Dreams | id Software | Softdisk | Programming
1991 | Shadow Knights | id Software | Softdisk | Design/programming
1991 | Rescue Rover 2 | id Software | Softdisk | Programmer
1991 | Rescue Rover | id Software | Softdisk | Programmer
1991 | Hovertank 3D | id Software | Softdisk | Programming
1991 | Dangerous Dave in the Haunted Mansion | id Software | Softdisk | Programming
1991 | Dark Designs III: Retribution | Softdisk | Softdisk | Programmer/designer
December 14, 1990 | Commander Keen in Invasion of the Vorticons | id Software | Apogee Software | Programming
1990 | Slordax: The Unknown Enemy | Softdisk | Softdisk | Programming
1990 | Catacomb II | Softdisk | Softdisk | Developer
1990 | Catacomb | Softdisk | Softdisk | Programmer
1990 | Dark Designs II: Closing the Gate | Softdisk | Softdisk | Programmer/designer
1990 | Dark Designs: Grelminar's Staff | John Carmack | Softdisk | Developer
1990 | Tennis | John Carmack | Softdisk | Developer
1990 | Wraith: The Devil's Demise | John Carmack | Nite Owl Productions | Developer
1989 | Shadowforge | John Carmack | Nite Owl Productions | Developer


thanks for reminding us: Carmack was a MACHINE in those years. and churning out one impressive/hit game after another


Yeah man, definitely.

But from what I know, once you grind out a good enough game engine, you can essentially reuse the same code base again and again. The same goes for tools. For example, Romero's TEd served all the Softdisk games plus Wolfenstein. The only new code is some new functionality and a lot of new game logic. Nowadays, you need 6 months just for the concept...

I bet nowadays a two-programmer team (one for the engine and one for tools, both doing some design) with some C++ and game dev experience, going into jam mode with proper asset support (say, assets downloaded or contracted beforehand), could pop out games similar to the Softdisk ones in an even shorter timespan. After all, computers are much faster now and we don't need assembly any more.

The tricky part is to stay in jam mode, and I believe the Softdisk team did wonders back then. Everyone was highly motivated and worked probably 80+ hours per week. They believed they could change the world and they did.


Yes, he's definitely one of those programmers that just blow my mind with how inventive and productive they are.


This would be a feat.

But many more people look for the fun. The good bits without the pains of ancient tooling, lacking debuggers and graphics tools, etc.


Agreed. Not sure what the programming environment looked like on the Apple II back then. Probably not great. I don't even know whether there were debuggers. Needs some Googling.


I only tried Basic on Apple ][. While it was a fascinating machine, I later greatly preferred the pampered experience of Basic, and then assembly, on MSX2.


A simple raytracer is also a good step in that list.


Which in 1992 would mean getting acquainted with either MASM or TASM.


I did it in '86 using a DSP32 :)

Patience... A couple of hours to render a simple scene at 800x600, and displayed using dithering due to a lack of colors in the palette. But it worked.


I'm thinking, a lot of people, even developers, are scared of assembly, right? Or a more irrefutable one: it doesn't have much use. That's definitely true in the sense that even systems programmers don't write full programs in assembly.

But then I thought about the kids in the 80s, the Romeros and Carmacks, and a little before that the Garriotts and Gateses: those kids started with BASIC but immediately picked up assembly given the chance, because that was how high performance code could be written, and they just learned it along the way. No Internet, very few books even for rich kids (usually a [Machine] programming guide was good enough). Now that I think about it, some of them were barely teenagers.

I bet the real differences are 1) a much simpler machine architecture and 2) they had to learn it to write games and other high performance software.


I am also from that generation.

I really don't get how people are scared of low level programming, computer languages don't bite, don't attack on dark street corners, don't whatever.

Yes, bad programs might corrupt some data, but pulling the plug with the computer on might do the same and no one thinks twice about it.

Back in the day we didn't have tooling like Compiler Explorer to understand how BASIC would map to Assembly, only trial and error, alongside pen and paper.

Anyone that wants to learn Assembly nowadays can jump there, click on the high level constructs and see how they map to Assembly instructions for their favourite CPU.


On the other hand, the architecture back then was pretty simple. Plus there is a big OS in the middle of everything; you cannot really talk to the hardware directly nowadays. Back in the Apple ][ and DOS days it was mandatory.

That's why I have always been thinking that people actually should still start from an Apple ][/C64/80286/NES/SNES/GB these days, or maybe just an embedded system for cheaper introductory tuition/quicker employment. You ignore all the bells and whistles of hardware accumulated over the last 30 or so years and start from bare metal and real mode, and gradually crawl up from there. But without ignoring the bells and whistles of the software tools people have built for these platforms. I bet nowadays we have better tools for developing on these platforms. As another commenter said, we can spin up a VM or emulator and directly look at memory and registers.


If anything, virtual machines have made this so much easier than before; it's like having a $100K CPU emulator on every desktop. You can look straight into the CPU, see all of the registers change in lockstep and the effect on memory. I'd have happily killed for that in the 80's.


Assembly was pretty much the only way to break the speed barrier that BASIC imposed. It started out as 'the' language, then after a while you'd realize there was the computer underneath BASIC that could do more and was faster, but also a bit scary. Then, after you mastered peek and poke and bit by bit learned more about what made the computer tick one day you'd wake up and realize that all you still used BASIC for was to load machine language programs.

That gradual ramp made it work.


Yep. Plus nowadays people have so many selections and it doesn't really make sense to use assembly for production unless in rare cases.


Wow, that is when I started in computing, with a Timex 2068; naturally ray tracing was not even on my radar at the time.

Did it also do the classical line based rendering?


It was the most stupid and naive implementation possible, classic reverse raytracing starting from a distance from the center of the virtual camera plane and branching a few times to get the basics of reflection and a light model to work, then adding other features bit by bit. I don't really remember all the details, but I do remember that it was agonizingly slow on the CPU, the DSP made all the difference and allowed me to add a couple of features. The main loop was on every pixel of the virtual camera plane. I no longer have that code (or the DSP, for that matter) but it was a fun exercise and it taught me a lot of matrix/vector math.

The dithering code was also interesting, because my graphics card could not do true color I put 3x64 RGB shades in the palette and then cranked up the brightness for tetris like sets of pixels to simulate a true color display. Worked pretty well.

This was the DSP board:

https://www.signalogic.com/images/Ariel_DSP32-PC.jpg

I managed to gently overclock it to 24 MHz without problems, it did get a bit warmer but nothing too bad.


Very interesting, thanks for sharing.


Looks like the engine is missing the fish-eye correction in the ray-cast calc. I love writing these engines for fun :)


Fish eye is a feature! ;) https://strlen.com/gfxengine/fisheyequake/

I noticed it in the examples too. I'm speculating wildly that the ray cast uses even angular steps rather than a pinhole projection. Totally reasonable to not correct it, IMO; I've long thought we should have more non-linear cameras in games.


That's exactly what it's doing:

    let rayAngle = player.angle - player.fieldOfView / 2;
    for(let rayCount = 0; rayCount < screen.width; rayCount++) {

      // ... SNIP ...

      // the ray moves at constant increments
      let rayCos = Math.cos(degreeToRadians(rayAngle)) / precision;
      let raySin = Math.sin(degreeToRadians(rayAngle)) / precision;

      // ... SNIP ...

      // increment the angle for the next ray
      rayAngle += incrementAngle;
    }
It's also using Euclidean distance rather than planar distance for the apparent wall height calculation.
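For reference, the usual fix is to project the hit distance onto the view direction before computing the wall height; a minimal sketch (angles in radians, names hypothetical, not the Rustenstein code):

    // Multiplying by cos(ray_angle - player_angle) turns the Euclidean hit
    // distance into the perpendicular ("planar") distance, which removes the
    // fisheye bulge produced by casting rays at even angular steps.
    fn wall_column_height(euclidean_dist: f32, ray_angle: f32,
                          player_angle: f32, screen_height: f32) -> f32 {
        let perp_dist = euclidean_dist * (ray_angle - player_angle).cos();
        // height is inversely proportional to distance (times whatever
        // wall-size constant the engine uses)
        screen_height / perp_dist
    }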


If you generate the rays the linear way instead, you don't even need any correction.

Generate two points which represent the left and right edges of the screen - you'd put them in at say 45 degrees left and right of the forward vector of the player. Then to generate the direction vector for each column of the screen, just interpolate linearly between those two points, and find the vector from the player to that intermediate point.
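A minimal sketch of that idea (the lodev-style "camera plane" method; names are illustrative, not from the Rustenstein code):

    // dir is the player's forward unit vector; plane is perpendicular to it,
    // and its length sets the FOV (plane length 1.0 against a unit dir gives a
    // 90-degree FOV, ~0.66 gives the usual ~66 degrees).
    fn ray_directions(dir: (f32, f32), plane: (f32, f32), screen_width: usize) -> Vec<(f32, f32)> {
        (0..screen_width)
            .map(|x| {
                // camera_x sweeps linearly from -1 (left edge) to +1 (right edge)
                let camera_x = 2.0 * x as f32 / screen_width as f32 - 1.0;
                (dir.0 + plane.0 * camera_x, dir.1 + plane.1 * camera_x)
            })
            .collect()
    }

Because the rays are spread linearly across the screen plane rather than at equal angles, the distance that falls out of the grid-intersection math is already the perpendicular one, so no cosine correction is needed.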


The venerable lodev tutorial uses this method, which I also used for most of my engines. I learned an interesting tidbit while comparing the two methods though:

The old-school original methods used pretty small cos/sin/atan lookup tables to do the ray and then the correction calc. Using the linear method you end up with a couple divisions per ray that aren't there in the lookup method. Divisions were (and are, depending on the platform) pretty slow. Linear method still works with lookup tables but they're relatively huge.

Also, IIRC, with the linear method door indents need a workaround.
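For what it's worth, a sketch of the lookup-table idea mentioned above: precompute sin/cos once at some fine angular granularity, so the per-ray work is just an index and a multiply, with no divisions (sizes and names are illustrative, not the original engine's):

    use std::f32::consts::TAU;

    const FINE_ANGLES: usize = 3600; // tenth-of-a-degree steps

    fn build_tables() -> (Vec<f32>, Vec<f32>) {
        (0..FINE_ANGLES)
            .map(|i| {
                let a = i as f32 * TAU / FINE_ANGLES as f32;
                (a.sin(), a.cos())
            })
            .unzip()
    }

    // Per-column ray direction: just index with (player angle + column offset).
    fn ray_dir(sin_t: &[f32], cos_t: &[f32], player_fine: usize, column_offset: isize) -> (f32, f32) {
        let idx = (player_fine as isize + column_offset).rem_euclid(FINE_ANGLES as isize) as usize;
        (cos_t[idx], sin_t[idx])
    }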


Is there a limit when looking at retro games where the retro-ness becomes a burden rather than "old games are simpler"? Obviously if you move too far back in history you end up with basically game-specific hardware and long-dead assembler code. In the original Wolfenstein it seems there is a bunch of code managing weirdness that we no longer care about (paging, legacy graphics handling) which obscures the actual game.

Was there a "peak" in simplicity, when games were at their simplest for a reader from 2022? That is, they are modern enough to not be obscured by historical weirdness yet still simple enough to be approachable? Perhaps Doom is simpler to understand than Wolfenstein for this reason?


The trick isn’t to look at the complete game code which will always have platform specific code but rather look at the tricks used by the game engines that are conceptual. Or in other words, look at the design rather than the implementation.

This is true even for modern games, because you might implement something differently for a mobile game than you might for a high end PC.


You could look at one of the ports of Wolfenstein. Perhaps the SNES version? ;-)


Saw this recently too: https://mattiasgustavsson.itch.io/dos-like

"dos-like is a programming library/framework, kind of like a tiny game engine, for writing games and programs with a similar feel to MS-DOS productions from the early 90s."


Adding my own raycasting implementation, in... what else... React/JavaScript -> https://huth.me/raycast/


Need a map editor now :)


There was a more recent Kickstarter that got my interest in the DOS style games restarted: https://www.kickstarter.com/projects/eniko/coding-history-3d...


I'm thinking about exactly the same thing, but a broader one: call it Carmack's road, from Shadowforge on the Apple ][ to maybe Quake I, all developed on the original platforms (or emulators). Of course not as a tutorial series but as a learning path for myself for the next maybe 5-8 years.


This sounds very interesting! I don't have anything more productive to add, other than that I would love to read/see/watch/play any output a project like this had.


I wonder how practical it would be to create a Wolfenstein 2D with the same level design, but played in top-down view.



I had the same idea while working on this project. Either play directly from the map view (the prototype sort of already does this) or even use a roguelike ASCII interface (which, yes, would probably be similar to the Silas Warner game).


Not a roguelike, but hunt/huntd is like wolf2d.


heresy!

cough

Castle Wolfenstein, by Silas Warner for MUSE

kids: get off my lawn


Yeah, so many comments here talking about the 'original' Wolfenstein while meaning Wolfenstein 3D...

This is where it all started:

https://www.youtube.com/watch?v=8fgok9eHqO8


in color? luxury! we had black and white and we were grateful for it!

kids: get off my lawn

the grenade explosion was the most satisfying ever. "Take that Nazi scum!"

the 1st Rogue-like I ever made was something I called WolfenHack. my personal cross between Castle Wolfenstein and NetHack. I later genericized my engine and built a zombie apocalypse on top of it.


"a “Carmack” compression, which is John Carmack’s variant of the LZ (Lempel-Ziv) method. According to the Black Book, without much access to the literature, Carmack would “invent” an algorithm to later find out that someone else had done it before.". Wow, he really is amazing.


I'm guessing the infamous inverse square root algorithm was originally used for the ray-casting described in the article?

Fast Inverse Square Root: https://news.ycombinator.com/item?id=24959157

Excellent article btw.


Fast inverse square root was part of the lighting calculations in Quake III IIRC. Nothing so obtuse is required for the simple ray-casting demonstrated here.


this project is a pseudo-remake of Wolfenstein 3-D; Fast Inverse Square Root as we know it wasn't in an id game until much later with Quake III Arena
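For the curious, the trick being referenced is small enough to quote; a Rust transliteration of the well-known Quake III routine (the 0x5f3759df magic constant plus one Newton-Raphson step), which, as noted above, this raycaster doesn't need:

    fn fast_inv_sqrt(x: f32) -> f32 {
        let half = 0.5 * x;
        let i = 0x5f3759df_u32 - (x.to_bits() >> 1); // bit-level initial guess
        let y = f32::from_bits(i);
        y * (1.5 - half * y * y) // one Newton-Raphson iteration to refine it
    }

    // e.g. fast_inv_sqrt(4.0) is roughly 0.5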


A direct link to the source code, for those interested: https://github.com/AdRoll/rustenstein


It's a shame they included the original shareware resource files in there without any mention that they are NOT covered by the MIT licence that the repository is under.


I guess using Rust kind of fits the boots of Turbo Pascal for 1992.


Turbo Pascal is small, fast and has a clean syntax. Rust fits in none of those shoes (yet, I suppose).


In fact Turbo Pascal (and later Turbo C) was so fast that initially I thought there was something wrong with the compiler, that's how quickly it compiled a program. CTRL-F9 and off to the races.


And it also did a very cool thing, break on first error.

No need for compiler vomit caused by a single error.


It fits in having a var : type syntax; naturally it isn't Turbo Pascal 6.0 with the Turbo Vision IDE, given 1992.

I still hope that alternative backends like Cranelift will fix the compilation speed.


Rust isn't fast? O_O . The others I can see...


Turbo Pascal was famous for compiling projects quickly. The current rustc compiler is not. The output of the rustc compiler can be better optimized for run time than the output of Turbo Pascal, though.


Yes, the compilation speed wasn't something I had in mind when I made the remark; maybe I should have refrained from trying to be clever.

Anyway maybe when Cranelift gets more mature.


see also: a series of posts (with accompanying code) on rewriting the doom engine from scratch:

https://github.com/amroibrahim/DIYDoom


Is there anything like this for voxels (what's used in Roblox, Minecraft, etc.)?


There are a few articles on the 0fps blog: https://0fps.net/2012/01/14/an-analysis-of-minecraft-like-en...

Readable implementation (in C) is this: https://github.com/fogleman/Craft


There is a very simple voxel terrain renderer: https://github.com/s-macke/VoxelSpace


Isn't Minecraft a plain old polygon renderer?


Very large textured voxels!


I'm a sucker for wolf 3d technology... Great job!!!



