
> I'd expected some AAA title to be written in Rust by now.

I'm disinclined to believe that any AAA game will be written in Rust (one is free to insert "because Rust's gamedev ecosystem is immature" or "because AAA game development is increasingly conservative and risk-averse" at their discretion), yet I'm curious what led you to believe this. C++ became available in 1985, and didn't become popular for gamedev until the turn of the millennium, in the wake of Quake 3 (buoyed by the new features of C++98).




Lamothe's Black Art book came out in '95. Abrash's black book came out in '97.

Borland C++ was pretty common and popular in '93, and we even had some not-so-great C++ compilers on the Amiga in '92/'93 that saw some use in gamedev.

SimCity 2000 was written in C++, way back in '93 (although they started with Cfront).

An absolute fuckton of shareware games I was playing in the 90s were built with Turbo C++.


Kind of true; however, those games had endless amounts of inline assembly, as shown in the Black Book as well.

I know of at least one MS-DOS game, published in the Portuguese Spooler magazine, that used Turbo C++ basically as a macro assembler.

One of the PlayStation's selling points for developers was being the first home console with a C SDK, while SEGA and Nintendo were still doing assembly; C++ support only came later, with the PlayStation 2.

While I agree C++, BASIC, Turbo Pascal, and AMOS were used a lot, especially in the demoscene, they were our Unity from the point of view of successful game studios.


I also remember from the videogame magazines I was reading back in the early 90s that another C++ compiler that was a favourite among devs was Watcom C++, released in '88.


That doesn't mean that it was used primarily with C++ though. IIRC Watcom C/C++ mainly became popular because of Doom, and that was written in C (as were all id games until Doom 3 in 2004 - again IIRC).

The actual killer feature of Watcom C/C++ was not the C or C++ compiler, but its integration with DOS4GW.


Btw, I don't remember Turbo C or Borland C++ being able to compile to 32-bit x86 on DOS.


Borland C++, Microsoft C/C++, and GCC (DJGPP[1]) could all target 32-bit extended DOS, but Watcom was the first[2] to bundle a royalty-free DOS extender[3].

[1] https://news.ycombinator.com/item?id=39038095

[2] https://www.os2museum.com/wp/watcom-win386/

[3] https://en.wikipedia.org/wiki/DOS_extender


OMG, the name "Watcom" just opened a flood of nineties memories of the demo scene for me. Thanks for mentioning.


I really hope that C++ evolves with gamedev and they become more and more symbiotic.

Maybe adoption of Rust by the gamedev community isn't the best thing to wish upon the language. Maybe it is better to let another crowd steer the evolution of Rust, letting systems programming and gamedev drift apart.


I think I don't know a single gamedev who's fond of "modern C++" or even the C++ stdlib in general (and stdlib changes are what most of "modern C++" is about). The last good version was basically C++11. In general the C++ committee seems to be largely disconnected from reality (especially now that Google seems to be doing its own C++ successor; but even before, Google's requirements were entirely different from gamedev requirements).


C++17/20 are light-years beyond C++11 in terms of ergonomics and usability. Metaprogramming in C++11 is unrecognizable compared to C++20; that's how much things have improved. I hated C++ before C++11, but now C++11 feels quite legacy compared to even C++17. The ability to write almost anything, like a logging library, without C macros is a huge improvement for maintainability and robustness.

Most of the features in modern C++ are designed to enable writing really flexible and highly optimized libraries. C++ rarely writes those libraries for you.


Heh, mentioning metaprogramming and logging is not exactly how you convince anybody of superior ergonomics and usability.


Metaprogramming is required to get type-safe, easy-to-use code. The problem with most template code is that the implementation gets horrendously complicated, but for the user it can create A LOT of comfort. At work, for example, I wrote a function that calls an RPC method, and it has a few neat features like:

An rpc call with a result looks like this:

call(<methodinfo>, <param>, [](Result r) {});

vs one which returns void:

call(<methodinfo>, <param>, []() {});

It's neat that the callback reflects that, but this wouldn't be possible without some compiletime magic.


It convinced me


Hi, I'm a game developer and I'm fond of "modern C++" and the stdlib. Sure, I would like some priorities to be different (i.e. we should have had static reflection a while ago), but it's still moving in the right direction.

Particularly the idea that "the last good version was basically C++11" is exactly what I would expect to hear from someone who reads a few edgy articles on the internet but has no actual in-depth experience working with the language. C++14 and 17 are, for a large part, plain ergonomic upgrades over C++11, with lots of minor but impactful additions and improvements all over. I can't even think of anything in those two versions that would be sufficiently controversial to make anyone prefer C++11 over them, or call it the "last good version".

C++20 is obviously a larger step, and does include a few more controversial changes, but those are completely optional (and I don't expect many of them to be widely adopted in gamedev for a decade at least, even though for some I wish it went more quickly).


> stdlib changes is what most of "modern C++" is about). the last good version was basically C++11.

I can only comment on this like so: tell me you have no idea about the current state of C++ without telling me you have no idea about the current state of C++.


Then let's hear some counterexamples please. As far as I'm aware, the last important language change since C++11 was designated init in C++20, and that's been butchered so much compared to C99 that it is essentially useless for real-world code.


There are a whole bunch of features and fixes that each new version of the standard introduced, which significantly improved the usability, expressiveness and convenience of the language. Describing many of them could easily take an hour. I'm sorry, I can only highlight a few of my particular favourites that I regularly use and let you study the remaining changes.

https://en.cppreference.com/w/cpp/14

- fixed constexpr, which in C++11 was basically unusable

- great improvements for metaprogramming, such as variable templates and generic lambdas, which made such gems as `boost::hana` possible

- function return type deduction

https://en.cppreference.com/w/cpp/17

- inline variables, which finally fix the biggest pain of developing header-only libraries

- useful noexcept fix

- if constexpr + constexpr lambdas

- structured bindings

- guaranteed copy elision

- fold expressions

I'm in automotive, where due to safety requirements we only just started to work with C++17, so I don't have much practical experience with the standards past it, though I'm aware there are great updates there too. Overall, C++11 is as horrible compared to C++17 as C++98 (and roughly C++03) were compared to the then-groundbreaking C++11. Personally, when I skim through job vacancies and see they are stuck at C++11, I pass. Even C++14 makes me very sceptical, even though I used it a lot - all due to the nice new improvements of C++17.

https://en.cppreference.com/w/cpp/20

https://en.cppreference.com/w/cpp/23


Ok, I'll give you fold expressions and structured bindings as actually important language updates. The rest are mostly just tweaks that plug feature gaps which shouldn't have existed in the first place when the basic feature was introduced in C++11 or earlier.

IMHO by far most things which the C++ committee accepts as stdlib updates should actually be language changes (like for instance std::tuple, std::variant or std::ranges). Because as stdlib features those things make C++ code more and more unreadable compared to "proper" syntax sugar (Rust suffers from the exact same problem, btw).


They missed concepts and modules, which are also C++20 features; modules are just not properly supported (yet). Concepts are a massive QoL feature, and modules might help with compile times.

> IMHO by far most things which the C++ committee accepts as stdlib updates should actually be language changes

From my experience that's not how the C++ committee works. They generally decompose requested features into the smallest building blocks, include just those in the language, and let the rest be handled by the stdlib.

The thing that makes C++ unreadable in my opinion is template code and the fact that the namespace system sucks and just leads to unreadably long names (std::chrono::duration_cast<std::chrono::milliseconds>(.....)).




You should probably tone down your speech and lay off the patronizing attitude, no matter how well justified your arguments are.


Oh I followed the C++ standardization process quite closely for about 15 years up until around C++14 and still follow it from the sidelines (having mostly switched back to C since then), and I'm fully aware of the fact that C++ has designed itself into a complexity corner where it is very hard to add new language features (after all, C++ has added more new problems that had then to be fixed in later standards than it inherited from C in the first place).

I still think the C++ committee should mainly be concerned about the language instead of shoehorning stuff into the stdlib, even if fixing the language is the harder problem.

And I can't be alone in this frustration, otherwise Carbon, Circle and Herb Sutter's cppfront wouldn't have happened.


It's even worse than that, because even if a new proposal had no concerns from a language & library point of view, it can still be crippled by vendor concerns because of short-sighted, entirely unforced errors the vendors made, often decades prior.

It's part of why I don't believe the C++-compatible C++-successor languages will deliver on their promises nearly as well as they think. They only solve half of the problem, which is that their translation units don't have to accommodate legacy C++ syntax.

They still have to reproduce existing C++ semantics and ABIs, their types still have to satisfy C++ SFINAE and Concepts, etc. so they're bringing all of the semantic baggage no matter what new syntax they dress it in.

And anywhere they end up introducing new abstractions to try to enforce safety, those will be incompatible with C++ enough to require hand-crafted wrappers, just like we already do with Rust, only Rust is much further along its own maturity and adoption curve than those languages are.


A practical example of C++14's constexpr + variable template fixes, and why this was important: a while ago I wrote a wrapper over a compile-time fixed-size array that imposed a configurable compile-time tensor layout on it. Basically, it turned a linear array into any matrix, or a 3D or 4D or whatever-D tensor, and allowed working with it efficiently already at compile time. There was obviously constexpr construction + constexpr indexing + some constexpr tensor operations. In particular, there was a constexpr trace operation for square matrices (a sum of the elements on the main diagonal, if I'm not mistaken).

I decided to showcase the power of constexpr to some juniors on the team. I thought that since the indexing operation is constexpr, computing the matrix trace would require the compiler to just take elements of the matrix at addresses precomputed at compile time, which would show up in the disassembly as memory loads from fixed offsets (without computing those offsets at runtime, since the matrix layout is fixed at compile time and index computation is a constexpr operation). So I quickly wrote an example, compiled it with asm output, and looked at it... It was a facepalm moment - I had forgotten that trace() was also constexpr, so instead of doing any runtime computation at all, the code just had the already-computed trace value as a constant in a register. How is that not cool? Awesome!

Such things are extremely valuable, as they allow writing much more expressive, easy-to-understand and maintainable code for entities known at compile time.


I sometimes wonder if the problem with rust is that we have not yet had a major set of projects which drive solutions to common dev problems.

Go had google driving adoption, which in turn drove open source efforts. The language had to remain grounded so as not to interfere with the job of building back-end services.

Rust had mozilla/servo which was ultimately unsuccessful. While there are more than a few companies using Rust for small projects with tough performance guarantees, I haven't seen the "we manage 1-10 MM sloc of complex code using rust" type projects.


Microsoft is rewriting quite a bit of their C# in Rust for performance reasons, especially within their business line products. Rust has also become rather massive in the underlying tech of telecommunications infrastructure in several countries.

So I'm not sure that your take is really so on point, especially as far as comparing it with Go goes (heehee) - at least not in terms of 3rd party libraries, where most of the Go ecosystem seems to be either maintained by one or two people or abandoned once those two people got new jobs. I think Go is cool by the way, but there is a massive difference in the maturity of the sort of libraries we looked into using during our PoCs.

Anyway. A lot of Rust adoption is a little quiet, and well, rather boring. So maybe that’s why you don’t hear too much about it.


Quiet adoption often means that a couple people in a company chose to invest in at least a small effort. It's unknown if those people would do it again, and they are unlikely to invest 2-3 devs to improve the rust library and language ecosystem.

Major adoption gets you tools like guice, 50+ person tools teams, and more.


Microsoft rewrote one, maybe two microservices - that was driven by a lead interested in using Rust - and is rewriting parts of the NT kernel (way more important).


It’s much more than that, even now they are continuously opening job postings with a focus on re-writing the 365 platform from C# to Rust.


It’s a bad habit to read too much into a single job posting.

(oh, I remember now, it’s the account traumatized by odata)


I’m not sure why you’re trying to make it seem like Microsoft isn’t rewriting the core of their 365 business products from C# to Rust, but you do you I guess.

As far as I’m aware I was never traumatised by OData. It’s true that I may have ranted about the sorry state of the public packages available outside of C# or Java. Not unwarranted criticism I think, but I wrote our own internal adaptation which now powers basically all our API clients for Typescript as a single shared no-dependency library.

But you seem to think you know me? Have we met?


Alright, if not for that one job posting, I’m curious where you are getting this information from?


I really think the problem of Rust is the borrow checker. Seriously. It is good, but it is overkill. You have to plan everything around it, and it discourages a lot of patterns or makes them really difficult to refactor.

I would encourage people to look at Hylo's object model and mutable value semantics. I think something like that is far better, more ergonomic and very well-performing (in theory at least).


You can use unsafe code and pointers if you really want, but code will be unsafe, like C or C++.


Look at Hylo. Tell me what you think. You do not need all that juggling. Just use value semantics with lazy copying. The rest is handled for you. Without GC. Without dangling pointers.


TBF, unsafe Rust still enforces much more correctness than C or C++ (Rust's "unsafety" is more similar to Zig than C or C++).


TBF this is not really true. Unsafe Rust is a lot harder than comparable C/C++, because it must manually uphold all safety invariants of Safe Rust whenever it interacts with idiomatic Rust code. (These safety invariants are also why Safe Rust can often be compiled into better-optimized code than the idiomatic C/C++ equivalent.)


By more enforced correctness of Rust (also unsafe Rust) I mean small details like Rust not allowing implicit conversion between integer types. That alone eliminates a pretty big source of hidden bugs both in C and C++ (especially when assigning a wider to a narrower type, or mixing signed and unsigned integers).

All in all I'm not a big fan of Rust, but details like this make a lot of sense (even if they may appear a bit draconian at first) - although IMHO Zig has a slightly better solution by allowing implicit conversions that do not lose information. E.g. assigning a narrower to a wider unsigned integer type works, but not the other way around.


I wonder if Rust is killing flies with cannons (as we say in Spanish). There are perfectly safe alternatives, or very safe ones.

Even in a project coded in modern C++ with async code included, with all warnings activated (it is a card game), I found two segfaults in almost 5 years... It can happen, but it is very rare, at least with my coding patterns.

The code is in the tens of thousands of lines, I would say - not 100% sure, I will measure it.

Is it that bad to put a shared pointer here and there, stick to unique pointers, and try not to escape references? This is what I do, and I use spans and string views carefully (you must with those!). I stick to the rule of zero. With all that, it is not that difficult to have mostly safe code, in my experience. I just use safe subsets except in a handful of places.

I am not saying C++ is better than Rust. Rust is still safer. What I am saying is that an evolution of the C++ model is much more ergonomic and less viral than this ton of annotations with a steep learning curve where you spend a good deal of your time fighting the borrow checker. So my question is:

- when does it stop being worth it to fight the borrow checker and just replace it with some alternative, even smart pointers here and there? Because it seems to have a big viral cost and refactoring cost, besides preventing valid patterns.


> What I am saying is that an evolution of the C++ model is much more ergonomic and less viral than this ton of annotations with a steep learning curve where you spend a good deal of your time fighting the borrow checker. So my question is:

That "evolution of the C++ model" (the C++ Core Guidelines) has an even steeper learning curve than Rust itself, and even more invasive annotations if you want to apply it across the board. There is no silver bullet, and Rust definitely has the more principled approach to these issues.


I'm not answering your question here, just giving my opinion on C++ vs Rust. I think that the big high-level difference (before diving into details like ownership and the borrow checker) is that C++'s safety is opt-in, while Rust's safety is opt-out. So in C++ you have to be careful each time you allocate or access memory to do it in a safe way. If you're working in a team, you all have to agree on the safe patterns to use and check that your team members are sticking with them during code reviews. Rust takes this burden from you, at the expense of having to learn how to cooperate with the borrow checker.

So, going back to your question, I think that the answer is that it depends on many factors, including also some non-strictly-technical ones like the team's size.


An evolution of the C++ model could be something like Hylo. Hylo is safe. Hylo does not need a borrow checker. Hylo does not need a garbage collector.

That is what I mean by evolution. I do not mean necessarily C++ with Core Guidelines.


I think you replied to the wrong reply.


Unsafe Rust is not harder or less safe than C/C++. If you can uphold all safety invariants for C/C++ code (OMG!), then it will be easier to do the same thing for unsafe Rust, because Rust has better ergonomics.


Better ergonomics for what? For refactoring with a zillion lifetime annotations? Annotations go viral down the call stack. That is a headache. Not useless - I know it is useful. Just a headache, a price to pay. For linked structures? For capturing an exception?

No, it is not more ergonomic. It is safer. That's it.

And some parts of that enforcement via this model is terribly unergonomic.


? I believe the Rust efforts in Firefox were largely successful. I think Servo was for experimental purposes and large parts were then added to Firefox with Quantum: https://en.wikipedia.org/wiki/Gecko_(software)#Quantum


My recollection was that those were separate changes - servo didn’t get to the stage where it could be merged, but it was absolutely the plan to build a rendering engine that outperformed every other browser before budget cuts hit.


We did port Servo’s WebRender to Firefox and shipped it everywhere. The only caveat is that it took multiple years of upgrades, fixes, and rewriting it.


It would be interesting to have a postmortem of what went well, what went wrong, etc. for this initial effort.

I believe work continues now somewhere else, but it would be absolutely nice to know more about the experience of others.


> Go had google driving adoption

This is commonly said but I think it's only correct in the sense that Google is famous and Google engineers started it.

Google never drove adoption; it happened organically.


> Rust had mozilla/servo which was ultimately unsuccessful.

There's lots of Rust code in Firefox!

> I haven't seen the “we manage 1-10 MM sloc of complex code using rust” type projects.

Meta has a lot of Rust internally.

The problems with Rust for high-level indie game dev logic, where you're doing fast prototyping, are very specific to that domain, and say very little about its applicability in other areas.


Servo is an ongoing project, it has not "failed" or been unsuccessful in any sense.


I think the original poster is perhaps speaking to previous articles (i.e. https://news.ycombinator.com/item?id=39269949) which, from the outside looking in, made me feel that perhaps this in fact was the case (at least for a period).


Exactly, it's all about the ecosystem and very little about the language features


Kind of both, in my opinion. But Rust brings nothing to the table that games need.

At best Rust fixes crash bugs, not the usual logic and rendering bugs that are far more involved and plague users more often.


The ability of engines like Bevy to automatically schedule dependencies and multithread systems, which relies on Rust's strictness around mutability, is a big advantage. Speaking as someone who's spent a long time looking at Bevy profiles, the increased parallelism really helps.

Of course, you can do job queuing systems in C++ too. But Rust naturally pushes you toward the more parallel path with all your logic. In C++ the temptation is to start sequential to avoid data races; in systems like Bevy, you start parallel to begin with.


Aside from a physics simulation, I'm curious as to what you think would be a positive cost benefit from that level of multithreading for the majority of game engines. Graphical pipelines take advantage of the concept but offload as much work as possible to the GPU.


We were doing threading beyond that in 2010, you could easily have rendering, physics, animation, audio and other subsystems chugging along on different threads. As I was leaving the industry most engines were trending towards very parallel concurrent job execution systems.

The PS3 was also an interesting architecture(i.e. SPUs) from that perspective but it was so distant from the current time that it never really took off. Getting existing things ported to it was a beast.

Bevy really nails the concurrency IMO (having worked on AA/AAA engines in the past); it's missing a ton in other dimensions, but the actual ECS + scheduling APIs are a joy. The last "proper" engine I worked on was a rat's nest of concurrency in comparison.

That said, as a few other people pointed out, the key is iteration, hot-reload and other things. Given the choice I'd probably do (and have done) a Rust-based engine core where you need performance/stability, and some dynamic language on top (Lua, quickjs, etc.) for actual game content.


> That said as a few other people pointed out, the key is iteration, hot-reload and other things. Given the choice I'd probably do(and have done) a Rust based engine core where you need performance/stability and some dynamic language on top(Lua, quickjs, etc) for actual game content.

I fully agree that this will likely be the solution a lot of people want to go with in Bevy: scripting for quick iteration, Rust for the stuff that has to be fast. (Also thank you for the kind words!)


Yeah, it's a fairly clean and natural divide. You see it in most of the major engines, and it was present in all the proprietary engines I worked on (we mostly used Lua/LuaJIT, since this predated some great recent options like quickjs).

We even had things like designers writing scripts for AI in literate programming with Lua using coroutines. We fit code + runtime into 400kb of space using Lua on the PSP (man, that platform was a nightmare, but the scripting worked out really well).

Rust excels when you know what you want to build, and core engine tech fits that category pretty cleanly. Once you get up in game logic/behavior that iteration loop is so dynamic that you are prototyping more than developing.


In big-world high-detail games, the rendering operation wants so much time that the main thread has time for little else. There's physics, there's networking, there's game movement, there's NPC AI - those all need some time. If you can get that time from another CPU, rendering tends to go faster.

I tend to overdo parallelism. Load this file into a Tracy profile, version 0.10.0, and you can see what all the threads in my program are doing.[1] Currently I'm dealing with locking stalls at the WGPU level. If you have application/Rend3/WGPU/Vulkan/GPU parallelism, every layer has to get it right.

Why? Because the C++ clients hit a framerate wall, with the main thread at 100% and no way to get faster.

[1] https://animats.com/sl/misc/traces/clockhavenspeed02.tracy


Animations are an example. I landed code in Bevy 0.13 to evaluate all AnimationTargets (in Unity speak, animators) for all objects in parallel. (This can't be done on GPU because animations can affect the transforms of entities, which can cause collisions, etc. triggering arbitrary game logic.) For my test workload with 10,000 skinned meshes, it bumped up the FPS by quite a bit.


"Fearless concurrency"


C++ classes with inheritance are a pretty good match for objects in a 3D (or 2D) world, which is why C++ became popular with 3D game programmers.


This is not at all my experience.

What I have experienced is that C++ classes with inheritance are good at modeling objects in a game at first, when you are just starting and the hierarchy is super simple. Afterwards, it isn't a good match. You can try to hack around this in several ways, but the short version is that if your game isn't very simple, you are better off starting with an Entity Component System setup. It will be more cumbersome to use than the language-provided features at first, but the lines cross very quickly.


I like the Javascript way of objects just having fully mutable keys/values like dictionaries, with no inheritance or static typing.


Hmm, no, not really in my experience. Even the old "Entities and Components" system in Unity was better, because it allowed composing GameObject behaviour by attaching Component objects, and this system was often replicated in C++ code bases until it "evolved" into ECS.


This is how I feel about golang and systems programming. The strong concurrency primitives and language simplicity make it easier to write and reason about concurrent code. I have to maintain some low level systems in python and the language is such a worse fit for solving those problems.


Yeah, OOP makes sense for games. The language will matter a bit for which one takes off, but anything will work given enough support. Like, Python doesn't inherently make a lot of sense for data processing or AI, but it's good enough.


OOP kind of goes out the window when people start using entity component systems. Of course, like the author, I'm not sure I'll need ECS since I'm not building a AAA game.


Had to look up ECS to be honest, and it's pretty much what I already do in general dev. I don't care to classify things, I care what I can do with something. Which is Rust's model.


Interfaces or traits are not ECS though. ECS is mostly concerned with how data is laid out in memory for efficient processing. The composability is (more or less) just a nice side effect.


This is correct. I wonder how Rust models SoA with borrowing. Is it doable, or does it become very messy?

I usually have some kind of object that superficially looks like OOP but points all its features to the SoA. All that would be borrowing and pointing somewhere else into slices or similar in Rust, I assume?


AFAIK tagged-index-handles are typically used for this (where the tag is a generation-counter to detect 'dangling handles'), which more or less side-steps the borrow checker restrictions (e.g. see https://floooh.github.io/2018/06/17/handles-vs-pointers.html).


Sorry I got lost in that sentence. What is Rust's model?


Rust has traits on structs instead of using inheritance. Aka composition.


Even PHP has traits by now. Languages tend to incorporate other languages' successful features. There is of course a risk of feature inflation. Some languages take avoiding that inflation as a goal, such as Zig, or arrive there as a byproduct of being very focused on a specific use case, like AWK.


AFAIK composition, in the traditional sense, means that you put your objects/concepts together from different smaller objects or concepts. Composition would be to have a struct Car that uses another struct called Engine to handle its driving needs. A car “has a” engine. A trait that implements the “this thing has an engine” behavior isn’t composition, it’s actually much closer to [multiple] inheritance (a car “is a” motorized vehicle).


Traits do implement interface inheritance, but that doesn't have the same general drawbacks as implementation inheritance (such as the well-known "fragile base class" problem).


I don't know the terminology. I just know that Rust does whatever the alternative is to the Java way with inheritance. You don't get stuck with the classic classification problem.


But that... wasn't in your comment at all...

If I say "I don't care about safety, I care about expressiveness. Which is Rust's model"... "which" has to refer to one of the other things I just mentioned (safety or expressiveness) not some other concept.


You can also have structs be generic over some "tag" type, which when combined with trait definitions gets you quite close to implementation inheritance as seen in C++ and elsewhere. It's just less common because usually composition is all that's required.


To be clear, the reason why Python is so popular for data wrangling (including ML/AI) is not the language itself. It is the popular extensions (libraries) written mostly in C and C++! Without these libraries, no one would bother with Python for these tasks. They would use C++, Java, or .NET. Hell, even Perl is much faster than Python for data processing using only the language and no native extensions.


Python makes sense because of accessibility and general comfort for relatively small code bases with big data sets.

Those data scientists, at least in my experience, are more into math/business than into the most efficient programming.

Or at least that was the situation at first, and it stuck.


Disagree: the adoption of C++ was more about Moore's law than the ecosystem, although having compilers that were beginning to not be completely rubbish also helped.


Also C++ could be adopted incrementally by C developers. You could use it as “C with classes”, or just use operator overloading to make vector math more tolerable, or whatever subset that you happened to like.

So there’s really three forces at play in making C++ the standard:

1) The Microsoft ecosystem. They literally stopped supporting C by not adopting the C99 standard in their compiler. If you wanted any modern convenience, you had to compile in C++ mode. New APIs like Direct3D were theoretically accessible from C (via COM) but in practice designed for C++.

2) Better compilers and more CPU cycles to spare. You could actually count on the compiler to do the right thing often enough.

3) Seamless gradual adoption for C developers.

Rust has a good compiler, but it lacks that big ticket ecosystem push and is not entirely trivial for C++ developers to adopt.


I'd say Rust does have that big ticket ecosystem push. Microsoft has been embracing Rust lately, with things like official Windows bindings [1].

The bigger problem is just inertia: large game engines are enormous.

[1]: https://github.com/microsoft/windows-rs


Repo contributor here, just to curb some expectations a bit: it's one very smart guy (Kenny), his unpaid volunteer sidekick (me), and a few unpaid external contributors. (I'm trying to draw a line between those with and without commit access, hence all the edits.)

There's no other internal or external Microsoft /support/ that I'm aware of. I wouldn't necessarily use it as a signal of the company's intentions at this time.

That said, there are Microsoft folks working on the Rust compiler, toolchain, etc. side of things too. Maybe those are better indicators!


That's disappointing on Microsoft's part, because their docs make it seem like windows-rs is the way of the future.

Thanks for your work, though!


Don't be, they also killed C++/CX, and even went to CppCon 2016 to tell us what a great future C++/WinRT would bring us.

Now, almost a decade later, VS tooling is still not there, stuck in an ATL/VC++ 6.0-like experience (they blame it on the VS team), C++/WinRT is in maintenance mode with only bug fixes, and all the fun is on Rust/WinRT.

I would never trust this work for production development.


I wish Microsoft had any direction on the 'way of the future' for native apps on Windows.


If they did publish a “way of the future” direction, would you believe them?

Fool me N times then shame on them, fool me N+1 times, then shame on me sort of thing.


The most infuriating thing is their habit of rebuilding things just about the time they reach a mature and highly stable state, creating an entirely new unstable and unreliable system. And then, by the time that system almost reaches a stable state, it's scrapped and it all starts over again.

WPF -> UWP -> WinUI -> WinUI 2 -> WinUI 3 is just such a ridiculous chain. WPF was awesome, highly extensible, and could have easily and modularly been extended indefinitely - while also maintaining its widespread (if unofficial) cross platform support and just general rock solid performance/stability. Instead it's the above pattern over and over and over.

And now it seems WinUI 3 is also dead, alas without even bothering with a replacement. Or maybe that's XAMARIN, wait I mean MAUI? Not entirely joking - I never bothered to follow that seemingly completely parallel system doing pretty much the same things. On the bright side this got me to finally migrate away from Microsoft UI solutions, which has made my life much more pleasant since!


I'd have bought into MAUI if there was Linux support in the box.


I'd say the inertia is far more social than codebase size related. Right now, whilst there are pockets of interest, there is no broader reason to switch. Bevy as the leading contender isn't going to magic its way to being capable of shipping AAA titles unless a studio actually adopts it. I don't think it has actually shipped a commercially successful indie game yet.

Also game engines emphatically don't have to be huge. Look at Balatro shipping on Love2d.


> Also game engines emphatically don't have to be huge. Look at Balatro shipping on Love2d.

Balatro convinced me that Love2D might be a good contender for my next small 2D game release. I had no idea you could integrate Steamworks or 2D shaders that looked that good into Love2D. And it seems to be very cross-platform, since Balatro released on pretty much every platform on day 1 (with some porting help from a third party developer it seems like).

And since it's Lua based, I should be able to port a slightly simpler version of the game over to the Playdate console.

I'm also considering Godot, though.


There’s a pretty big difference between the Playdate and anything else in performance but also in requirements for assets. So much so I hope your idea is scoped accordingly. But yeah Love2d is great.


It is. I've already half ported one of my games to the Playdate (and own one), I'm pretty aware of its capabilities.

The assets are what I struggle with most. 1-bit graphics that look halfway decent are a challenge for me. In my half-ported game, I just draw the tiles programmatically, like I did in the Pico-8 version (and they don't look anywhere near as good as a lot of Playdate games, so I need to someday sit down and try to get some better art in it).


There are a few successful games like Tunnet [1] written in Bevy.

[1]: https://store.steampowered.com/app/2286390/Tunnet/


Looks cool and well received but at ~300ish reviews hardly a shining beacon if we extrapolate sales from that. But I'll say that's a good start.


Speaking as a Godot supporter, I don't think sales numbers of shipped games are relevant to anyone except the game's developer.

When evaluating a newer technology, the key question is: are there any major non-obvious roadblocks? A finished game (with presumably decent performance) tells you that if there are problems, they're solvable. That's the data.


Game engines are tools not fan clubs. It’s reasonable to judge them on their performance for which they are designed. As someone who cares about the commercial viability of their technology choices this is a small but positive signal.

What it tells me is someone shipped something and it wasn’t awful. Props to them!


> A finished game (with presumably decent performance) tells you that if there are problems, they're solvable.

It doesn't tell you anything about velocity, which is by far the most important metric for indie devs.

After all, the studio could have expended (maybe) twice as much effort to get a result.


Or maybe Rust allowed them to develop twice as fast. Who knows? We're going by data here, and this data point shows that games can be made in Bevy. No more and no less.


Agreed. We've learned a lot from Godot, by the way. I consider all us open source engines to be in it together :)


So far I am way less productive in rust than in any language I've ever used for actual work, so to rewrite an entire game engine would seem like commercial suicide.


"so far" is doing a lot of heavy lifting there =)

I was the same the first two times I tried to use Rust (earnestly). However, one day it just "clicked", and now my productivity exceeds that of almost anything else for the specific type of work I'm doing (scientific computation).


I think we shouldn't expect any language to lead different programmers to the same experiences. Rust has the initial steep learning curve, and after that it's a matter of taste whether one is willing to forge on and turn it into a honed tool. Also, I think it's clear that Rust excels in some fields far more naturally than in others. Making blanket statements about how Rust, or any language, is (un)productive is a disservice to everyone.


Yes, the Google folks are also funding efforts to improve Rust/C++ interop, per https://security.googleblog.com/2024/02/improving-interopera...


Thanks for the link. This one was also posted a while back in a Rust comment, and when I first read it I thought Google had used Rust in the V8 sandbox, but re-reading it, it seems that the article uses Rust as an ‘example’ of a memory-safe language and does not explicitly say that the sandbox uses Rust. Maybe someone with more knowledge can confirm that Rust was (or was not) used in the V8 Google Chrome sandbox example….

https://v8.dev/blog/sandbox


Rust is not used in V8, to my knowledge.


That description of problems bodes well for Zig


Theoretically accessible describes the experience of trying to use D3D from C very well!

I was trying to use it with some kind of gcc for Windows. The C++ part was still lacking some required features, so it was advised to use D3D from C instead of C++. There were some helper macros, but overall I was glad when Microsoft started releasing their Express (and later Community) editions of Visual Studio.


I access D3D(11) from C in my libraries and tbh it's not any different from C++ in terms of usability (only difference is that the "this" argument and vtable indirection is implicit in C++, but that's just syntax sugar that can be wrapped in a macro in C).


Not true anymore; C11 and C17 are either supported or coming:

https://devblogs.microsoft.com/cppblog/c11-and-c17-standard-...


Not really relevant to 30 years ago though.


I worked on many of Activision's games 1995-2000, and C++ was the overwhelming choice of programming language for PC games. C was more common for console. In 1996 the quality of the MSFT IDE/compiler, plus the CPUs available at the time, was such that it could take an hour to compile a big game. By 1998 it was a few minutes. As I recall, I think MSFT purchased another company's compiler and that really changed Visual Studio.


I was a developer on the Microsoft C++ compiler team from 1991 to 2006. We definitely didn't purchase someone else's compiler in that time. We looked at the EDG front end at various times but never moved over to it while I was there.

Perhaps the speed-up you remember had something to do with the switch-over from 16 bits to 32, which would have been the early to mid 90s. Or you're thinking of Microsoft's C compiler starting from Lattice C, back in the 80s before my time. There was also a lot of work done on pre-compiled headers to speed compilation in the latter half of the 90s (including some that I was responsible for).


I heard that early versions of C++ IntelliSense from Visual Studio used Edison Design Group's (EDG) front end. Is that true? No trolling here -- honest question. If yes, are they still using it now?


Not true by the time I retired in 2007, but I've got a vague memory of talking to someone on the C++ front-end team some time after that and EDG for IntelliSense being mentioned. So no idea if that's really true or not, and if so, whether that's true today.

I was heavily involved in the first version of C++ IntelliSense, roughly 1997?, and it was all home-grown. It was also a miracle it worked at all. I've blocked out most of the ugly details from my memory, but parsing on the fly with a fast enough response time to be useful in the face of incomplete information about which #if branches to take and, especially, template definitions was a tower of heuristics and hacks that barely held together. Things are much better nowadays with more horsepower available to replace those heuristics.


I was a teenager at that point. I learnt C in the early 90s and C++ after 96 IIRC. Didn’t start professionally in games until 2004 though!


> and didn't become popular for gamedev until the turn of the millenium

Wasn't this also because Microsoft had terrible support for C?

Since the mid-90s, a number of gamedevs moved to C++ but were unhappy with the results: how OOP works, exception handling, the STL, etc.

My understanding is that by the late 90s many game developers, despite using C++, were still coding more in line with C than with (proper) C++.

Mostly C code, but using some features of C++, like functions inside a struct or namespaces, that did not sacrifice compilation or runtime speed.


We wrote this in C++ (and assembler), but used only the most obvious language features. We laid down the first code in '95 or '96:

https://www.youtube.com/watch?v=9UOYps_3eM0


Yeah, the gaming industry has become mature enough to build up its own inertia, so it will take some time for new technologies to take off. C# has become a mainstream gamedev language thanks to Unity, but even that took more than a decade.


Comparing the time it takes for a programming language to spread in the 80s versus today is a bad vantage point. Stuff took much longer to bake back then -- but even so, the point is moot: as other commenters pointed out, it took off in roughly the same amount of time, between 2015 and today.


Hmm I don't agree. We're far away from the frantic hardware and software progress in the 80s and 90s. Especially in software development it feels like we're running in circles (but very, very fast!) since the early 2000's, and things that took just a few months or at most 2..3 years in the 80s or 90s to mature take a decade or more now.


The concept of AAA games didn't even exist back in 1985; very few people were developing games in that era, and even fewer were writing "complex" games that would need C++.

The SNES came out in 1990, and even then it had its own architecture and most games were written in pure assembly. The PlayStation had a MIPS CPU and was one of the first to popularize 3D graphics, the biggest complexity leap.

I believe you are seeing causation where only correlation exists. C++ and more complex OOP languages only joined the scene when the games themselves became complex, because of the natural evolution of hardware and the market.


Many tried C++ in the early 90s, but wasn't it too slow/memory-intensive? You had to write lots of inline C/assembly to get a bit of performance. Nowadays everything is heavily optimized, but back then it wasn't.


If you’re referring to game dev specifically, there have been (and continue to be) concerns around the weight of C++ exception handling, which is deeply embedded in the STL. Those concerns drove alternatives like the EASTL. C++ itself, however, is intended to have as many zero-cost abstractions as possible/reasonable.

The cost of exception handling is less of a concern these days though.


Exception handling is easy enough to disable. Luckily, or C would probably still be game developers' go-to.


Seems like a few contradictory ideas here. Rust is supposed to be a better, safer C/C++.

Then a lot of comments here say that games are best done in C++.

So why can't Rust be used for games?

What is really missing, beyond an improved ecosystem of tools, all also built on Rust?



