Hacker News | pcwalton's comments

> No Tiny Glade doesn’t count.

> And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.

Well, sure, if you arbitrarily exclude the popular game written in Rust, then of course there are no popular games written in Rust :)

> And maybe not Switch although I’m less certain.

I have talked to Nintendo SDK engineers about this and been told Rust is fine. It's not an official part of their toolchain, but if you can make Rust work they don't care.


Yeah, in my haste I mixed up my rants. The bane of typing at work in between things.

Tiny Glade is indeed a Rust game. So there is one! I am not aware of a second. But it's not really a Bevy game; it uses the ECS crate from Bevy.

Egg on my face. Regrets.


> I really don’t think Rust is a good match for game dev. Both because of the borrow checker which requires a lot of handles instead of pointers and because compile times are just not great.

I completely disagree, having been doing game dev in Rust for well over a year at this point. I've been extremely productive in Bevy, because of the ECS. And Unity compile times are pretty much just as bad (it's true, if you actually measure how long that dreaded "Reloading Domain" screen takes).


> More than anything else, this sounds like a good lesson in why commercial game engines have taken over most of game dev. There are so many things you have to do to make a game, but they're mostly quite common and have lots of off-the-shelf solutions.

> That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")

But using Bevy isn't writing your own game engine. Bevy is 400k lines of code that does quite a lot. Using Bevy right now is more like taking a game engine and filling in some missing bits. While this is significantly more effort than using Unity, it's an order of magnitude less work than writing your own game engine from scratch.


But Unity game objects are the same way: you allocate them when they spawn into the scene, and you deallocate them when they despawn. Accessing them after you destroyed them throws an exception. This is exactly the same as entity IDs! The GC doesn't buy you much, other than memory safety, which you can get in other ways (e.g. generational indices, like Bevy does).
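To make the generational-index idea concrete, here's a minimal toy arena in Rust. This is an illustrative sketch of the technique, not Bevy's actual implementation; the names (`Entity`, `Arena`) are made up for the example. The point is that a stale ID fails a generation check instead of silently aliasing whatever reused the slot.

```rust
// A minimal generational-index arena: each slot carries a generation
// counter, and an Entity handle is only valid while the generations match.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Entity {
    index: usize,
    generation: u32,
}

struct Arena<T> {
    slots: Vec<(u32, Option<T>)>, // (generation, value)
    free: Vec<usize>,             // indices of vacated slots
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { slots: Vec::new(), free: Vec::new() }
    }

    fn spawn(&mut self, value: T) -> Entity {
        if let Some(index) = self.free.pop() {
            let slot = &mut self.slots[index];
            slot.1 = Some(value);
            Entity { index, generation: slot.0 }
        } else {
            self.slots.push((0, Some(value)));
            Entity { index: self.slots.len() - 1, generation: 0 }
        }
    }

    fn despawn(&mut self, e: Entity) {
        if let Some(slot) = self.slots.get_mut(e.index) {
            if slot.0 == e.generation && slot.1.is_some() {
                slot.1 = None;
                slot.0 += 1; // bump the generation so stale IDs miss
                self.free.push(e.index);
            }
        }
    }

    // A despawned Entity returns None, even after its slot is reused.
    fn get(&self, e: Entity) -> Option<&T> {
        self.slots
            .get(e.index)
            .filter(|slot| slot.0 == e.generation)
            .and_then(|slot| slot.1.as_ref())
    }
}

fn main() {
    let mut arena = Arena::new();
    let goblin = arena.spawn("goblin");
    assert_eq!(arena.get(goblin), Some(&"goblin"));
    arena.despawn(goblin);
    assert!(arena.get(goblin).is_none()); // use-after-despawn is caught
    let orc = arena.spawn("orc");         // reuses the freed slot...
    assert!(arena.get(goblin).is_none()); // ...but the old ID stays dead
    assert_eq!(arena.get(orc), Some(&"orc"));
}
```

This gets you essentially the same safety property as Unity's "destroyed object throws" behavior, without a GC: a dangling handle degrades to a lookup miss rather than undefined behavior.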

But in Rust you have to fight the borrow checker a lot, and sometimes concede, with complex referential stuff. I say this as someone who writes a good bit of Rust and enjoys doing so.

I just don't, and even less often with game logic which tends to be rather simple in terms of the data structures needed. In my experience, the ownership and borrowing rules are in no way an impediment to game development. That doesn't invalidate your experience, of course, but it doesn't match mine.

The dependency injection framework provided by Bevy also sidesteps a lot of the borrow-checking problems that users might run into, and encourages writing data-oriented code that is generally favorable to borrow checking anyway.
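A rough sketch of why the data-oriented style plays well with the borrow checker, in plain Rust with no Bevy dependency (the `World`, `Position`, and `Velocity` names here are illustrative, not Bevy's API): components live in parallel arrays, so two "systems" can mutably borrow disjoint pieces of the world at once without fighting the checker.

```rust
#[derive(Debug, PartialEq)]
struct Position { x: f32, y: f32 }
struct Velocity { dx: f32, dy: f32 }

// Components stored in parallel arrays rather than as fields on
// interlinked game objects.
struct World {
    positions: Vec<Position>,
    velocities: Vec<Velocity>,
    healths: Vec<f32>,
}

// Each system declares exactly the data it touches. In Bevy proper,
// the scheduler derives this from system parameters and can run
// non-conflicting systems in parallel.
fn movement(positions: &mut [Position], velocities: &[Velocity]) {
    for (p, v) in positions.iter_mut().zip(velocities) {
        p.x += v.dx;
        p.y += v.dy;
    }
}

fn regen(healths: &mut [f32]) {
    for h in healths.iter_mut() {
        *h = (*h + 1.0).min(100.0);
    }
}

fn main() {
    let mut world = World {
        positions: vec![Position { x: 0.0, y: 0.0 }],
        velocities: vec![Velocity { dx: 1.0, dy: 2.0 }],
        healths: vec![50.0],
    };
    // Borrowing two different fields mutably at the same time is fine,
    // because the borrows are provably disjoint.
    movement(&mut world.positions, &world.velocities);
    regen(&mut world.healths);
    assert_eq!(world.positions[0], Position { x: 1.0, y: 2.0 });
    assert_eq!(world.healths[0], 51.0);
}
```

The borrow-checker fights people describe usually come from object graphs with back-pointers; tables of components with handles between them mostly dissolve that problem.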

This is a valid point. I've played a little with Bevy and liked it. I have also not written a triple-A game in Rust, with any engine, but I'm extrapolating the mess that might show up once you have to start using lots of other libraries; Bevy isn't really a batteries-included engine so this probably becomes necessary. Doubly so if e.g. you generate bindings to the C++ physics library you've already licensed and work with.

These are all solvable problems, but in reality, it's very hard to write a good business case for being the one to solve them. Most of the cost accrues to you and most of the benefit to the commons. Unless a corporate actor decides to write a major new engine in Rust or use Bevy as the base for the same, or unless a whole lot of indie devs and part-time hackers arduously work all this out, it's not worth the trouble if you're approaching it from the perspective of a studio with severe limitations on both funding and time.


Thankfully my studio has given me time to be able to submit a lot of upstream code to Bevy. I do agree that there's a bootstrapping problem here and I'm glad that I'm in a situation where I can help out. I'm not the only one; there are a handful of startups and small studios that are doing the same.

Given my experience with Bevy this doesn't happen very often, if ever.

The only challenge is not having an ecosystem with ready-made everything like you do in "batteries included" frameworks. You are basically building a game engine and a game at the same time.

We need a commercial engine in Rust or a decade of OSS work. But what features will be considered standard in Unreal Engine 2035?


Yes, but regarding use of uninitialized/freed memory, neither GC nor memory safety really helps. Both "only" help with totally incidental, unintentional, small-scale violations.

> We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice.

I definitely sympathize with the frustration around the churn--I feel it too and regularly complain upstream--but I should mention that Bevy didn't really have anything production-quality for animation until I landed the animation graph in Bevy 0.15. So sticking with a compatible API wasn't really an option: if you don't have arbitrary blending between animations and opt-in additive blending then you can't really ship most 3D games.


> Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev.

C# is stricter about float vs. double for literals than Rust is, and the default in C# (double) is the opposite of the one you want for gamedev. That hasn't stopped Unity from gaining enormous market share. I don't think this is remotely near the top issue.
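For concreteness, here's a small example of what the literal rules look like in practice. In Rust an unsuffixed float literal infers its type from context, so `f32`-heavy gamedev code rarely needs suffixes at all; only cross-type arithmetic needs an explicit cast. (The `lerp` helper is just an illustration, not a standard-library function.)

```rust
// An f32 lerp, the kind of function gamedev code is full of.
fn lerp(a: f32, b: f32, t: f32) -> f32 {
    a + (b - a) * t
}

fn main() {
    // 0.25 infers as f32 from the signature; no suffix or cast needed.
    // The equivalent C# call would need 0.25f, since a bare 0.25 is a double.
    let x = lerp(0.0, 10.0, 0.25);
    assert_eq!(x, 2.5);

    // Mixed-type arithmetic is where Rust demands an explicit cast:
    let frame: u32 = 3;
    let t = frame as f32 / 60.0;
    assert!((t - 0.05).abs() < 1e-6);
}
```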


I have written a lot of C# and I would very much not want to use it for gamedev either. I can only speak for my own personal preference.

> Nobody has really pushed the performance issues.

This is clearly false. The Bevy performance improvements that I and the rest of the team landed in 0.16 speak for themselves [1]: 3x faster rendering on our test scenes and excellent performance compared to other popular engines. It may be true that little work is being done on rend3, but please don't claim that there isn't work being done in other parts of the ecosystem.

[1]: https://bevyengine.org/news/bevy-0-16/


I read the original post as saying that no one has pushed the engine to the extent a completed AAA game would in order to uncover performance issues, not that performance is bad or that Bevy devs haven’t worked hard on it.

Wonderful work!

...although the fact that a 3x speed improvement was available kind of proves their point, even if it may be slightly out of date.


Most game engines other than the latest in-house AAA engines are leaving comparable levels of performance on the table on scenes that really benefit from GPU-driven rendering (that's not to say all scenes, of course). A Google search for [Unity drawcall optimization] will show how important it is. GPU-driven rendering allows developers to avoid having to do all that optimization manually, which is a huge benefit.

> Do people actually travel between SF and LA?

Yes. The most obvious example of a company that's built around that corridor is Netflix: tech in the Bay Area, film production in L.A.


> I do not understand how people expect to write convincingly that tools that reliably turn slapdash prose into median-grade idiomatic working code "provide little value".

Honestly, I'm curious why your experience is so different from mine. Approximately 50% of the time for me, LLMs hallucinate APIs, which is deeply frustrating and sometimes costs me more time than it would have taken to just look up the API. I still use them regularly, and the net value they've imparted has been overall greater than zero, but in general, my experience has been decidedly mixed.

It might be simply that my code tends to be in specialized areas in which the LLM has little training data. Still, I get regular frustrating API hallucinations even in areas you'd think would be perfect use cases, like writing Blender plugins, where the documentation is poor (so the LLM has a relatively higher advantage over reading the documentation) and examples are plentiful.

Edit: Specifically, the frustrating pattern is: (1) the LLM produces some code that contains hallucinated APIs; (2) in order to test (or even compile) that code, I need to write some extra supporting code to integrate it into my project; (3) I discover that the APIs were hallucinated because the code doesn't work; (4) now I not only have to rewrite the LLM's code, but I also have to rewrite all the supporting code I wrote, because it was based around a pattern that didn't work. Overall, this adds up to more time than if I had just written the code from scratch.


You're writing Rust, right? That's probably the answer.

The sibling comment is right though: it matters hugely how you use the tools. There's a bunch of tricks that help and they're all kind of folkloric. And then you hear "vibe coding" stories of people who generate their whole app from a prompt, looking only at the outputs; I might generate almost my whole project from an LLM, but I'm reading every line of code it spits out and nitpicking it.

"Hallucination" is a particularly uninteresting problem. Modern LLM coding environments are closed-loop ("agentic", barf). When an LLM "hallucinates" (ie: is wrong, like I am many times a day) about something, it figures it out pretty quick when it tries to build and run it!


I haven’t had much of a problem writing Rust code with Cursor, but I’ve got dozens of crates' docs, the Rust book, and the Rustonomicon indexed in Cursor, so whenever I have it touch a piece of code, I @-include all of the relevant docs. If a library has a separate docs site with tutorials and guides, I’ll usually index those too (like the cxx book for binding C++ code).

I also monitor the output as it is generated because Rust Analyzer and/or cargo check have gotten much faster and I find out about hallucinations early on. At that point I cancel the generation and update the original message (not send it a new one) with an updated context, usually by @-ing another doc or web page or adding an explicit instruction to do or not to do something.


One of the frustrating things about talking about this is that the discussion often sounds like we're all talking about the same thing when we talk about "AI".

We're not.

Not only does it matter what language you code in, but the model you use and the context you give it also matter tremendously.

I'm a huge fan of AI-assisted coding, it's probably writing 80-90% of my code at this point, but I've had all the same experiences that you have, and still do sometimes. There's a steep learning curve to leveraging AIs effectively, and I think a lot of programmers stop before they get far enough along on that curve to see the magic.

For example, right now I'm coding with Cursor and I'm alternating between Claude 3.7 max, Gemini 2.5 pro max, and o3. They all have their strengths and weaknesses, and all cost for usage above the monthly subscription. I'm spending like $10 per day on these models at the moment. I could just use the models included with the subscription, but they tend to hallucinate more, or take odd steps around debugging, etc.

I've also got a bunch of documents and rules setup for Cursor to guide it in terms of what kinds of context to include for the model. And on top of that, there are things I'm learning about what works best in terms of how to phrase my requests, what to emphasize or tell the model NOT to do, etc.

Currently I usually start by laying out as much detail about the problem as I can, pointing to relevant files or little snippets of other code, linking to docs, etc, and asking it to devise a plan for accomplishing the task, but not to write any code. We'll go back and forth on the plan, then I'll have it implement test coverage if it makes sense, then run the tests and iterate on the implementation until they're green.

It's not perfect, I have to stop it and backup often, sometimes I have to dig into docs and get more details that I can hand off to shape the implementation better, etc. I've cursed in frustration at whatever model I'm using more than once.

But overall, it helps me write better code, faster. I never could have built what I've built over the last year without AI. Never.


> Currently I usually start by laying out as much detail about the problem as I can

I know you are speaking from experience, and I know that I must be one of the people who hasn't gotten far enough along the curve to see the magic.

But your description of how you do it does not encourage me.

It sounds like the trade-off is that you spend more time describing the problem and iterating on the multiple waves of wrong or incomplete solutions, than on solving the problem directly.

I can understand why many people would prefer that, or be more successful with that approach.

But I don't understand what the magic is. Is there a scaling factor where once you learn to manage your AI team in the language that they understand best, they can generate more code than you could alone?

My experience so far is net negative. Like the first couple weeks of a new junior hire. A few sparks of solid work, but mostly repeating or backing up, and trying not to be too annoyed at simpering and obvious falsehoods ("I'm deeply sorry, I'm really having trouble today! Thank you for your keen eye and corrections, here is the FINAL REVISED code, which has been tested and verified correct"). Umm, no it has not, you don't have that ability, and I can see that it will not even parse on this fifteenth iteration.

By the way, I'm unfailingly polite to these things. I did nothing to elicit the simpering. I'm also confused by the fawning apologies. The LLM is not sorry, why pretend? If a human said those things to me, I'd take it as a sign that I was coming off as a jerk. :)


I haven't seen that kind of fawning apology, which makes me wonder what model you're using.

More broadly though, yes, this is a different way of working. And to be fair, I'm not sure if I prefer it yet either. I do prefer the results though.

And yes, those results are that I can write better code, faster than I otherwise would with this approach. It also helps me write code in areas I'm less familiar with. Yes, these models hallucinate APIs, but the SOTA models do so much less frequently than I hear people complaining about, at least for the areas I work in.


> The OP cites these (https://github.com/devkitPro/libogc/blob/52c525a13fd1762c103... and https://github.com/atgreen/RTEMS/blob/2f200c7e642c214accb7cc...), but that's hardly a smoking gun. The function is trivial, just filling in some struct fields.

Yeah, I'm with Marcan 90% of the time, and Marcan is more likely than not right that that function is derived from the RTEMS function, but in my view there's still reasonable doubt. That is to say, purely based on the evidence linked, I only agree that it's probable that the code is copied, and I disagree with Marcan's claim that it's "not possible" for the implementation to be non-infringing.

The fact that some of the identifiers are similar raises the biggest suspicions. "__lwp_stack_isenough" vs. "_Stack_Is_enough" is suspicious because I'd probably call that "sufficient_stack_available" or something like that. "LWP_STATES_DORMANT" vs. "STATES_DORMANT" is also suspicious because the normal OS term would be "sleeping". Still, the function logic is different enough that, even with that evidence, one could plausibly claim that only the headers or interface were copied and the implementation was clean-room, which is non-infringing per Oracle v. Google.

In US legal system terms, I'd say that a preponderance of the evidence shows that the code is a derivative work (i.e. it's more likely than not that the code was copied at some point), but that there's still reasonable doubt (i.e. a reasonable person could plausibly believe otherwise).


I looked for other examples in the code and found __lwp_thread_changepriority pretty damning, judge for yourself:

https://github.com/atgreen/RTEMS/blob/2f200c7e642c214accb7cc...

https://github.com/devkitPro/libogc/blob/52c525a13fd1762c103...

The only difference is that the prependit parameter and its associated branch are missing; other than that, the functions are completely identical.


