Writing Network Drivers in Rust [pdf] (tum.de)
364 points by MrBuddyCasino on Nov 8, 2018 | hide | past | favorite | 123 comments

Per a talk from Joshua Liebow-Feeser at the Rust Belt Rust 2018 conference, Google is heavily using Rust for networking in their new Fuchsia operating system.

This is a rough approximation of a quote he gave during the talk:

"What makes Rust different is not that you can write high-performance, bare-metal code. People use C and C++ to do that all the time. What makes Rust different is that when you write that code, it is safe, clean, and easy to use, and you are confident in its correctness."

If even Google is using Rust for performance-critical development, that seems pretty promising to me.

Isn't the networking stack of Fuchsia written in Go?



I'm pretty sure Go is the dominant language for networking in Fuchsia and not Rust.

They're phasing out much of the Go and replacing it with Rust.

Different layers, perhaps? The DMAing around of physical packets, and the parsing of ARP/IP/TCP headers, require different safety guarantees.

Here are Joshua’s slides from the talk mentioned above: https://joshlf.com/files/talks/Move%20Fast%20and%20Don't%20B...

The Chrome OS team is writing Rust for its Crostini project as well.

Fuchsia seems more like an experiment in object-capabilities design than "performance-critical development" to me.

Object-capabilities design might be a net performance win, despite its overhead, if it can obviate some/all layers of “workload-oblivious” sandboxing (e.g. VM hypercalls, container network sandboxing, ring0–3 context switching, separate process memory maps with associated TLB cache flushes, etc.) Capability checks can be optimized away when you (the program loader, e.g. the kernel) know they’re not necessary; while you can never transparently optimize a syscall or hypercall into a regular function call.

Ideally, under a capability-based OS, you can just have a unikernel that directly loads user-supplied(!) modules in a bytecode format where instructions have capability-checking as an intrinsic; and then, after performing static analysis on those modules to guarantee that they don’t do anything crazy, the kernel module loader can JIT them into native code that doesn’t need to do the capability checks. Basically like if the entire userland consisted of eBPF programs, but with higher-level exposed semantics that allow for rich access to kernel structures, rather than just their handles.

Plus, you also get the benefit of safe sharing of higher-level data abstractions. A “native” userland process has to make syscalls using, at most, registers containing pointers to strings or fixed-size non-polymorphic structs. A capability-based userland module, meanwhile, could interact with the kernel even through reads and writes to a shared-memory tree or hashmap.

Essentially, with capabilities, you get all the benefits of every program having the same low-overhead access to the kernel that a kernel driver does, with none of the drawbacks of having to guard against corrupting the kernel’s state.

Those aren't mutually exclusive. Object capabilities are a core part of what makes the L4s faster than Mach. Mach had to do a permissions check on every IPC message since the port table is global. And since it was global, in practice that was an expensive ACL check. You don't even have to do a full permissions check on capability endpoints, since the mere fact that you have a handle to it is proof that you have the appropriate permissions.

Wouldn't an operating system be a performance-critical piece of software despite being an experiment in object-capabilities design?

Hey, Google is becoming an interesting place to work again! :)

(Only partly a joke.)

I'd work for Google if they'd offered me a Rust job :)

Too bad the odds are 10000:1 that I get stuck doing Java/Go instead.

You can apply directly to the Fuchsia team, in my understanding. They usually put an email address at the end of talks with where to apply.

I know people who have gone through their hiring process, and they were quite unsure whether they would ever get on the team they applied for.

There's also the story of the Google+ UI lead who thought he'd be working on another Google product, then got reassigned into G+ before he was even hired, and hated his entire nine months there, having never even met Vic, the VP driving G+.

Does that still happen?

I have no idea; I’ve never worked for or even applied at Google.

Can you give an example? My Google Fu is failing me at the moment.

Fuchsia/Rust is about the only thing I'd do at Google.

The one I’ve seen is fuchsiajobs@google.com

Is this on YouTube? I can't find the talk.

Not yet. I'm slowly putting them together and really hope to have them uploaded by the end of the year. You can follow our channel[1], which should (?) notify you when this year's videos are published.

[1]: https://www.youtube.com/channel/UCptxtVyJkQAJZcFwBbIDZcg

Having written device drivers before, the first thing I wondered was how he allocated a fixed region of physical memory in user space...

"In kernel space there is an API to allocate DMA memory, in user space we have to use other mechanisms since this API is not available here. To disable swapping, we can use mlock(2). [...] However, mlock(2) only ensures that the page is kept in memory. The kernel is still able to move pages to different physical addresses. To circumvent this problem, we use huge pages with a size of 2 MiB. These pages cannot be migrated yet by the Linux kernel, thus stay resident in physical memory."

He didn't, he's depending on an implementation quirk which might change in the future (and also depending on the physical memory backing huge pages being in a region which can be reached by the network device through DMA, though that's probably always true on x86 when memory encryption is not in use).

DPDK also relies on this quirk if there is no IOMMU available. He does mention the IOMMU as future work, but in the context of security.

Well, Linus is obsessed with "don't break userspace", so as long as there are N > 0 users relying on that implementation detail, it will stay forever and become a documented limitation of the Linux kernel :(
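For illustration, the address-translation half of the trick quoted above can be sketched with nothing but the Rust standard library. This is a hedged sketch, not the thesis's code: the hugetlbfs allocation and mlock(2) steps are omitted, and since Linux 4.0 unprivileged reads of /proc/self/pagemap return a zeroed PFN, so real physical addresses only come out when running as root.

```rust
use std::fs::File;
use std::io::{self, Read, Seek, SeekFrom};

const PAGE_SIZE: usize = 4096;

/// Byte offset into /proc/self/pagemap for a virtual address:
/// one 8-byte entry per 4 KiB page.
fn pagemap_offset(vaddr: usize) -> u64 {
    (vaddr / PAGE_SIZE * 8) as u64
}

/// Translate one of this process's virtual addresses to a physical address.
/// Bits 0-54 of a pagemap entry hold the page frame number (PFN);
/// without CAP_SYS_ADMIN the kernel reports the PFN as 0.
fn virt_to_phys(vaddr: usize) -> io::Result<u64> {
    let mut f = File::open("/proc/self/pagemap")?;
    f.seek(SeekFrom::Start(pagemap_offset(vaddr)))?;
    let mut entry = [0u8; 8];
    f.read_exact(&mut entry)?;
    let entry = u64::from_le_bytes(entry);
    let pfn = entry & ((1u64 << 55) - 1);
    Ok(pfn * PAGE_SIZE as u64 + (vaddr % PAGE_SIZE) as u64)
}

fn main() {
    // Touch a page so it is resident, then look up its physical address.
    let buf = vec![1u8; PAGE_SIZE];
    match virt_to_phys(buf.as_ptr() as usize) {
        Ok(phys) => println!("virt {:p} -> phys {:#x}", buf.as_ptr(), phys),
        Err(e) => println!("pagemap not readable here: {}", e),
    }
}
```

A real driver would first carve the buffer out of a 2 MiB hugetlbfs mapping and mlock(2) it, as the quoted passage describes; the pagemap lookup itself is the same.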

This is very cool. I’ve been working on using Rust to manage an XDP-based network driver that is much, much, much faster than the socket API in C, and slightly faster than DPDK. There’s no driver in the traditional sense; Rust works as a control plane and eBPF injector, while the XDP programs communicate back to Rust through eBPF maps.

I’m extremely bullish on Rust, it’s such a breath of fresh air to be able to write maintainable, modern code to do low level tasks. I was a C/C++ dev for a little under a decade before switching to Rust and it really delivers on its promises.

Also Rust using llvm is a huge boon to writing eBPF programs.

This sounds very interesting, do you have some code (parts) to share?

I'm also very interested in any more information regarding this since I'll be doing my thesis work on XDP soon and would like to use Rust!


"Based on our findings we can conclude that Rust is a very well-suited programming language for writing network drivers. It is not only a fast and safe systems programming language but was also voted most beloved programming language in 2016, 2017 and 2018 [15, 16, 17]. Writing more drivers in Rust would certainly lead to safer and more reliable computer systems."

Working link to their code: https://github.com/ixy-languages/ixy.rs

Somewhat related: Bryan Cantrill's "Is it time to rewrite the operating system in Rust?" (https://www.slideshare.net/bcantrill/is-it-time-to-rewrite-t...), whose conclusion is (following Betteridge's) "no":

> An OS kernel — despite its historic appeal and superficial fit for Rust — may represent more challenge than it's worth

However, it suggests more hybrid approaches:

* asm/C kernel with Rust kernel components like drivers and filesystems; Rust can be used for new developments and enables an incremental introduction. Mentions a prototype example in FreeBSD

* Rust for non-kernel OS components: an OS is not just a kernel; the utilities, daemons, management facilities, debuggers, … bundled as part of the system tend to be more prone to run-time failures than the kernel itself (and more commonly face hostile interactions), making them excellent candidates for Rust

* Rust-based firmware: firmware is "a sewer of unobservable software with a long history of infamous quality problems"; some areas of firmware (and kernel) development are weak in Rust (e.g. dealing with allocation failure), but that aside it looks amenable to Rust

Not worth it to him maybe. I wish someone would. Just this week we found out macOS brought back the “ping of death” due to C’s memory unsafety.

Super cool!

The issue is in the network layer, which would more than likely fall under hybrid approach 1: kernel modules in Rust.

And you think a macOS written in Rust won't waste tons of money, slow down feature development (until it reaches parity), and introduce new bugs (perhaps not memory safety bugs, but functional and business logic bugs)...

I mean, sometimes I wish feature development would slow down. Lately it seems like each release of macOS is worse than the last one.

Also, it really comes down to whether or not it's a long-term win. Apple certainly has the capital to make it happen if they decided it was worth it.

How is “most beloved” a prerequisite for writing network drivers? Don’t get me wrong, if Rust is a good technical fit, then that’s great, but convince me on those objective measures.

"Most beloved" is a weird way to put it but if it's a way to gauge the state of the language's ecosystem I'd say it's a very practical concern.

For instance these past few days I've been writing a small bootloader for some embedded ARM chip. I pondered going with Rust but ended up using C for two main reasons:

- I know that if it's doable in Rust it's doable in C, but the opposite is not always true (or straightforward). In particular, bare-metal/very low level Rust is still somewhat experimental; you can't use stable Rust if you want to be able to use inline ASM or other advanced features. I have very tight constraints regarding code size and memory use and I was concerned that I might not control those as effectively with Rust as with C.

- There's already a bunch of C code written for this chip that I can reuse without any friction (no binding generation, no ffi etc...). The vendor even provides a huge header file with the full memory map and some code to init the PLLs.

If you're coding in a vacuum then how "beloved" a language is is mostly irrelevant but if you want to benefit from the ecosystem then having a language that people want to use and support is extremely important. The tooling will mature and stabilize as more and more people use Rust to write bare-metal code and device drivers, which will eventually make my first point above irrelevant. Then if the language becomes popular enough it will be easier to find Rust code or pre-made crates to do what you want which will hopefully eventually solve my 2nd issue.

What ARM chip are you considering this for? Is it a Cortex M?

No, a Cortex-A in an Arria 10 SoC. The boot process is tedious because you have to load the FPGA before you can access the external RAM, so the stage 1 bootloader has to fit in 256 KB of code+data. Currently I'm using a multi-stage U-Boot, but for simplicity and robustness I'd like to cram everything into a single-stage loader.

Since I can't get u-boot to create a small enough image with all the features that I want (and also because I want to add vendor-specific customizations on top) I decided to try and rewrite a small, single purpose loader from scratch. I considered Rust but then that meant that I'd have to port or rewrite all of u-boot's code I want to reuse (especially the vendor code contributed by Altera mentioned in the previous post). So C it is.

Popularity influences ecosystem which influences programs written in the language. A language is more than just its spec. Larger mind share improves availability and quality of 3rd party libs, features, performance and quality of the compiler, brain power to guide the language forward, etc.

It's the second order effects that matter.

"People like it" is a major positive. Compare to C++, which often gets neglected in favour of C even in cases where it would probably be better, because most people _hate_ it.

I don't think most people hate C++. They certainly didn't at my last job, especially with the awesome new additions to the language (which granted aren't any good if you can't use them, but we could).

Still, I do think you're right that a significant number of people hate C++, so I think your point is still valid.

Most peoples' experience of C++ is from long before C++17 or even C++11 were usable. I'd agree that someone encountering modern C++ for the first time today, or willing to give it a second chance, might end up quite liking it, but if your first exposure was to an older code-base, that would put you off very easily.

As a human who’ll be writing X, I sometimes enjoy the perspective of other humans who write X.

Objective measures aren't the only ones that matter when one does a job.

Heck, in some industries, like aviation, even something subjective like boredom can kill.

I recently wrote my bachelor's thesis and I wish it could have been as practical as this. Props to German universities for allowing a software engineering project to be accepted. There is no way I could have gotten away with so few academic references.

When I took my bachelor's, each semester we split the 27-ish people in our year into groups of about 6 that had to write a piece of software related to a theme, e.g. compilers or robots, with the courses that semester supporting the theme. At the end of the semester we handed in a report of approximately 100 pages detailing the decisions, design and theory behind the practical work we did, which we had to defend orally.

The bachelor project was more of the same, but each group should do a part of a bigger piece of software. I don't recall having any academic references in our final report.

It was a very nice way to learn, as you had your group members to study with and learn from each other.

This was in Denmark, at Aalborg University, which has just been named the best engineering university in Europe and the fourth best in the world [0].

[0] https://www.usnews.com/education/best-global-universities/en...

Hah, I was reading this thinking "That sounds so much like Aalborg". I'm so incredibly happy I went there :)

Here's another report from MIT from earlier this year, reporting largely the same, placing Aalborg at 3rd or 4th depending on the method used [0].

[0] http://neet.mit.edu/wp-content/uploads/2018/03/MIT_NEET_Glob...

That sounds so awesome compared to the style of learning I experienced in college. I wish more colleges took a practical approach.

I think that relates to the term "Technical" in the University's name. In Germany, these unis are more open to practical stuff, while the general unis are the ones more focused on theoretical research. Which, however, doesn't mean exclusivity - both can be found at both.

(I graduated from the Mechanical Engineering Faculty of the TUM.)

It's fairly typical to do a software project for a Bachelor's or Master's thesis.

I wrote my bachelor's thesis in Germany at a non-TU (Technical University).

A large part of my thesis was implementing a language server for the DOT/Graphviz language [0]. It really depends on the professor.

However, the only thing that matters for the mark is the thesis itself. Many students spend too much time implementing their project and don't leave enough time for the thesis itself.

[0]: https://edotor.net

Conversely, I got a point deduction on my master thesis for including too many (almost exclusively academic) references. Mental. (They were all relevant.)

I think it depends on the professor. When I wrote my diploma thesis (in Ilmenau), my professor also insisted on academic references and something on the research side. A pure engineering project was not accepted. But it's not super hard to disguise some engineering as research, if it at least contains something new (e.g. the work here does use Rust as a relatively new language).

Related (Writing network drivers in Go):


I'm excited to read both of these.

Btw, they also have C# in the same series: https://www.net.in.tum.de/fileadmin/bibtex/publications/thes...

Same university and department :D

Yeah I noticed!

Seeing that this is an undergrad thesis makes me feel like I did nothing in undergrad. I'm impressed

Maybe I missed something, but all of these network drivers in Go, Rust, C# etc are just re-implementations of the original C driver, where it would seem to me that the real work happened.

That would make the ports still be an interesting experiment, but not as impressive IMO.

I think it depends. Writing a non-trivial language port is an interesting way to show off a language. Though it also tends to be an ugly way to show off a language. It depends on the ugliness of implementation for a given language. Early in C# I saw some really ugly implementations of crypto from other languages, for example. Often pretty obvious the person porting didn't understand the new language well.

Some things hard in one language are easy in another, and it usually helps to have a good understanding of both. Just my own POV. However, low level code often requires one to do things "the hard way" for performance reasons. I particularly appreciate what Go, Rust, C# and others bring to the table.

Most of the code I write isn't particularly performance intensive, so I tend to reach for JS in early implementations. Though happy to rewrite if/when performance becomes a concern. I really enjoy articles like this.

Since we are looking at PCI Ethernet drivers written in languages other than C (fun!) here is one written in Forth too: https://github.com/openbios/openfirmware/blob/master/dev/ne2...

Good to remember that C and Unix kernels haven't always had a monopoly on device drivers.

The Rust version of ixy seems to have as many lines of code as the C version, or a bit more? That is a bit surprising to me: 1306 for Rust, and '1000' for C. The Rust version additionally uses 4 dependencies from crates.io.

Unsafe usage is quite small, at 10%, which seems good for a hardware device driver.

That isn't surprising, really.

Rust can go low-level, but its succinctness (compared to C) shows when dealing with high-level structures. Things like vectors, hashmaps, or even strings. Those are rarely used in a driver.

In addition to that: in C you can share a pointer in many places and write memory wherever you want. Since Rust has an ownership system, things get a little more complicated. Sometimes you have to write extra code to appease the borrow checker.

Consider that you will never get the situations you get in C, where forgetting to check for NULL in one place can result in privilege escalations (example: https://www.exploit-db.com/exploits/33322/). I think it's worth it.

I’m not sure if this is a critique or a question. If the former, then I don’t understand why that would matter. Even at 4 times the length of the code, Rust is safer, more robust and better optimized. So why would the length matter?

This might be a preference that isn’t universally shared, but I really do find shorter programs easier to read. I don’t mean code-golf style short, but languages that don’t need a lot of boilerplate or ceremony. I’ve frequently used Python on personal projects for over 20 years because of its expressiveness.

Of course, correctness trumps almost everything in programming. Rust seems like a really nicely balanced programming language that can replace C++ and C and provide additional protection against some kinds of common errors made by programmers. So it’s definitely on my list of languages to try next for a project.

>This might be a preference that isn’t universally shared, but I really do find shorter programs easier to read.

Even so, 1300 vs 1000, or 1.3x is not really in the "so much more as to be more difficult to read" category.

In fact it can be in the "better worded and easier to read category" (e.g. a monolithic stream of code without functions and comments vs one with).

You’re right. Ada has come up in this thread and there was a time when I was excited about Ada (I actually worked in a group that was competing in the early Strawman—Ironman proposals that led to the design of Ada), but Ada really is too verbose for my taste.

Rust might end up being just right, not too concise, like Haskell or APL, and not too verbose, like Java or Ada; it could be a Goldilocks language.

One of Rust's selling points is zero cost abstractions which would normally imply doing more work with less code (as you would expect in a high level language like Python). So it is surprising that it takes that many more lines of code to achieve what C does without those abstractions.

1300 vs 1000 is not "that many more".

It's close to insignificant to comparing two languages. Especially when they boil down to things like:

  let ptr = unsafe {
      libc::mmap(ptr::null_mut(), size,
          libc::PROT_READ | libc::PROT_WRITE,
          libc::MAP_SHARED | libc::MAP_HUGETLB, fd, 0) as *mut u8
  };

vs calling something like this in one line in C, etc.

The C version also crams every line together; the Rust version couples related statements but leaves newlines between others (not sure if the LOC count includes those).

"zero cost abstractions" usually means you don't pay a performance penalty for abstractions you do not use.

Also means that the abstractions themselves don't introduce unnecessary overhead. It's taken from C++.

> C++ implementations obey the zero-overhead principle: What you don’t use, you don’t pay for [Stroustrup, 1994]. And further: What you do use, you couldn’t hand code any better. – Stroustrup
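As a toy illustration of that principle (hypothetical functions, not code from the thesis): an iterator chain and a hand-written loop expressing the same computation give identical results, and in release builds rustc typically compiles both down to comparable machine code.

```rust
// Sum of squares of the even numbers below n, written two ways.

// High-level: iterator adapters (the "abstraction").
fn sum_even_squares_iter(n: u64) -> u64 {
    (0..n).filter(|x| x % 2 == 0).map(|x| x * x).sum()
}

// Low-level: an explicit loop, as one might write in C.
fn sum_even_squares_loop(n: u64) -> u64 {
    let mut total = 0;
    let mut x = 0;
    while x < n {
        if x % 2 == 0 {
            total += x * x;
        }
        x += 1;
    }
    total
}

fn main() {
    // Same result either way: 0 + 4 + 16 + 36 + 64 = 120.
    assert_eq!(sum_even_squares_iter(10), sum_even_squares_loop(10));
    println!("{}", sum_even_squares_iter(10)); // prints 120
}
```

"Zero cost" here refers to runtime overhead, not line count: the iterator version is shorter, but the point is that you don't pay for the abstraction with slower code.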


Does Rust claim to use fewer lines?

No. Although that's mostly because the standard rustfmt is very generous about using vertical space.

For high-level code it usually does. Idiomatic Rust can look surprisingly similar to Python.

That doesn't apply to driver development, but it might be surprising to someone only moderately familiar with the language.

Contrast this with https://news.ycombinator.com/item?id=18399389, both of which are trending.

Does anyone happen to have a copy of the LaTeX template that they are using? I really like the typography of the two that are currently on the frontpage.

I like the formatting too. It looks like the Computer Modern typeface with a nice clean layout. This is the template I found at TUM’s Department of Informatics for an undergrad thesis: https://www.overleaf.com/latex/templates/tum-thesis-for-info...

What really would be nice is an extension of what Zinc does, namely encoding the IO registers as strongly typed entities. Maybe not only encode the registers themselves, but also their access mode (r/w/rw).

Reading the conclusion in the PDF, the Rust version is not faster than the C version and not necessarily safer either, since there is plenty of "unsafe" Rust code involved.

While Rust is nice in some areas, I'm not convinced this is anything more than yet another "let's rewrite it in Rust, because I happen to like Rust" article.

Angry comments defending Rust incoming in 3..2..

How is less than 10% "plenty"? Especially considering that 100% of C-code involving memory access is inherently unsafe using the Rust definition of the term.

Since it's part of what looks like an overall project (ixy) to show how it's possible to do this sort of low-level programming in a variety of languages (the roadmap includes js and python), the developer experience is an important component of what they're trying to shed light on.

> not necessarily safer either

The Rust version will not be safer only if the authors made 10 times more errors per line than the average C programmer.

The Rust version has 10% of its lines marked as `unsafe`. Programmers would need to make 10 times more errors per line to introduce the same number of the errors Rust considers unsafe (memory corruption, data races, double frees, use of deallocated memory). I'm not sure what's wrong with this statement.
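The usual pattern behind that 10% figure is encapsulation: each `unsafe` block is wrapped in a safe API that upholds its invariants, so safe callers can't misuse it. A minimal illustrative sketch (hypothetical types, not code from the thesis):

```rust
/// A toy wrapper over a raw buffer, vaguely like what a driver might
/// use for a DMA region. The unsafe pointer arithmetic is confined
/// here; the public API checks bounds, so safe callers cannot read
/// out of range.
struct RawBuf {
    ptr: *mut u8,
    len: usize,
}

impl RawBuf {
    fn from_vec(v: &mut Vec<u8>) -> RawBuf {
        RawBuf { ptr: v.as_mut_ptr(), len: v.len() }
    }

    /// Safe read: bounds-checked before the unsafe dereference.
    fn read(&self, i: usize) -> Option<u8> {
        if i < self.len {
            // SAFETY: i is in bounds, and in this example the backing
            // Vec outlives the RawBuf.
            Some(unsafe { *self.ptr.add(i) })
        } else {
            None
        }
    }
}

fn main() {
    let mut v = vec![10u8, 20, 30];
    let buf = RawBuf::from_vec(&mut v);
    assert_eq!(buf.read(1), Some(20));
    assert_eq!(buf.read(3), None); // out of range: an error value, not UB
}
```

Auditing for memory safety then means reviewing the small `unsafe` blocks and their invariants, rather than every line of the program as in C.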

I was hoping to see something like that. Thank you!

Why Rust? Didn't Ada and SPARK already pretty much solve the problems Rust is tackling? Rust feels like the systemd of languages: it came out of nowhere and is very heavily evangelized.

Do those languages have anything even remotely similar to linear types [0]? If not then they aren't even in the same league as Rust.

[0] https://en.wikipedia.org/wiki/Substructural_type_system

These types of comments really irritate me. New thing X comes out that solves an unsolved problem. Then someone comments "why not obsolete thing Y that doesn't solve the problem sufficiently"?

I constantly see comparisons like JVM vs WASM [1], or in this case Ada vs Rust. And very often the newer thing is superior and is not popular just because of cargo culting.

[1] i.e. To obtain similar performance while still using java bytecode you will have to stop using the Java object model completely and exclusively use sun.misc.Unsafe and calculate field offsets of structs manually and still have the risk of vulnerabilities like buffer overflows similar to writing programs in C.

Ada doesn't have the memory guarantees of rust. Ada's type system is closer to that of C. Rust's type system is based on ML, which is based on a proven sound type theory. Ada has had a lot of work put into it, but the trouble is that the foundation was never verified to begin with.

Nim produces smaller binaries, has a zero-pause, realtime GC (you can even define GC behavior) and compiles to optimized C. It's a win over Rust in every way, including ease of use. The big issue with Nim is tribalism, pure and simple. Its author failed to get any major corporations behind it, so it doesn't get hyped up in constant Medium articles and conference talks and therefore doesn't get a lot of attention, which keeps it a niche language, sadly.

Nim is a lovely language but I disagree with most of your points.

* The Rust borrow checker prevents a lot of issues in C(++) code. Including thread safety. Nim does not provide anything new in this regard.

* The lack of a GC is a feature, not a detriment. One of the reasons why Rust applications are often very fast is because the language naturally guides you to avoid allocations when possible and allows low level control where necessary. Combined with the safety guarantees of the borrow checker, this is a powerful combination. This is possible in Nim but using GC is easier and will be the preferred solution when in doubt.

* While Mozilla support does help a lot for sure, a lot of the hype around Rust (imo) comes from people trying it out, just falling in love with it and then promoting it

> The Rust borrow checker prevents a lot of issues in C(++) code. Including thread safety. Nim does not provide anything new in this regard.

Actually Nim does offer something new in this regard. See https://nim-lang.org/docs/manual.html#guards-and-locks.

> While Mozilla support does help a lot for sure, a lot of the hype around Rust (imo) comes from people trying it out, just falling in love with it and then promoting it

AFAIK Rust was a much different language before Mozilla officially adopted it (wasn't the borrow checker implemented after Mozilla sponsorship?). So not only was it not the same language, it wasn't known at all. Don't underestimate the power of full-time developers working on a programming language and having the support of a well known organisation behind them.

Sadly, a lot of the hype is simply "omg, Mozilla uses this language" or "omg, Google uses this language".

Nim is garbage collected, which means it doesn't need borrow checking. Garbage collection in threaded languages is a well studied problem. Safe threading in a statically collected language is new.

This is clearly a better solution compared to the average language but what if you forget to mark a field or object as guarded?

Just take a look at the basic problem of multi-threading: two threads write to the same memory location. This situation is unpredictable and therefore never allowed to happen. There are three general strategies to deal with this: immutability, mutexes, and linear types to ensure there is only one reference with write capability.

The latter makes writing code that is not thread-safe impossible at compile time without sacrificing efficiency and is used by at least two programming languages that I know of: Ponylang and Rust. For me impossible vs "oops forgot to guard it" are worlds apart. The compiler will inform you that it is not thread-safe and then you can decide to use a different solution like a mutex if necessary.
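In Rust terms, the "only one reference with write capability" guarantee falls out of ownership: data moved into a thread can no longer be touched by the sender, and shared mutation only compiles when it goes through an explicit synchronization type. A small sketch (hypothetical function, my own example):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

/// Spawn `n_threads` threads that each add `per_thread` to a shared
/// counter. Unsynchronized access wouldn't compile: the spawned
/// closure must either own its data outright or share it through a
/// Sync type such as Mutex.
fn parallel_count(n_threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let result = *counter.lock().unwrap();
    result
}

fn main() {
    // Deterministic despite concurrency: the data race is impossible
    // to express, so the compiler would reject the unguarded version.
    println!("{}", parallel_count(4, 1000)); // prints 4000
}
```

Replacing the `Mutex` with a bare `usize` behind the `Arc` fails to compile, which is exactly the "impossible vs 'oops forgot to guard it'" distinction.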

> While Mozilla support does help a lot for sure, a lot of the hype around Rust (imo) comes from people trying it out, just falling in love with it and then promoting it

If there was no rampant tribalism, my comment would not be buried with negative votes. It would receive a response like the one you gave, which is subjective, and that would be the end of it. Some might even upvote it, because diversity is always good. But it feels like Rust users are zealous, and that's a bad sign, tbh. I wouldn't want to be part of a community that acted with hostility towards others for presenting alternative viewpoints.

I think you're getting downvoted because you're being very tribal about Nim, and apparently not open to discussion: "It's a win over Rust in every way including ease of use."

You can paint others' opinions as being subjective, which they are, but you would be wise to realize yours are as well. That's what's bringing hostility, not the diversity of viewpoints (there's a lot of talk about Go in this thread, for example).

Yes, true.

You were downvoted because your claims were not substantiated by factual analysis.

Rather, your comment seemed rather "tribal" itself, while accusing Rust fans of the same.


Cheers for owning up to it.

BTW, I would see Nim more as a competitor to D, Swift and Java than to Rust.

I think Nim absolutely has a place competing with Rust too.

It has several tunable GCs allowing for great flexibility there, and also supports running with no GC at all. It compiles to C that will compile nearly anywhere C does. It produces smaller binaries, and it can easily link against musl instead of libc for an even lighter footprint. It's also safe like Rust is, and the compiler will help you catch a ton of errors.

That's not to say that it's competitive at this point in time. It's much more unstable as a language due to not reaching 1.0 yet. They only just (August) received any real kind of funding, allowing them to hire on a new developer but it's still just a handful of core devs and some dedicated contributors. The standard library and the community repo (Nimble) can't compare with Rust's and Cargo in terms of package availability and support.

> it can easily link against musl instead of libc for an even lighter footprint

I'm not sure if you were implying that it doesn't, but Rust also supports musl:

    $> rustup target list | rg musl

I apologize if I worded my post weirdly. It was a list of things that make Nim a competitor to Rust, not things that Nim does that Rust doesn't.

There's a fair bit of overlap for obvious reasons: they're two languages suited for the same tasks, after all :)

mrustc [1] also compiles Rust to optimized C.

1. https://github.com/thepowersgang/mrustc

Nim, D and Rust are in the same league, as a "C/C++ replacement"


> Tests show that a 2ms max pause time will be met in almost all cases on modern CPUs (with the cycle collector disabled).

So in a reduced mode, pause times are only bounded by 2ms. Not sure how you could say that is zero pause

The GC guys love doing that for some reason. Azul's "pauseless" GC only guaranteed 10ms pause times.

Certainly not zero, but probably enough for hitting the frame deadline for 60 fps graphics (16 ms). Could even be enough for real-time audio processing (at 4 ms or 8 ms buffer sizes). That will cover the vast majority of latency requirements that application developers using an OS deal with.

Some embedded realtime cases might have stricter deadline requirements, or be complicated by the potential jitter that a GC might introduce. E.g., most discrete math used in motor control systems assumes a fixed sample period.

I haven't done game dev beyond hobbyist level, but 12.5% of your total budget for a frame seems pretty significant. Is that kind of percentage hit generally considered negligible?

60 FPS is not enough; many people have monitors with much higher refresh rates (120 Hz or more), and virtual reality basically requires a higher refresh rate than 60 FPS to begin with. Using a runtime component for garbage collection is highly sub-optimal.
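To put rough numbers on this (my own arithmetic, not from the thread): a fixed 2 ms pause consumes a rapidly growing fraction of the per-frame budget as refresh rates climb.

```rust
// Rough arithmetic: what share of a frame budget does a fixed 2 ms
// GC pause consume at various refresh rates?
fn pause_share_percent(refresh_hz: f64, pause_ms: f64) -> f64 {
    let budget_ms = 1000.0 / refresh_hz; // time available per frame
    pause_ms / budget_ms * 100.0
}

fn main() {
    for hz in [60.0, 120.0, 144.0, 240.0] {
        println!(
            "{hz} Hz: 2 ms pause = {:.0}% of the frame budget",
            pause_share_percent(hz, 2.0)
        );
    }
}
```

At 60 Hz the pause is 12% of the budget; at 240 Hz it is nearly half.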

Rust is possibly an ideal language for game development. Standard Rust has no runtime component, which is perfect. It's also a high-performance language.

The ecosystem for Rust game development is immature, but there's progress being made.

Standard Rust absolutely has runtime components. Otherwise a "Hello World" would not compile into several hundred KiB of binary. (Not saying that's bad, just that this is how it is.)

I haven't looked since, but I believe jemalloc was a significant cost here. Recently a change was merged to use the system allocator by default.
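For reference, a crate can also opt into the system allocator explicitly via the stable `#[global_allocator]` attribute; a minimal sketch:

```rust
// Route all heap allocation through the platform allocator (malloc/free)
// instead of a bundled allocator like jemalloc.
use std::alloc::System;

#[global_allocator]
static GLOBAL: System = System;

fn total(values: &[u32]) -> u32 {
    values.iter().sum()
}

fn main() {
    // This Vec is allocated via the system allocator.
    let v: Vec<u32> = (1..=4).collect();
    println!("sum = {}", total(&v)); // prints "sum = 10"
}
```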


It has a runtime in the sense of the C runtime: the code that runs before and after `main`, plus the standard library. It doesn't have a runtime in the sense of the Java or C# runtime, which includes code that runs alongside yours (e.g. a garbage collection thread).

Yes, I was loose in my use of the term "runtime" and sacrificing correctness. Writing "runtime component" rather than "runtime" makes it more misleading. I probably should've worded my original comment more wisely.

Sorry, within 2 ms, not absolute zero, but to me it might as well be zero; I've yet to notice it in practice.

For a network driver, 2ms seems enormous. At 1Gbps that's 250 kilobytes, which is an awful lot of packets.
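Sanity-checking that figure with a quick calculation (assumed numbers: 1 Gbit/s link, 1500-byte MTU):

```rust
// Data arriving during a GC pause on a given link, and roughly how many
// MTU-sized packets that represents.
fn bytes_during_pause(link_bits_per_s: u64, pause_ms: u64) -> u64 {
    link_bits_per_s * pause_ms / 1000 / 8
}

fn main() {
    let bytes = bytes_during_pause(1_000_000_000, 2);
    println!("{bytes} bytes arrive during the pause"); // 250000 bytes
    println!("≈ {} full 1500-byte packets", bytes / 1500);
}
```

That is roughly 166 full-size packets that must be buffered or dropped.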

But GC doesn’t run continuously! Sure, device drivers and kernel modules should be built in C. Everything else is an approximation.

I appreciate your enthusiasm for Nim, but even I (as a Nim core dev) wouldn't say that Nim is better than Rust in every way.

Trying to promote Nim like you are doing might in fact have the opposite effect :/

Appreciate your work on Nim.

My statement is based on my opinion. There are certainly imperfections in Nim, even bugs and things that annoy me, but the mental load is far less than working in Rust, for me at least. So to me it is better in every way, since in the end I find it congruent with my mental model, whereas I can't say the same about Rust. It is a purely opinionated statement, like everything else where meaning is involved. We are the meaning makers; it doesn't exist in detached form.

Your opinion is valid.

In every way? One could argue that having a GC is a disadvantage compared to Rust regardless of how good the GC is, since it might still introduce unnecessary runtime overhead.

Also what about the data race detection stuff that rust does? A GC does not stop data races.
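A sketch of what that looks like in Rust: shared mutable state across threads only compiles when it is behind a synchronization primitive, so the racy version is rejected at compile time rather than caught (or missed) at runtime.

```rust
// Rust's Send/Sync bounds force shared mutable state behind a lock;
// an unsynchronized shared counter would not compile.
use std::sync::{Arc, Mutex};
use std::thread;

fn parallel_count(n_threads: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..n_threads)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || *c.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let result = *counter.lock().unwrap();
    result
}

fn main() {
    println!("count = {}", parallel_count(4)); // count = 4, deterministically
}
```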

It does not.

GCs absolutely increase memory pressure overhead, or are so conservative that they practically lead to slow leaks. You have to keep that metadata of "what is and isn't a pointer" somewhere and that comes with a cost.

Memory pressure is a different thing. And, once again, GC is optional.

> Memory pressure is a different thing.

No it isn't. The GC has to touch that data, which means it's cache pressure, which means it's perf pressure.

> And, once again, GC is optional.

Sure, but it's an exclusive choice between memory safety and speed. Other languages let you have your cake and eat it too in that respect (Cyclone, Rust, etc.).
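A minimal sketch of that "cake and eat it too" point in Rust: memory is freed deterministically when its owner goes out of scope, and misuse is a compile error rather than a runtime check or a GC responsibility.

```rust
// Ownership gives deterministic deallocation with no GC; the commented-out
// line below would be rejected at compile time (use after move).
fn consume(s: String) -> usize {
    s.len() // `s` is dropped (freed) here when it goes out of scope
}

fn main() {
    let greeting = String::from("hello");
    let n = consume(greeting);
    // println!("{}", greeting); // error[E0382]: borrow of moved value
    println!("len = {n}"); // len = 5
}
```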
