Speed of Rust vs. C (kornel.ski)
619 points by sivizius on March 13, 2021 | 525 comments



"But the biggest potential is in ability to fearlessly parallelize majority of Rust code, even when the equivalent C code would be too risky to parallelize. In this aspect Rust is a much more mature language than C."

Yes. Today, I integrated two parts of a 3D graphics program. One refreshes the screen and lets you move the viewpoint around. The other loads new objects into the scene. Until today, all the objects were loaded, then the graphics window went live. Today, I made those operations run in parallel, so the window comes up with just the sky and ground, and over the next few seconds, the scene loads, visibly, without reducing the frame rate.

This took about 10 lines of code changes in Rust. It worked the first time it compiled.


>> One refreshes the screen and lets you move the viewpoint around. The other loads new objects into the scene.

How did you do that in Rust? Doesn't one of those have to own the scene at a time? Or is there a way to make that exclusive ownership more granular?


Since this got so many upvotes, I'll say a bit more. I'm writing a viewer for a virtual world. Think of this as a general-purpose MMO game client. It has no built-in game assets. Those are downloaded as needed. It's a big world, so as you move through the world, more assets are constantly being downloaded and faraway objects are being removed. The existing viewers are mostly single-threaded C++, and they run out of CPU time.

I'm using Rend3, which is a 3D graphics library for Rust that uses Vulkan underneath. Rend3 takes care of memory allocation in the GPU, which Vulkan leaves to the caller, and it handles all the GPU communication. The Rend3 user has to create all the vertex buffers, normal buffers, texture maps, etc., and send them to Rend3 to be sent to the GPU. It's a light, safe abstraction over Vulkan.

This is where Rust's move-semantics ownership transfer helps. The thread that's creating objects to be displayed makes up the big vertex buffers, etc., and then asks Rend3 to turn them into a "mesh object", "texture object", or "material object". That involves some locking in Rend3, mostly around GPU memory allocation. Then, the loader puts them together into an "object", and tells Rend3 to add it to the display list. This puts it on a work queue. At the beginning of the next frame, the render loop reads the work queue, adds and deletes items from the display list, and resumes drawing the scene.

Locking is brief, just the microseconds needed for adding things to lists. The big objects are handed off across threads, not recopied. Adding objects does not slow down the frame rate. That's the trouble with the existing system. Redraw and new object processing were done in the same thread, and incoming updates stole time from the redraw cycle.

If this was in C++, I'd be spending half my time in the debugger. In Rust, I haven't needed a debugger. My own code is 100% safe Rust.


Wonderful! Thanks for sharing. This sounds like the exact sort of work that Rust is perfect for.

I'm making a game in Rust and Godot (engine) and since it's a factory game the simulation performance is important. Rust means I worry far less about stability and performance.

I bet if you wrote a good blog entry with screenshots and explanation of how your code loads and renders I imagine it would do well on HN.


Too soon. Someday perhaps a Game Developers Conference paper/talk. I was considering one, but live GDC has been cancelled for 2021. My real interest in this is how do we build a big, seamless metaverse that goes fast. I'm far enough along to see that it's possible, but not far enough along that people can use the client.

Rust is good for this sort of thing. It's overkill for most web back end stuff. That's where Go is more useful. Go has all those well-used libraries for web back end tasks. Parallelism in web back ends tends to be about waiting for network events, not juggling heavy compute loads of coordinated disparate tasks. Hence all the interest in "async" for web servers. As I've said before, use the right tool for the job.


One idea I liked was ...

You have an authoritative world simulation server, as usual.

You then have several servers whose chief job is to keep clients in sync with the authoritative server.

Most network games combine these two roles, but there is a lot of processing and network traffic required to keep clients in sync. For massive multiplayer there is a benefit to scale the "client-interaction" servers.


My question is probably off base because I lack the knowledge, but how do commercial games/game engines do this if it's such rocket science? Something like Fortnite or an aged GTA has done what you've described (downloading assets on demand without any fps drop) for quite some time now.


The claim isn't that it's impossible, or "rocket science", it's that it's hard to do right and was made much easier. You're bringing up for comparison a game engine that has been in constant development by experts for over two decades (Unreal Engine) and a game engine in constant development for over a decade (RAGE). Just because someone makes a professional product using tens or hundreds of millions of dollars doesn't mean it was easy.

There's a reason why Epic is able to charge a percentage of sales for their engine, and companies still opt for it. That's because it's hard to do reliably and with good performance and visuals.


Yeah, but just picking one of many requirements in game dev and advocating why lang x can do this better than y ignores all the other checkboxes. Yeah, C++ is nerve-wracking, but Rust can be even more so. IIRC there was a thread about Zig vs Rust and why Rust is just the wrong tool (for OP's use case in that case). IDK, but there is a reason why C++ dominates game dev and a reason why Rust still struggles with mainstream adoption compared to languages of the same age like Go or TS.


Game dev has nothing to do with it.

The claim was that Rust made making something parallel easy/easier, and the illustrative example given was someone trying to parallelize something in a game engine. Whether Rust is good for game development or not is irrelevant to the point that was being made.

Even if everyone accepted that Rust was horrible for game development on most metrics, the example given would still carry the exact same weight, because it's really about "parallelizing this would be very hard in C++, but it was very easy because of Rust."


The simplest (and often best) option is to use the Arc<Mutex<MyStruct>> pattern.

The Arc is an async reference counter that allows multiple ownership. And the nested Mutex enforces only one mutable borrow at a time.
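
A minimal sketch of the pattern (the Scene type here is hypothetical, purely for illustration):

    use std::sync::{Arc, Mutex};
    use std::thread;

    // Hypothetical scene type, standing in for whatever you render.
    struct Scene {
        object_count: usize,
    }

    fn main() {
        let scene = Arc::new(Mutex::new(Scene { object_count: 0 }));

        // The loader thread gets its own handle to the same scene.
        let loader_scene = Arc::clone(&scene);
        let loader = thread::spawn(move || {
            // Lock only for the brief moment of mutation.
            let mut s = loader_scene.lock().unwrap();
            s.object_count += 1;
        });

        loader.join().unwrap();

        // The render side locks independently whenever it needs access.
        let s = scene.lock().unwrap();
        println!("objects in scene: {}", s.object_count);
    }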


I think you mean that Arc is an atomic reference counter (it uses atomic cpu instructions to prevent race conditions when incrementing and decrementing the ref count)


Ah yes, sorry. I remember it in my head with "async", but you're right. :)


Eh, with Arc you can share ownership easily, and there are probably a lot of cleverer concurrent data structures or entity component kinda things that'd just work too. But maybe you can arrange things so that one thread owns the scene but the other thread can still do useful work?


This typically isn't possible because the rendering context is global and is needed for both loading and rendering. You need an Arc to guarantee the correct Drop mechanism.


I'm not sure what your architecture looks like, but you might not even need to lock things. I find that using mpsc channels allows me to get around like 60% of locking. Essentially, you have some sort of main loop, then you spawn a thread, load whatever you need there, and then send it to the main thread over mpsc. The main thread handles it on the next iteration of the main loop.
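
A rough sketch of that pattern (the Mesh type is made up; the point is that the loaded data is moved across the channel, not copied, and the main loop drains it without blocking):

    use std::sync::mpsc;
    use std::thread;
    use std::time::Duration;

    // Stand-in for a large loaded asset.
    struct Mesh {
        vertices: Vec<[f32; 3]>,
    }

    fn main() {
        let (tx, rx) = mpsc::channel::<Mesh>();

        // Loader thread: do the slow work off the main thread, then send.
        thread::spawn(move || {
            let mesh = Mesh { vertices: vec![[0.0; 3]; 1_000_000] };
            tx.send(mesh).unwrap(); // ownership of the buffer moves here
        });

        // Main loop: handle whatever has arrived without blocking the frame.
        for _frame in 0..60 {
            while let Ok(mesh) = rx.try_recv() {
                println!("got mesh with {} vertices", mesh.vertices.len());
            }
            // ... render the frame ...
            thread::sleep(Duration::from_millis(16));
        }
    }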


But Rust works badly with mmapped (memory-mapped) files, as the article notes. So in C you could load (and save!) stuff almost instantly, whereas in Rust you still have to de-serialize the input stream.


No you don't. I've written multiple programs that load things instantly off the file system via memory maps. See the fst crate[1], for example, which is designed to work with memory maps. imdb-rename[2] is a program I wrote that builds a simple IR index on your file system, which it can then search instantly by virtue of memory maps.

Rust "works badly with memory mapped files" doesn't mean, "Rust can't use memory mapped files." It means, "it is difficult to reconcile Rust's safety story with memory maps." ripgrep for example uses memory maps because they are faster sometimes, and its safety contract[3] is a bit strained. But it works.

[1] - https://github.com/BurntSushi/fst/

[2] - https://github.com/BurntSushi/imdb-rename

[3] - https://docs.rs/grep-searcher/0.1.7/grep_searcher/struct.Mma...


I didn't read your code but one problem I suspect you ran into is that you had to re-invent your container data structures to make them work in a mmapped context.


No, I didn't. An fst is a compressed data structure, which means you use it in its compressed form without decompressing it first. If you ported the fst crate to C, it would use the same technique.

And in C, you have to design your data structures to be mmap friendly anyway. Same deal in Rust.

But this is moving the goal posts. This thread started with "you can't do this." But you can. And I have. Multiple times. And I showed you how.


> which means you use it in its compressed form without decompressing it first.

So your code operates directly on a block of raw bytes? I can see how that can work with mmap without much trouble.

My argument was more about structured data (created using the type system), which is a level higher than raw bytes.


> So your code operates directly on a block of raw bytes? I can see how that can work with mmap without much trouble.

Correct. It's a finite state machine. The docs of the crate give links to papers if you want to drill down.

> My argument was more about structured data (created using the type system), which is a level higher than raw bytes.

Yes. You should be able to do in Rust whatever you would do in C. You can tag your types with `repr(C)` to get a consistent memory layout equivalent to whatever C does. But when you memory map stuff like this, you need to take at least all the same precautions as you would in C. That is, you need to build your data structures to be mmap friendly. The most obvious thing that is problematic for mmap structures like this, but otherwise easy to do, is pointer indirection.
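
As a concrete sketch of what "mmap friendly" can look like, assuming the memmap2 crate and a hypothetical Header layout: `repr(C)`, fixed-width fields, and file offsets instead of pointers:

    use std::fs::File;
    use memmap2::Mmap;

    #[repr(C)]
    #[derive(Clone, Copy)]
    struct Header {
        magic: u32,
        version: u32,
        record_count: u64,
        records_offset: u64, // a file offset, not a pointer
    }

    fn read_header(path: &str) -> std::io::Result<Header> {
        let file = File::open(path)?;
        // Safety: we're promising the file won't be mutated while mapped;
        // as discussed here, that promise is exactly the hard part.
        let mmap = unsafe { Mmap::map(&file)? };
        // (A real version would check the file is at least this long.)
        let bytes = &mmap[..std::mem::size_of::<Header>()];
        // An unaligned read is the conservative choice, and no reference
        // into the mapped memory escapes this function.
        let header = unsafe { (bytes.as_ptr() as *const Header).read_unaligned() };
        Ok(header)
    }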

With that said, this technique is not common in Rust because it requires `unsafe` to do it. And when you use `unsafe`, you want to be sure that it's justified.

This is all really beside the point. You'd have the same problems if you read a file into heap memory. The main problem in Rust land with memory maps is that they don't fit into Rust's safety story in an obvious way. But this in and of itself doesn't make them inaccessible to you. It just makes it harder to reason about safety.


Dang, burntsushi up in the house! Hey, just wanted to say I enjoy your work––I've learned a lot from it. Thank you!


It's very tedious to debate with someone who explicitly makes assumptions about something (like code) without having read it, and puts the burden of refuting those assumptions on you...


It doesn't say it "works badly"; it says the borrow checker can't protect against external modifications to the file while it's memory-mapped, which has a host of issues in C as well.

You can mmap files in Rust just fine, but it’s generally as dangerous as it is in C.


I don’t get this obsession with “dangerous.” Honestly, what does that even mean? I think a better word is “error-prone.” Danger is more like, “oh my god a crocodile!”


> Honestly, what does that even mean?

It has a very specific meaning in Rust: the user can cause memory unsafety if they make a mistake.

> I think a better word is “error-prone.”

The issue with the connotation there is that it's not about the rate of problems, it's about them going from "impossible" to "possible."


There can be real danger when the code is used in certain applications. For example when controlling the gate of the crocodile cage in a zoo.


Concurrency bugs can absolutely cause dangerous danger of the deadly variety:

https://en.m.wikipedia.org/wiki/Therac-25


Unfortunately, as is almost always the case, it was negligence rather than some particular language feature:

“A commission attributed the primary cause to general poor software design and development practices rather than single-out specific coding errors. In particular, the software was designed so that it was realistically impossible to test it in a clean automated way.“

Ergo, concurrency doesn’t kill people, people do.


You sound like you're making a refutation, but you really aren't. This whole discussion is about giving tools to developers that are systematically less error-prone, which your quote suggests would have been helpful to that specific development team.


the main problem here is that C has the capability to declare mmap regions correctly: `volatile char[]` and Rust does not (`[Cell<u8>]` is close but not exactly right, and annoying)

most rust folks who use mmap don't mark the region as Celled, which means they risk UB in the form of incorrect behavior because the compiler assumes that the memory region is untouchable outside the single Rust program, and that's not true

(it's also not true generally b/c /dev/mem and /proc/pid/mem exist, but it's beyond Rust's scope that the OS allows intrusion like that)


Errors are up to interpretation. It just means the thing didn't happen as requested. Errors are meant to be expected or not expected depending on the context.

Dangerous means dangerous. It's not up for interpretation.

Languages have multiple, very different words, for exactly this reason.


Agreed. But still, folks make it sound bad. For instance, "danger" in many contexts could also be reframed as "powerful", could it not?


But that may be of little solace. If you snapshot your entire heap into an mmapped file for fast I/O, then basically the entire advantage of Rust is gone.


Is there literally no other code in the application?

Rust has plenty of situations where you do unsafe things but wrap them in safe APIs. If you're returning regions of that mmapped file, for example, a lifetime can be associated with those references to ensure that they are valid for the duration of the file being mmapped in the program.

It can be used to ensure that if you need to write back to that mmapped file (inside the same program), there are no existing references to it, because those would be invalid after an update to the file. You need to do the same in C, but there are no guardrails you can build in C to make that same assurance.
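
A minimal sketch of that "unsafe inside, safe API outside" idea, assuming the memmap2 crate (the MappedFile wrapper itself is hypothetical):

    use std::fs::File;
    use memmap2::Mmap;

    // The unsafe mmap call is confined to this wrapper; the regions it
    // hands out borrow from `self`, so the borrow checker keeps them
    // from outliving the mapping.
    struct MappedFile {
        mmap: Mmap,
    }

    impl MappedFile {
        fn open(file: &File) -> std::io::Result<MappedFile> {
            // Caveat: nothing stops another *process* from changing the
            // file, which is exactly the gap discussed above.
            let mmap = unsafe { Mmap::map(file)? };
            Ok(MappedFile { mmap })
        }

        // The returned slice is tied to `&self`: it cannot outlive the map.
        fn region(&self, start: usize, len: usize) -> Option<&[u8]> {
            let end = start.checked_add(len)?;
            self.mmap.get(start..end)
        }
    }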


> If you snapshot your entire heap into an mmapped file for fast I/O,

I've never heard of this trick. And my first reaction is "That would be a nightmare of memory unsafety if I did it in C++"

What's it used for? IPC?


I think emacs (used to?) do something awful like this. https://lwn.net/Articles/707615/


I'd call mmaping data structures into memory an advanced systems programming trick which can result in a nice performance boost but which also has some severe drawbacks (portability across big/little endian architectures and internal pointers being two examples).

I know some very skilled C++ and Rust developers who can pull it off. If you're at that skill level, Rust is not going to get in your way because you're just going to use unsafe and throw some sanitizers and fuzzers at it. I wouldn't trust myself to implement it.


You have to combine it with other techniques, e.g. journaling, to make it safe, but this is not always necessary (e.g. when using large read-only data structures).


In C you can access pointers to memory mapped files effortlessly, in ways that are often extremely unsafe against the possible existence of other writers and against the mapping being unmapped and mapped elsewhere. It's also traditional to pretend that putting types like int in a mapped file is reasonable, whereas one ought to actually store bytes and convert as needed. Rust at least requires a degree of honesty.


Is it something deeply ingrained in Rust? Or is it something Rust is working on?


It's more like, Rust wants to make guarantees that just aren't possible for a block of memory that represents a world-writable file that any part of your process, or any other process in the OS, might decide to change on a whim.

In other words, mmaped files are hard, and Rust points this out. C just provides you with the footgun.


The problem is that compilers are allowed to make some general assumption about how they're allowed to reorder code, always based on the assumption that no other process is modifying the memory. For example, the optimizer may remove redundant reads. That's a problem if the read isn't really redundant -- if the pointer isn't targeting process-owned memory, but a memory mapped file that's modified by someone else. Programs might crash in very "interesting" ways depending on optimization flags.

C has this issue as well, but Rust's compiler/borrow checker is particularly strong at this kind of analysis, so it's potentially bitten even harder.
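
One common (and only partial) mitigation is volatile reads, which tell the compiler every read must actually happen. A sketch:

    use std::ptr;

    // Read a byte twice from memory that another process may be writing
    // (e.g. a shared mapping). With `read_volatile`, two reads really are
    // two reads; the optimizer can't fold them into one. Note this only
    // addresses the elision/reordering problem; it is not a
    // synchronization mechanism and doesn't make concurrent writes
    // data-race free.
    fn sample_twice(p: *const u8) -> (u8, u8) {
        unsafe { (ptr::read_volatile(p), ptr::read_volatile(p)) }
    }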


While it works great for some cases, one should not forget it doesn't cover external resources, especially those shared across processes.


You have made this claim multiple times. Why do you see this as a language issue and not an OS issue? It becomes an even bigger problem when we talk about distributed systems and distributed resources. Is there a language that handles this?

These issues about multiple processes and distributed systems are framework and OS level concerns. Rust helps you build fast concurrent solutions to those problems, but you’re correct that it can not solve problems exterior to the application runtime. How is that a deficiency with Rust?


Fearless concurrency sales pitch.

Yes languages like Erlang and runtimes like Coyote and Orleans.


Erlang has a great concurrency model with higher overhead than Rust but similar cross-thread safety; it doesn't do anything about resources exterior to the application.

I’ve not worked with Coyote, but if it is the system for .net, it describes itself as a framework, “Coyote provides developers a programming framework for confidently building reliable asynchronous software on the .NET platform”.

Orleans similarly describes itself as a framework, “Orleans is a cross-platform software framework for building scalable and robust distributed interactive applications based on the .NET Framework.”

Rust is a language; similar frameworks are being built with it. The point you're making does not appear to be about the language.


If I understand correctly, the Erlang point was that you can have a distributed system by using BEAM to scale to multiple machines and have them communicate via message passing, which is all possible and encouraged because of how you structure and write code in Erlang: as actors with mailboxes, isolated from each other except for the messages that are passed.

You say that the Erlang concurrency model has higher overhead than Rust. In Rust there are probably multiple projects going on right now (one of them is Bastion, but I guess there are probably others) which try to provide Erlang-like concurrency. What do you mean by the overhead of a concurrency model (that of Erlang) being higher than the overhead of a programming language (Rust)? As far as I know, Erlang's lightweight processes are about as lightweight as you can get. Is there a Rust framework for Erlang-like concurrency which reduces the footprint of lightweight processes even more?


That wasn't meant to be a snide comment about Erlang in any way. All I meant by the comment about higher overhead was that the language itself generally has more costs to run (runtime, memory usage, garbage collection, interpretation, etc.) than Rust.

The “process” model of Erlang is about as lightweight as you can get, agreed.

In terms of capabilities of beam across systems, point taken. Though we start stretching some of the understanding of where languages end and runtimes begin... Rust and C make those boundaries a little more clear.


How could an OS adapt its process functionality to help Rust here?


Capabilities would probably be helpful :V



Without real-world data, "fearlessly parallelizing all the things!" is an awful idea due to all the overhead involved.

The most important design decision when writing a parallel algorithm is deciding below what amount of data it is not worth it.


He tried with little effort and noticed that for his use case the code is faster; I fail to understand this rebuttal of the parent's comment.


The average cellphone today has more than 4 cores. A decent desktop can deal with 16 threads on 8 cores.

There is a lot of untapped parallelism readily available waiting for the right code.


It's not about the number of available threads; the very act of scheduling tasks across multiple threads has scheduling and communication overheads, and in many situations it actually ends up being slower than running on a single thread.

That said, I think the original comment was rightly pointing out how easy it was to make the change and test it, which in this case did turn out to be noticeably faster.


Parallelization is the nuclear energy of comp science. Loads of potential, high risk/reward, and they would have gotten away with it if it were not for those meddling humans. It's non-trivial and can only be handled by accomplished engineers. Thus it is not used - or is encapsulated, out of sight, out of reach of meddling hands. (CPU Microcode shovelling non-connected work to pipelines comes to mind / NN-Net training frameworks, etc.)


> [...] or is encapsulated, out of sight, out of reach of meddling hands.

That's the real issue here! Most languages have poor abstractions for parallelism and concurrency. (Many languages don't even differentiate between the two.)

Encapsulating and abstracting is how we make things usable.

Eg letting people roll hash tables by themselves every time they want to use one would lead to people shooting themselves in the foot more often than not. Compared to that, Python's dicts are dead simple to use. Exactly because they move all the fiddly bits out of the reach of meddling hands.


> Exactly because they move all the fiddly bits out of the reach of meddling hands.

You don't even need to hide those out of reach. You just need to make it dead simple to use and pretty much nobody will want to touch the fiddly bits.

And those who do know they had it coming.


Oh, definitely. You just need to make it so that people don't have to fiddle with it and don't fiddle with it by accident.

You seldom have to protect against malicious attempts. (But when you do, it's a much harder task.)


As a general thought about parallelizing all the things it's true though. When looking for speedups, parallelization granularity has to be tuned and iterated with benchmarking, else your speedups will be poor or negative.

I think the example case in this subthread was about making some long app operations asynchronous and overlapping, which is a more forgiving use case than trying to make a piece of code faster by utilizing multiple cores.


Also Rust is risky to parallelize: you can get deadlocks.

I don't get the obsession with parallel code in low level languages by the way. If you have an architecture where you can afford real parallelism you can afford higher level languages anyway.

In embedded applications you don't usually have the possibility to have parallel code, and even in low level software (for example the classical UNIX utilities), for simplicity and solidity using a single thread is really fine.

Threads also are not really as portable as they seem; different operating systems have different ways to manage threads, or don't even support threads at all.


This is a bad take. ripgrep, to my knowledge, cannot be written in a higher level language without becoming a lot slower.[1] And yet, if I removed its use of parallelism by default, there will be a significantly degraded user experience by virtue of it being a lot slower.

This isn't an "obsession." It's engineering.

[1] - I make this claim loosely. Absence of evidence isn't evidence of absence and all that. But if I saw ripgrep implemented in, say, Python and it matched speed in the majority of cases, I would learn something.


Python isn't really something I would even consider as a possible example; Common Lisp, D, Nim, Swift, most likely.


So? I said, "higher level language." I didn't say, "Python specifically."

I would guess D could do it.

I don't know enough about Nim or Swift.

I would learn something if Common Lisp did it. I'd also learn something if Haskell or Go did it.


I am not trying to contradict anyone here, but any language mature enough to have an impl/way to not have arbitrary performance ceilings needs access to inline assembly/SIMD. Cython/Nim/SBCL can all do that..probably Haskell..Not so sure about Go or Swift. Anyway, many languages can respond well to optimization effort. I doubt anyone disagrees.

At the point of realizing the above no ceiling bit, the argument devolves to more one about (fairly subjective) high/low levelness of the code itself/the effort applied to optimizing, not about the language the code is written in. So, it's not very informative and tends to go nowhere (EDIT: especially when the focus is on a single, highly optimized tool like `rg` as opposed to "broad demographic traits" of pools of developers, and "levelness" is often somewhat subjective, too).


You're missing the context I think. Look at what I was responding to in my initial message in this thread:

> If you have an architecture where you can afford real parallelism you can afford higher level languages anyway.

My response is, "no you can't, and here's an example."

> but any language mature enough to have an impl/way to not have arbitrary performance ceilings needs access to inline assembly/SIMD

If you ported ripgrep to Python and the vast majority of it was in C or Assembly, then I would say, "that's consistent with my claim: your port isn't in Python."

My claim is likely more subtle than you might imagine. ripgrep has many performance sensitive areas. It isn't enough to, say, implement the regex engine in C and write some glue code around that. It won't be good enough. (Or at least, that's my claim. If I'm proven wrong, then as I said, I'd learn something.)

> At the point of realizing the above no ceiling bit, the argument devolves to more one about (fairly subjective) high/low levelness of the code itself/the effort applied to optimizing, not about the language the code is written in. So, it's not very informative and tends to go nowhere.

I agree that it's pretty subjective and wishy washy. But when someone goes around talking nonsense like "if parallelism is a benefit then you're fine with a higher level language," you kind of have to work with what you got. A good counter example to that nonsense is to show a program that is written in a "lower" level language that simultaneously benefits from parallelism and wouldn't be appropriate to do in a higher level language. I happen to have one of those in my back-pocket. :-) (xsv is another example. Compare it with csvkit: even though csvkit's CSV parser is written in C, it's still dog slow, because the code around the CSV parser matters.)


Ok. "Afford parallelism => afford high level" with the implication of HL=slow does sound pretty off base. So, fair enough.

FWIW, as per your subtle claim, it all seems pretty hot spot optimizable to me, at least if you include the memchr/utf8-regex engine in "hot spot". I do think the entire framing has much measurement vagueness ("hot", "vast majority", "levelness", and others) & is unlikely to be helpful, as explained. In terms of "evidence", I do not know of a competitor who has put the care into such a tool to even try to measure, though. { And I love rg. Many thanks and no offense at all was intended! }


ack might be an example. It's Perl, not Python, and its author is on record as saying that performance isn't his goal. So it's a bit of a strained one. But yes, it's true, I don't know any other serious grep clone in a language like Python. This is why I hedged everything initially by saying that I know that absence of evidence isn't evidence of absence. :-) And in particular, I framed this as, "I would learn something," rather than, "this is objective fact." So long as my standard is my own experience, the hand wavy aspect of this works a bit better IMO.

> I do not know of a competitor who has put the care into such a tool to even try to measure, though.

Right. Like for example, I am certain enough about my claim that I would never even attempt to do it in the first place. I would guess that others think the same. With that said, people have written grep's in Python and the like, and last time I checked, they were very slow. But yeah, the "development effort" angle of this likely makes such tools inappropriate for a serious comparison to support my claim. But then again, if I'm right, the development effort required to make a Python grep be as fast as ripgrep is insurmountable.

> it all seems pretty hot spot optimizable to me

As long as we're okay with being hand wavy, then I would say that it's unlikely. Many of the optimizations in ripgrep have to do with amortizing allocation, and that kind of optimization is just nearly completely absent in a language like Python unless you drop down into C. This amortization principle is pervasive and applies from regex internals all the way to the code that simply prints ripgrep's output (which is in and of itself a complex beast and quite performance sensitive in workloads with lots of matches), and oodles of stuff in between.
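
To give a flavor of what "amortizing allocation" means here, a tiny illustrative sketch (not ripgrep's actual code): one buffer reused across the whole loop instead of a fresh String per line:

    use std::io::{self, BufRead};

    fn count_matching_lines(reader: &mut impl BufRead, needle: &str) -> io::Result<usize> {
        // One buffer for the whole loop. `lines()` would allocate a new
        // String per iteration; `read_line` reuses this one, so once it
        // has grown to the longest line seen, the loop allocates nothing.
        let mut buf = String::new();
        let mut count = 0;
        loop {
            buf.clear(); // drops the contents, keeps the capacity
            if reader.read_line(&mut buf)? == 0 {
                break; // EOF
            }
            if buf.contains(needle) {
                count += 1;
            }
        }
        Ok(count)
    }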

> { And I love rg. Many thanks and no offense at all was intended! }

:-) No offense taken. This is by far the best convo I'm having in this HN post. Lol.

Note that I've made similar claims before. In the last one, there is a lot more data: https://news.ycombinator.com/item?id=17943509


When I used to write in Cython + NumPy I would pre-allocate numpy arrays written into by Cython. It's C-like, but because of the gradual typing I think it's firmly in the higher level (for some value of "er"). One can certainly do that stuff in Nim/SBCL/etc. (and one sees it done).

While allocation is pretty pervasive, I'm skeptical that everywhere or even most places you do it is an important perf bottleneck. Without a count of these 20 times it matters and these 40 it doesn't, it's just kind of guesswork from an all too often frail human memory/attention that "ignores the noise" by its very nature. You might be right. Just trying to add some color. :-)

Another way to think of this is to imagine your own codebase "in reverse". "If I drop this optim, would I see it on that profile?" Or look at the biggest piles of code in your repo and ask "Is this in the critical path/really perf necessary?" and the like. Under the assumption that higher level things would be a lot shorter, that kind of thought experiment can inform. Maybe an approach toward more objectivity, anyway. Little strewn about tidbits in every module don't really count { to me :-) } - that speaks more to abstraction problems.

But I don't think there is a lot of value in all the above gedankenizing. While I realize some bad "just throw money at it" kicked this off, one of my big objections to the entire framing is that I think people and their APIs really "alter the level" of a language. Indeed their experience with the language has big impact there. Everyone reading this knows C's `printf(fmt, arg1, arg2,..)`. Yet, I'd bet under 1% have heard of/thought to do an allocating (or preallocated) string builder variant like `str(sub1, sub2, ..., NULL)` or using/acquiring something like glibc's `asprintf`. People will say "C is low level - It has no string concatenation!". Yet, inside of an hour or two most medium-skill devs could write my above variadic string builder or learn about vasprintf. Or learn about Boehm-Demers-Weiser for garbage collected C or 100 other things like that CTL I mentioned elsewhere in this thread.

So what "level" is C, the language? Beats me. Does it have concatenation? Well, not spelled "a+b" but maybe spelled not much worse "str(a,b,NULL)". Level all just depends so much on how you use it. Performance is similar. Much C++ (and Rust for that matter) is terribly inefficient because of reputations for being "fast languages" leading to less care (or maybe just being done by junior devs..). These "depends" carry over to almost anything..not just Rust or C, but sometimes even English. I am usually told I write in much too detailed a way and a trimmer way might have higher persuasion/communication performance! { How's that for "meta"? ;-) }

> This is by far the best convo I'm having in this HN post. Lol.

Cool, cool. There can be a lot of "Rust Rage" out there (in both directions, probably). :)

Anyway, I don't think we'll resolve anything objective here, but don't take a lack of response as indicating anything other than that. You aren't making any strong objective claims to really rebut and I'm glad that you personally undertook the challenge to do ripgrep in any language. I do think many might have done..maybe Ada, too, and probably many more, but maybe all at the same "realized levelness". You just did not know them/feel confident about getting peformance in them. Which is fine. A.Ok, even! I guess your other biggy is Go and that might actually not have worked of all the alternatives bandied about by pjmlp and myself so far.


> While allocation is pretty pervasive, I'm skeptical that everywhere or even most places you do it is an important perf bottleneck. Without a count of these 20 times it matters and these 40 it doesn't, it's just kind of guesswork from an all too often frail human memory/attention that "ignores the noise" by its very nature. You might be right. Just trying to add some color. :-)

In general I agree. But I'm saying what I'm saying because of all the times I've had to change my code to amortize allocation rather than not do it. It's just pervasive because there are all sorts of little buffers everywhere in different parts of the code. And those were put there because of experimenting that said the program benefited from them.

The key here is that the loops inside of ripgrep can grow quite large pretty quickly. There's the obvious "loop over all files," and then there's "loop over all lines" and then "loop over all matches." ripgrep has to do work in each of those loops and sometimes the work requires allocation. Even allocations at the outermost loop (looping over all files) can cause noticeable degradations in speed for some workloads.

This is why I'm so certain.

The numpy example is a good one where a substantial amount of code has been written to cater to one very specific domain. And in that domain, it's true, you can write programs that are very fast.

> So what "level" is C, the language?

Oh I see, I don't think I realized you wanted to go in this direction. I think I would just say that I absolutely agree that describing languages as "levels" is problematic. There's lots of good counter examples and what not. For example, one could say that Rust is both high level and low level and still be correct.

But like, for example, I would say that "Python is high level" is correct and "Python is low level" is probably not. But they are exceptionally general statements and I'm sure counter-examples exist. They are, after all, inherently relativistic statements, so your baseline matters.

That's kind of why I've stayed in "hand wavy" territory here. If we wanted to come down to Earth, we could, for example, replace "high level languages" in the poster's original statement with something more precise but also more verbose that this discussion still largely fits.

> I am usually told I write in much too detailed a way and a trimmer way might have higher persuasion/communication performance! { How's that for "meta"? ;-) }

Yeah, it's hard to be both pithy and precise. So usually when one is pithy, it's good to take the charitable interpretation of it. But we are technical folks, and chiming in with clarifications is to be expected.

> I don't think we'll resolve anything objective here

Most definitely. At the end of the day, I have a prior about what's possible in certain languages, and if that prior is proven wrong, then invariably, my mental model gets updated. Some priors are stronger than others. :-)

> You aren't making any strong objective claims to really rebut

Right. Or rather, my claims are rooted in my own experience. If we were going to test this, we'd probably want to build a smaller model of ripgrep in Rust, then try implementing that in various languages and see how far we can get. The problem with that is that the model has to be complex enough to model some reasonable real world usage. As you remove features from ripgrep, so too do you remove the need for different kinds of optimizations. For example, if ripgrep didn't have replacements or didn't do anything other than memory map files, then that's two sources of alloc amortization that aren't needed. So ultimately, doing this test would be expensive. And that's ignoring the quibbling folks will ultimately have about whether or not it's fair.

> I guess your other biggy is Go and that might actually not have worked of all the alternatives bandied about by pjmlp and myself so far.

I would guess Go would have a much better shot than Python. But even Go might be tricky. Someone tried to write a source code line counter in Go, put quite a bit of effort into it, and couldn't get over the GC hurdle: https://boyter.org/posts/sloc-cloc-code/ (subsequent blog posts on the topic discuss GC as well).


I feel we've talked past each other about what is/is not Python a few times. There is Cython and Pythran and Pypy and ShedSkin and Numba and others that are targeting, for lack of a more precise term, "extreme compatibility with" CPython, but also trying to provide an escape hatch for performance which includes in-language low levelness including allocation tricks that are not "mainstream CPython" (well, Pypy may not have those...).

My first reply was questioning "what counts" as "Python". Cython is its own language, not just "C", nor just "Python", but can do "low level things" such as using C's alloca. Maybe the only prior update here is on the diversity of "Python" impls. There are a lot. This is another reason why language levelness is hard to pin down which was always my main point, upon which we do not disagree. Maybe this is what you meant by "exceptionally general", but I kinda feel like "there isn't just one 'Python'" got lost. { There used to be a joke.."Linux isn't" related to the variety of distros/default configs/etc. :-) }

Advice-wise, I would say that your claim can be closer to easily true if you adjust it to say "ripgrep needs 'low level tricks' to be fast and a language that allows them, such as Rust". That phrasing side-steps worrying about the levelness of programming languages in the large, re-assigns it to techniques, which is more concrete, and begs the question of technique enumeration. That is the right question to beg, though, if not in this conversation then in others. You might learn how each and every technique has representation in various other programming languages. It's late for me, though. So, good night!


Ah I see. You are right. I missed that you were going after that. I'm personally only really familiar with CPython, so that is indeed what I had in mind. To be honest, I don't really know what a ripgrep in Cython would look like. Is there a separate Cython standard library, for example? Or do you still use Python's main standard library?

We don't have to tumble down that rabbit hole though. If someone wrote a ripgrep in Cython and matched performance, then I would definitely learn something.

> "ripgrep needs 'low level tricks' to be fast and a language that allows them, such as Rust"

I might use that, sure. I think my point above was that I had to riff off of someone else's language. But I think we covered that. :-) In any case, yes, that phrasing sounds better.

Anyway, good chatting with you, good night!


> I do not know of a competitor who has put the care into such a tool to even try to measure, though.

As an aside, I'm the author of ack, and I would ask that folks not use the word "competitor" to describe different projects in the same space. Speaking for me/ack and burntsushi/ripgrep, there is absolutely no competition between us. We have different projects that do similar things, and neither of us is trying to best the other. We are each trying to make the best project we can for the needs we are looking to fill. ripgrep won't replace ack, ack won't replace ripgrep, and neither one will replace plain ol' grep. Each has its place.

I believe this so strongly that I created a [feature comparison chart](https://beyondgrep.com/feature-comparison/) comparing various greplike tools, and a related blog post I wrote on this: [The best open source project for someone might not be yours, and that's OK](https://blog.petdance.com/2018/01/02/the-best-open-source-pr...)


I suspect Don Stewart might be able to do it in Haskell.

That's basically by knowing enough about GHC to carefully trigger all the relevant optimizations.


Hah. I'm highly skeptical. But I suppose if anyone could do it, it'd be him. I would certainly learn something. :-)

I've tried optimizing Haskell code myself before. It did not go well. It was an implementation of the Viterbi algorithm actually. We ported it to Standard ML and C and measured performance. mlton did quite well at least.

We published a paper about the process of writing Viterbi in Haskell in ICFP a few years back: https://dl.acm.org/doi/pdf/10.1145/2364527.2364560?casa_toke...

Unfortunately, the performance aspect of it was only a small part, and we didn't talk about the C or mlton ports in the paper.


Very interesting!

I suspect you could make a very Haskell-like language that's also really fast, but you'd have to base it on linear types from the ground up, and make everything total by default. (Hide non-total parts behind some type 'tag' like we do with IO in current Haskell (and have something like unsafePerformPartial when you know your code is total, but can't convince the compiler).)

That way the compiler can be much more aggressive about making things strict.


Cython with all the appropriate cdef type declarations can match C and so might also do it. Not sure Cython exactly counts as "Python"..it's more a superset/dialect { and I also doubt such a port would hold many lessons for @burntsushi, but it bore noting. }


You would go to parallelism precisely on those platforms where simpler performance fixes (changing some data structures or implementing limited sections in a fast language) are insufficient. Efficient parallelization of an existing algorithm is a major undertaking.


> In embedded applications you don't usually have the possibility to have parallel code, and even in low level software (for example the classical UNIX utilities), for simplicity and solidity using a single thread is really fine.

Depends on which of the classic utilities you are talking about.

Many of them are typically IO bound. You might not get much out of throwing more CPU at them.


ripgrep? :)


Indeed. Many of the optimizations ripgrep (and the underlying regex engine) does only show benefits if the data you're searching is already in memory.[1] The same is true of GNU grep. This is because searching data that's in your OS's file cache is an exceptionally common case.

[1] - I'm assuming commodity SSD in the range of a few hundred MB/s read speed. This will likely become less true as the prevalence of faster SSDs increases (low single digit GB/s).


ripgrep is based on re2, a c library

I would guess it contains more c than rust code...

But what I love about this article is its lack of hype. It makes clear arguments both ways and all of them I can get behind

Hype doesn't help

Edit: To all my downvoters; I anticipated you :) With love and best wishes


No it's not. Its regex library is written in Rust, but was inspired by RE2. It shares no code with RE2. (And RE2 is a C++ library, not C.)

Off the top of my head, the only C code in ripgrep is optional integration with PCRE2. In addition to whatever libc is being used on POSIX platforms. Everything else is pure Rust.


I couldn't figure it out from looking through ripgrep's website: does ripgrep support intersection and complement of expressions? Like eg https://github.com/google/redgrep does.

Regular languages are closed under those operations after all.


No, it doesn't. It's only theoretically easy to implement. In practice, they explode the size of the underlying FSM. Moreover, in a command line tool, it's somewhat easy to work around that through the `-v` switch and shell pipelining.

Paul's talk introducing redgrep is amazing, by the way. Give it a watch if you haven't yet: https://www.youtube.com/watch?v=Ukqb6nMjFyk

ripgrep's regex syntax is the same as Rust's regex crate: https://docs.rs/regex/1.4.4/regex/#syntax (Which is in turn similar to RE2, although it supports a bit more niceties.)


> No, it doesn't. It's only theoretically easy to implement.

Oh, I didn't say anything about easy! I am on and off working on a Haskell re-implementation (but with GADTs and in Oleg's tagless final interpreter style etc, so it's more about exploring the type system).

> In practice, they explode the size of the underlying FSM.

You may be right, but that's still better than the gymnastics you'd have to do by hand to get the same features out of a 'normal' regex.

> Moreover, in a command line tool, it's somewhat easy to work around that through the `-v` switch and shell pipelining.

Alas, that only works if your intersection or complement happens at the top level. You can't do something like

(A & not B) followed by (C & D)

that way.

> Paul's talk introducing redgrep is amazing, by the way. Give it a watch if you haven't yet: https://www.youtube.com/watch?v=Ukqb6nMjFyk

I have, and I agree!

Perhaps I'll try and implement a basic version of redgrep in Rust as an exercise. (I just want something that supports basically all the operations regular languages are closed under, but I don't care too much about speed, as long as the runtime complexity is linear.)


Yeah sorry, I've gotten asked this question a lot. The issue is that building a production grade regex engine---even when it's restricted to regular languages---requires a lot more engineering than theory. And these particular features just don't really pull their weight IMO. They are performance footguns, and IMO, are also tricky to reason about inside of regex syntax.

If you get something working, I'd love to look at it though! Especially if you're building in a tagless final interpreter style. I find that approach extremely elegant.


For my current attempts, I bit off more than I could chew:

I tried to build a system that not only recognizes regular languages, but also serves as a parser for them (a la Parsec).

The latter approach pushes you to support something like fmap, but the whole derivatives-based approach needs more 'introspection', so supporting general mapping via fmap (i.e. a -> b) is out, and you can only support things that you have more control over than functions.

(And in general, I am doing bifunctors, because I want the complement of the complement to be the original thing.)

Sorry if that's a bit confused... If I were a better theoretician, I could probably work it out.

I haven't touched the code in a while. But recently I have thought about the theory some more. The Brzozowski derivative introduced the concept of multiplicative inverse of a string. I am working out the ramifications of extending that to the multiplicative inverse of arbitrary regular expressions. (The results might already be in the literature. I haven't looked much.)

I don't expect anything groundbreaking to come out of that, but I hope my understanding will improve.

> And these particular features just don't really pull their weight IMO. They are performance footguns, and IMO, are also tricky to reason about inside of regex syntax.

Well, in theory I could 'just' write a preprocessor that takes my regex with intersection and complement and translates it to a more traditional one. I wouldn't care too much if that's not very efficient.

I'm interested in those features because of the beauty of the theory, but it would also help make production regular expressions more modular.

Eg if you have a regular expression to decide on what's a valid username for someone to sign up to your system. You decide to use email addresses as your usernames, so the main qualification is that users can receive an email on it. But because they will be visible to other users, you have some additional requirements:

'.{0,100} & [^@]*@[^@]* & not (.*(root|admin|<some offensive term>).*@.*) & not (.*<sql injection>.*)'

That's a silly example. I think in production, I would be more likely to see something as complicated as this in eg some ad-hoc log parsing.

> The issue is that building a production grade regex engine---even when it's restricted to regular languages---requires a lot more engineering than theory.

Amen to that!


Ah, thanks burntsushi, I believe you are the ripgrep author even?

Great work btw. Ripgrep is the best

... I will have to restrict my comment to just LLVM being a larger, c++, dependency

... Just angling for more downvotes ;) Thanks for the reply


To be clear, ripgrep has no runtime dependency on any LLVM or C++ library. rustc does.


The interesting thing here is that rust has good threading and fantastic crates

I played with making a regex library in rust, which, as per the RE2 design, involves constructing graphs and gluing them together as the regex is traversed.

This requires a cycle-catching GC, or just a preallocated arena... It was my first foray into rust and I felt I would need to be reaching into unsafe, which I wasn't ready for. Array indexing might stand in for an arena, but it's syntactically just a bit messier (imho).

Would be interesting to see how the RE2 does it in rust (didn't know that)

I like how the article shows both sides of the fence, it makes me realize:

I get a lot of optimizations from ptr stuffing in c. But sometimes we should lay down the good, for the better


You're overcomplicating it. When it comes to finite state machines at least, it's very easy to use an ID index instead of the raw pointer itself. That's exactly what the regex crate does.

For reference, I am also the author of the regex crate. The only unsafe it uses specific to finite automata is to do explicit elimination of bounds checks in the core hybrid NFA/DFA loop.
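
A minimal sketch of the ID-index technique (the general idea only, not the regex crate's actual representation): states live in one Vec and refer to each other by index, so the graph needs no raw pointers, no Rc cycles, and no unsafe:

    type StateId = u32;

    struct State {
        // For a byte-based automaton: one transition per input byte.
        transitions: Box<[StateId; 256]>,
        is_match: bool,
    }

    struct Dfa {
        states: Vec<State>,
    }

    impl Dfa {
        fn run(&self, start: StateId, haystack: &[u8]) -> bool {
            let mut current = start;
            for &b in haystack {
                // Each ID is just an index into the arena of states.
                current = self.states[current as usize].transitions[b as usize];
            }
            self.states[current as usize].is_match
        }
    }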


> When it comes to finite state machines at least, it's very easy to use an ID index instead of the raw pointer itself.

As an old C programmer, the difference between an array index and a pointer caught me by surprise. In C a pointer is just an unchecked offset into memory. A real array index is just an unchecked offset into ... maybe a smaller chunk of raw memory.

But in rust, an array index is something that comes with additional bounds checking overheads with every use. And the memory it points to is also constrained - the entire array has to be initialised, so if the index passes the bounds check you are guaranteed rusts memory consistency invariants are preserved. Indexes also allow you to escape the borrow checker. If you own the slice, there is no need to prove you can access an element of the slice.

So yeah, you can use indexes instead of pointers, but for rust that's like saying you can use recursion instead of iteration. Indexing and pointers are two very different things in rust.


I guess so. But note that I didn't equate them. I just said that you can use an ID index instead. For the particular problem of FSMs, they work very well.

If bounds checks prove to be a problem, you can explicitly elide them. Indeed, Rust's regex does just that. :-)
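
For illustration, two common ways to drop a per-iteration bounds check (a sketch; whether the first version's checks actually get elided is up to the optimizer):

    // 1) Hoist a single check so the optimizer has a chance to elide
    //    the per-iteration ones inside the loop.
    fn sum(xs: &[u64], n: usize) -> u64 {
        assert!(n <= xs.len());
        let mut total = 0;
        for i in 0..n {
            total += xs[i];
        }
        total
    }

    // 2) The explicit route: skip the check entirely with unsafe.
    //    The up-front assert is what upholds the safety contract;
    //    getting it wrong is undefined behavior.
    fn sum_unchecked(xs: &[u64], n: usize) -> u64 {
        assert!(n <= xs.len());
        let mut total = 0;
        for i in 0..n {
            total += unsafe { *xs.get_unchecked(i) };
        }
        total
    }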


> I played with making a regex library in rust. Which, as per RE2 design involves constructing graphs and glueing them together as the regex is traversed

You could instead go with a derivatives approach.

https://en.wikipedia.org/wiki/Brzozowski_derivative
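
For a flavor of the idea, here's a minimal Brzozowski-derivative matcher sketched in Rust (illustrative only: no character classes, no intersection/complement -- though each of those adds only a case or two -- and no term simplification, so terms can blow up):

    // `deriv(re, c)` returns a regex matching what `re` matches after
    // consuming `c`; matching is repeated derivation plus a final
    // nullability test. No explicit FSM is built.
    #[derive(Clone)]
    enum Re {
        Empty,              // matches nothing
        Eps,                // matches the empty string
        Chr(char),
        Alt(Box<Re>, Box<Re>),
        Seq(Box<Re>, Box<Re>),
        Star(Box<Re>),
    }

    // Does `re` match the empty string?
    fn nullable(re: &Re) -> bool {
        match re {
            Re::Empty | Re::Chr(_) => false,
            Re::Eps | Re::Star(_) => true,
            Re::Alt(a, b) => nullable(a) || nullable(b),
            Re::Seq(a, b) => nullable(a) && nullable(b),
        }
    }

    fn deriv(re: &Re, c: char) -> Re {
        match re {
            Re::Empty | Re::Eps => Re::Empty,
            Re::Chr(d) => if *d == c { Re::Eps } else { Re::Empty },
            Re::Alt(a, b) => Re::Alt(Box::new(deriv(a, c)), Box::new(deriv(b, c))),
            Re::Seq(a, b) => {
                let d = Re::Seq(Box::new(deriv(a, c)), b.clone());
                if nullable(a) {
                    Re::Alt(Box::new(d), Box::new(deriv(b, c)))
                } else {
                    d
                }
            }
            Re::Star(a) => Re::Seq(Box::new(deriv(a, c)), Box::new(re.clone())),
        }
    }

    fn matches(re: &Re, s: &str) -> bool {
        nullable(&s.chars().fold(re.clone(), |r, c| deriv(&r, c)))
    }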


Has anyone built a production grade regex engine using derivatives? I don't think I've seen one. I personally always get stuck at how to handle things like captures or the very large Unicode character classes. Or hacking in look-around. (It's been a while since I've given this thought though, so I'm not sure I'll be able to elaborate much.)


I've made some attempts, but nothing production grade.

About large character classes: how are those harder than in other approaches? If you build any FSM you have to deal with those, don't you?

One way to handle them, which works well when the characters in your classes are mostly next to each other in unicode, is to express your state transition function as an 'interval map'.

What I mean is that eg a hash table or an array lets you build representations of mathematical functions that map points to values.

You want something that can model a step function.

You can either roll your own, or write something around a sorted-map data structure.

Eg in C++ you'd base the whole thing around https://en.cppreference.com/w/cpp/container/map/upper_bound (or https://hackage.haskell.org/package/containers-0.4.0.0/docs/... in Haskell.)

The keys in your sorted map are the 'edges' of your characters classes (eg where they start and end).

Does that make sense? Or am I misunderstanding the problem?
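
Here's a rough Rust rendition of that sorted-map idea (hypothetical types; it assumes the caller inserts non-overlapping ranges):

    use std::collections::BTreeMap;
    use std::ops::RangeInclusive;

    struct ClassMap {
        // start of range -> (end of range, target state); one entry per
        // range instead of one per codepoint.
        ranges: BTreeMap<u32, (u32, usize)>,
    }

    impl ClassMap {
        fn new() -> ClassMap {
            ClassMap { ranges: BTreeMap::new() }
        }

        fn insert(&mut self, range: RangeInclusive<u32>, state: usize) {
            self.ranges.insert(*range.start(), (*range.end(), state));
        }

        fn lookup(&self, c: char) -> Option<usize> {
            let cp = c as u32;
            // Floor query: the greatest range start <= cp, i.e. the
            // 'edge' at or below this character.
            let (_, &(end, state)) = self.ranges.range(..=cp).next_back()?;
            (cp <= end).then_some(state)
        }
    }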

> I personally always get stuck at how to handle things like captures [...]

Let me think about that one for a while. Some Googling suggests https://github.com/elfsternberg/barre but they don't seem to support intersection, complement or stripping prefixes.

What do you want your capture groups to do? Do you eg just want to return pointers to where you captured them (if any)?

I have an inkling that something inspired by https://en.wikipedia.org/wiki/Viterbi_algorithm might work.

https://github.com/google/redgrep/blob/main/parser.yy mentions something about capture, but not sure if that has anything to do with capture groups.


> About large character classes: how are those harder than in other approaches? If you build any FSM you have to deal with those, don't you?

I mean specifically in the context of derivatives. IIRC, the formulation used in Turon's paper wasn't amenable to large classes.

Yes, interval sets work great: https://github.com/rust-lang/regex/blob/master/regex-syntax/...

This is why I asked if a production grade regex engine based on derivatives exists. Because I want to see how the engineering is actually done.

> What do you want your capture groups to do? Do you eg just want to return pointers to where you captured them (if any)?

Look at any production grade regex engine. It will implement captures. It should do what they do.

> I have an inkling that something inspired by https://en.wikipedia.org/wiki/Viterbi_algorithm might work.

Nothing about Viterbi is fast, in my experience implementing it in the past. :-)

> https://github.com/google/redgrep/blob/main/parser.yy mentions something about capture, but not sure if that has anything to do with capture groups.

It looks like it does, and in particular see: https://github.com/google/redgrep/blob/6b9d5b02753c4ece17e2f...

But that's only for parsing the regex itself. I don't see any match APIs that utilize them. I wouldn't expect to either, because you can't implement capturing inside a DFA. (You need a tagged DFA, which is a strictly more powerful thing. But in that case, the DFA size explodes. See the re2c project and their associated papers.)

If I'm remembering correctly, I think the problem with derivatives is that they jump straight to a DFA. You can't do that in a production regex engine because a DFA's worst case size is exponential in the size of the regex.


> If I'm remembering correctly, I think the problem with derivatives is that they jump straight to a DFA. You can't do that in a production regex engine because a DFA's worst case size is exponential in the size of the regex.

Oh, that's interesting! Because I actually worked on some approaches that don't jump directly to the DFA.

The problem is the notion of (extended) NFA you need is quite a bit more complicated when you support intersection and complement.


Indeed. And in the regex crate and RE2, for example, captures are only implemented in the "NFA" engines (specifically, the PikeVM and the bounded backtracker). So if you support captures, then those engines have to be able to support everything.


Can I ask: what about a ZDD?

They seem similar to closed languages with disjunction and conjunction.

Though I don't think I will, I was considering adding a ZDD or BDD to a PEG, to provide that conjunction.

Of course, a SAT solver can represent a regex with conjunctions, but is this a good way of going about it, particularly with unbounded strings?

Would love to hear your thoughts on that


I don’t think people are downvoting you because they disagree on a matter of opinion. You’ve literally got the author of ripgrep having replied to you to tell you that what you’ve said is categorically false.


I anticipated my own falsity. I'm aware and at home with it


Huh? Why do you say something that you anticipate to be false?


Simply brevity.

I anticipated I could well be wrong. I ALSO anticipated it would be a hard statement for people to take

I think it was a reasonable statement -- I can't research everything I say, and I had read that RE2 and Rust's regexes were the same.

Interesting to read about redgrep and the derivatives approach. Currently I'm programming a language that adds Turing completeness to PEG expressions -- as in functions, but extended so the LHS is like a PEG -- just as a function body can call sub-functions, so too can the LHS.

I'm hoping this will give a simple unified language

--

Philosophically:

We make mistakes. If we can't handle that, then either we don't speak or program; or we deny it, program in C, then have flamewars and real wars.

Or thirdly, we accept it, program in Rust, and let others correct us.

We can say you are my rustc compiler. So in effect I used a Rust philosophy... while programming in C.


I guess the thing is that if we’re not sure whether or not what we’re saying is true, it can be considerate to phrase it that way, e.g. “I think ripgrep’s regex library is written in C” rather than stating it as a fact. While it is particularly likely that folks on this website will correct mistaken statements, stating them as fact seems more likely to potentially spread misinformation.

But anyways, cheers and good luck with your programming language!


In one word: Jesus


Why would you guess about how much C or Rust code ripgrep contains when you could very quickly look? https://github.com/BurntSushi/ripgrep


Hm, how do I go from the github repo to a language breakdown of the dependency tree?


A lot of modern embedded hardware runs operating systems that provide threads (such as Linux) on multi-core CPUs.


Deadlocks are unique to Rust, eh?


> C libraries typically return opaque pointers to their data structures, to hide implementation details and ensure there's only one copy of each instance of the struct. This costs heap allocations and pointer indirections. Rust's built-in privacy, unique ownership rules, and coding conventions let libraries expose their objects by value

The primary reason C libraries do this is not for safety, but to maintain ABI compatibility. Rust eschews dynamic linking, which is why it doesn't bother. Common Lisp, for instance, does the same thing as C, for similar reasons: the layout of structures may change, and existing code in the image has to be able to deal with it.
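
To illustrate both halves of that in Rust (a made-up sketch, not from the article):

    // Fields are private, so callers can't depend on them directly...
    pub struct Parser {
        state: u32,
        depth: usize,
    }

    impl Parser {
        pub fn new() -> Parser {
            Parser { state: 0, depth: 0 }
        }
    }

    // ...but `new()` returns by value, so size_of::<Parser>() is still
    // baked into the caller at compile time -- fine within one program,
    // an ABI break waiting to happen across a dynamic-library boundary.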

> Rust by default can inline functions from the standard library, dependencies, and other compilation units. In C I'm sometimes reluctant to split files or use libraries, because it affects inlining

This is again because C is conventionally dynamically linked, and Rust statically linked. If you use LTO, cross-module inlining will happen.


> ABI compatibility

Rust provides ABI compatibility against its C ABI, and if you want you can dynamically link against that. What Rust eschews is the insane fragile ABI compatibility of C++, which is a huge pain to deal with as a user:

https://community.kde.org/Policies/Binary_Compatibility_Issu...

I don't think we'll ever see as comprehensive an ABI out of Rust as we get out of C++, because exposing that much incidental complexity is a bad idea. Maybe we'll get some incremental improvements over time. Or maybe C ABIs are the sweet spot.


Rust has yet to standardize an ABI. Yes, you can call or expose a function with C calling conventions. However, you can't pass all native Rust types like this, and you lose some semantics.

However, as the parent comment you responded to said, you can enable LTO when compiling C. As Rust is almost always statically linked, it basically always gets LTO-style optimizations.


Even with static linking, Rust produces separate compilation units at least at the crate level (and depending on compiler settings, within crates). You won't get LTO between crates if you don't explicitly request it. It does allow inlining across compilation units without LTO, but only for functions explicitly marked as `#[inline]`.
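
A small illustration (the function is made up):

    // Callable from other crates, and inlinable there even without LTO,
    // because #[inline] makes the body available across compilation units.
    #[inline]
    pub fn squared_norm(x: f32, y: f32) -> f32 {
        x * x + y * y
    }

Cross-crate LTO itself is opt-in, via `lto = true` under `[profile.release]` in Cargo.toml.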


Swift has a stable ABI. It makes different tradeoffs than rust, but I don't think complexity is the cliff. There is a good overview at https://gankra.github.io/blah/swift-abi/


Swift has a stable ABI at the cost of what amounts to runtime reflection, which is expensive. That doesn't really fit with the goals of Rust, I don't think.


This is misleading, especially since Swift binaries do typically ship with actual reflection metadata (unless it is stripped out). The Swift ABI does keep layout information behind a pointer in certain cases, but if you squint at it funny it's basically a vtable but for data. (Actually, even more so than non-fragile ivars are in Objective-C, because I believe actual offsets are not provided, rather you get getter/setter functions…)

I don't disagree that Rust probably would not go this way, but I think that's less "this is spooky reflection" and more "Rust likes static linking and cares less about stable ABIs, plus the general attitude of 'if you're going to make an indirect call the language should make you work for it'".


Do you have a source on this? I didn't think Swift requires runtime reflection to make calling across module boundaries work - I thought `.swiftmodule` files are essentially IR code to avoid this


Pretty sure the link the parent (to my comment) provided explains this.

It's not the same kind of runtime reflection people talk about when they (for example) use reflection in Java. It's hidden from the library-using programmer, but the calling code needs to "communicate" with the library to figure out data layouts and such, and that sounds a lot like reflection to me.


Yes, and if you use the C ABI to dynamically link Rust code, you will have exactly the same problem as C: you can't change the layout of your structures without breaking compatibility, unless you use indirecting wrappers.


That's ABI compatibility of the language, not of a particular API.

If you have an API that allows the caller to instantiate a structure on the stack and pass a reference to it to your function, then the caller must now be recompiled when the size of that structure changes. If that API now resides in a separate dynamic library, then changing the size of the structure is an ABI-breaking change, regardless of the language.
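
A hypothetical sketch of that from the Rust side:

    // Exposed from a Rust dynamic library with a C ABI.
    #[repr(C)]
    pub struct Options {
        pub timeout_ms: u32,
        // Appending a field later changes size_of::<Options>(). Any caller
        // that stack-allocates an Options against the old definition is
        // broken -- exactly as with a C struct in a header.
    }

    #[no_mangle]
    pub extern "C" fn connect_with(_opts: *const Options) {
        // ...
    }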


Rust seems great to me, but aren't we losing a lot by giving up on C's dynamic linking and shared libraries?


To give some context to the parent comment:

$ ls -lh $(which grep) $(which rg)

-rwxr-xr-x 1 root root 199K Nov 10 06:37 /usr/bin/grep

-rwxr-xr-x 1 root root 4.2M Jan 19 09:31 /usr/bin/rg

My very unscientific measurement of the startup time of grep vs ripgrep is 10ms when the cache is cold (ie, never run before) and 3ms when the cache is hot (ie, was run seconds prior). For grep even in the cold case libc will already be in memory, of course. The point I'm trying to make is even the worst case, 10ms, is irrelevant to a human using the thing.

However, speaking as a Debian Developer, it makes a huge difference to maintaining the two systems that ship the two programs. If a security bug is found in libc, all Debian has to do is ship the fixed version of libc as a security update. If a bug is found in the Rust stdlib crate, Debian has to track down every ripgrep-like program that statically includes it and recompile it. There are currently 21,000 packages that link to libc6 in Debian right now. If it was statically linked, Debian would have to rebuild and distribute _all_ of them. (As a side note, Debian has a lot of hardware resources donated to it, but if libc wasn't dynamic I wonder if it could get security updates for a series of bugs in libc6 out in a timely fashion.)

I don't know Rust well, but I thought it could dynamically link. The Debian Rust packagers don't, for some reason. (As opposed to libc6's 21,000 dependents, libstd-rust has 1.) I guess there must be some kink in the Rust toolchain that makes it easier not to. I imagine that would have to change if Rust replaces C.
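
For what it's worth, the toolchain can do it: rustc has a `-C prefer-dynamic` flag that links libstd as a shared object (binary name made up):

    $ cargo rustc --release -- -C prefer-dynamic
    $ ldd target/release/myprog   # now lists a libstd-<hash>.so

The catch, as I understand it, is that Rust has no stable ABI, so the binary only works against the libstd built by the exact same compiler version -- which would explain why packagers shy away from it.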


Thanks!


I am sympathetic to the point you make, but to be accurate, one can consume and create C and C-compatible dynamic libraries with Rust. So one is not "losing" something, because what you (and I) want -- dynamic linking and shared libraries with a stable and safe Rust ABI -- was not there to begin with.


Some would argue you gain more than you lose.

Also to be pedantic, C doesn't spec anything about linkage. Shared objects and how linkers use them to compose programs is a system detail more than a language one.


Dynamic linking and shared libraries are an OS feature, not a C one. C worked fine on DOS with no DLLs at the time.

This being said, Rust has no problem using dynamic libraries.


The reason Common Lisp uses pointers is because it is dynamically typed. It’s not some principled position about ABI compatibility. If I define an RGB struct for colours, it isn’t going to change but it would still need to be passed by reference because the language can’t enforce that the variable which holds the RGBs will only ever hold 3 word values. Similarly, the reason floats are often passed by reference isn’t some principled stance about the float representation maybe changing, it’s that you can’t fit a float and the information that you have a float into a single word[1].

If instead you’re referring to the fact that all the fields of a struct aren’t explicitly obvious when you have such a value, well I don’t really agree that it’s always what you want. A great thing about pattern matching with exhaustiveness checks is that it forces you to acknowledge that you don’t care about new record fields (though the Common Lisp way of dealing with this probably involves CLOS instead).

[1] some implementations may use NaN-boxing to get around this


Lisp uses pointers because of the realization that the entities in a computerized implementation of symbolic processing can be adequately represented by tiny index tokens that fit into machine registers, whose properties are implemented elsewhere, and these tokens can be whipped around inside the program very quickly.


What you're describing are symbols, where the properties are much less important than the identity. Most CL implementations will use fixnums rather than pointers when possible, because they don't have some kind of philosophical affinity for pointers. For data structures, pointers aren't so good with modern hardware. The reason Common Lisp tends to have to use pointers is that the type system cannot provide information about how big objects are. Compare this to arrays, which are often better at packing because they can know how big their elements are.

This is similar in typed languages with polymorphism like Haskell or ocaml where a function like concat (taking a list of lists to a single list) needs to work when the elements are floats (morally 8 bytes each) or bools (morally 1 bit each). The solution is to write the code once and have everything be in one word, either a fixnum or a pointer.


Rust makes building from source and cross compiling so easy that I don’t really care for dynamic linking in my use cases of Rust.


Dynamic linking is one thing I miss from Swift - I used dynamic linking for hot code reloading for several applications, which resulted in super fast and useful development loops. Given Rust's sometimes long compile times, this is something which would be welcome.


There are crates for hot reloading in Rust, and they use dynamic linking.


Do you have to stick to a C-FFI like interface, or can they handle rust-native features like closures and traits?


Some stick to C FFI, some enforce that the Rust compiler version is the same which makes ABI issues irrelevant.
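
The C-FFI style looks roughly like this (a sketch using the `libloading` crate; the library path and the `update` symbol are made up):

    use libloading::{Library, Symbol};

    fn call_update(dt: f32) -> Result<(), libloading::Error> {
        unsafe {
            // Re-opening the library after a rebuild picks up the new code.
            let lib = Library::new("target/debug/libgame_logic.so")?;
            let update: Symbol<unsafe extern "C" fn(f32)> = lib.get(b"update")?;
            update(dt);
        }
        Ok(())
    }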


> This costs heap allocations and pointer indirections.

Heap allocations, yes; pointer indirections no.

A structure is referenced by pointer no matter what. Remember that the stack is accessed via a stack pointer.

The performance cost is that there are no inline functions for a truly opaque type; everything goes through a function call. Indirect access through functions is the cost, which is worse than a mere pointer indirection.

An API has to be well-designed in this regard; it has to anticipate the likely use cases that are going to be performance critical, and avoid perpetrating a design in which the application has to make millions of API calls in an inner loop. Opaqueness is more abstract, and so it puts designers on their toes to create good abstractions instead of "oh, the user has all the access to everything, so they have all the rope they need".

Opaque structures don't have to cost heap allocations either. An API can provide a way to ask "what is the size of this opaque type" and the client can then provide the memory, e.g. by using alloca on the stack. This is still future-proof against changes in the size, compared to a compile-time size taken from a "sizeof struct" in some header file. Another alternative is to have some worst-case size represented as a type. An example of this is the POSIX struct sockaddr_storage in the sockets API. Though the individual sockaddrs are not opaque, the concept of providing a non-opaque worst-case storage type for an opaque object would work fine.

There can be half-opaque types: part of the structure can be declared (e.g. via some struct type that is documented as "do not use in application code"). Inline functions use that for direct access to some common fields.


Escape analysis is tough in C, and data returned by pointer may be pessimistically assumed to have escaped, forcing exact memory accesses. OTOH on-stack struct is more likely to get fields optimized as if they were local variables. Plus x86 has special treatment for the stack, treating it almost like a register file.

Sure, there are libraries which have `init(&struct, sizeof(struct))`. This adds extra ABI fragility, and doesn't hide fields unless the lib maintains two versions of a struct. Some libraries that started with such ABI end up adding extra fields behind internal indirection instead of breaking the ABI. This is of course all solvable, and there's no hard limit for C there. But different concerns nudge users towards different solutions. Rust doesn't have a stable ABI, so the laziest good way is to return by value and hope the constructor gets inlined. In C the solution that is both accepted as a decent practice and also the laziest is to return malloced opaque struct.


> This costs heap allocations

I'd like to point out that this is not always the case. Some libraries, especially those with embedded systems in mind, allow you to provide your own memory buffer (which might live on the stack), where the object should be constructed. Others allow you to pass your own allocator.


> "Clever" memory use is frowned upon in Rust. In C, anything goes. For example, in C I'd be tempted to reuse a buffer allocated for one purpose for another purpose later (a technique known as HEARTBLEED).

This made me laugh


It's not trivial to write a funny and clever burn, but this just hits the spot...


That is nice, although I think Heartbleed was due to a missing bounds check enabling the reading of adjacent memory, not due to reusing the same buffer...


If my memory is correct: yes, the root cause was a missing bounds check, but the vulnerability was much worse than it could have been because OpenSSL tended to allocate small blocks of memory and aggressively reuse them — meaning the exploited buffer was very likely to be close in proximity to sensitive information.

I don’t have time right now to research the full details, but the Wikipedia article gives a clue:

> Theo de Raadt, founder and leader of the OpenBSD and OpenSSH projects, has criticized the OpenSSL developers for writing their own memory management routines and thereby, he claims, circumventing OpenBSD C standard library exploit countermeasures, saying "OpenSSL is not developed by a responsible team." Following Heartbleed's disclosure, members of the OpenBSD project forked OpenSSL into LibreSSL.


Until very recently, memory allocators were more than happy to return you the thing you just deallocated if you asked for another allocation of the same size. It makes sense, too: if you're calling malloc/free in a loop, which is pretty common, this is pretty much the best thing you can do for performance. Countless heap exploits later (mostly attacking heap metadata rather than stale data, to be honest) allocators have begun to realize that predictable allocation patterns might not be the best idea, so they're starting to move away from this.


True of the more common ones, but it should be acknowledged that OpenBSD was doing this kind of thing (and many other hardening techniques) before heartbleed, which was the main reason Theo de Raadt was so upset that they decided to circumvent this, because OpenBSD's allocator could have mitigated the impact otherwise.


Even higher-performance mallocs like jemalloc had heap debugging features (poisoning freed memory) before Heartbleed, which -- if enabled -- would catch use-after-frees, so long as libraries and applications didn't circumvent malloc like OpenSSL did (and Python still does AFAIK).


> and Python still does AFAIK

Don't you sort of have to do that if you're writing your own garbage collector, though? I guess for a simple collector you could maintain lists of allocated objects separately, but precisely controlling where the memory is allocated is important for any kind of performant implementation.


Python does refcount-based memory management. It's not a GC design. You don't have to retain objects in an internal linked list when the refcount drops to zero, but CPython does, purely as a performance optimization.

Type-specific free lists (just a few examples; there are more):

* https://github.com/python/cpython/blob/master/Objects/floato...

* https://github.com/python/cpython/blob/master/Objects/tupleo...

And also just wrapping malloc in general. There's no refcounting reason for this, they just assume system malloc is slow (which might be true, for glibc) and wrap it in the default build configuration:

https://github.com/python/cpython/blob/master/Objects/obmall...

So many layers of wrapping malloc, just because system allocators were slow in 2000. Defeats free() poisoning and ASAN. obmalloc can be disabled by turning off PYMALLOC, but that doesn't disable the per-type freelists IIRC. And PYMALLOC is enabled by default.


Thanks for the links! I wasn't aware of the PyMem_ layer above, the justification for that does sound bad.

But Python runs a generational GC in addition to refcounting to catch cycles (https://docs.python.org/3/library/gc.html): isn't fine control over allocation necessary for that? E.g. to efficiently clear the nursery?


Ah, good point; at the very least things like zeroing out buffers upon deallocation would have helped. Yes, I was a fan of the commits showing up at opensslrampage.org. One of the highlights was when they found it would use private keys as an entropy source: https://opensslrampage.org/post/83007010531/well-even-if-tim...


That's what happens with normal malloc/free anyway, no? Implementations of malloc have a strong performance incentive to allocate from the cache-hot, most recently freed blocks.


Yes, all allocators (except perhaps OpenBSD's, from what I see in this thread) do this. It is also why `calloc` exists - because zero-initializing every single allocation is really, really expensive.


iirc both issues caused the problem. The buffer overflow let the memory get read; re-use meant there was important data in the buffer.


It's incorrect, however.

Heartbleed wasn't caused by reusing buffers; it was caused by not properly validating the length field from untrusted input and reading past the buffer's allocated size, thus allowing the attacker to read memory that wasn't meant for them.


OpenSSL had its own memory-recycling allocator, which made the bug guarantee leaking OpenSSL's own data. Of course leaking random process memory wouldn't be safe either, but the custom allocator added that extra touch.


OpenSSL, like much other software, did indeed use a custom allocator, but that doesn't have much to do with it: the system allocator also strongly favors giving back memory that once belonged to the same process, since it would have to zero memory that belonged to other processes first.

The zeroing is really a kernel feature: the lower-level primitives that request blocks of memory from the kernel get them zeroed if they previously belonged to another process, and not otherwise, which strongly favors handing a process back its own memory. Allocators -- the standard library's or any custom one -- are built on top of these primitives.


That's not a good burn though.


This was actually a somewhat significant reason I shared this article. (^.^)


> in C I'd be tempted to reuse a buffer allocated for one purpose

... In Rust I'd just declare an enum for this. Enums in Rust can store data; in this way they are like a safe union.
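
Something like this sketch (names made up) -- the compiler then forces every read to acknowledge which purpose the memory currently serves:

    enum Scratch {
        KeyMaterial([u8; 64]),
        RequestBody(Vec<u8>),
    }

    fn describe(s: &Scratch) {
        // There's no way to read key bytes out of a RequestBody by accident.
        match s {
            Scratch::KeyMaterial(_) => println!("holding key material"),
            Scratch::RequestBody(b) => println!("{} byte request", b.len()),
        }
    }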


It was quite funny but it's quite likely you'll be reusing memory anyway whether it's on the stack or the heap, no?

The issue with this is that 'clever' compilers can optimise out any memset calls you do.
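
For example, a memset of a buffer that's never read again is a dead store the optimizer may legally remove. C grew `explicit_bzero`/`memset_s` for exactly this; in Rust the equivalent trick is a volatile write loop (the `zeroize` crate packages this up). A rough sketch:

    use core::ptr;

    // Volatile writes can't be elided, even though the buffer is
    // never read again afterwards.
    fn wipe(buf: &mut [u8]) {
        for b in buf.iter_mut() {
            unsafe { ptr::write_volatile(b, 0) };
        }
    }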


Rust's safety rules also forbid access to uninitialized memory, even if it's just a basic array of bytes. This is an extra protection against accidentally disclosing data from a previous "recycled" allocation.
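
Concretely (a toy sketch):

    use std::mem::MaybeUninit;

    fn buffers() {
        // Safe Rust: the array must be initialized before any read.
        let zeroed = [0u8; 64];
        assert_eq!(zeroed[0], 0);

        // Uninitialized bytes require MaybeUninit, and turning them into a
        // readable array takes an `unsafe` assume_init -- safe code simply
        // can't observe leftover memory.
        let _uninit: MaybeUninit<[u8; 64]> = MaybeUninit::uninit();
    }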


> computed goto

I did a deep dive into this topic lately when exploring whether to add a language feature to Zig for this purpose. I found that, although finicky, LLVM is able to generate the desired machine code if you give it a simple enough while loop continue expression[1]. So I think it's reasonable to not have a computed goto language feature.

More details here, with lots of fun godbolt links: https://github.com/ziglang/zig/issues/8220

[1]: https://godbolt.org/z/T3v881
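
For readers unfamiliar with the pattern, the loop shape in question is roughly this (a toy Rust sketch, not the code from the issue):

    #[derive(Clone, Copy)]
    enum Inst { Inc, Dec, Halt }

    // The classic interpreter shape computed goto is used to speed up:
    // one hot loop whose body is a big switch over the current instruction.
    fn run(code: &[Inst]) -> i64 {
        let mut acc = 0;
        let mut pc = 0;
        loop {
            match code[pc] {
                Inst::Inc => { acc += 1; pc += 1; }
                Inst::Dec => { acc -= 1; pc += 1; }
                Inst::Halt => return acc,
            }
        }
    }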


Somewhat off-topic: I just looked into zig, because you mentioned it.

> C++, D, and Go have throw/catch exceptions, so foo() might throw an exception, and prevent bar() from being called. (Of course, even in Zig foo() could deadlock and prevent bar() from being called, but that can happen in any Turing-complete language.)

Well, you could bite the bullet and carefully make Zig non-Turing complete. (Or at least put Turing-completeness behind an escape hatch marked 'unsafe'.)

That's how Idris and Agda etc do it.


With respect to deadlocks, there’s little practical difference between an infinite loop and a loop that holds the lock for a very long time.

Languages like Idris and Agda are different because sometimes code isn’t executed at all. A proof may depend on knowing that some code will terminate without running it.


> Languages like Idris and Agda are different because sometimes code isn’t executed at all. A proof may depend on knowing that some code will terminate without running it.

Yes. They are rather different in other respects as well. Though you can produce executable code from Idris and Agda, of course.

> With respect to deadlocks, there’s little practical difference between an infinite loop and a loop that holds the lock for a very long time.

Yes, that's true. Though as a practical matter, I have heard that it's much harder to produce the latter by accident, even though only the former is forbidden.

For perhaps a more practical example, have a look at https://dhall-lang.org/ which also terminates, but doesn't have nearly as much involved proving.


Thank you for doing the research and not just mindlessly adding features other languages have :)


Oh, that's great. I write interpreters off and on and I love Zig, so it's nice to hear I can get the best code gen while keeping the language small


Really cool investigation. I wonder if this applies to Rust as well.

As you said though, this is finicky, and if you need this optimization for performance then you don't want to rely on compiler heuristics.


Rust's output[0] is basically the same as Zig in this case. The unsafe is needed here because it's calling extern functions.

However, in this specific instance at least, this isn't as optimal as it could be. What this is basically doing is creating a jump table to find out which branch it should go down. But, because all the functions have the same signature, and each branch does the same thing, what it could have done instead is create a jump table for the function to call. At that point, all it would need to do is use the Inst's discriminant to index into the jump table.

I'm not sure what it would look like in Zig, but it's not that hard to get that from Rust[1]. The drawback of doing it this way is that it now comes with the maintenance overhead of ensuring the order and length of the jump table exactly matches the enum, otherwise you get the wrong function being called, or an out-of-bounds panic. You also need to explicitly handle the End variant anyway because the called function can't return for its parent.
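
Roughly this shape, as an illustrative sketch (toy instruction set, not the godbolt code):

    enum Inst { Inc, Dec }

    fn inc(acc: &mut i64) { *acc += 1; }
    fn dec(acc: &mut i64) { *acc -= 1; }

    // Order must match the enum's discriminants exactly -- this is the
    // maintenance hazard mentioned above.
    static HANDLERS: [fn(&mut i64); 2] = [inc, dec];

    fn step(inst: Inst, acc: &mut i64) {
        // One indexed call instead of a branch tree.
        HANDLERS[inst as usize](acc);
    }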

I don't know Zig, but from what I understand it has some pretty nice code generation, so maybe that could help with keeping the array and enum in step here?

[0] https://godbolt.org/z/sa6fGq

[1] https://godbolt.org/z/P3cj31

