Rust 1.40 (rust-lang.org)
430 points by mark-simulacrum 34 days ago | 150 comments



The continuing improvements to procedural macros are great!

I do a fair amount of integration with C, and the work the Rust team is doing on procedural macros has helped a lot. Being able to use them in extern blocks will surely help more.

Like many, I have a wish list that I complain about sometimes, but I feel bad about complaining because Rust has done so much to improve my use case that I hardly know where to begin. I should write up a blog post about all of the difficulty I had, and how each release since around 1.28 has introduced features that solved these problems one by one.

To me, Rust/C interop is the killer feature. Not only is it a low-level language that can be used instead of C, it can also be used to extend C applications. I'm sure it was not easy to do this well, but what a great strategic decision by the rust team to spend the effort on it!


Can you recommend any good blog or tutorial on integrating C/Python and newer releases of Rust?


I didn't use much except for the official docs (book and reference). Googling around shows some promising material.

Release notes are helpful -- when you see something added related to C, it's a strong hint to read about the new feature.

Macro features are important because rust has sophisticated macros and C macros are closer to text replacement. So when trying to emulate a C header, you need to do a lot of macro magic in rust sometimes.

Sorry that I don't have more to offer from personal experience.


Here's a common Rust foreign function interface for Python, with quite a few examples:

https://crates.io/crates/pyo3

Here's the one for C:

https://crates.io/crates/libc


Using #[non_exhaustive] on enums is going to be generally bad practice, just as many in C++ consider ‘default’ switch cases bad practice.

When a new state is added to an enum, we want the code to not compile so that we can fix all the places that need updating.


That is why there are two behaviors available. The default matches the behavior you describe, and it is the default because for small projects it is the right decision.

However, sometimes you are a dependency, and you want to give up this restriction to gain the ability to add things without having to bump your major version number.

By far the most common example is error enums, which don't necessarily need all of their downstream crates to handle every error; downstream code is likely bucketing most of them anyway, and non_exhaustive ensures it can keep doing so.
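A minimal sketch of this pattern, with a hypothetical FetchError enum standing in for a library's error type (the enum and the describe helper are made up for illustration):

```rust
// Hypothetical library-side error enum. With #[non_exhaustive], code in
// other crates must keep a wildcard arm, so the library can add variants
// later without a semver-major bump.
#[non_exhaustive]
#[derive(Debug)]
pub enum FetchError {
    Timeout,
    NotFound,
}

// Downstream code handles the cases it cares about and buckets the rest.
pub fn describe(e: &FetchError) -> &'static str {
    match e {
        FetchError::Timeout => "timed out; retry",
        // NotFound and any variants added in the future land here.
        _ => "fetch failed",
    }
}

fn main() {
    assert_eq!(describe(&FetchError::Timeout), "timed out; retry");
}
```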


Error enums are precisely the target for this attribute. Servo's URL parser is a great example, as it currently uses a dummy variant that is hidden from the documentation in order to discourage people from trying to exhaustively match over it: https://github.com/servo/rust-url/blob/7d2c9d6ceb3307a3fad4c...
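For illustration, the pre-#[non_exhaustive] trick looks roughly like this (ParseError and its variants here are simplified stand-ins, not rust-url's actual definition):

```rust
// A doc-hidden dummy variant discourages exhaustive matching, because
// naming it would mean depending on a deliberately non-public detail.
#[derive(Debug)]
pub enum ParseError {
    EmptyHost,
    InvalidPort,
    #[doc(hidden)]
    __NonExhaustive, // hidden from docs; pushes callers toward a `_` arm
}

// Downstream code ends up with a wildcard arm anyway.
pub fn message(e: &ParseError) -> &'static str {
    match e {
        ParseError::EmptyHost => "empty host",
        ParseError::InvalidPort => "invalid port",
        _ => "other parse error",
    }
}

fn main() {
    assert_eq!(message(&ParseError::EmptyHost), "empty host");
}
```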


The `Ordering` example given in the announcement shows another use-case. User code typically won't pattern match on that enum anyway, it will usually just be passed as an argument to atomic functions. And it may be desirable to add another type of ordering as rust's memory model evolves.
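A small example of the typical usage pattern, where `Ordering` is passed along to an atomic operation rather than matched on (the bump helper is made up):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Ordering is usually an argument to atomic operations, not the subject of
// a match, so adding a new ordering variant later would break very little
// downstream code.
pub fn bump(counter: &AtomicUsize) -> usize {
    counter.fetch_add(1, Ordering::SeqCst) + 1
}

fn main() {
    let c = AtomicUsize::new(0);
    assert_eq!(bump(&c), 1);
    assert_eq!(bump(&c), 2);
}
```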


Since the developer of a library has no insight into my application, they have no idea how important an exhaustive match is or isn't in any given piece of code I'm writing.

This is a decision library users should be making, not library writers.


It seems like the main issue is consent. If you're using a library in a way the author didn't agree to, that's fine, but you don't get any guarantee it will keep working after an upgrade.

If you want to do that, you could edit your own copy of the library's source code and nobody will mind. Maybe that's enough?

It seems like you shouldn't be able to publish a crate where you're using an upstream library in a way they don't consent to, because now you're involving others in this dispute. A basic requirement for publishing to a shared open source ecosystem should be that you're resolving any disputes you have with upstream libraries and not just going your own way.


You can handle it with a `_` branch and careful attention to your dependency upgrades.


Swift has a really good solution to this, which is an attribute @unknown that you put on the default case, and this attribute produces a warning if there are any known enum variants that would match this case. This way you're future-compatible but the warnings tell you when you need to revisit the code. I'm pretty disappointed that Rust didn't copy this.


This is a good idea for a lint!


Can't you just do that with a catch all pattern match?


It’s only relevant if you have a catch-all match. The idea is to get a compiler warning when new patterns become available instead of a runtime warning.


This ties into what I believe is going to be one of the biggest themes of programming language development in the 2020s: first-class language features that allow defensive libraries to make changes that don't cause breakage in downstream users. Right now I'd say Swift is the poster child of this movement; many of its language features are head-scratchers until you realize that they exist to keep applications compiling and on a clean upgrade path even when their dependencies are actively changing.

Of course, the trade-off is that by choosing to make things continue to compile when something changes, you are no longer causing things to fail to compile when something changes. I'm uncertain how this tension will be resolved in the long run.


I'd be curious to hear more about these Swift features.

(Context: used to do iOS dev w/ obj-c; haven't used Swift, but keeping an eye on it, mostly out of curiosity.)


There was an interesting write up done recently about Swift ABI features that enable this and why Rust doesn't/can't do similar things due to different design goals.

https://gankra.github.io/blah/swift-abi/


That's pretty surprising: they put so much effort into this kind of compatibility while at the same time making no effort to maintain compatibility at the language level. Every new release of Swift so far has been full of breaking changes that needed tons of work to adapt a library to.


> make no effort in maintaining compatibility at the language level

There is the Xcode migrator, which auto-updates source code to the new version; I'm sure you are aware of it and it leaves you annoyed. Though it's more than “no effort”.


There's tooling, that's right, but I was referring to some kind of language stability commitment (like the one Rust has, for instance). I find it surprising that there are language constructs made to help libraries be forward compatible (and that Swift is a leader in that domain), while the language itself is way less stable than most languages (which doesn't shock me, since it's still pretty recent and has really ambitious goals that are understandably hard to achieve on the first try).


It's not a small change ... binary compatibility has essentially meant “C compatible” for the last several decades ... will any of the new languages currently attempting to implement a binary compatibility model succeed in shifting this in a meaningful way?


Complete binary compatibility across modules for open records/sums (this is what #[non_exhaustive] boils down to) is quite non-trivial. You end up going through a level of indirection, kind of like objects in Python/Ruby etc. It's not a disaster because it specifically applies to the open case, which ought to be rare; but it's something to be aware of.


Yes, that’s the trade off. Like all trade offs, it makes sense sometimes, but you’re right that it’s not to be used in most cases.


Yeah, I don't like it at all.

Yes, some enums are intended to be expanded. Still, as long as something is a compile-time error, I don't care about having to update my code when I choose to upgrade compilers.

In fact, even if the enum is intended to be expanded, I prefer to hear about new features and changes in APIs I actually use in a given project. It lets me review my choices; many times in complex APIs there are new options or flags that are useful to know about and would otherwise get ignored. It lets me update all callers as needed.

I have heard too many times the backwards compatibility story in the C++ world and I never valued it for things that are errors rather than behavior changes. It makes the libraries and the language way too rigid and makes evolving it a pain.

As long as you don't change the meaning of code that compiles cleanly, feel free to change things.

If you really want to keep code compilable because you are making a change that you believe will be a PITA, you could always offer a (truly) automated tool.


> Still, as long as something is a compile-time error, I don't care about having to update my code when I choose to upgrade compilers.

It's not just your code; it could be the code of any of your dependencies (or their dependencies, recursively). Besides, the Rust team does intend to keep older code compiling unchanged on newer releases of the Rust compiler, unless the breakage is caused by fixing a soundness hole.

The best example of a non-exhaustive enum, in my opinion, would be std::io::ErrorKind (https://doc.rust-lang.org/std/io/enum.ErrorKind.html). Several Rust releases ago, I added a new variant to that enum (the last one, UnexpectedEof, used by std::io::Read::read_exact - notice how that variant comes after "Other", which used to be the last one). If that enum were not non-exhaustive (non-exhaustive enums have existed since Rust 1.0, though using a doc-hidden trick instead of formal compiler support), I would not have been able to add that new variant, since it would risk breaking anything which matched on every variant of that enum.
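A sketch of the bucketing pattern ErrorKind's non-exhaustiveness enables (is_retryable is a made-up helper, not std API):

```rust
use std::io::{Error, ErrorKind};

// ErrorKind is a non-exhaustive std enum: callers name the kinds they
// care about and bucket everything else, which is what allowed
// UnexpectedEof to be added after 1.0 without breaking existing matches.
pub fn is_retryable(e: &Error) -> bool {
    match e.kind() {
        ErrorKind::Interrupted | ErrorKind::WouldBlock => true,
        _ => false, // Other, UnexpectedEof, and any future kinds
    }
}

fn main() {
    assert!(is_retryable(&Error::new(ErrorKind::Interrupted, "signal")));
    assert!(!is_retryable(&Error::new(ErrorKind::UnexpectedEof, "eof")));
}
```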


Perhaps Rust could add an #[exhaustive] attribute at the point where the enum is being pattern matched, to override the library's #[non_exhaustive] attribute, for users who really do want their code not to compile when an enum is extended.


This proposal has already been made in the tracking issue, but it hasn't been picked up yet: https://github.com/rust-lang/rust/issues/44109#issuecomment-...


You need a wildcard match which allows you to fail... at run-time. I see what you mean.


I agree. That's what Semver is for. If you are adding new variants to your enum that can break code for your dependents, then increment your version accordingly.


Right, but some enums are designed to allow new variants to be added.

For example suppose you add another image format to the `image` library:

https://docs.rs/image/0.23.0-preview.0/image/enum.ImageForma...

What's more annoying - a major version bump every time you support a new format, or not being able to exhaustively match on all 10 formats?


I don't personally use Rust (yet) but I love seeing its progress!

Congrats to everyone who has been contributing to Rust, it's definitely moving the language forward!


There hasn’t been a single Rust job posting in the entire region of Denmark where I live in 2019, so it’s probably fair to say that almost no one uses it.

The progress is still cool though.


The main reason is that many companies have a lot of C/C++ devs, or their existing devs have learned Rust, so they don't have to hire new Rust programmers. There are many places using Rust; you should check this out: https://www.rust-lang.org/production/users


That is the case here to some extent. Projects have been done in rust, but "rust programmer" isn't what we seek.

Jumping into rust for our major projects doesn't make sense. Existing code is largely C, with some assembly and C++, so rust would just add another language to the requirements.

To somebody who isn't already a rust expert, it isn't even clear that rust is good for the job. We like bitfields as L-values in C, without ugly macros or other wrappers. We like to be able to easily produce a small relocatable blob of binary executable code, such as a firmware image or boot loader. Speed and size matter to us. Sometimes we compile for weird targets such as ARMv4, Coldfire, 80286, and Xtensa.

C certainly isn't perfect. If rust had fewer limitations, that could be convincing. My wishlist is probably zero-priority, but FWIW: I'd like pass-by-value arrays, naked functions, easy ability to get things in place as desired (position independent or fixed) without a PLT or GOT, portable bitfield layout, specified calling conventions, computed goto, and other things that high-level programmers despise.


The above is also certainly non-exhaustive. Google isn't on the list, and they are using Rust in Fuchsia.


    #[non_exhaustive]
    https://www.rust-lang.org/production/users
There you go.


lmao


Facebook too, most famously in Libra.


It is young by language standards, but it has seen considerable success considering it isn't even 10 years old yet.

Plus there isn't really anything else in the market that directly competes with it outside of what it tries to replace (C/C++).

For example: https://msrc-blog.microsoft.com/2019/11/07/using-rust-in-win...


“Rust has been the "most loved programming language" in the Stack Overflow Developer Survey every year since 2016”

https://en.m.wikipedia.org/wiki/Rust_(programming_language)

10 years is a long time. Swift is 5 years old and it seems old and baked.

Considering the buzz Rust has had on HN, etc, I thought it was more popular.

Sounds to me like it needs some sort of push to get over a tipping point.

Often part of the problem isn’t technical. In business people don’t want to be first. Perhaps a few more high-profile users.


Rust 1.0 was released in 2015, and before that you'd have breaking changes every month that effectively made it unusable for anything but toy projects. So it's not even 5 years old really.

Besides, a big target for Rust is the C and C++ world. On Hacker News, in my experience, the majority of commenters come from the web world, where no news for a year effectively means that the project is dead.

For us in the low level world "stable for 4 years" means "maybe we can start considering using it in production" and the lack of big buzz every other month is more a pro than a con. I'll take boring and reliable over shiny and breaks-every-other-year.

I've just started adding a new feature to a C project started in 2009. If I had used Rust back then, I'd have wanted to know that my dev environment would still be usable in 2019. I think the commitment to stability will pay off eventually.


Agreed, Rust's "popularity" in production projects is a topic to be revisited circa 2025-2030.

My personal intuition is that it will have become a strong alternative to C++ by then, and Go will probably eat the other side of that (the 'upwards' frontier of C++, before/underneath e.g. Python), which given a decade could result in maybe 25-30% of major C++ projects moving, or planning to move, to Rust/Go. That would be a healthy balance of alternatives, a true victory for these mid/low-level contenders.

It's not like the bottom of the stack can be won by the likes of Python or JS at 80-90% within a decade. Structurally, it simply cannot.


Though I would argue they are somewhat different demographics. Mobile app and Mac developers keep up with the times much more aggressively than the C/C++ developers I know.


D? And others of course?


D is in the same space, but it started off having a GC, and that caused it more problems than it was perhaps worth in an almost-systems space. Recent work with different allocator patterns and the addition of the -betterC compiler flag may, I think, help D in the long run. However, I wouldn't be surprised if D never gets much more popular than it already is.

The other major competitors to Rust are Zig and Jai. Of course, Jai has the problem of being unreleased at the moment (and its fate is tightly bound to Jon Blow).

There are also a few other languages attempting to make inroads in the space. For example, Odin and Kit.

Here's my take on it though. C and C++ will always have some sort of systemic problem because they are able to do too much. In order to be a systems language AND also do everything that people want a systems language to do (games, embedded, high performance), you need (I assert) to have rough edges and dangerous pitfalls.

Rust will eventually beat C/C++ on making web browsers and similar technologies because that's what it was built to do. However, Rust probably won't be able to beat C/C++ in game development and total OS development (although it can probably be partially used for both). Enter Zig and Jai.

D and Go both compete with C/C++ in a space that was temporarily taken over by Java/C#. But in the long run that space may end up being ceded to something that's less than a managed language but more than a systems language.

Ultimately I predict we'll see C/C++ slowly give way to a family of systems languages that all hold different niches before finally becoming relegated to legacy only. This could still take a few decades to complete. And if you look at newer versions of C++, it's possible that C and C++ may even evolve to hold a different niche than the wide series of domains that they used to hold onto so tightly.


I appreciate that you're trying to frame this as areas where all of these languages can be successful in these spaces, but for Rust in particular, this is an odd take:

"In order to be a systems language AND also do everything that people want a systems language to do (games, embedded, high performance), you need (I assert) to have rough edges and dangerous pitfalls.

"Rust will eventually beat C/C++ on making web browsers and similar technologies because that's what it was built to do. However, Rust probably won't be able to beat C/C++ in game development and total OS development (although it can probably be partially used for both)."

Rust has all the necessary escape hatches (through unsafe) required for these spaces. There are people working in these spaces with Rust successfully, today. So, while, the other languages you mention might find success here as well, there is no reason (from a technical perspective) that Rust will not.


The caveat here is that some correct software architectures require littering so much "unsafe" in the code (due to incompatible safety models, not actual unsafe-ness) that it largely defeats the purpose, and a software architecture that lets you avoid most "unsafe" produces a worse product while requiring more lines of code to accomplish the same thing.

Rust will always leave plenty of room for C++ to the extent that it tacitly encourages suboptimal software architecture for some types of applications, such as database engines, that commonly rely on safety models Rust was not designed to express.

I do see Rust potentially replacing a lot of backend Java, eventually.


That's all-or-nothing thinking. Enough people think that way that what you say will probably happen. Thing is, one can always use multiple tools to achieve a goal. Anything Rust's safety model can't handle might be done with a different model, analyzer, etc. One recommendation I keep making is to write the "unsafe" Rust, port it to identical C, throw every tool we have in the C ecosystem at it, and port what passes back to Rust with safe wrappers if possible. Rust couldn't prove it safe, but it's externally proven safe (or safe enough), and optionally has protections during interactions via the wrappers. You get Rust's benefits on everything else you code in the app, plus whatever you include that others manage to get past the borrow checker.

I call this general concept Brute-Force Assurance where you just modify the form of a program to fit existing tools to get their benefits. Just throw every sound and/or complete analyzer plus a lot of test generators at it. Also, code in a way that helps those tools wherever possible. If one can't, then use them on a version designed for verification first to get the algorithm right, step it toward optimized version, equivalence tests, repeat, etc.


I'm glad you commented because this is what I wanted to say, but couldn't think of a good way to lead into it.

IIRC while Rust does allow you to switch up allocators, it doesn't let you mix and match your runtime with multiple allocators. (Each artifact can be linked with at most one allocator at a time.) There are applications where you want to have multiple allocation strategies for performance reasons.

Uniqueness is really useful for most applications, but there are times that you want data structures that allow multiple pointers to the same data. Having to do this in Rust is going to be a bigger chore than doing it in a language that doesn't support uniqueness.

It will be possible for Rust to still participate in these areas (especially because you can use Rust for only part of your project, so you can use it where you don't have allocation or nonuniqueness constraints in your problem space). However, other options are going to offer a better programmer experience.


> IIRC while Rust does allow you to switch up allocators, it doesn't let you mix and match your runtime with multiple allocators. (Each artifact can be linked with at most one allocator at a time.) There are applications where you want to have multiple allocation strategies for performance reasons.

So, sort of yes, and sort of no. Like, you can swap the global allocator, but there's no way to parameterize standard library stuff over anything but the given allocator. But for your own code, you can write and use allocators however you want. Arenas are often popular, for example.
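A minimal sketch of the global-allocator swap on stable Rust, here just re-installing the system allocator (make_vec is a made-up helper):

```rust
use std::alloc::System;

// Exactly one #[global_allocator] per binary, and (on stable) std
// collections can only use that one. Your own types, though, can take
// whatever allocator you write.
#[global_allocator]
static GLOBAL: System = System;

pub fn make_vec(n: u32) -> Vec<u32> {
    (0..n).collect() // allocated through `System` above
}

fn main() {
    assert_eq!(make_vec(3), vec![0, 1, 2]);
}
```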


> I do see Rust potentially replacing a lot of backend Java, eventually.

I'm not quite so sure. Rust can be a good fit for services that need to be very optimized for memory or CPU usage. Java brings so many other extremely important benefits like introspection, management, profiling, hot swapping, and being generally more productive due to the fact that it's a GC'd language. Not to mention the huge ecosystem behind it.


> such as database engines, that commonly rely on safety models Rust was not designed to express.

Do you think you could expand on this?

From my experience Rust facilitates all the same operations as either C or C++, and generally without even needing to turn to unsafe. What I've found in my own (not DB, but networking) work, is that Rust generally asks you to restate the problem in a way that will allow it to be best expressed in Rust. This often differs from the down the middle of the road implementations people have grown used to in other languages, but it doesn't in any meaningful way prevent you from solving the problem, in a safe way.


There are a couple common architectural patterns that are easy to express in C++ but tend to violate assumptions in Rust's safety model. Database engines run into them frequently due to the nature of their core operations, and this ignores that most data structures are intrinsically globals (which Rust doesn't like) due to tight hardware coupling.

Rust assumes that all references to memory are visible at compile-time, and the safety analysis can be applied in cases where this is true, with the usual caveats around borrow-checking. The "unsafe" facility is designed to interface with code written in other languages that don't respect Rust's model, and it works well for that. But how do you express the case, common in database engines (because they are directly storage-backed), where hardware can hold mutable references to most of your address space? There is no way to determine at compile-time if a mutable reference will be unique at runtime, or even to sandbox it to a small bit of code. As a consequence, most memory references are effectively mutable. There are workarounds that will minimize the quantity of unsafe code in Rust, if you are willing to sacrifice performance and elegance.

In databases, having many mutable references to the same memory has few safety implications because ownership of memory is dynamically assigned at runtime by a scheduler that guarantees safe access without locking or blocking. This safety model solves the hardware ownership problem, which is why it is used, but it also enables quite a bit of dynamic optimization even if all your references are in software so you'd want to do things this way anyway. In C++, you can make all of this largely transparent on top of explicitly mutable references to memory. Again, you can produce a minimally unsafe version of this in Rust but it is going to be significantly uglier and slower.

As more server software moves to userspace I/O and scheduling models (for performance and scale reasons) it will be interesting to see how this impedance mismatch problem is addressed in Rust.


Lest someone gets the wrong idea, Rust makes _mutable_ globals painful to work with; readonly is fine.
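A small illustration of the distinction (names are made up):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Read-only globals are ordinary in Rust; mutable ones must go through a
// synchronized type (or `unsafe` with `static mut`).
static MAX_RETRIES: u32 = 3;                 // immutable: no friction
static HITS: AtomicU64 = AtomicU64::new(0);  // mutable via interior mutability

pub fn record_hit() -> u64 {
    HITS.fetch_add(1, Ordering::Relaxed) + 1
}

fn main() {
    assert_eq!(MAX_RETRIES, 3);
    assert!(record_hit() >= 1);
}
```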

As for hardware-DMA'able memory, it's true that it adds friction in Rust. But C or C++ would fall into the same boat - one would need to sprinkle volatile or atomics, as appropriate, to keep the optimizer from interfering. In Rust, you'd need to do the same (ptr::{read,write}_volatile or atomics).
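A rough sketch of the Rust side, using plain memory as a stand-in for a hardware-shared buffer (the helper is hypothetical):

```rust
use std::ptr;

// Volatile accesses stop the optimizer from eliding reads/writes to memory
// that something outside the program may also touch. Ordinary stack memory
// stands in here for a device or DMA buffer.
pub fn write_then_read(p: *mut u32, v: u32) -> u32 {
    unsafe {
        ptr::write_volatile(p, v);
        ptr::read_volatile(p)
    }
}

fn main() {
    let mut word: u32 = 0;
    assert_eq!(write_then_read(&mut word, 0xDEAD_BEEF), 0xDEAD_BEEF);
}
```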

I’m having a slightly hard time imagining a db where “most” of the address space is DMA’able. I’ve some experience with kernel bypass networking, which has its own NIC interactions wrt memory buffers, but applications built on top have plenty of heap that’s unshared with hardware. What’s an example db where most of the VA is accessible to hardware “arbitrarily”?

Also, regardless of how much VA is shared, there’s going to be some protocol that the software and hardware use to coordinate. The interesting bit here is whether Rust and its type system can allow for expressing these protocols such that violations are compile-time detectable (if not all, perhaps some). Any sane C++ code would similarly try to build some abstractions around this, but how well things can be encoded is up for debate.

When a typical Rust discussion ensues, it’s commonly implied (or occasionally made explicit) that “write X in Rust” == “write X in safe Rust”. And this is the right default. But I think any non-trivial system hyper optimized for performance will have a healthy amount of unsafe code. The more interesting question, to me at least, is how well can that unsafety (and the “hidden”-from-rustc protocol) be contained.

As for a db scheduler obviating the need for a compiler to arbitrate ownership, that's certainly true to a degree. But this comes back to the protocol I mentioned - the scheduler is what provides the protocol, and so long as other components work via it, it can provide safety barriers (and allow for optimizations). But again, I don't immediately see why Rust (with careful use of unsafe) couldn't do the same. And after all, everything is safe so long as things play by the (often unwritten, or poorly written) rules. Once systems get big and hairy, it gets tougher to stay within the guardrails, and that's where getting assistance from your language/compiler can be very helpful.

Some of the Rust libs/frameworks for embedded/microcontrollers deal with hardware accessible memory and otherwise “unsafe” protocols, but I’ve seen some clever ways folks encode this using Rust’s type system.


You have some misconceptions about how this all works in real databases. Rust experts who have looked into porting these kinds of C++ database kernels have not been sanguine in my experience. This isn't a theoretical exercise, we need to minimize defects and maximize performance.

- All pointers are ordinary, the fact that the same memory can also be DMA-ed by the hardware is immaterial. You do need accounting mechanisms that let the code infer which objects in memory are at risk of being read/written by a DMA operation. No atomics or volatiles required in userspace. Modern database code is effectively single-threaded.

- Most of the address space in a database is DMA-able because most runtime structures in a database engine must be adaptively persistable to storage. There are various workloads that will force different parts of your runtime state to be pageable because they can overflow RAM while operating within design constraints. Unless you are assuming small databases, that complexity is inconvenient but necessary for robust systems.

- C++ is more expressive than Rust when it comes to making hackery like this transparent, most of which is resolved at compile-time in C++. Much of the mechanics can be taken care of with a pointer wrapper that heavily overlaps the semantics of std::unique_ptr, making the code quite clean and natural. Most code never needs to know how that magic happens. C++ compile-time facilities are currently far beyond Rust.

- You can formally verify the scheduler design, and we sometimes do, but actually implementing it efficiently in real code without the borrow-checker losing its mind is a separate concern.

As I originally stated, you can write such systems in Rust while managing the amount of unsafe code. You just wouldn't want to and there would be little to recommend it compared to the C++ equivalent since it would be objectively worse by most metrics.


It seems you’ve already made up your mind and nothing anyone says will change that :).

The volatile I mentioned isn't due to concurrency of userspace threads, but to keep the optimizer from eliminating read/write operations. If the src/dst of those memory ops is DMA'd memory touched by hardware, you'd need to do that. It has nothing to do with concurrency.

Capability to spill to disk is certainly needed, no argument. But “most” of the address space and “most” of the runtime structures? Can you elaborate? Is there an OSS example or some paper or any discussion of such a thing in the open?

You can have custom smart pointers in Rust just as well, and back them with your own mem allocator. While there are features in C++ not currently available in Rust, C++ facilities “far beyond” is hyperbole. How well do you know Rust? Genuine question.


It sounds like GP is using "DMA" to mean a memory-mapped file. I recall there was discussion about how to safely handle them in Rust. See https://users.rust-lang.org/t/how-unsafe-is-mmap/19635


Can you point to a concrete example of the "scheduler" pattern you're referring to? I'm not familiar with it, and it's not clear to me from your post how it works.


I might be wrong, but I suspect the "scheduler" in this instance is the thing granting write access.

Consider a SQL database engine.

Many threads/processes can be accessing a SQL database at any given time.

But if two of those actors require write access to the same object, only one will be allowed access and all others are blocked.

So at this high level the concurrency is guaranteed by the engine, which then means at the low level the engine can safely assume the access is exclusive.


There's one escape hatch that Rust doesn't have: a hatch to escape its complexity. These days I do most of my programming in C++, and my first requirement from the language that will replace it for me is that it be simple rather than a shrine to accidental complexity. So I'm looking at Zig and liking what I see so far. I also think its approach to correctness is ultimately less disappointing than Rust's (as it is now) but that's a whole other discussion. Of course, these are personal preferences rather than some universal claims, although when I bet on a programming language I also care about future popularity, and complex languages tend to never gain more than small niche adoption. Anyway, Rust and Zig have such diametrically opposed design philosophies for low-level (AKA systems) programming, that it will be interesting to see their respective adoption dynamics. If, despite my prediction, Rust ends up being more popular, I'll probably prefer it to C++ and use it.


I'm a huge proponent of static typing systems. However, that being said, I always keep in mind that everything has a cost.

* Just because I'm more comfortable with types doesn't mean that everyone else is.

* Someone may want to do something in a type system which is well typed, but only in a different type system.

* Someone may want to do something in a type system which is well typed, but which has some bad compilation characteristics for the given type system.

Even if Rust is objectively better, you still have to get used to the things about it that make it objectively better. And you have to keep up with the changes that are made to it. And you have to understand where those better things fall down (for example Non-Lexical Lifetimes ... in which case you have to get used to the NLL acronym that people use).

A simpler language, even if objectively worse, can yield better results if it is used with discipline. Discipline that might be easier to hone with fewer things that need to be considered.

And sometimes better results don't actually matter because the goal isn't the best results tomorrow but reasonably adequate results today.


As, despite our efforts, we haven't been able to find big differences between different languages (considering reasonable choices for the appropriate domain) in any important bottom-line metrics, neither in research nor in industry, I don't think there's much point in even mentioning objective value. The only scientifically acceptable working assumption at this point is that language choice (with the caveat above) makes no significant objective difference. It's like saying, even if rum-raisin ice cream gives us the ability to see through walls I still prefer pizza; we have no reason to believe rum-raisin ice cream does that, so why even mention it? As far as we know, it's all about personal preference -- we have no reason whatsoever to believe that either Rust or Zig are objectively better or worse than the other -- as well as some easily observable secondary objective differences such as popularity.


From Derek Jones' references, I got this study that's about the best I've seen so far showing there is a difference:

http://archive.adaic.com/intro/ada-vs-c/cada_art.pdf

I'll also add that Rust can give you both memory safety and race freedom at compile time. If you debugged heisenbugs, then you know that's a huge benefit. On Lobsters, one guy mentioned being hired for (a year?) to find and fix one in a system. Eiffel's SCOOP had a similar benefit. Languages such as Chapel made parallelism super easy in many forms vs C++ and MPI. Used judiciously, macros can eliminate tons of boilerplate. Erlang's strategy for error handling might go in this list if reliability is a goal.

There've been quite a few examples where a different choice in language design eliminates entire classes of problems, with anything from no effort to significant effort by the developer. Increased velocity with fewer bugs during feature integrations and maintenance are provably-beneficial metrics for a business. I think we can say there's scientific evidence of actual benefits from language choices which have potential benefits if used in business. I just can't tell how far, if any, you'll get ahead by using them, since there are non-language-design factors to consider that might dominate.


There is a reason why Ada continues to be used in safety critical systems--it works. More bugs are prevented and problems are detected earlier than they would be in a more lax language such as C or C++.

The large uptake and excitement around Rust shows that there are many C and C++ programmers who appreciate the safety guarantees that it provides. The popularity of Rust has actually created a resurgence of interest in Ada and each language has benefitted from the other.

For example, Spark, a well-defined subset of the Ada language intended for formal verification of mission-critical software, is adopting safe pointers that were inspired by Rust (source: https://blog.adacore.com/using-pointers-in-spark).

I would not be surprised if Rust also adds features based on ideas from Ada.

This is good. The "fast and loose" qualities of C and C++ allow far too many errors and security vulnerabilities in software. We have better tools. We just need to use them.


The syntax of Rust, with all the different kinds of references, looks far too complex just to get a performance improvement.

Garbage collectors provide memory safety without any special syntax.

Or in Delphi all strings are reference counted. They are mutable, if the ref count is 1, and immutable when it is larger than one. It is memory safe with safe aliasing and needs no special syntax.

A sufficiently smart compiler could just optimize the reference counting away, and treat everything as immutable unless it can prove there is no aliasing. Ideally, a language would only have two kinds of reference, mutable and constant, and everything else would be figured out by the compiler.


There are really only two kinds of reference: &T and &mut T.

I guess Box<T> is special-typed but at the surface it just looks like a normal type

Yes, there are also raw pointer types (*const T and *mut T), but they're mainly for C interop


I still think an improved C will have a better chance of unseating C.

But getting the C committee to act is about as hard as designing a new PL.


Let’s pretend for a moment that this SaferC exists. Is it also backward compatible with C? Will it require special keywords in the language, like unsafe, to call into original C? This would be the primary benefit, right?

Now, also, will this new SaferC also bring with it any of the other features people appreciate in Rust? Such as data race free code (because of the strong type/trait system and Send/Sync auto types), or match and let statements that support destructuring of types through pattern matching, or monomorphism for zero overhead polymorphism, or the simple to use tools around the language for managing dependencies, or async programming model that strips away all the complexity of hand written state machines?

For me, all of those features make Rust a modern 2020 language. I’m curious what a SaferC would have. And frankly, if it could exist, why hasn’t it been developed in the last 50 years?


No need to pretend, take a look at D as better C? [1]

[1] https://dlang.org/blog/2017/08/23/d-as-a-better-c/


Yes. That’s a great post, but betterC is not directly backward compatible with C. D also lacks many of the features that I’ve grown to like about Rust.

But, let’s say you’re right. D is SaferC—are the folks who are still waiting for a SaferC able to recognize it as such? Or, have they already decided that like Rust it doesn’t meet the criteria of the language that they’re waiting for?


Go also competes with Rust in many areas though.


Go folks seem pretty persistent in espousing this belief, but I've yet to see many examples of it in action.

Most of the fields where rust is making headway, it is making headway for precisely the things that Go is not known for - lack of a garbage collector, strict and expressive type system, robust generics, etc.


The most prominent projects are in areas where Go is not relevant.

Go established itself as a python/ruby/php competitor. Rust is a c/c++ competitor


Hmm, not really. Go is close to Python / Java / C# / C++, not so much PHP / Ruby; people didn't build backend services outside of the web in those languages.

Think for a second about Kubernetes, Docker, or Prometheus: those apps have nothing to do with PHP / Ruby. They're related to C++ / Java.

Prometheus is the equivalent of what's used internally at Google, which was built in C++. Same for all the DBs out there in Go: CockroachDB, NATS, etcd, etc. Those are the kinds of applications C++ is used for.


By far the most usage of Go is web backends. But it is true I should have mentioned Java in the language list. Go is a popular Java replacement.

However you are talking about projects that are not particularly language dependent as long as the runtime is reasonably performant. They could just as well have been implemented in Java. For many Rust would arguably be a better choice but that is not relevant. Go shines here, but Rust is barely emerging in these areas.

Rust is a C++ competitor in areas where C++ has no competition because a GC is a no go. This is also where the most prominent use cases are.


D isn't really competing with anything, it never took off. It already has a smaller market share than Rust while being twice as old (almost 20 years).

I'd call D a dead language. Can a dead language compete with a growing one?


Facebook, Ebay, Mercedes Benz and a few others[0] don't seem to believe so. Better tell them to use a different language?

I'm on a Discord for Dlang that's quite active honestly. The D community is plenty active. Their forums and IRC. Maybe not as active as other language communities, but it's not made by Mozilla or Google which gets a lot more attention.

[0]: https://dlang.org/orgs-using-d.html


Facebook does not use any meaningful amount of D.

Source: I worked at Facebook for four years. My first diff at Facebook was even in D, in a project that was already effectively abandoned by the time I changed it.

It's really a tiny, insignificant minority of actively running code there, possibly zero by now.


>Facebook, Ebay, Mercedes Benz and a few others[0] don't seem to believe so. Better tell them to use a different language?

They already "use a different language" for most of their stuff.

Their D use is tokenish, the way you can find any bizarro, niche language used somewhere big. That doesn't mean the language is in any kind of widespread use, just that some teams in some big company or another adopted it - as outliers.


A company the size of Facebook probably has teams tinkering with all sorts of languages most companies will never use.


> D isn't really competing with anything,

It is competing for my attention and many others I guess.

> I'd call D a dead language. Can a dead language compete with a growing one?

I'm much more inclined to spend time with D than with a number of other popular languages if I could choose.

I don't know if I'm alone in this, but for all I know it might become really popular in the future. (Look to Erlang for an example of a language that went unnoticed for a couple of decades or so.)

FWIW Rust is on top of my list of languages I want to master if I should get time.


I always wondered why D didn't take off in the fintech industry instead of C++


D practitioners aren't really on internet forums, let alone internet forums with a bias for whatever is new and status-enhancing (HN). Because the presumed attacks on D are constant, it's become very tiring to answer that no, it's not dead, and it's growing. The real test is in the trenches, not in a factless debate.


You want to talk facts, it ranks so low on Github's usage (based on Github's own API) that it isn't even ranked in the top 25 languages:

https://github.com/benfred/github-analysis/#inferring-langua...

Rust by contrast is 16th and trending upwards year on year. So these D users that are missing from internet forums are also missing from checking in actual code.

You talk about the lack of facts, but I can link you to Stackoverflow's survey, Github's usage, Google Trends, and show you that D isn't doing very well. Where's your facts here? Where is this evidence of a large number of quiet D programmers?

So let's talk facts, I have, your turn.


I have never used D, but possibly developers at boring companies aren't checking in code to public Github repositories?


TIOBE index. Besides, I'm not interested in being right on HN.


TIOBE index shows it in decline. It peaked in 2009 and has fallen since then. It has fallen from 1.8% peak to under 1% (0.93%). Which, even if it was still 1.8% would be terrible for an almost 20 year old language.

I'm not sure how a language that is in decline, and hasn't grown much in its life, isn't a dying language (I'd argue dead at under 1% after that long).

> I'm not interested in being right on HN.

So you show up, call out other posters for having a "factless debate," to which they respond with facts. They then ask you for the same and instead of providing them to prove your point, you're suddenly not interested in a discussion on HN? That seems like bad faith posting to me.


It seems you have a lot of energy to spend on this online debate, and I do not.


I don't suppose I have any listings for Rust jobs in Denmark specifically, but for remote Rust jobs you could try checking out our regular jobs thread on /r/rust. We post a new one with each Rust release, so the current one ( https://www.reddit.com/r/rust/comments/ecxd62/official_rrust... ) will take time to accumulate postings, so for now I'd suggest perusing last cycle's thread ( https://www.reddit.com/r/rust/comments/dvxw6u/official_rrust... ).


I am not sure that there are any Rust job postings in Sweden either, but I know several people using it on their jobs anyway, e.g. people recruited as C++ developers who work in Rust. I think it is hard to see any trends in smaller languages from job postings since a lot of recruitment is done internally and through contacts.


Embark Studios is a game studio made up of ex-EA DICE folks, and they're using a ton of Rust! They're in Stockholm.


Facebook, Amazon, Google, Microsoft, Dropbox, Cloudflare... it’s still early days overall but there is certainly a significant amount of usage.


Notably, a lot of these companies are not just "playing around" with rust but it is already being used for critical projects.

It's pretty difficult to use AWS without touching rust code, for example, and impossible in the case of Dropbox.


Selection bias.


Nobody claimed that Rust users are evenly distributed throughout the world. That doesn't mean that nobody is using it.


How so?


There hasn’t been a single C++ job posting in the entire region of where I live in 1989, so it’s probably fair to say that almost no one uses it.

To me, Rust "feels" like one of those languages that will stealthily become very important at least in certain segments where security, performance, C ABI compatibility (dll/so/dylib) and zero runtime are required.


> There hasn’t been a single C++ job posting in the entire region of where I live in 1989, so it’s probably fair to say that almost no one uses it.

I don't see your point?

> Rust "feels" like one of those languages that will stealthily become very important

Rust is not very stealthy at all :)


> I don't see your point?

My point is that C++, Java, etc. got established long before the hiring patterns and keywords changed. The employers will just suddenly start to expect (language age + 1 year) experience.

I think same will happen to Rust.

> Rust is not very stealthy at all :)

Yet it might appear like that from corporate point of view.


Rust is a pretty niche language trying to replace C++ and similar languages that are quite slow-moving, so it's gonna take time. Most jobs are after all related to web/services, where Rust's strong points aren't as relevant.


Is Denmark on the cutting edge of these sorts of things? Many countries have enterprise companies that are heavily invested in Java/C# and maybe 3 startups using a bit of Go/Elixir in some places, but that's about it, it's not specific to Rust.


Denmark is doing pretty well if you look at the history and people of PLT. Probably #1 country in per capita terms. Hejlsberg, Bak, Lerdorf, Naur, Stroustrup, Troels Henriksen etc. (Denmark has less than 2% of the population of USA)


Sure, I am not discounting Denmark has smart computer scientists, nor that the U.S. has better ones, (I'm European myself), just that from my experience the European tech scene is slower on picking up the latest tech, (which is different from educational institutions).


Not affiliated but there is at least one posting here https://concordium.com/careers/


And I know there are at least a few companies using rust to some extend in the capital region, there have been some talks about it at the monthly hacknight https://cph.rs


Weird, I write in Go, and based on the number of posts about Rust, i'd assumed that it was gaining traction


It is.


There was one in late 2018. I applied, but it was too difficult for them to hire an American!


Are you looking for Rust employment in Denmark?


I don't know about that, Microsoft security analysis is in Rust, Google wrote an OS in rust, Mozilla firefox is mid-rewrite in rust, Linux is accepting rust work


The #[non_exhaustive] feature makes me unreasonably happy. It is a problem I hadn't even realized might occur, and the solution is very elegant, forcing depending code to be sufficiently general.


For enums at least, I'm not recommending its use (yet), as with _Nonexhaustive you can make sure you caught all cases internally. Matching on _Nonexhaustive as a user of the library is of course a bad idea. There've been proposals for a lint like #[deny(reachable)] to make such checks possible for #[non_exhaustive] as well, but it seems that nothing has happened yet: https://github.com/rust-lang/rust/issues/44109#issuecomment-...


> For enums at least, I'm not recommending its use (yet), as with _Nonexhaustive you can make sure you caught all cases internally.

I don't think this is necessary. The attribute in question is applied to an enum variant, and that variant's constructor is then given only crate-wide visibility. This looks to be simply a compiler-enforced codification of the pattern you're describing.


It can be applied both to enum variants and to entire enums. I meant the latter case. From the release announcement:

> A perhaps more important aspect of #[non_exhaustive] is that it can also be attached to enums themselves. An example, taken from the standard library, is Ordering

For example, take this _Nonexhaustive enum case here: https://github.com/est31/rcgen/blob/d6b84d3d9d51b088dd672975...

A little bit further down, I'm matching on the enum in the to_oid function. I'd prefer if I got an error or at least a warning pointing to the match if I added a new enum case and didn't update the match statement.


non_exhaustive only affects downstream code, within the crate you can still treat it exhaustively. For example in this playground[0] if you build in test mode the match works inside the crate, but fails in the doc-test because that is treated as external code.

[0]: https://play.rust-lang.org/?version=stable&mode=debug&editio...


Oh that's an interesting point, I didn't know that. I guess I'll use #[non_exhaustive] then after all.


How about using #[cfg_attr(test, non_exhaustive)]?


Wonderful! Any further news on evolution of the Rust Language Server? I'm still hoping ~Santa~ Annual Gift Man will bring me a considerably crisper, more accurate IDE-like experience to my favourite non-Intellij editor.


Have you seen rust-analyzer? That’s effectively it.


No I don't think I was aware of this. I've been playing with VSCode -> RLS Plugin -> RLS. It looks like this is not that, so I'm pretty excited to check it out.

Thanks for sharing!



Unfortunately you still have to compile it from source instead of installing an extension from the store, and there's no autoupdate.

But the experience is vastly better. I deleted RLS immediately. And it's not that hard to install either.


This is part of why I think I never discovered rust-analyzer. I first looked at what's most popular in the VSCode plugin library and found RLS. I then browser searched for alternatives and kept finding RLS. I think I concluded that RLS was all there was.


Thanks for sharing! I'll stick with RLS for the time being, because I'd like to see it improved, not replaced.


rust-analyzer is effectively RLS 2.0 -- rust-analyzer development happens on the official RLS 2.0 working group communication channels. Assuming they don't "replace" the RLS name, the most likely outcome is replacing large amounts of code en-masse with rust-analyzer code.

https://rust-lang.github.io/compiler-team/working-groups/rls...


Analyser runs at 400% cpu constantly for me.


Have you tried a recent build? They fixed a few issues around cpu usage over the past few months


I’ve moved from RLS to rust-analyzer, was worth it for me


I'm a little confused by the decision to make breaking changes to the borrow checker in Rust 2015. Wasn't the whole point of Rust 2015 to allow old code that wasn't compatible with 2018's changes to compile? If keeping strict Rust 2015 source-compatibility was too burdensome for the compiler maintainers, why not just remove Rust 2015 entirely and tell everyone to use the auto-upgrader tools?


The only reason that breaking changes were allowed to be made was that they fixed soundness issues.


To elaborate, the original borrow checker that launched with Rust 1.0 had edge cases that could only be resolved with a drastically different analysis pipeline. As a result, it wasn't able to properly reject some patterns that were disallowed by the conceptual model. Over the course of several years this new analysis pipeline was developed and the borrow checker was rewritten to use it, which now properly rejects these patterns. But three years is plenty of time for code in the wild to begin relying on edge cases, so there's been a year-long deprecation cycle to warn anyone who might not have updated their code by now.


Right, I guess I had it in my head that the Rust 2015-vs-2018 divide was that deprecation cycle.


The difference between Rust 2015 and 2018 is less than most people think. Originally, yes, Rust 2018 was used to push the new borrow checker without affecting code using Rust 2015. But unlike a "version" that gets deprecated and left behind, a Rust edition perpetually continues to be supported and benefit from improvements with every compiler release. But this means that in order to support different borrow checkers in different editions the compiler had to continue to ship with both borrow checkers, which, considering how much code that is and the aforementioned bugs, was an intractable proposition in the long term.


I thought these were real errors but Rust decided to give some time before breaking your code? Maybe someone else more seasoned can comment on this?



Woah, todo! is going to come in handy considering the wordiness of unimplemented!. However, weirdly enough, the documentation claims it has been stabilized in 1.39. Why is that?


It’s just a bug. Would you like to file one or would you like me to?


I'll send a PR, one moment.


Awesome, thanks!


I like the IDEA, but the implementation is not a good one.

Instead, this is what would be elegant:

    pub enum SearchOption {
        Left,
        Full,
    }

    pub enum SearchOptionBeta: SearchOption {
        Right,
    }
Yep, extending structs and enums.

This means match treats "SearchOption" as the "stable" API, while the inner crate touches SearchOptionBeta. The magic is that SearchOptionBeta matches SearchOption too. Later it's just a matter of replacing one with the other.

And this is useful for more than just stabilizing certain fields or arms...


I'd personally do this:

  pub enum SearchOption {
      Left,
      Full,
      Beta(SearchOptionBeta),
  }

  #[non_exhaustive]
  pub enum SearchOptionBeta {
      Right
  }


Yeah, this is how it could be done with this, but it also shows why it's a poor abstraction.

If enums were able to be extended:

  match e: SearchOptionBeta {
      SearchOptionBeta::Left, Right, Full
  }


Can somebody elaborate on the #[non_exhaustive] use case inside a private struct? Wouldn't a separate crate not be able to use the struct if it was in another crate?


The point is that the struct is public, and the fields are public, but users of the crate cannot directly create a new instance of the struct (without going through a function call in the module). This allows users to access the struct fields, but they cannot do any operations that would break if a new field is introduced.


I think `Foo` and `Bar` are supposed to be public, otherwise the example doesn't make sense.


let x = Bar::Variant { a: 42 }; //~ ERROR

Of course it's an error, as Bar doesn't have a field called "a"


Thanks for pointing this out! Unfortunately, unlike most of our docs we don't yet have testing in place for the code examples on the blog. I've just pushed up a fix though so once that propagates this should be fixed.


Can you cross compile for another platform?




