Why Not Rust? (matklad.github.io)
310 points by dochtman on Sept 20, 2020 | 319 comments



As I've said before, Go has the advantage of mediocrity. It's boring as a language, but it does automatically most of the things you need for web back-end stuff. It's garbage-collected and does subscript checking, so you're covered on memory safety. There are stable libraries for most things you need in a web server, and those are mostly the same libraries Google is using internally, so they're well tested. The green thread/goroutine approach means you don't have the problems which come from "async" and threads in the same program; there's only one concurrent task construct.

There's a lot to be said for that. You can put junior programmers on something and they'll probably get it more or less right.

Rust is very clever. The borrow checker was a huge step forward. It changed programming. Now, everybody gets ownership semantics. Any new language that isn't garbage collected will probably have ownership semantics. Before Rust, only theorists discussed ownership semantics much. Ownership was implicit in programs, and not talked about much.

Tying locking to ownership seems to have worked in Rust. A big problem with locking has been that languages didn't address which lock covers what data. Java approached that with "synchronized", but that seems to have been a flop. (Why?) Ada had the "rendezvous", a similar idea. Rust seems to have made forward progress in that area.
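
To make that concrete, a minimal sketch of what Rust does here (the Mutex owns the data it guards, so "which lock covers what data" is written in the types rather than in a comment):

    use std::sync::Mutex;

    fn main() {
        // The Mutex owns the Vec; the only way to reach the data is through
        // the guard returned by lock().
        let counters = Mutex::new(vec![0u32; 8]);
        {
            let mut guard = counters.lock().unwrap();
            guard[0] += 1;
        } // guard dropped here, lock released
        println!("{:?}", *counters.lock().unwrap());
    }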

I used to say that the big problems in C are "How big is it?", "Who owns it for deletion purposes?", and "Who locks it?" At last, with Rust we see strong solutions to those problems in wide use.

Much work has gone into Rust, and it will have much influence on the design of later languages. We're finding out what happens with that model, what's useful, what's missing, and what's cruft.

In the next round of languages, we'll probably have to deal directly with non-shared memory. Totally shared memory in multiprocessors is an illusion maintained by elaborate cache interlocking and huge inter-cache bandwidth. That has scaling limits. Future languages will probably have to track which CPUs can access which data. "Thread local" and "immutable" are a start.


> You can put junior programmers on something and they'll probably get it more or less right.

I see junior programmers screw up Go catastrophically. Thinking green threads are magical, they forget the existence of mutexes and the resulting code has data races left and right. Languages like Haskell combine green threads with carefully controlled mutation (be it IORef or STM) to avoid this. Go doesn't, so you still need low-level knowledge like how to avoid deadlocks.

As I've said before, Go gives you the veneer of an easy language when it's in fact full of traps for those who aren't experts.


Most companies, in my experience, want tools that are easy to get right in 80% of cases, and in 20% of cases require experts. Go has the benefit that most benign work is deadass simple to get right. The things you referenced in your post are obviously the remaining 20%.

Rust, as an example, is a pretty difficult language to become productive in. Sure, once you've invested the time, you're equipped to handle a wider array of problems, but most organizations don't particularly care about the top. They care about becoming relatively productive relatively quickly.

Go's approach to certain problems requires expertise; it is not a silver bullet. But there are no silver bullets, and the examples you gave require expertise in any language. The issue is that you've kind of just cherry-picked those three concepts as proof of complexity, ignoring the boatloads of other examples in which Go's simplicity does add value.


> Rust, as an example, is a pretty difficult language to become productive in.

Perhaps it's experience with other languages, or the fact that I grew up when that sentence started with C++, but I honestly found rust pretty easy to get up to speed in.

Am I an expert? No, and until I get the chance to use it for a living that won't change. But getting from "I want to do X and Y with this language" to having done so was straightforward enough.


I'm happy that you became productive in Rust quickly; I do not believe that to be the average case.


On average, Rust is quite difficult. And if you're someone doing a side project or trying to build a startup, I wouldn't advise using Rust initially.


Stripping a language down to the point we can learn it in a week is a bad tradeoff, because we'll never improve after that. The toolbox will always be empty, best treated as a codegen target for abstractions in some more insightful language.


There are plenty of languages that you can go from "Never touched" to "Having contributed something" in a week. Are you saying that those languages aren't productive or valuable?


A language is productive if it reduces the effort to solve your problems. I think some people feel productive as soon as they're able to solve a problem by cranking out a lot of code, but I view that as the language failing to help them.


Best of luck.


Totally agree. I never understood why Go being boring is counted as a demerit ... like it's some slight to programmer swagger. Most engineers prefer simple. Beginner programmers may screw up Go, but in my experience it's more a limited-gestalt issue: they can't wrangle the requirements, the domain types, the code structure (and whether those are in shape or not) ... Consequently they react to the code in terms of how it comes to them.


> I see junior programmers screw up Go catastrophically. Thinking green threads are magical, they forget the existence of mutexes and the resulting code has data races left and right....

When I worked at a company that used Go, I saw senior programmers with decades of C++ experience screw Go up in exactly this way.


That's exactly what I meant. C++ is a hard language and it doesn't pretend it's easy. Go is also somewhat hard but it really pretends it's easy.


Go is comparatively easy. However, if you think it is very easy, it is very probable that you will make many mistakes in using Go.

I have said it many times: thinking Go is easy to master is considered harmful. Holding such an opinion will make you understand Go shallowly and make many mistakes in Go. On the other hand, if you learn Go seriously, Go could help you avoid these mistakes easily (and with a happy programming experience).


Yes. Go's locking is really no better than C/C++. There are queues, and there are mutexes. Everybody has those now. Go just has a new story for them. The whole "share by communicating" thing tends to lead to people just putting "tokens" on queues, not the actual data.

But Go does have a run-time race condition checker.


I see this in Elixir all the time too. Lots of people seem to think that immutability combined with message passing makes their program immune to races. As a consequence I see all sorts of designs with races all over the place.


Indeed there's a discussion of this in the official Elixir site's Getting Started guide:

https://elixir-lang.org/getting-started/mix-otp/ets.html#rac...


Yes, in retrospect Go's push to "make concurrency easy" ended up being one of its worst attributes; Go's real value is in its simplicity, efficiency, and boringness, all of which concurrency runs counter to. On the other hand, it's questionable whether Go would be as popular as it is today if it hadn't hyped up the concurrency angle.


"it's in fact full of traps for those who aren't experts." This is true for pretty much every languages.


> "it's in fact full of traps for those who aren't experts." This is true for pretty much every languages.

Yes, but every language has a different number of traps, and each trap has a different difficulty, and each person has a different likelihood of encountering each trap.

So when someone says that, they mean that it's worse than the average (well known) language on one or more of those axes.


well, Go is definitely not worse than average in this regard.


Mutable parallelism is indeed hard, but you don’t have to be an expert to know the basic rules (lock mutable things) and design your application to minimize concurrent mutation. By contrast, learning Haskell well enough to be productive takes weeks if not months. Haskell does address parallelism, sure, but it introduces far more significant problems with respect to application development. There are reasons it isn’t widely used in production.


I learned three new terms today.

Green threads, data races, mutexes.


This is true of every language.


> Tying locking to ownership seems to have worked in Rust

I'm not sure whether I would call it "locking to ownership". But the thread safety guarantees that Rust provides through the type system (`Send`/`Sync`) are probably the main reason why I would consider using it even for applications which would otherwise favor something along the lines of C#, Java or Kotlin. Those languages are super productive for me, and by using them you don't have to worry about memory safety either. However, they provide no safety net against threading issues.

And unfortunately threading issues are far too common - basically every time I see an entry- to mid-level engineer using threads, there will exist at least one issue which will cause some pain later on (because it's invisible, will break in weird ways, and will be hard to find). With Rust the compiler will tell people that something is lacking synchronization.

There will obviously still be issues - e.g. if someone tries to be clever with atomics and misses the correct dependencies and ordering. Or just because some architecture is not perfect and things deadlock. But there are a lot fewer of the "oh, I didn't know this required synchronization at all" issues.
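
A small illustration of what that safety net looks like (just a sketch): sharing a non-thread-safe type across threads is a compile error, and the compiler points you at the `Send`/`Sync` requirements.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Sharing a plain Rc<Vec<i32>> here would not compile:
        //   "`Rc<Vec<i32>>` cannot be sent between threads safely" (Rc is not Send)
        // so you're pushed towards types that satisfy the Send/Sync bounds:
        let data = Arc::new(Mutex::new(vec![1, 2, 3]));
        let handle = thread::spawn({
            let data = Arc::clone(&data);
            move || data.lock().unwrap().push(4)
        });
        handle.join().unwrap();
        println!("{:?}", *data.lock().unwrap());
    }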


> In the next round of languages, we'll probably have to deal directly with non-shared memory. Totally shared memory in multiprocessors is an illusion maintained by elaborate cache interlocking and huge inter-cache bandwidth. That has scaling limits. Future languages will probably have to track which CPUs can access which data. "Thread local" and "immutable" are a start.

Interesting prediction. It is precisely the model of threading which Nim follows, global variables are thread local and sharing them is restricted. I'm not sure if the Nim designer thought of it in the terms you've described, but I will certainly point this out to him :)


It's the route JavaScript is taking too. JavaScript has traditionally been single threaded, so all the existing types are single-thread only. Multi-threading is being introduced (still very experimental atm) and the only way to have shared memory is through dedicated "shared" types.


I've settled into sort of a hierarchy. If I really want to hack out a small prototype quickly, I use NodeJS. Not having types helps me change things around quickly. If I already have a pretty good idea of my data model but still want to develop quickly, I'll use Go. If it's something 1.0.0+ and I want to make it as reliable as possible, I'd use Rust. My problem seems to be few of my projects ever get to that stage, so I'm mostly writing Go these days...


I never really understood this.

You still program with types when hacking out a small project with NodeJS. How does moving type errors from compile time to run time help you move faster? It always made me move slower.

Honest question.


To each their own. For small projects, I feel faster with a scripting language. When prototyping with Go, I constantly have to move back and forth between the type definition and the places I'm using the type. With JS you just use it. Lots of things like manipulating JSON or sending requests have less boilerplate as well. Like I said, once the data model is more firm I move faster with a typed language.


> When prototyping with Go, I constantly have to move back and forth between the type definition and the places I'm using the type.

I do the same but I don't feel slower doing it. I mean I have to move around between function bodies and callers, for example, when I'm hacking in python, so the two don't feel that different to me (moving between a user and a provider or definition of something when hacking on code).

But what does feel different to me is if I hack up some go and get a compile error, that feels MUCH faster and nicer to me than hacking up some python, running for a minute, and half way through hitting a runtime type error.


There's at least some speed up when prototyping - think having n functions, and wanting to do some sort of refactoring. You might want to iterate on some refactoring idea, and test it out on the most problematic of those n functions. Dynamic typing gives you the possibility to avoid refactoring the whole codebase (the parts touched by your refactoring, I mean) and test out the refactoring idea on just that function.


I'm not sure I'm convinced, though the idea is interesting.

I feel like if I was refactoring a function and that refactor had a small ripple effect - say I didn't have to change any data structures outside the function and didn't have to change many other functions (callers or callees) - then I don't think there will be much speed up.

And if this was a small refactor on a larger function, or a larger refactor which included changing data types used in other places, then I want to be sure everything I'm about to run as part of my experimental partial refactor was updated.

Sure, if the type system complains about some unrelated function which no longer compiles then I might lose some time commenting out that function or fixing it up, but also the type system may point out a spot I didn't think was part of my experiment but which really was, and I'll save time on runtime errors or debugging. Or it may point out an issue in unrelated code that I didn't think about but which may make the refactor infeasible.

In my experience this kind of thing is a wash, but I would not be surprised if other people's experiences differ.


> It's garbage-collected and does subscript checking, so you're covered on memory safety.

Go is only memory safe in sequential code, or concurrent code that never accesses shared data from multiple threads. It does not protect against data races as Rust does.


Go needs synchronized and immutable collections, or at least to start trusting users to roll their own. Concurrent slice appends will overwrite each other unpredictably, and concurrent map writes and reads can panic.


I’ve always found it better to avoid concurrent mutability as much as possible. That has worked very well for me, and I don’t generally have the negative experiences other Go programmers have complained about. Not as nice as static type system guarantees, but it’s available today.


It is not always feasible. And even when it is possible, it isn't without cost.


Yeah, you’re absolutely right that there is always some situation where some amount of shared mutable state is necessary, but there’s still a lot of fat to trim in most cases.


Rust only protects against thread-based data races; it does nothing to prevent data races across OS IPC with mmap or shmem segments.


Which is why mmap and shmem are unsafe features in Rust.


Only directly, one can easily depend on a "safe" wrapper for them.


What you say is prone to make people who are not familiar with Go think there is no safe way to access shared data from multiple threads in Go. This is not the truth. In Go, multiple threads can access shared data safely. When the code is implemented correctly (which is easy), these threads will never access the shared data at the same time.

Rust just makes this guarantee at the syntax level (at compile time), which is different from most other languages, but at the cost of slow compilation, rigidity, and a steeper learning curve.

BTW, what you say shows you don't understand the basics of Go at all.


> As I've said before, Go has the advantage of mediocrity. It's boring as a language, but it does automatically most of the things you need for web back-end stuff.

Given that Go is essentially a safer, smarter C, it boggles my mind to see it being used for applications where getting the business logic right is much, much more important than performance or complicated bit-twiddling.


I echo this. I've been exposed to some Go at work recently, and Rust in my free time. I've been learning both over the last few months.

Go is like C. More like C than any other mainstream language I use. IMO this is bad. C is very verbose and hard to read. Rust is more like C++. Close to hardware but many high level language features.

I could rant a lot about what I don't like about Go, but it boils down to this. No generics, bad package system, bad error handling, bad code generation, bad GC, limiting syntax encourages copy paste and bad abstractions, defying common conventions to be cute (like capitalization in variables), mediocre standard library.

I love Rust more as I get better. I already hate Go for some of the above reasons, I wouldn't even use it if it wasn't for work.

Google hypes Go like crazy; I'm convinced that's the only reason it's popular. I think there are many better options, even Java, that don't have many of the shortcomings listed above.


I have written a _lot_ of business logic in go, and I can say that imo it is a superb language for doing so. Vastly better than C, and I’m coming around to it being better even than my beloved python.


I'd be interested to hear your thoughts on why Go might be better than Python for business logic.


Mainly that it’s just so aggressively boring (which I love). No exceptions means you have to deal with your errors all the time, and it’s just generally very easy, when dropped into a random spot in the code, to figure out what’s happening. Very little magic.


Thanks


What language should be used in those business-first cases?


High-level, easy to read languages with large, robust ecosystems that your team has a high degree of proficiency coding.


I am looking forward to generics actually making it into the 2020 edition, as Go is becoming harder to avoid in my domain anyway.

I think ultimately Rust's biggest contribution will be pushing GC languages (tracing GC or RC) to also adopt some kind of early reclamation, similar to Swift's ongoing approach; then we will reach a good-enough situation and that will be it.


> Go has the advantage of mediocrity.

This is true in many aspects. Go shares many common feelings with other popular languages.

However, there are also some things in Go that are not mediocre. Being an almost entirely static language while remaining as flexible as (sometimes even more flexible than) many dynamic languages is one of those not-mediocre things in Go.


This is a feature of most static languages with reflection support, already available in the '80s.


Reflection contributes only very little to flexibility.


So what else is left?

Rediscovering protocol oriented programming like Objective-C in 1986, Interlisp-D in 1983, CLU in 1975 and Standard ML in 1997?


The type system of rust makes doing some "dumb" things impossible.

The type system of go makes doing some "smart" things impossible.

The choice comes down to whether or not you view depending on the expertise of your employees as a liability or an asset.


That's an interesting point of view. I don't have pleasant memories of deciphering third-party macros and templates in a previous consulting contract.

Pretty sure my client wasn't delighted either in having to pay expensive extra hours for me to cleanup the project.


Beyond a certain scale there are by definition no good programmers or bad programmers, there are only average programmers. So it seems pretty clear to me that depending on them to be experts is a losing proposition.


SDL had an ownership extension in the 90s: http://sens.cse.msu.edu/Software/Telelogic-3.5/locale/englis...


> Future languages will probably have to track which CPUs can access which data.

Sounds like the https://en.wikipedia.org/wiki/Actor_model with actors/virtual threads pinned to cores.

But which vendor is going to convince the industry to produce software that doesn't depend on the legacy cache coherence stuff so they can leave it off the die? I'm reminded of Itanium where "sufficiently smart compilers" to take advantage of VLIW seemed plausible but never really arrived.


Itanium only died because Intel was a victim of its cross-license deal with AMD.

Had AMD not been allowed to produce x86 clones, Intel would have pushed those Itaniums everywhere they could while ramping down x86 production.


It doesn’t speak well for Itanium if they shipped before they could deliver better bang/buck (even after customers’ porting costs) than the commodity architecture.


Agile product development.


> Not All Programming is Systems Programming

Personally I often use Rust for "non systems-programming" tasks, even though I really wish a more suitable language existed.

Rust has plenty of downsides, but hits a particular sweet spot for me that is hard to find elsewhere:

* Expressive, pretty powerful, ML and Haskell inspired type system

* Memory safe. In higher level code you have almost zero justification for `unsafe`, unless you really need a C library.

* Immutable by default. Can feel almost functional, depending on code style.

* Error handling (it's not perfect by any means, but much better than exceptions in my book)

* Very coherent language design. The language has few warts, in part thanks to the young age.

* Great package manager and build system.

* Good tooling in general (compiler errors, formatter, linter, docs generation, ... )

* Library availability is great for certain domains, decent for many.

* Statically compiled. Mostly statically linked. Though I often wish there was an additional interpreter/JIT with a REPL.

* Good performance without much effort.

* Good concurrency/parallelism primitives, especially since async

* Increasingly better IDE support, thanks to the author of this blog post! (rust-analyzer)

So I often accept the downsides of Rust, even for higher level code, because I don't know another language that fits.

My closest alternatives would probably be Go, Haskell or F#. But each falls short of the above list in one way or another.


I feel like it should be trivial to make a 'scripting' variant of rust. Just by automatically wrapping values in Box/Rc when needed a lot of the cognitive overhead of writing Rust could be avoided. Add a repl to that and you have a highly productive and performant language, with the added benefit that you can always drop down to the real Rust backbone when fine-grained control is needed.


Check out Rune: https://rune-rs.github.io/

It’s more of a prototype, but I think it’s in the direction you’re describing.


Similar to GP, I too have been wondering about an Rc'd Rust.

Unfortunately Rune and Dyon[0] are dynamically typed, which isn't so attractive to me.

More promising are Gluon[1] and Mun[2], both of which are statically typed. Of these two, Gluon has a somewhat alien syntax if you're coming from Rust (it notes it's inspired by Lua, OCaml and Haskell), so Mun is probably a better choice...but it seems very early, and the website notes as much (to serve the needs of a Rust scripting language I'd want seamless interop between it and Rust, which isn't quite there).

So I don't think there's anything in this space right now, but there are some promising options.

If you're willing to go a little further afield, I'm kind of interested in AssemblyScript[3] - it's 'just' another WASM-based language, so it's not a huge leap of imagination to believe there could be tooling to enable the seamless Rust interop. Just a matter of effort!

[0] https://github.com/PistonDevelopers/dyon

[1] https://github.com/gluon-lang/gluon

[2] https://github.com/mun-lang/mun

[3] https://www.assemblyscript.org/


Gluon looks great, but the book site is down for some reason, which is unfortunate since I am looking for something nearly exactly like this.


Why not D? It's suitable for system programming and much better than Rust for non-system programming due to the default GC (but you can disable that where appropriate).

It runs fast due to native compilation, and its compile times are much faster than those of its competitors, including Rust and C++.

D is also now testing memory ownership support inspired by Cyclone/Rust.

It can interface easily with C, C++ (via DPP), Python and R [1].

[1] https://dlang.org/blog/2020/01/27/d-for-data-science-calling...


I just wish they had more manpower and a clearer roadmap for what D should target.


> Expressive, pretty powerful, ML and Haskell inspired type system

It's great that they use this, but it's still difficult to program in a purely functional style in Rust the way you would in, say, Haskell, because of memory management. Closures can create memory dependencies which are too difficult to manage with Rust's static tools.


An interesting development in Haskell is Linear Types. It opens the possibility of non-garbage-collected Haskell. Still a lot of work left, but it's based on theory similar to that behind the Rust borrow checker (if I understand it correctly).

https://www.tweag.io/blog/2020-06-19-linear-types-merged/


A "non-GC Haskell" would also need some way to make strict and lazy evaluation equally idiomatic and freely interchangeable in the language. Some general approaches are known that would clearly help with this issue e.g. "polarity" and "focusing", but the actual work of designing such a language has not been done.


This is definitely true, but you can often get surprisingly far by just boxing, cloning, Rc-ing, etc. whenever you hit something like this.


the bigger issue is around the combination of closures, generic parameters, and traits. because higher-rank types can't be expressed and closures are all `impl Fn()`, you start to get an explosion in the complexity of managing the types and implementing the trait. try implementing something in the "finally tagless" style for an example of the kinds of things that can go wrong really fast. `Box` and `Rc` don't get you out of it and the resulting implementation is brittle and boiler-plate heavy in actual use.

from memory:

    use std::marker::PhantomData;

    trait Prop<'a, T> {
        fn p<F: Fn(&'a T) -> bool>(f: F) -> Self;
        fn eval(&self, t: &'a T) -> bool;
    }

    enum LiftedBool<'a, T, F: Fn(&'a T) -> bool> {
        // PhantomData keeps `'a` and `T` "used"; the recursive variants need
        // Box so the enum has a known size
        Base(Box<F>, PhantomData<&'a T>),
        And(Box<LiftedBool<'a, T, F>>, Box<LiftedBool<'a, T, F>>),
        Or(Box<LiftedBool<'a, T, F>>, Box<LiftedBool<'a, T, F>>),
        Not(Box<LiftedBool<'a, T, F>>),
    }

    impl<'a, T, F: Fn(&'a T) -> bool> Prop<'a, T> for LiftedBool<'a, T, F> {
        fn p<G: Fn(&'a T) -> bool>(f: G) -> Self {
            // this can't actually be implemented: the caller-chosen `G` (the trait's `F`)
            // doesn't necessarily unify with the `F` this impl is instantiated with, so
            // there's no way to build a `Base` from it
            // it can be made to work with `dyn` for the function argument but requires
            // boilerplate to actually use
            unimplemented!()
        }

        fn eval(&self, t: &'a T) -> bool {
            // obvious evaluation by cases
            match self {
                LiftedBool::Base(f, _) => f(t),
                LiftedBool::And(a, b) => a.eval(t) && b.eval(t),
                LiftedBool::Or(a, b) => a.eval(t) || b.eval(t),
                LiftedBool::Not(a) => !a.eval(t),
            }
        }
    }
this example can be fixed to work via `dyn` for the argument on the trait function and the struct itself but it becomes increasingly painful to actually work with. this kind of generic work is not at all ergonomic but very easy in Haskell.
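
for reference, the `dyn`-based version alluded to above might look roughly like this (my sketch, reusing the names from the example; not the original code):

    trait Prop<'a, T: 'a> {
        fn p(f: Box<dyn Fn(&'a T) -> bool + 'a>) -> Self;
        fn eval(&self, t: &'a T) -> bool;
    }

    enum LiftedBool<'a, T: 'a> {
        Base(Box<dyn Fn(&'a T) -> bool + 'a>),
        And(Box<LiftedBool<'a, T>>, Box<LiftedBool<'a, T>>),
        Or(Box<LiftedBool<'a, T>>, Box<LiftedBool<'a, T>>),
        Not(Box<LiftedBool<'a, T>>),
    }

    impl<'a, T: 'a> Prop<'a, T> for LiftedBool<'a, T> {
        fn p(f: Box<dyn Fn(&'a T) -> bool + 'a>) -> Self {
            LiftedBool::Base(f)
        }

        fn eval(&self, t: &'a T) -> bool {
            match self {
                LiftedBool::Base(f) => f(t),
                LiftedBool::And(a, b) => a.eval(t) && b.eval(t),
                LiftedBool::Or(a, b) => a.eval(t) || b.eval(t),
                LiftedBool::Not(a) => !a.eval(t),
            }
        }
    }

the generic parameter on `p` is gone, so any closure can go in now, but every use pays for the `Box<dyn ...>` and the whole thing is still brittle and boilerplate-heavy in practice, which I think is the point being made above.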


Yes, but doesn't the additional housekeeping negate much of the elegance of Haskell?


True. My biggest complaint there is actually lack of guaranteed tail call optimization.


Can you describe what's wrong with haskell, ocaml (which, incidentally, was a major inspiration for rust), or f#? They seem to tick most of the items on your list, though this one I don't understand:

> In higher level code you have almost zero justification for `unsafe`, except you really need a C library.

If you need a c library, you can build a 'trusted' wrapper around it as easily in rust as with any other language.


> Haskell

I adore Haskell, but my personal problems with it:

* Records! Lenses are fine, but a huge complexity increase. I love the recently accepted record syntax RFC though, this will make things a lot nicer.

* I feel like the plethora of (partially incompatible) extensions make the language very complicated and messy. There is no single Haskell. Each file can be GHC Haskell with OverloadedStrings or GADTs or ....

* Library ecosystem: often I didn't find libraries I needed. Or they were abandoned, or had no documentation whatsoever, or used some fancy dependency I didn't understand. Or all of the above...

* Complexity. I can deal with monads, but some parts of the ecosystem get much more type-theory heavy than that. Rust is close enough to common programming idioms that most of it can be understood fairly quickly

* Build system ( Cabal, Stackage, Nix packages, ... ? ), tooling (IDE support etc)

> F#

I admittedly haven't tried F# since .net core. I just remember it being very Windows-centric and closely tied to parts of the C# ecosystem, which brings similar concerns as in my sibling comments about Java.

> if you need a c library, you can build a 'trusted' wrapper around it as easily in rust as with any other language.

Sure, but if that wrapper does not exist, you have to build it yourself. I can say from experience that writing an idiomatic, safe Rust wrapper for a C library is far from trivial, so you lose the "I don't have to worry about memory unsafety" property.


> I admittedly haven't tried F# since .net core. I just remember it being very Windows-centric and closely tied to parts of the C# ecosystem

FWIW I did a small project recently in f#, using .net core (on linux), and it never seemed like a second-class citizen. Can't speak to the c# ecosystem, though.

> > if you need a c library, you can build a 'trusted' wrapper around it as easily in rust as with any other language.

> Sure, but if that wrapper does not exist, you have to build it yourself. I can say from experience that writing an idiomatic, safe Rust wrapper for a C library is far from trivial, so you lose the "I don't have to worry about memory unsafety" property.

Right. My question is, why is this a mark in favour of rust? It doesn't seem to distinguish it at all from the other languages you mention.


> I feel like the plethora of (partially incompatible) extensions make the language very complicated and messy. There is no single Haskell. Each file can be GHC Haskell with OverloadedStrings or GADTs or ....

Aren't macros, especially proc macros these days in Rust, having the same effect? Personally I feel like this is a tradeoff every language has to play with: you either limit yourself to a special way of writing, or add some sort of ad-hoc system that enables rewriting syntax and even, to a degree, semantics.


Anecdotally, proc macros just aren't that common. Almost every Haskell tutorial I read introduces language extensions, and it seems like many users have a set of extensions that they always enable by default. I don't think proc macros are really comparable in that sense, although maybe they will be in the future?


Overly powerful proc macros aren't in common use; most common proc macros are either ones that automatically derive a trait, or ones that serve as attributes on methods or functions and perform some transformation of the source code (without introducing a DSL).


One of C++‘s biggest draws historically is that it’s not just used to write systems software. Rust could fill a similar spot.


What is C++ used for besides systems software?


Office. Photoshop. Mozilla. Before Android (Kotlin/Java), iOS (ObjC/Swift), and Electron (JS), most GUI apps from the 1990s and 2000s were C++. Also most of Google's search backend (later they started mixing in Java microservices).


Games?


You might like Scala and/or Kotlin. You listed Go, but Go’s type system is very weak, as is Go’s support for immutability, two problems that Scala and Kotlin don’t share.


I conceptually love Scala, but in practice it has a lot of downsides.

* Tied to the JVM

* You constantly have to use Java libraries. Which means you have to understand the Java ecosystem. Which increases complexity a lot, since that ecosystem is very mature but also often deeply layered, complex and full of historical baggage

* It can be used in so many different ways. Nicer Java on one end. Almost Haskell on the other (with cats etc).

* The flexible syntax and multi-model concept leads to very variable and non-coherent code styles and libraries

Kotlin is also great, but the type system is less powerful. And the same JVM and Java library ecosystem concerns apply. ( I've jokingly heard "we don't talk about Native" multiple times... )


Scala has been my primary professional language for the past 5 years, and I actually don’t have to use Java libs very often anymore, the pure Scala ecosystem has really caught up. When you do have to use them, they’re normally pretty easy to wrap.

Totally agree with the plethora of ways to use it being an issue, though. “Classic Scala” and “let’s write Haskell in Scala” is a real split in the community, and a significant downside. Then you have Akka too, though at least that solves different problems (highly stateful AND highly concurrent, which is a pretty rare mix).


> You constantly have to use Java libraries. Which means you have to understand the Java ecosystem.

That's not so true in my experience. You have to reach for Java libraries in the same cases where you'd have to reach for C libraries in other languages. When you're using a Java library it's usually to do something specific (because using the big Java frameworks makes no sense), so the complex parts of the Java ecosystem don't really affect you.


> You constantly have to use Java libraries. Which means you have to understand the Java ecosystem. Which increases complexity a lot, since that ecosystem is very mature but also often deeply layered, complex and full of historical baggage

This problem is bigger in Clojure, which has a smaller ecosystem than Scala, and is why I don't pick it for writing "real world" applications.


Personally I'm not very excited about making Java a hard dependency, or the fact that Scala can't be bootstrapped like most other languages. [0]

[0]: https://bootstrappable.org


Your 2nd point will stop being true very soon. Scala 3 will apparently be coming out sometime this fall (https://www.scala-lang.org/blog/2020/09/15/scala-3-the-commu...), and the compiler will be the Dotty compiler. Dotty is bootstrapped (https://dotty.epfl.ch/blog/2019/05/23/15th-dotty-milestone-r...).

The first point is true, but IMO not a big deal for most applications. Scala native exists, but is very immature. Still, if you’re writing web services, data processing jobs, etc., I don’t see much of an issue with depending on the JVM. Definitely an issue with things like command line tools, or apps with super tight resource requirements, but that’s not the case for a tonne of software.


> Personally I'm not very excited about making Java a hard dependency

Why be concerned about this? Either way you're making a hard dependency on some language's runtime, be it Go, Rust, or the JVM, and I see it sticking around a lot longer than the others.


Maybe you're still tied to their standard library, but technically Rust doesn't have a runtime.


Many languages compile to assembly so that you can distribute them easily. Trying to send a friend a compiled tool is a billion times easier than asking them to install JDK / Node /etc.


At one point in time, I thought Scala would be this, but it very much isn't. It feels bulky and lacking in orthogonality to me, and its tooling leaves a lot to be desired. (Note: this might be true of Rust too once it is as old as Scala.)

Kotlin, though, yep, big fan.


What kind of programming do you do? I feel like this is important context for understanding your perspective.


Did you try Swift? It is conceptually and syntax wise very similar to Rust.


My understanding is that as soon as you target a non-Apple device, you lose a large chunk of the API (Foundation). This includes things like file system interaction, date handling, sorting/filtering, networking and text processing.

My understanding is there is some effort to provide a version that works outside of Apple OSes, but it’s incomplete https://github.com/apple/swift-corelibs-foundation/blob/mast....

If you are looking to statically link the stdlib, it looks like there's still an open bug on Linux. https://bugs.swift.org/plugins/servlet/mobile#issue/SR-648


FWIW, as an iOS-centric programmer who’s also implemented a bunch of scripts and several moderate API projects in Swift/Linux, I haven’t found the OSS Foundation to be particularly lacking. The only quirk I can remember having to work around is the fact that for some reason, all the networking code (i.e., URLSession) is pulled out into a separate module, FoundationNetworking.

FWIW, looking through that list you linked to, it seems most things are complete apart from relative edge cases (NSGeometry), fairly Apple-specific (Bundle), or replaced by something newer (NSCoder). The big one that jumps out to me as being a larger issue is the URL authentication stuff… so I guess steer clear if you need to handle basic auth/Kerberos/etc. The XML parsing being mostly incomplete is a bummer, too. :(

Ultimately, though, I’ve found Swift on Linux to have everything I need for what I’ve been building with it. Obviously, YMMV, but (and not just for you, anyone else reading as well), I wouldn’t be scared off just because Foundation isn’t 100% complete.

###

Edit: that being said, this is also just a defense/rebuttal to the comment I’m replying to. Swift isn’t the answer to all projects; I still reach for Clojure as my first choice for large API projects, mostly due to my own preferences.


It’s been a while since I had checked, so I’m glad to see it’s filled out a lot more. It seems like the language evolution has slowed down a bit too. The last time I had given it was a little after the swift 3 release which had quite a lot of breaking changes. I let my previous experience here prevent me looking deep enough into it.

I’ll have to give it another shot.


I did, and I think it's a great language in many ways.

My biggest problem is that, as far as I know, it still is very much second/third class citizen on non-Apple platforms. With little signs from Apple that this will change. (Happy to be corrected here!)

Other (more minor) complaints are the Objective-C baggage and the inheritance system - I enjoy the lack of inheritance in Rust.


Swift does indeed hit a lot of the same sweet points as Rust whilst having automatic memory management and generally more user-friendly defaults (at the cost of performance). Alas, it doesn't have anywhere near the library ecosystem that Rust has in most domains, and its cross-platform support is also quite poor.


I'm a Swift programmer. I've also used plenty of other languages. I don't know enough about Rust to either sing its praises, or curse its name.

I'm a bit leery to label Swift as a "systems language," as I feel that systems languages should be a wee bit closer to the bone (like ObjC); but that also means that I am not a fan of using systems languages to write application code. I wrote both levels in C and C++ for years, and HATED using those languages for app-level code. Swift, to me, is an almost ideal application language.

But I don't write system code anymore, and have no desire to. I love writing app-level code, and love Swift.

It's been a long time since I've had the pleasure of employing a language I actually like using.


> Expressive, pretty powerful, ML and Haskell inspired type system

This is not a reason to use Rust. It's like saying "I like Ferraris, because they drive fast", failing to specify WHY you need to drive fast (you usually really don't, unless you are on a race track).

> Memory safe. In higher level code you have almost zero justification for `unsafe`, except you really need a C library.

Java, C#, etc.

> Immutable by default. Can feel almost functional, depending on code style.

Okay, however this isn't a big win in practice. Java has this too with @Immutable and mostly that's enough.

> Very coherent language design. The language has few warts, in part thanks to the young age.

Yes, Rust looks fine. Let's see how it evolves. Coherent language design is however again not a reason to use rust, because it fails to mention WHY you need it and WHY it solves a business problem better than Java, C# or C++.

> Great package manager and build system.

Yeah, pretty much all modern languages have that, so it's not even worth mentioning. Rust builds are slow, so there is that.

> Great tooling in general (compiler errors, formatter, linter, docs generation, ... )

Really? Ever used Java or C# tooling?

> Library availability is great in certain domains, ok most.

Compared to what? C++? I don't even think it's true there. When comparing to Java or C#, library availability and quality is a joke in Rust.

> Statically compiled (though I often wish there was an additional interpreter/repl). Mostly statically linked.

Yeah... What do you need that for? Again no mention of why that's even useful.

> Good performance without much effort.

Like in Java, C# and C++ you mean?

> Good concurrency/parallelism primitives, especially since async

Async is a paradigm that received a lot of criticism lately. It turns out to be cancerous. Fibers will likely replace it and Java is getting it soon. Otherwise yeah, concurrency is something any modern language should solve and maybe Rust has a head-start here. The whole purpose of Rust is focused around safe multi-threading. However other languages don't sleep. You don't switch your company to Rust just because it does one thing better for a couple of years. Other languages will catch up soon.


You seem to be very convinced that Rust's main competitors are Java and C#. I don't think this is usually true. Rust's main competitors are C and C++. I agree that in most cases where Java and C# are possible options they should be preferred over Rust. There are however many situations where neither Java nor C# are options.


> There are however many situations where neither Java nor C# are options.

Not too many anymore, and the space for these situations shrinks over time, rather quickly.

20 years ago, almost everything was implemented in C or C++, for all platforms. The computers didn’t have extra RAM for the GC overhead.

Web sites were the first to migrate, probably because of security. Desktop GUI apps followed.

On mobile, before iPhone was launched in 2007, people used a lot of C or C++ there. PalmOS only supported C, Symbian SDK was C++ based, most Windows Mobile apps were written in C or C++. For the same reason, not enough resources for a GC. We now have gigabytes of memory in these devices and it’s no longer the case, on Android it’s almost 100% Java or Kotlin, on iOS it’s Swift.

This is happening with videogames, see Unity3D. Even low-level soft-realtime multimedia is good enough for .NET now; here's an example where my C# implementation delivers the same performance as the C implementation in VLC player: https://github.com/Const-me/Vrmac/tree/master/VrmacVideo

Pretty sure in a couple of years, once MS improves a few relevant things in their .NET Core (they need better constant propagation, and NEON support), we'll see high-performance numeric stuff following the same trajectory. The hardware SIMD support in .NET Core 3.1 is a good step in the right direction.


> Programmer’s time is valuable, and, if you pick Rust, expect to spend some of it on learning the ropes.

Of all the arguments, time-to-productivity may be the most compelling. Rust will keep most new programmers from being productive far longer than any other language. I'd say 3x minimum.

What's a little surprising, though, is how many of the difficulties beginners have stem from just one concept: ownership.

Counterintuitively, ownership is quite simple. But what's mind-bending about it is that it doesn't exist in any other language a beginner is likely to have used. Yet ownership pervades Rust, sometimes in very hard to detect ways. And perversely, it's possible to write a lot of Rust without ever "seeing" ownership thanks to the compiler.

Eventually, though, the beginner comes face-to-face with ownership without recognizing the trap s/he's fallen into. Ownership isn't something you can "discover" by messing around in the same way that most language features can be teased apart. "Fighting with the borrow checker" is actually a symptom not of a struggle with a feature but of a struggle with a basic, non-negotiable concept that hasn't been learned.
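
For anyone who hasn't hit it yet, the core rule really is small (illustrative):

    fn main() {
        let s = String::from("hello");
        let t = s;              // ownership of the String moves to `t`
        // println!("{}", s);   // error[E0382]: borrow of moved value: `s`
        println!("{}", t);      // `t` alone is now responsible for the String
    }

The hard part isn't the rule; it's noticing all the places it applies.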

I can recommend this video for getting over the hump:

https://www.youtube.com/watch?list=PLLqEtX6ql2EyPAZ1M2_C0GgV...

In my experience, Rust productivity shoots up by a lot given a basic understanding of ownership.


Besides the learning difficulty, I'd say there are some real ergonomic issues related to ownership, such as the difficulty of cloning into a closure [1], or nested &mut receiver calls being rejected despite being sound [2].

[1] https://github.com/rust-lang/rfcs/issues/2407

[2] https://github.com/rust-lang/rust/issues/6268
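
For what it's worth, [1] is about boilerplate like this (a sketch; the names are made up):

    use std::sync::Arc;
    use std::thread;

    fn main() {
        let config = Arc::new(String::from("settings"));

        // A bare `move ||` would move `config` itself, so today you add an
        // extra binding whose only job is to clone into the closure:
        let handle = thread::spawn({
            let config = Arc::clone(&config);
            move || println!("worker sees {}", config)
        });

        handle.join().unwrap();
        println!("main still has {}", config);
    }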


"But what's mind-bending about it is that it doesn't exist in any other language a beginner is likely to have used."

This is true about killer features in any language that is above the 'average' on the power spectrum ( http://www.paulgraham.com/avg.html ). In fact, even beyond beginners, I see resumes for people all the time with 20+ years who have only used Java/Python/C/C++. If you are evaluating languages for power or expressiveness, not just tooling/libraries/domain integrations, you're going to have longer ramp up time on average, regardless of the seniority of the developer, because you're by definition looking for features that aren't average (and therefore not mainstream).


Anyone who doesn't like my favourite language is a blub programmer.


Do you know C++ well? Could you compare time to productivity for C++ and Rust?


I think it depends where you spend time in each language and what projects you do.

I use both regularly, and with Rust my time to productivity is faster because I'm not dealing with a build system first and with splitting code between headers and implementations.

I find rust is also better for letting me keep less of the program mentally in mind since I can localize my thinking about data a lot more.

Where rust is worse for productivity is the learning curve is high for many (though personally I found it much less so than C++ given how large the language has gotten). The other issue is most libraries I deal with are built in C++ so I have to either bridge them or drop back to C++


I'd be interested in this as well. C++ seems like a complex beast, full of legacy code and learning materials. I feel I could likely be writing decent Rust code sooner than I could C++.


And 40 years of history; wait until Rust survives for 40 years to see what it looks like.


Rust in prod has been bittersweet for us. Our main goal was to 1) do our job and 2) leverage some of the great promises of Rust.

Deterministic memory management and bare metal performance are great and have been realized benefits. The great promises were realized.

On the “do your job” front though, the lack of a good STABLE library ecosystem has been a real issue and big source of frustration. It seems that most library developers in the Rust community are hackers writing Rust for fun, and I do not say that in a negative way. But the consequence is that things are usually not super well maintained, but more critically are targeting Rust Nightly (which makes sense as Nightly has all the new cool compiler stuff).

Add the scarce professional talent pool, the unavoidable steep learning curve, the bus-factor risk... It’s just hard to justify pushing (professionally) more Rust beyond its niche.

With Mozilla pulling out (to some extent) and the big focus on WebAssembly... it just feels off if all you want to do is build boring backends.

The contrast with Golang’s “boring as a feature” is quite interesting in that regard.

Time will tell if Rust will make it to the major leagues, or will be another Haskell.


Most libraries don’t target nightly anymore. It certainly used to be the case that nightly had all the cool features everyone wanted, but almost all features that popular crates depended on have now been stabilized. Even Rocket (the most high-profile holdout I know of) now works on stable as of earlier this year.

As for maintenance, as with all library ecosystems it’s a mix. The most popular crates tend to be the most well-maintained in my experience. This is definitely something to consider when taking on new dependencies.


That matches my observations.

There's no doubt that Rust as a language checks all the marks (the complexity of Rust is IMO unavoidable given such a feature set).

What most discussions about the greatness of Rust miss:

1. Not language features, but teams and processes make successful projects.

2. Stability, maturity, and a great standard lib (like Go's) or ecosystem (like Java's) beat language features.

Rust is a great playground for enthusiasts. But for your production apps you're probably better advised to use the boring stack.


Many things that used to require nightly don't anymore. I think the only thing I'm using on nightly is Rocket, and that's set to change soon. May I ask what it was?


parquet-rs: to read or write parquet files, cf. the Arrow project: https://news.ycombinator.com/item?id=23966150 where one person asking about it triggered a ticket in Arrow's JIRA. I am not holding my breath (a 2.5-year-old issue on GitHub about the exact same thing appears to be dead: https://github.com/sunchao/parquet-rs/issues/119)

The web framework situation has also been a problem for quite some time. No clear stable equivalent to Jetty, Flask, or Go's net/http. Good that Diesel (which looks great) is making it to stable. I really wish it had been stable 3 years ago.

On that note, it would be great if crates.io had stable/nightly compatibility as a primary concept/badge/filter. I can't think of any other mechanism to nudge library developers into supporting stable as a first-class citizen.


I would love the stable/nightly badge on crates.io, that's a great idea. OTOH, it is nice that some libraries use nightly, because that provides some useful feedback. That said, some crates hide nightly-only functionality behind a feature flag, and it'd be great to expose that too.

The web frameworks seem to be coming along nicely. I think the main reason for stagnation was the instability and flux around async. Actix (stable), Warp (stable) and Rocket (stable in the next major release, as all nightly features have stabilized) seem to be the strongest contenders at this point.


> (stable in the next major release as all nightly features have stabilized)

I initially misread this comment; what you are saying (and what is true) is that all the features it was using are now stable, but the version that uses them hasn't been released yet. I initially thought you were saying "once all its features are stable."


I (hobby gamedev) moved from Rust to C a few weeks ago, here's something that made me switch:

1. Compilation time (no more words needed)

2. Forced to use Cargo for managing dependencies. The downside of a good dependency toolchain is that people will use it and there will be tons of indirect dependencies. For cross-platform development it's very common to find that one of your indirect dependencies doesn't support the platform you wanted, and there's no way around it (even if you know how to fix it, you'll have to submit a PR and wait for the merge and the crates.io release (takes years), or download the whole dependency chain and use everything locally, which is a total mess)

3. Too strict. As a hobby gamedev the thing I value the most is the joy of programming. I don't want to spend all my time trying to figure out how to solve a borrow checker issue (you'll always face them no matter how good you are at Rust, and I don't want to use Rc<RefCell<T>> everywhere), and I just want to be able to use global variables without ugly unsafe wrappers in my naive single-threaded application.
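
For anyone who hasn't seen it, point 3 is about code that ends up shaped like this (a made-up sketch):

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Player { hp: i32 }

    fn main() {
        // shared, mutable, single-threaded game state wrapped in Rc<RefCell<T>>
        let player = Rc::new(RefCell::new(Player { hp: 100 }));
        let for_enemy = Rc::clone(&player);   // another owner of the same state
        for_enemy.borrow_mut().hp -= 10;      // borrow rules checked at runtime
        println!("hp = {}", player.borrow().hp);
    }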

As a language I love every aspect of Rust, just ergonomically I don't want to deal with it now.


> (even you know how to fix it you'll have to submit a pr and wait till the merge and crates.io release (takes years), or download the whole dependency chain and use everything locally which is a total mess)

For what it's worth, cargo supports patching dependencies without maintaining the whole dependency chain locally. See here: https://doc.rust-lang.org/cargo/reference/overriding-depende...
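
For example, something like this in your own Cargo.toml (the crate name and URL are made up):

    # use your fork instead of the crates.io release, across the whole dependency graph
    [patch.crates-io]
    some-crate = { git = "https://github.com/you/some-crate", branch = "my-fix" }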


That's very good to know! Thank you. Yes, such a feature is very much needed in a package manager.


Yeah. Rust is a language that's pleasant to read about, but not one I wish to write in.


I hope this isn't too much of a tangent given the article's subject is details and specifics. I adore Rust for its unique combination of features:

- Can write fast, systems-level code; something typically dominated by C/C++

- Makes standalone executables

- Modern, high-level language features

- Best-in-class tooling and docs (As the article points out)

I'm not familiar with another language that can do these together. It's easy to pigeonhole Rust as a "safe" language, but I adore it on a holistic level. I've found Rust's consolidated, official tooling makes the experience approachable for new people.


It's the modern high-level language features that get me. Despite Rust technically being a lower-level language, they are so well implemented that I often find Rust more enjoyable to code something in than other supposedly friendlier languages. The killer is the compile times. If Rust had Go-like compile times it would be almost perfect.


The expressions "high-level" and "low-level" are almost meaningless in this context. Rust, like C++, has a broad range.

In any big system, almost all of the code will be doing high-level-ish things no different than you would do in a Python program. At unpredictable points it will need to dip into low-level-ish activities, where it might end up spending the most runtime. If you were writing Python, you might call out to a C or C++ module, then.

It is this broad range that defines a modern systems language. It has organizational features that make a big system manageable, and features to get full value from hardware. People used to try to use C as a systems language, with well-known failings.


You get accustomed to the slow compile times of Rust over time. Then you do some Go stuff and get extremely surprised by how quick it is, almost as if Go's cheating. But it's just rustc being slow :).


You don't achieve success by getting (some) people accustomed to your failings. That is a route to failure.

There are many, many routes to failure. Staying off them doesn't guarantee success; it's table stakes.


Sadly I am spoiled by Delphi, C++ Builder, VC++, D, Eiffel,...


Rust is a high-level language. You mean "a language which allows low-level access to hardware", right?


"high level" and "low level" have somewhat fluid meanings, and in the case of Rust, aren't ideal for describing the language on the whole. Eg: You can do "low level" things like read a MCU register using a raw pointer. You can take advantage of "high level" abstractions like Result types.


High level language is platform independent. Low level stuff can be done from almost any language. I used peek and poke in Basic to access hardware registers 30 years ago.


> Modern, high-level language features

I think this is something the article overlooked. Sure, Rust has the ability to be a system level language, but you don't have to use it that way and the breadth of good, albeit commonly immature, libraries is testament to this. Rust enums and pattern matching are some of my favourite features in any language.

Python is often my go to for writing tools, but I mistrust its typing and how it behaves when a library throws an exception - not forgetting that pylint warns if you try to catch Exception. Conversely, Rust error handling feels a bit more verbose, but I worry less about what it does in failure scenarios. I either handle the error or pass it up the stack and carry on with my day.


> Conversely, Rust error handling feels a bit more verbose, but I worry less about what it does in failure scenarios

If you're just writing a tool, you can have main return a Result with Box<dyn Error> and then just use `?` everywhere. The code doesn't really end up that much more verbose other than needing to add a return type everywhere, which can be mitigated with a type alias if needed.
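
A minimal sketch of that pattern, assuming a hypothetical tool that reads a config file:

    use std::error::Error;
    use std::fs;

    fn main() -> Result<(), Box<dyn Error>> {
        let config = fs::read_to_string("config.toml")?; // io::Error propagates via ?
        let port: u16 = config.trim().parse()?;          // ParseIntError propagates too
        println!("listening on port {}", port);
        Ok(())
    }

Anything that implements std::error::Error converts into the boxed return type when `?` fires, so the error handling stays uniform across the whole tool.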


That's a good approach if you don't care about the nature of the error and plan on logging it or outputting it as is, but you lose any metadata for the error unless you want to use unsafe to get it back out. What I'd rather do is use pattern matching for control flow in error handling, e.g. was this a server error, or was I given something invalid? Depending on the error, I might want to try again.

With Python, you can create a rich hierarchy of errors using inheritance, but to achieve something similar in Rust the best option I've come up with is maintaining an enum that wraps the different error types. It feels clunky, so I hope there are better ways.
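
For what it's worth, the hand-rolled version of that enum usually ends up looking something like this (names are illustrative); the From impl is what keeps `?` working, and the match is where the retry decision lives:

    use std::fmt;

    #[derive(Debug)]
    enum FetchError {
        Io(std::io::Error),   // transient, maybe retry
        InvalidInput(String), // caller error, give up
    }

    impl fmt::Display for FetchError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                FetchError::Io(e) => write!(f, "I/O error: {}", e),
                FetchError::InvalidInput(msg) => write!(f, "invalid input: {}", msg),
            }
        }
    }

    impl std::error::Error for FetchError {}

    impl From<std::io::Error> for FetchError {
        fn from(e: std::io::Error) -> Self {
            FetchError::Io(e)
        }
    }

    fn handle(err: FetchError) {
        match err {
            FetchError::Io(_) => eprintln!("transient failure, will retry"),
            FetchError::InvalidInput(_) => eprintln!("bad request, giving up"),
        }
    }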


If you're interested in generating a lot of the boilerplate for an error enum, I'd highly recommend `thiserror` (https://crates.io/crates/thiserror). It's made by the same person who made a lot of the other prominent proc macro libraries in the Rust community (e.g. serde).
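
To make that concrete, roughly the same enum as in the parent comment, with the Display and From boilerplate generated by thiserror's derive macro (usage as documented on crates.io; exact attribute syntax may vary by version):

    use thiserror::Error;

    #[derive(Debug, Error)]
    enum FetchError {
        #[error("I/O error: {0}")]
        Io(#[from] std::io::Error),
        #[error("invalid input: {0}")]
        InvalidInput(String),
    }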


That looks exactly like what I'm looking for! Many thanks for the recommendation.


In the section on "Rust is big and you have to learn it" the author was too nice to people like me: I'm just not smart enough to manage the demands Rust puts on me while also solving my own problem.

I feel like this is a really serious problem for Rust: the number of people capable of making the right choices among the 10 ways to structure a class is too small. This is also why there's too much dangerously bad C and C++ in the world.


Yea, I think Rust may be hurting currently for conventions or "best practices". The benefit of most "best practices" isn't usually that the opinions actually have any merit, but rather that they are consistent and meld into the background. Has any large company that uses Rust published a well-regarded and well-adopted detailed style guide in the way that Google publishes style guides for the languages they use?


You probably won't find many written-out style guides for Rust, since rustfmt and clippy, a strongly community-endorsed code formatter and linter, have been available since (almost) day one. With that, you have most of the "best practices" in the form of a tool already.


Clippy catches a lot and I can't imagine coding without rustfmt.

But there's more to style guides than formatting and Clippy lints. Some designs will pass Clippy (or worse, lead to Clippy warnings far past the point where a redesign is economical) but be inefficient, inflexible, or have a poor effect on compile speeds.

One major problem I experienced in my last job (where I worked on production rust) was the insane amount of macros and proc macros used. This was an originally C++ heavy shop so they leaned on it more than, say, a python shop moving to rust would.

This led to terrible compilation times and confusing code only certain engineers could maintain (I'm guilty here - I've since left and I know of one proc macro I wrote that will cause headaches to anyone who uses it).

We need an opinionated style guide. The language is so complex we probably need multiple, with different trade offs.


> One major problem I experienced in my last job (where I worked on production rust) was the insane amount of macros and proc macros used.

Is there any language that allows full-fledged macros that doesn't have projects that descend into this?

This seems like a standard problem with programmers deciding 1) they don't like the language semantics and use macros to adjust that (then why are you using the language) or 2) over-factoring before you really have enough reuse.


I'm not convinced totally that's the case, but I'm definitely more open to that idea after that job. I've stopped using macros in my side projects and I avoid projects that use them now.


I'm not sure that style guides are the right way to tackle such arising problems. That sounds more like completely separate problems that are each best explained in longer blog posts and learned by the engineers over time.

Most of the popular style guides from big companies that I know from other languages operate on a much more trivial level, where each of the bullet points can be quickly explained and generally isn't more complex than a clippy lint (which is why I mentioned that).


I imagine the advent of more functionality being available in `const fn` would probably reduce a lot of the `macro_rules!` and proc macro use - at least it has for my projects.
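
A tiny example of the kind of thing that used to call for a macro or build script but is now just a const fn evaluated at compile time (illustrative only):

    const fn bit_mask(n: u32) -> u32 {
        (1u32 << n) - 1
    }

    // Computed by the compiler; no macro_rules! or build.rs involved.
    const LOW_NIBBLE: u32 = bit_mask(4);

    fn main() {
        assert_eq!(LOW_NIBBLE, 0x0F);
    }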

Definitely would like to see a more in-depth guide to patterns for structuring large code bases. I've finally dialed things in pretty well in my own head, but it took quite a while to get there, with a lot of lessons learned the hard way.


I'm thinking specifically of something like Google's C++ or Go style guides, that focus on content rather than formatting. Go in particular like Rust has a standard formatter.

https://google.github.io/styleguide/cppguide.html

https://github.com/golang/go/wiki/CodeReviewComments


Not many people just come out and say it, but I think Rust is an ugly programming language. That's a good enough reason for me.


I think that is correct, but I also find Rust to be an unfriendly language and this is a very widespread problem in software engineering: our tools are generally over-complicated and people are expected to just deal with it as a sort of hazing ritual. We've also come to associate one's worth as a programmer with how many of these arcane and opaque tools they can master.

It's much easier to invent a complicated tool and come up with excuses about why all the complexity was required than to invent a tool which makes people happy and productive. Of course some people are rather weird and they're happy with baffling tools, but I'm talking here about the majority. Garbage collection is the kind of invention which freed countless programmers from having to painstakingly worry about memory management. It's not perfect, as it doesn't handle all resources, but it is doubtlessly a step in the right direction.

Now if you look at Rust, their solution to memory management is to force the programmer to add countless annotations so that the unintelligent compiler can figure out what's happening. This is a step in the wrong direction and I certainly hope that this is only a local optimum that will be surpassed, otherwise the future of system programming looks quite sad.


FWIW, I’ve written lots of Rust, and I’ve only had to use lifetime annotations (which is what I assume you’re talking about) a few times. The vast majority of the time the compiler allows you to elide lifetimes.
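
For example, the two signatures below are equivalent; the second just spells out the lifetime the compiler infers for the first, which is why most code never writes it:

    fn first_word(s: &str) -> &str {
        s.split_whitespace().next().unwrap_or("")
    }

    fn first_word_explicit<'a>(s: &'a str) -> &'a str {
        s.split_whitespace().next().unwrap_or("")
    }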


I took a lot of heat for that in a recent thread. There's really no convincing Rust's most vocal fans of what a mistake it was to focus on fixing so few of C++'s problems when they had the opportunity to fix the its-grammar-is-dogshit problem (and slow compile times) in the same stroke but completely failed to do so. (Votes were all over the place. At one point that comment was up 10–15 points IIRC even with some opposition, until the Aucklanders woke up and buried it in downvotes. And that was when I was being a lot more polite about it than I am now.) C++ programmers seem to have a permanently altered sense of what constitutes "tastefulness".

Every time I think of the ugliness of Rust, I'm reminded of the comments from Stallman where he acknowledges the elegance of Java as an evolution of C's syntax, even though he never reached for Java for himself, having always stuck to C and Lisp instead. Then he goes on to trash C++.


Are you saying Rust didn't fix the grammar problem?


Could you try to explain why you think rust is an ugly programming language?


• Rust is focused on being explicit rather than pretty/elegant. So there's sigil-heavy code like `&/&mut`, `move||`, `Arc<T>` for details that are implied in higher-level languages.

• Rust wants to have easy-to-parse unambiguous syntax. This makes turbofish required when you have a type in expression context. You need `->` for the return type.

• Rust uses generics, and angle brackets are ugly. It also mandates `{}` in blocks, but allows skipping `()`, which gives you a high ratio of ugly brackets to the nice round ones :)

• Rust wants to make up for the cost of explicitness by adding abbreviations. So you have `fn` that's both explicit and terse. Instead of verbose `switch/case/default`, Rust's `match` uses patterns and `_ =>`.

Rust designers pay a lot of attention to the syntax, but it's designed from a very practical point of view (it is clear? unambiguous? easy to write? composes with other syntax?), and "does it look neat?" is very low on the priority list.
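
A small illustration of several of those points at once: turbofish to name a type in expression position, `->` for the return type, and `match` with patterns and a catch-all arm:

    fn classify(input: &str) -> &'static str {
        let parsed = input.trim().parse::<i64>(); // turbofish
        match parsed {
            Ok(0) => "zero",
            Ok(n) if n > 0 => "positive",
            Ok(_) => "negative",
            Err(_) => "not a number",
        }
    }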


You sound like the kind of guy who prefers CoffeeScript over JavaScript; YAML over JSON. Every single programming language has punctuation -- the "punctuation-free" languages have all just decided that their punctuation should be invisible.


I would call the design "rugged". The syntax is well thought out actually.


That's a good way to describe the feel. I think it only truly gets ugly when you do something like Arc<Mutex<Box<HashMap<String, Vec<String>>>>>. But the beauty of "(almost) everything is an expression" makes up for that.
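
In practice that kind of nesting usually gets hidden behind a type alias (and the Box in the middle is typically redundant, since HashMap already stores its data on the heap); a sketch:

    use std::collections::HashMap;
    use std::sync::{Arc, Mutex};

    type SharedIndex = Arc<Mutex<HashMap<String, Vec<String>>>>;

    fn new_index() -> SharedIndex {
        Arc::new(Mutex::new(HashMap::new()))
    }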


Sure, but compare it with C++.


A big thing missed here is compatibility. The systems programming market is huge but most of it is covered by C/C++ programs, many of which are giant codebases with millions of engineer hours inside them. You can't just rewrite them in Rust. And Rust will always be second place when it comes to interacting with C/C++ codebases; C++ will always be better at it, even if only because of the explicit safety barrier.

So languages like Rust are relegated to picking up the new greenfield codebases and some few codebases where engineers felt courageous enough to introduce it.

Overall, the signs are good though. I think it's much easier to onboard new engineers to Rust rather than to the company specific dialect of C++.


I'm hoping that with the advent of projects like autocxx [1], the C++ compatibility story will rapidly change. Templates and some of C++'s more exotic features will always be an interop issue but Rust's ergonomics and type system are addicting.

I've been experimenting with "bidirectional" (for lack of a better word) refactoring in autocxx where the bulk of the C++ implementation stays the same but wrapped in a safe Rust interface (using autocxx to generate the underlying interop) that is then reexposed by another run of autocxx to the rest of the C++ code. New Rust code can use the safe interface while old code is refactored to use the now (slightly) safer wrapper over the Rust interface. This allows for very intentional, piecemeal refactoring across the code base.

Rust's compatibility with other languages like Python and Javascript is already, imo, second to none thanks to a few well designed libraries and the macro system. The only language that comes close is C# with its IronPython integration, except with Rust it's a lot easier for library authors to integrate runtimes. It took me less than a day to construct a JS/Python monstrosity with an absurd call stack (Node => Rust => Python => Rust => Node => Rust => Python) when I needed to wrangle together a bunch of legacy proprietary libraries in webapp form.

Like you said, the signs are good for Rust's interop story in general.

[1] https://github.com/google/autocxx


You're right about the massive amounts of C/C++ code out there. I'd disagree that the integration is harder than with C++, though.

Calling Rust from C and vice versa is fairly straightforward and there are automatic code generators for doing so. Things just get wrapped into an `unsafe` block and that gives you the same guarantees you'd have with C++ (none, basically). From there you can start to gradually rewrite critical parts in safe Rust and profit from the cargo ecosystem along the way. In that sense there's even an advantage for Rust over C++ in my book.
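
A minimal sketch of that boundary, with a Rust function exported under the C ABI and a C function (assumed to exist on the C side) declared for Rust to call:

    // Callable from C as `int32_t rust_add(int32_t, int32_t);`
    #[no_mangle]
    pub extern "C" fn rust_add(a: i32, b: i32) -> i32 {
        a + b
    }

    extern "C" {
        fn c_log_message(code: i32); // hypothetical function provided by the C side
    }

    pub fn log_from_rust(code: i32) {
        // Crossing the FFI boundary is always unsafe; the wrapper keeps it contained.
        unsafe { c_log_message(code) }
    }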


Integrating with C is easier than with C++ but it still needs those safe wrappers. With C++ you can use stuff directly, and maybe sometimes have to convert between say C and C++ strings. Integrating with C++ is much harder, especially if some C++ features are used. What if you have to inherit a class to do something? What if there is a generic function that you need to invoke? Even tasks as simple as catching exceptions are difficult. Things have improved with dtolnay's cxx thankfully.


Calling C++ libraries from Rust is in general impossible. The more useful the library is, the less likely it is that Rust can call it. As the languages evolve, that will sharpen.

There is no "C/C++" code. Using the expression mainly generates confusion, most usually in the person saying it.


> Rust is a systems programming language.

While Rust is a systems programming language, it has a very advanced trait system. This makes it, in my opinion, more powerful than your average scripting language (Python, JavaScript, etc.). I have recently used Rust for something I would normally have used JavaScript or Python for. Based on my experience, I probably gained in development time because I didn't do much debugging. Rust code just works; it's simply amazing.

> Complexity

One way to approach this is to actually enjoy the complexity, because there is real theoretical computer science behind it. It's not complexity for its own sake.

> Compile Times

This one is really annoying even with a powerful processor.

> Maturity

I'd argue that since many high-profile companies like Facebook, Google and Microsoft used Rust to start new projects (like Libra, Fuchsia), Rust is stable enough to be used in a production capacity. You'll probably benefit more if the project is new, since you'll spend less time dealing with C compatibility.


> One way to approach this, is to actually enjoy this complexity because there is real theoretical computer science behind it. It's not complexity for its own sake.

Some people enjoy complex languages and some don't, and while the complexity in Rust does serve a purpose -- supporting the borrowing system and a design philosophy that is similar to C++'s -- there is "real theoretical computer science" behind any language design. The theory doesn't tell us which approach is good and which is bad, just what properties a language has. Some designs, usually on the more complex end, lead to many properties that are of interest to people who study those properties, and so there is some correlation between people interested in studying formal systems and people who prefer complex languages.


Traits do not make Rust more powerful than your average scripting language. They are much less powerful.

The biggest hole here is the complete absence of any dynamism. Rust types do not have any real runtime presence.

Concretely, there is no way to dynamically check whether a value implements some trait. This is used in Go (io.Copy checking whether a Reader also implements WriterTo) but is impossible in Rust without manual implementation.
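
A sketch of the limitation: with std::any::Any you can only downcast to a concrete type you name up front, not ask whether a value happens to implement some trait (FastPath and KnownType are made-up names):

    use std::any::Any;

    trait FastPath {
        fn fast(&self) -> u32;
    }

    struct KnownType;

    impl FastPath for KnownType {
        fn fast(&self) -> u32 { 42 }
    }

    // Go's io.Copy can ask at runtime "does this value also implement WriterTo?".
    // Here we can only guess concrete types; there is no "does this implement FastPath?" query.
    fn try_fast(value: &dyn Any) -> Option<u32> {
        value.downcast_ref::<KnownType>().map(|v| v.fast())
    }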


Rust's typesystem really makes it a great fit for business applications. Totally agree. Kotlin comes close, though.

Rust itself is production ready, but not the ecosystem.


The cognitive overhead of Rust is higher than that of prototyping-oriented languages. It is not because of dynamism but because of the need to think about details. That's natural for Rust's intended domain (robust systems programming).

But Rust might suffer from scope creep because someone felt the need to teach Rust to all the Ruby on Rails / JavaScript web devs.


Sure, not everything is system programming, but Rust has some higher level features that are even missing in languages like JavaScript and Python. Pattern matching is very powerful.

I'm not entirely sure, but I have the feeling that Rust's "cons" are mostly short-term drawbacks and the "pros" could be really valuable in the long run.


Consider a function like getElementById(). This is natural in JS, but a crisis in Rust - what does it return? How do you ensure that your caller doesn't have some unwrapped RefCell on the stack? etc.

Broadly speaking, Rust is good at trees and bad at graphs. This implies real tradeoffs, real design decisions. There are many software components that Rust is just not good at, by design.


Apparently [1] it returns Option<Element>.

[1] https://rustwasm.github.io/wasm-bindgen/api/web_sys/struct.D...


Yes, I didn't understand the problem here either.


I think there's a proposed pattern matching implementation for javascript in the works.


There is, and the author of that proposal created it after learning Rust.



Another thing not mentioned is that there are still no mature, cross-platform, production-ready GUI libraries written in Rust. None of the existing ones comes close to Qt or to countering Electron; mostly there are just unmaintained bindings to those. [0]

I guess it is either Electron (HTML/CSS/JS) + 200MB Chromium engine or Qt5 (C++) for now.

[0] https://www.areweguiyet.com/


There are a bunch of tiny single-person projects, but they are young and have tons of missing features and bugs. Ideally, you'd have a team building and maintaining a solution, but that requires either an insane amount of coordination (and there's tons of disagreement about which design pattern to use) or money.


"yet"

Most languages never get one, and content themselves with piggybacking off of WxWidgets or JavaScript (or whatever).


Indeed. There is certainly no mature GUI framework in Rust yet, simply because it is an extremely ambitious undertaking, but I believe that Rust is by far one of the most promising languages for developing such a thing. I see great vitality both in explorations (for example, figuring out the best patterns for expressing reactivity) as well as infrastructure.

I'll give an example of the latter which I find compelling. OpenType shaping (one of the subproblems of text layout) is a notoriously difficult problem, and almost nothing can compete with HarfBuzz, which is written in C++. Yet there is already one pure Rust alternative being used in production, Allsorts (used in Prince XML), and another promising one in development (rustybuzz, being developed by RazrFalcon, who has a track record of shipping ambitious 2D software).


I'd rather use the native engines from each platform.


I've come to the conclusion that it's simply impossible to develop many important applications using exclusively platform-provided GUI elements. I guess the evidence of this was always there; after all, Internet Explorer and Office have to implement their own controls, even though they're developed by the same company as Windows.

But by all means, if you can implement your applications using native widgets, please do so. You'll be able to make them accessible, internationalized, etc. with much less work.


The way is just to provide a thin wrapper good enough for the features that the application needs, not a full blown framework.


Which is why it is always a better bet to use platform languages; they always get them.


> I guess it is either Electron [...]

The really crazy thing is that the group that eventually birthed Rust had an acceptably performant Electron alternative even before Electron existed—and it supported both HTML and native-ish looking widgets. But they completely underinvested in it and eventually killed it—insisting that they knew better and that no one really needed or wanted it. In the meantime, Electron came along and is now so ridiculously pervasive that it's regularly brought up in conversation—even where it's of very little relevance—just as a consequence of people looking to complain about how popular it is. The only thing we hear from the former group who strangled their baby, though, is how dire the outlook is for them and their influence on the future of computing.


I’ve never heard about this Electron precursor. What was it called? Could you link to anything about it?


Gecko, which is the rendering engine used by Firefox, has existed for longer than Blink/Webkit (powering Electron). It also supported a UI toolkit (described in XML and a wrapper on Gtk etc.) called XUL [1]. I think OP is talking about this. Mozilla never tried to make it easy to use Gecko (plus the Spidermonkey JS engine) as a re-usable library outside Firefox, making something like Electron really difficult to do. On the other hand, Blink is a lot easier to integrate, which is why Electron exists (as a thin wrapper over Blink+V8).

[1]: https://en.wikipedia.org/wiki/XUL


I'm assuming the reference is to XULRunner [1].

[1] https://en.wikipedia.org/wiki/XULRunner


> we don’t know how to create a simpler memory safe low-level language.

This may well be true, but using a memory-safe language is never, ever the goal. The goal is creating correct and secure programs -- ones that have as few bugs and security flaws as possible/required -- as cheaply as possible. While a memory-safe language in the style of Rust is one means toward that end, one that eliminates an important class of bugs at the cost of language complexity, it is not known to be the best way toward that goal, and it is certainly not the only one [1]. I.e. the hypothesis that, if I want to write a program that's as correct as possible/needed as cheaply as possible, I should necessarily use the language that gives me the most sound guarantees regardless of the cost this entails is just some people's guess. It's hard to say whether it's a good guess or a bad one, because we're clearly talking about specific sweet spots on a wide spectrum of options that could be very context-dependent, but it's still a guess, with good arguments both in its favour and against.

> In Rust, there are choices to be made, some important enough to have dedicated syntax.

Not only that, but those choices are exposed in the type signature and are, therefore, viral. Changing some internal technical implementation detail can require changes in all consumers. This is not a problem with the type system -- on the contrary, Rust's type system ensures that all the changes that need to be made are made -- but it is a fundamental problem with all low-level languages. They all suffer from low abstraction, i.e. a certain interface can represent a smaller number of implementations than in high-level languages (even if the choice is not explicit in the type, as in, say, C, the usage pattern is still part of the interface). But Rust's choice to expose such details in the types has its downsides as well.
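
A small sketch of that virality: switching an internal field from borrowed to owned data changes every signature that touches the type (names are illustrative):

    struct Parser<'a> {
        input: &'a str, // borrowing: the lifetime parameter now appears...
    }

    fn make_parser(input: &str) -> Parser<'_> { // ...in every function that returns or stores it
        Parser { input }
    }

    struct OwningParser {
        input: String, // owning instead: callers must now hand over a String,
    }

    fn make_owning_parser(input: String) -> OwningParser { // which is a different API
        OwningParser { input }
    }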

> If you use C, you can use formal methods to prove the absence of undefined behaviors

C now also has sound static analysis tools [2] that guarantee no undefined behaviour with a nearly fully automatic proof that scales to virtually any code size and requires relatively little effort, certainly compared to a rewrite.

[1]: Another low-level language with an emphasis on the same goal of correctness, Zig, takes an approach that is radically different from Rust's and is so simple it can be fully learned in a day or two. Which of the two approaches, if any, is better for correctness can only be answered empirically.

[2]: Like https://trust-in-soft.com/, from the makers of Frama-C


Memory errors are not as big of an issue in the real world as the designers of Rust seem to assume. Andrei Alexandrescu has a quote about how Rust "skipped leg day"; you should all look it up. Here's what actually matters, in the real world, for the type of programs for which I'd choose Rust or a Rust-adjacent language:

- Fast compile times

- Easy build system. Must be parallel, saturate all cores, and handle distributed builds as a built-in first-class feature. Should be able to produce static binaries or dynamically linked binaries. Should be able to consume binary dependencies (that means a stable ABI).

- Optimized, fuzzed, highly-tested, standards compliant web stack built-in

- Ability to compile CUDA kernels, built-in

- Automatic memory management with an escape hatch for where manual is necessary

- Rich metaprogramming and compile-time reflection

- Created by or funded by a large U.S. corporation so that PMs are likely to choose it.

- IDE/LSP language server/editor plugins maintained by the language designers

Failing this, you are an academic language, not a real world industry programming language in 2020.


> Memory errors are not as big of an issue in the real world as the designers of Rust seem to assume.

Microsoft and Google have both reported that memory safety errors make up about 70% of their security vulnerabilities.


What percentage of all errors are memory safety errors? Why did Google choose C/C++ for Fuchsia after also evaluating Rust? How come Actix announced a new version with memory leaks fixed?


> How many percent of all errors are memory safety errors?

They did not release those numbers.

> Why did Google choose C/C++ for Fuchsia after evaluating also Rust?

A couple of things here:

* We don't actually know if they did evaluate Rust for the kernel or not.

* At the time they would have been making that evaluation, Rust was one year past 1.0, and was significantly less mature than it is now.

* They had a bunch of people who had experience with existing kernels that were in C and C++, and in fact, based Zircon on one of them: LK.

All of these are good reasons to pick what they picked. However, there's one more thing here, and that's that Fuchsia is a microkernel, and so the kernel is a lot smaller than in other OSes. Rust is used for a bunch of components that would be in the kernel if Fuchsia was a monolithic kernel.

> How come, Actix announced a new version with memory leaks fixed?

Well, first of all, memory leaks aren't a memory safety issue, so this would be irrelevant. However, Actix had actual memory safety issues as well, and did fix those. This happened because the author used a lot of unsafe that they didn't need to. This is pretty straightforward though; it was easy to find those issues because of the way that Rust works, and then they were fixed. That's kind of the point!


Still, security issues are secondary. The primary issue is always to make things work. You have no security to worry about if you don't have working software.


That is solved by using garbage collection.

There are times you need manual memory management. Then there are other times you don't (the vast majority of use cases). Rust forces you to constantly pretend that memory management matters, which gets quickly tiresome.

Not everyone is working on Chromium or Windows. The reality of the matter is that for most real-world software development security doesn't matter at all.


- Easy build system. Must be parallel, saturate all cores, and handle distributed builds as a built-in first-class feature. Should be able to produce static binaries or dynamically linked binaries. Should be able to consume binary dependencies (that means a stable ABI).

Rust can actually be built with Bazel. The tooling and ecosystem for doing that are nowhere near as good as Cargo's, and I really wish the Rust team would focus more on supporting other build systems as first class (i.e. not having half the conventions for packages being set by Cargo - things like build.rs output syntax, the various environment variables), since they bring a lot of mature tooling and performance to the table.


> “Rust should have stable ABI” — I don’t think this is a strong argument. Monomorphization is pretty fundamentally incompatible with dynamic linking and there’s C ABI if you really need to.

This is an interesting topic unto itself, described in one of my favourite pieces of technical writing of all time:

How Swift Achieved Dynamic Linking Where Rust Couldn't - Alexis Beingessner - https://gankra.github.io/blah/swift-abi/


Yeah, compiling in Rust was the bane of my existence. I tried to stay on old versions for as long as possible to avoid compiling.

At some points, I'd basically have to wait 10-15 minutes, so I'd go for a walk or grab coffee. Then I'd switch to JavaScript and it felt like going at the speed of light. To each their own.


This is a pretty good summary.

Not to pile on, because Rust is still in a fragile condition, but the point that Rust cannot, and never will be able to, call typical C++ libraries deserves a boost (no pun intended).

This matters because C++ has more facilities to encapsulate powerful semantics into libraries than any other language. People write libraries in C++ that cannot be written in other languages, routinely.

This makes it usually impractical to integrate Rust code into an existing modern C++ codebase, unless it implements a wholly independent subsystem. That matters because effectively all of the most demanding systems-level work, today, is conducted in C++, not C (old OS kernels and databases excepted).

The longevity problem also deserves attention. The normal, expected fate of any new language is to die. It practically takes a miracle to survive, and we have no way to predict miracles. So, we don't know if there will be anyone to maintain Rust code written today.

What will it take to survive? It comes down to numbers. Rust has an excellent adoption rate for a language at this stage. To survive, it might be enough were the rate to increase by two orders of magnitude.

You don't get that just by more and better publicity. It needs change. But the changes needed are, by experience, very, very unpopular among existing Rust users.

Rust will never displace C++ or C. C++ does things Rust can't. C will dwindle only as its user base retires, because C users actually like it for its failings: it makes them feel tough (or something). Rust is an overwhelmingly better language than Java or Go, and the world would be a better place if Rust were to displace them.

But neither of those has the specific problems that the borrow checker demands be solved. They have other, graver weaknesses. So, for Rust to displace them, its advocates will need to change their approach to appeal to users of those languages.

That will require at least a different build model that admits an order of magnitude faster builds, and a looser use of the borrow checker that generates less frustration. It might need accommodations to integrate in Go and Java projects, maybe including support for a JVM target, virtual call mechanism, and import of foreign Go and Java modules.

To get any of that, the project will need to excite people now in those environments with the prospect of easing their pain. Rust's advantage there is that their pain is great, and Rust could ease it.


>> Rust is an overwhelmingly better language than Java or Go

100% agreed

>> and the world would be a better place if Rust were to displace them.

Rust needs a standard lib like Go or ecosystem like Java. Production stuff needs to be stable. Especially Go gets this right.

Rust with a standard lib like Go could rule the business apps world. But Rust doesn't even include an async runtime. It's not a problem, if Rust wants to stay in the niche where it is. Also, Rust advocates should not wonder why the adoption in the general purpose web space is far below its potential.


> Rust will never displace C++ or C. C++ does things Rust can't.

Could you expand a little bit on that?


The language enables a kind of calculus on types, that libraries routinely use to get better performance, better compile-time error detection, and to direct control to the best code path.

C++20 just got "concepts", which make this capability vastly easier to use when creating a library, and enable some new capabilities along the same lines, while greatly improving compile-time error reports.


There are some meta programming features C++ has that Rust doesn’t, like “template non-type parameters” and “template template parameters.” The former is called “const generics” in Rust, and is still in development, and the latter is “higher kinded types,” which Rust may or may not ever get.


I should add that asking about C++ is absolutely the wrong response if you are interested in saving Rust from scheduled oblivion.

What it needs are active measures to increase its adoption rate by orders of magnitude. Hype on HN only goes a little way.


> Complexity - Programmer’s time is valuable, and, if you pick Rust, expect to spend some of it on learning the ropes

This has been my experience as well. I've been trying out Rust for a week now and as an experienced programmer, the complexity of the language is overwhelming. That said, you don't need to know all language constructs to get started. In the last week, I've been able to build a simple but useful cross-platform GUI app after reading only the ownership/lifetime section of the Rust book.

But yes, it is not for everyone and it certainly isn't a simple language to pick up.


> cross-platform GUI app

May I ask using what?


I evaluated a few libraries, but ended up using iced [0]. I'm writing a short blog post with my findings (will hopefully post it on HN)

[0] https://github.com/hecrj/iced


From a security point of view, the major issue with Rust is still the shallow standard library; it's a shame the article doesn't emphasize this and instead lists it as a negligible point. How many will end up using Rust wrongly due to the lack of a rand library, or anything for cryptography, etc.? And that's nothing compared to the lack of a hex encoder/decoder, a TCP library, a JSON library, etc. Compare that to Go, and Rust has a lot to improve on.


Yeah, with all this focus on security, it is kind of astonishing that the Rust ecosystem has grown such a hard-to-audit web of dependencies.

The only thing that explains it is an influx of web programmers who are used to this.


I think Andrei Alexandrescu's comment that Rust "skipped leg day" is still mostly true (Rust does what it set out to do better than anyone else but the metaprogramming in particular isn't very attractive)


Personally I feel more comfortable writing metaprograms in Rust than in any other language. Could you elaborate?


Rust macros have no access to types at all. Types are essential for any but the most trivial metaprogramming.


If you consider the scope of metaprogramming to include both macros and traits (and I do), then this is seriously understating the case. Rust's type system has a Prolog-like core and is Turing complete, so is quite expressive in able hands. A great example of the two features working together is serde, which automatically derives serialization and deserialization code (with custom overridable behavior such as defaults) for a very wide range of types. It's also something like 20-50x faster than Swift's Codable based implementation of JSON.
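
For readers unfamiliar with serde, the shape of it is roughly this (requires the serde and serde_json crates, with serde's derive feature enabled):

    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize)]
    struct Config {
        name: String,
        #[serde(default)] // overridable behavior: a missing field falls back to 0
        retries: u32,
    }

    fn main() -> Result<(), serde_json::Error> {
        let cfg: Config = serde_json::from_str(r#"{ "name": "demo" }"#)?;
        println!("{} retries for {}", cfg.retries, cfg.name);
        Ok(())
    }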


I guess I do, too, now. It has got more powerful since I last looked.


Can you give a specific example of this?


TIL Rust is working on its own version of ASAN and UBSAN: https://github.com/rust-lang/miri How shocking.


Shocking why?


I imagine because ASAN and UBSAN are dynamic analysis, while Rust's philosophy is to offer static guarantees.

I guess the problem is "unsafe" blocks. Someone else can probably elaborate better, but if you have pure Rust code with no unsafe blocks, you shouldn't have any memory errors. But large systems written in Rust might still benefit from finding memory errors dynamically rather than statically with something like ASAN.
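
To make that concrete, here's the kind of bug the type system can't see but an interpreter like Miri can: the unsafe block below compiles fine, yet reads one element past the end of the array. Running it under `cargo miri run` should flag the out-of-bounds access, while a normal build just does whatever happens to be in memory.

    fn main() {
        let data = [1u8, 2, 3];
        let p = data.as_ptr();
        // Undefined behavior: reads past the end of `data`.
        let oops = unsafe { *p.add(3) };
        println!("{}", oops);
    }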


You are correct, yes. This is used on unsafe code only.


Will the compiler performance eventually match C++, or are there fundamental limits to it?


Rust already compiles as fast as C++ if you write in roughly equivalent style.


In my experience C++ is a lot faster, especially with multiple cores.


I can easily beat Rust, thanks to incremental linking and binary libraries.


I'm not sure what your point is there; rust and c++ are both famous for having slow compile times.

Languages with good compile times include:

- C[1] (unless you go crazy with the preprocessor)

- Go (unless you go crazy with code generation)

- D (unless you go crazy with templates)

1. With TCC or similar. Gcc/clang compile times are just ok (though better than with c++; and interestingly, though gcc is slower for c++, it's faster for c).


It should in theory be better because of modules instead of header files.

It is bad because rustc emits quite a lot of LLVM IR, and because of the lack of binary libraries / library caching.


Too complicated, which means that when your lead Rust dev leaves, good luck finding another one.


If one chooses manual memory management, the "complicatedness" (as you put it) is not a matter of the language; it's a matter of the kind of programming you're doing.

As with other computer engineering concepts (e.g. referential integrity in database systems), the lack [of implementation] of a feature (e.g. foreign keys) doesn't make a system that needs that concept simpler - it just shifts the complexity to the application level.


As the article says, it's a chicken and egg problem. Small number of jobs, means small number of people who have had those jobs in the past and now have those 6 years of experience with Rust plus minimum 4 years of leadership experience. Means more deciders having the attitude you have instead of "let's go with Rust!".


If you run a Rust shop you need to invest in training and mentoring. It's probably not the right choice for a butts-in-seats feature factory.


Seems like it's mostly getting used by places doing fairly cutting-edge infrastructure or other work where it's going to take a fair bit of training to begin with.


It definitely wouldn't be as easy as finding your next web developer (or similar), but relatively niche languages (in my experience) tend to have some very clever people on their forums, so - especially for Rust, which is an established language in my mind these days - you shouldn't have that much of a problem finding someone technically capable (HR notwithstanding).


That, besides the lacking standard lib, is the second most important reason why our team decided against Rust.


The article fails to mention Ada's capabilities in the area of formal verification. The 'SPARK' subset of the language is designed explicitly for this purpose.

Modern Ada/SPARK also support Rust style 'safe-pointers': https://blog.adacore.com/using-pointers-in-spark


Thanks! This convinced me. I'll stick to C for low-level, C++ for CUDA stuff, and C# for everything else.


I'm also convinced that there's still no mature cross-platform GUI libraries written in Rust, so that's that. All you get is a mixture of unmaintained or unstable bindings that may break when the library changes its API.


That was my decision as well after a few forays with Rust in GUI programming; I just add Java alongside .NET due to different deployment scenarios.


Just to clarify: do you still have a realm where you would use Rust?


Only for my hobby coding, trying out ideas across as many languages as I can, as a language geek getting inspired about new ways to approach solving problems.

For my line of work, I'd rather use what is available out of the box in SDK installers and the respective IDEs; no need to add extra complexity.

Now in general, I think Rust's sweet spot is kernels, drivers and maybe the composition engine; for anything else, GC languages with low-level features, e.g. Swift, D, Nim, Crystal, .NET Native, Java (after Panama/Valhalla), Go (assuming the 2020 generics get in), are a much more productive way to tackle problems.

EDIT: Just like with FP, I am a firm believer that eventually all mainstream languages will have some form of affine types, and Rust will have played the role of a stepping stone into the adoption of Cyclone and ATS ideas for low level coding.


What if you were working on something where a cve or data leak from a buffer overrun could kill the company, or end a life?


The evangelical hype around rust is quite something. Not only does rust save companies, it saves lives. Check in next week to see if it can stop hair loss.


Steve Kablink does have great hair!


Really! So now we have personal attacks instead of talking about technical info.


How do you ensure an unsafe block in a dependency downloaded from cargo, some left-pad-like implementation, doesn't do exactly that?

It should also be noted that Rust lacks the necessary certifications.


In a situation where memory unsafety could put people (or the future of the company) at risk, downloading random unaudited dependencies off of crates.io is obviously a no-go. Ideally downloading off of crates.io is a no-go during building anyway, since cargo can work with all of your external dependencies mirrored to a local filesystem or your own custom crate registry.
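
For reference, after running `cargo vendor` the suggested .cargo/config.toml looks roughly like this, so builds resolve everything from a local, auditable copy instead of crates.io (exact keys depend on your cargo version):

    [source.crates-io]
    replace-with = "vendored-sources"

    [source.vendored-sources]
    directory = "vendor"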


So basically the same approach as required by ISO security standards for C, C++, Ada and Java.


you use cargo-geiger for that: https://github.com/anderejd/cargo-geiger


It doesn't work with binary libraries, something that is quite common on Ada, C and C++ worlds, the domains targeted by Rust.


cargo-geiger isn't foolproof, there are well-known false negatives: https://github.com/rust-secure-code/cargo-geiger/issues/101


If you're working on safety-critical software, I think using formally verified C code (as discussed in the article) would be the best approach.

If a bug could kill someone, you should (to the greatest extent possible) have a proof that such bugs are impossible.


People safely wrote software that could kill people long before Rust was invented.


https://www.youtube.com/watch?v=iOBXVOAbpdY Rust is not as safe as claimed, not to mention that many will use its unsafe mode in practice anyway.


I’m not sure what the downvotes are for. The paper, upon which the video is based is linked in the description and is easily accessed. Is the issue primarily with parent’s less than substantial comment (basically just linking to a conference talk) or is there some issue with the underlying empirical analysis of Rust’s library codebase?

The thrust of the article is, from the abstract, that Rust should consider altering the compiler and the crates ecosystem to better enable the safety features that Rust champions, because "results indicate that software engineers use the keyword `unsafe` in less than 30% of Rust libraries, but more than 75% cannot be entirely statically checked by the Rust compiler".


> Is the issue primarily with parent’s less than substantial comment

I would assume this, but also

> is there some issue with the underlying empirical analysis of Rust’s library codebase?

this is hard to tell, because

> The paper, upon which the video is based is linked in the description and is easily accessed

I don't think this is true, or else, I am missing it somehow. Where is the paper? I don't see it linked from the page that's linked in the description, and a quick google of the title didn't turn it up for me either.


It is available on arXiv. Link follows:

https://arxiv.org/abs/2007.00752


It's... fine. It's got some issues though. Like:

> Since Unsafe Rust may modify an arbitrary memory location, any use of Unsafe Rust in any dependency compromises the static guarantees of the whole library

With this definition, literally every Rust program is not safe, because:

1. Most operating systems don't have syscalls written in Rust, so doing anything at all ends up needing unsafe.

2. Even if you wrote an OS with safe syscalls, doing things with the hardware is going to require unsafe.

So, buried somewhere in there, unsafe will exist. So I don't personally think this definition makes a lot of sense.

To me personally, comments like this are more interesting, and tell a different story:

> The number of unsafe blocks per crate is small for the majority of the crates, more than 90% of the crates have fewer than ten unsafe blocks

This validates one of Rust's core propositions, IMHO. From eyeballing the graph, it seems that ~70% of crates have zero unsafe? Very interesting.


Ada, Java, C#, Go, Python

