Facebook picked Rust to implement Libra (reddit.com)
276 points by tosh 33 days ago | 222 comments



And WindRiver just sent their first pull request[1] to support VxWorks as a target.

Shameless plug: I made a proposal[2] to create a Business Applications Working Group to help promote Rust in enterprise software, quant and "normal" finance, taxes, and governance, areas currently dominated by COBOL and Java. Rust is a perfect fit for such applications, offering more safety and security, greater speed, and higher levels of abstraction. Hopefully, formal verification will arrive soon as well.

[1] https://github.com/rust-lang/rust/pull/61946

[2] https://internals.rust-lang.org/t/proposal-business-applicat...


Please don't take this personally.

I think it's great that you want to push Rust into the areas where Java flourished (by Java, I'm guessing you mean enterprise?). I respect your passion.

Personally, I've never liked languages with esoteric syntax utilizing various non-alphabetical characters. Perhaps this particular aspect is what drew me to languages like Java/JavaScript. I don't mind writing "public static void" because writing code is only 10-20% of my activity throughout a decade-long career as a dev (the rest is code review, meeting stakeholders/Product Owners/PMs, coming up with plans, writing documentation, etc.).

There are some caveats (or rather... personal dislikes) with Java and JavaScript. For example, I don't like the closure syntax in Java, and I don't like that JavaScript lacks cohesiveness: people who write ES5 code have to implement JS-specific patterns in order to re-create some of the OOP constructs. The ES6 class/constructor/static syntax helps, but then they added generators via '*' (an asterisk; but why? Why not use a keyword like 'gen' or 'generator', just like the async/await keywords?).

The other thing I realized lately is that I prefer a GC'd platform/ecosystem, so that I can focus less on the platform itself.


I'm curious which esoteric syntax with various non-alphabetical characters you're talking about in Rust? I recently started learning it, and nothing jumps out at me as particularly egregious.


There are three ways this objection is usually expressed:

1. Long ago, Rust had some interesting built-in syntax for different kinds of pointers; people who checked out Rust then didn't realize we got rid of it. This objection has come up less and less since that was ~5 years ago (wow, time flies), but it still happens sometimes.

2. The lifetime syntax uses a single apostrophe, like 'a. We took this syntax from OCaml (though slightly re-purposed), but if you haven't used OCaml, and many have not, it can feel very weird.

3. Not everyone prefers the "curly braces and semicolons" style of systems languages, and so they feel the <>s and ::s and {}s and ;s are noisy. We kept this style to be familiar to systems programmers.

In general, Rust doesn't have a lot of unique syntax, but it does draw from two generally disparate areas: functional languages and systems languages. If you haven't used both before, some of it can feel strange. It happens.


> 3. Not everyone prefers the "curly braces and semicolons" style of systems languages, and so they feel the <>s and ::s and {}s and ;s are noisy. We kept this style to be familiar to systems programmers.

Oh boy do I agree about the noise. I absolutely love Rust, and I definitely don't think it's noisy (in normal examples), but before I first learned Rust, generic code felt like syntax soup. It seemed bizarrely messy and convoluted.

Mind you, I love Rust now. This is not a criticism of Rust. It's simply a reflection on how a non-Rust, largely non-systems programmer saw Rust's flavor of syntax for generics and lifetimes.

How time flies. Now I push Rust at work where it makes sense and use it for everything in my personal (code) life.

My favorite part? Likely how it feels imperative while still offering functional features. Iterators, for example, are my favorite thing in the world. This is partially due to system shock, I'm sure; the idea of iterators felt like magic to this old Go programmer, haha. It was quite the change coming from Go.
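To make that concrete for anyone who hasn't seen the style: here's a small sketch of the iterator chains being praised (the sum_of_even_squares helper is made up for illustration):

```rust
// What would be an explicit loop with an accumulator in Go becomes a
// single declarative chain in Rust. (Hypothetical helper, for illustration.)
fn sum_of_even_squares(nums: &[i64]) -> i64 {
    nums.iter()
        .filter(|&&n| n % 2 == 0) // keep the even numbers
        .map(|&n| n * n)          // square each one
        .sum()                    // fold the results into a total
}

fn main() {
    println!("{}", sum_of_even_squares(&[1, 2, 3, 4])); // prints 20
}
```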


> Oh boy do I agree about the noise. I absolutely love Rust, and I definitely don't think it's noisy (in normal examples), but before I first learned Rust, generic code felt like syntax soup.

What's the alternative? Text-based syntax a la Pascal/Ada is actually noisier than symbol- and short-keyword-based syntax, because you don't ever get to file the "syntax" parts away as you become familiar with them; you have to process them visually every time you read the code. There may be a case for that sort of overly verbose syntax as a form of literate programming, an introduction meant for users who are as yet unfamiliar with a given system, but as an everyday practice it's quite painful!


Significant whitespace a la Python.

When I first started learning Python I hated the idea of significant whitespace, now I think it is the best thing ever.


Using rustfmt consistently will get you basically 90% of the way towards the "Pythonic" benefits of significant whitespace. On the other side, having lexical block nesting be controlled by an explicit pair of characters { } is a pretty significant advantage, particularly so for Rust where placing a bit of code in its own block can have rather significant implications of its own!
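To sketch the "significant implications" part: in Rust a block is an expression, and leaving it ends the lifetimes of everything declared inside, so the braces carry meaning that indentation alone doesn't:

```rust
// A block evaluates to its final expression, and values declared inside
// it are dropped (their destructors run) when the block ends.
fn main() {
    let answer = {
        let n = 41; // `n` exists only inside this block
        n + 1       // no trailing semicolon: this is the block's value
    };
    // `n` is out of scope here; only `answer` remains.
    println!("{}", answer); // prints 42
}
```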


I'm the other way around. When I started with Python I instantly fell in love with "format is the syntax". A few format bugs later (people changed code and didn't push/pull a block, or merged two blocks by mistake), and using autoformatters, and now I wish Python used another mechanism.


Somewhat agree with this explanation.

I started my journey from QBasic (high-school) to C/C++ (college) to Java to JavaScript and I have to admit that I use Java/JS the most throughout my career so my experience is biased toward my own personal experience.

Had I stuck with C/C++ for 10 years and dabble with Haskell/OCaml at night, I might feel different.

Once I got stuck in Java, I felt very comfortable there.


Yes:

http://blog.plover.com/prog/Java.html

Money quote: "I enjoyed programming in Java, and being relieved of the responsibility for producing a quality product."


Meh ¯\_(ツ)_/¯. It is his opinion, not a fact.

I've never believed in connecting the programming language with product quality, because I see no evidence for it.


I don't doubt his enjoyment, or his reason for it.

People gravitate to work using languages with characteristics that suit them. Most people coding Java would be less happy with something else, and most people using something else would be less happy obliged to use Java.


You have a point there.

Though, I can accept JS and Go (the latter took a while).


Counterpoint to #3: the <>s and ;s and {}s were very much appreciated during learning, because of familiarity; basically the same as TypeScript, minus the turbofish.


Ada uses ' as syntax, and I hated it :)


https://doc.rust-lang.org/book/ch12-04-testing-the-librarys-...

  pub fn search<'a>(query: &str, contents: &'a str) -> Vec<&'a str> {
    vec![]
  }


You didn't elaborate on exactly what you find esoteric about all of this, but here are all the non-[a-zA-Z] characters, what they do, and how they relate to my sibling comment about objections:

<> for generics. Point 3.

'a for a lifetime. Point 2.

() , and {} for function syntax. Point 3.

pattern: type for arguments. This is in a few different languages; it doesn't fit into my points, but rather the paragraph I wrote after them.

& for references, point 3.

-> for the return type of functions. In the paragraph after.

! to indicate that vec! is a macro. This is pretty unique, actually. Ruby does let you use ! in function names, but that's not really the same thing.

[] is used to invoke vec! similarly to how you index anything else; that is, vec![] creates an empty vec, like [] would create an empty array. This is sort of point 3 plus the final paragraph.

None of this is to say that you're wrong about this; I think it just really demonstrates how hard it is to please people with syntax. None of this is unique to Rust, but the combination of influences is.
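Putting those pieces back together, the signature from upthread reads like this with each bit labeled (just an annotated restatement, with a trivial main added so it compiles):

```rust
pub fn search<'a>(          // <'a>  declares a lifetime parameter (point 2)
    query: &str,            // name: type; & is a borrowed reference (point 3)
    contents: &'a str,      // &'a   a reference that must live at least 'a
) -> Vec<&'a str> {         // ->    return type; Vec<..> is a generic (point 3)
    vec![]                  // !     marks a macro invocation
}

fn main() {
    // The stub returns an empty Vec whose contents would borrow from `contents`.
    let hits = search("needle", "haystack");
    println!("{}", hits.len()); // prints 0
}
```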


> I think it just really demonstrates how hard it is to please people with syntax

There's no silver bullet just like anything else. It's a matter of preference.

Generics are fine (they exist in C++ and Java), although I dislike crazy generics like <T<S>>, but that could be a design problem.

<'a> (yeah, technically it's just the 'a).

"->" is non-alphabetic; I came from the C family (C/C++/Java), where the return type is stated earlier as part of the whole long train of ceremonial stuff alongside the modifiers (public static int myMethod).

Pointers and references: not my cup of tea, but I understand they're important for a systems language. Others can argue their importance for "business apps" (or web apps), but I've never needed pointers and references, so _shrug_.

Colon after the parameter name:

query: &str

vs.

string query

contents: &'a str and Vec<&'searching str>

<--- the fact that you can now combine these non-alphabet bits together => "&'a".

vec![] => exclamation mark squeezed between what appears to be a keyword "vec" and an array syntax

Again, don't get too riled up. I just personally don't enjoy reading syntax like this.


Given their requirements, I'm not sure they could figure out a way to do this that would satisfy you.

The syntax choices Rust made don't seem any stranger than e.g. Go. Rust just has more concepts because of what it's trying to accomplish.


Of course, of course. I'm not asking them to satisfy me. I'm not in the field of systems programming/embedded software.

I'm more in app-development.


It's funny, ~10 (maybe 15 now? yikes) years ago I was totally in a similar boat. Hardcore Python, felt that syntax was gross and brackets were dumb, etc (not that this is how you feel).

Over the years, though, I have grown to love static typing. The larger my code bases got, the more dynamic features seemed to cause me trouble. Typed languages felt bad for prototyping, but as time went on that argument of mine just didn't seem to hold water. I definitely don't feel that way now.

And while I do like Rust's syntax, and have grown strangely fond of the syntax coined "the turbofish," ::<>, I just haven't seen a typed language that can do what Rust can with a vastly more simplified syntax (i.e., Python-like). I imagine it's not possible, but I'm not making claims here, just thinking out loud. Of course, Haskell and friends might be near that camp, but arguably, even if the language is as powerful, I'd say they're too different to qualify for my statement.

Anyway, not trying to convince you of anything. Just thinking out loud. Carry on friend :)
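For readers who haven't met it: the "turbofish" is the ::<> used to spell out type arguments explicitly when inference can't pick them on its own; a small sketch:

```rust
fn main() {
    let nums = "1,2,3"
        .split(',')
        .map(|s| s.parse::<i32>().unwrap()) // turbofish on parse
        .collect::<Vec<i32>>();             // turbofish on collect
    println!("{:?}", nums); // prints [1, 2, 3]
}
```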


> felt that syntax was gross and brackets were dumb

This is how I feel when I see any language that utilizes special characters. It is shorter, I'll give you that, but that shortness taxes my brain.

No, I'm not asking to be convinced; as I mentioned, I'm not the target audience of Rust (systems programming, OS, etc.).

Ironically, at college my specialization was systems programming, OS, and databases. Pretty much the infra stuff that Rust serves well :). Not sure what happened along the way that changed my mind.


FYI, "->" is C++ as well, as of eight years ago. This is a valid C++ declaration:

  auto multiply(int x, int y) -> int;


Thanks. I never caught up on C++. I glanced over it (C++14/17) a few months ago, and I recall they added MORE crazy stuff on top of an already souped-up language.

One of the many reasons why I stick with Java.


> vec![] => exclamation mark squeezed between what appears to be a keyword "vec" and an array syntax

vec![1,2,3] declares a vector of [1,2,3]. But it isn't `keyword` ! `array`; it's more like vec(1,2,3), where vec is a variadic function. But instead of being a function, it's a macro, so it's done at compile time using AST transformations.

Basically, `vec` isn't special, `!` is. `anything!()` or `anything![]` are ways to call macros.

I don't recall the exact transformation, but vec![1,2,3] is basically equal to

    let mut v = Vec::new();
    v.push(1);
    v.push(2);
    v.push(3);
It's not quite equal, because it might initialize with Vec::with_capacity(N) where N is the number of args, but you get the point.
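The claimed equivalence is easy to check directly (a sketch; as noted, the real expansion differs in how it preallocates, but the resulting contents are the same):

```rust
fn main() {
    let mut v: Vec<i32> = Vec::new();
    v.push(1);
    v.push(2);
    v.push(3);
    assert_eq!(v, vec![1, 2, 3]); // same contents either way
    println!("ok"); // prints ok
}
```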


More than syntax, methinks it's semantics and culture. For those who went off-road toward ML, Lisps, Prolog, etc., syntax becomes a mere detail. We care more about what the linguistic constructs give us, whatever the clothing. But for others, it's a goddamn slap in the face.


This is one way to view the state/situation in our field.

Those who build data pipelines, do ML, and process big data love constructs that help them achieve their goal faster.

Those constructs are essentially processing collection type of constructs: predicates(), forEach(), map(), reduce(), transform(), filter(), etc.

Those who write business apps tend to favor OOP because they need to map business domains.


Steve, the combination of all of those resembles the Vogon poetry that's often found in modern C++. That's what it looks like to a newcomer; that's undeniable. Having said that, it does tend to go away as you start using it. Almost.


In a BASIC- or COBOL-like language, that would be something roughly like:

  PUBLIC FUNCTION Search \
    GIVEN REGION Searching \
    TAKING Query: SHARED Str, \
    Contents: SHARED Str REGION Searching \
    GIVING Vector OF SHARED Str REGION Searching IS
      TAKE Vector!Empty
  END FUNCTION.
It might look like an improvement if you're seeing Rust-like code for the first time ever, but it very much isn't one in the longer run! Even the final version of that function might be somewhat tedious to write and review, to say nothing of something even slightly longer than that!


  pub fn search<'searching>(query: &str, contents: &'searching str) -> Vec<&'searching str> {
    vec![]
  }
Give your lifetimes meaningful names.


That’s interesting — I haven’t seen any code that does this. Do you have more fleshed out examples on github to share?


We had discussions about this prior to 1.0, and some people do write code this way, but in general, you're only using lifetimes in more complex code, and doing this makes it even more verbose. It does sometimes help, though. I personally prefer the 'a version in this case; for me, the named version is actually harder to understand. YMMV.


> Personally, I've never liked languages with esoteric syntax utilizing various non-alphabetical characters.

Check out COBOL. You'll love it. :-)


Can't do big data; no libraries.


For me, Java signals cheap labor. You're not going to get a high five for speeding the server code up 38%, nor are you going to get one for finding 23 bugs that aren't in Jira. It's the janitor fixing a broken bulb.

You are not going to find people who know about AVX or red-black trees in a Java shop, in my experience.

That said, I’d hire them just don’t want to be them.


Wow... I mean... just wow...

> avx or red-black trees in a Javashop in my experience.

You meant AVL right?

As much as I dislike Leetcode and whiteboarding interview culture, there's an army of people who know this stuff (data structures) working for Google (largely a Java shop), Amazon (boy, this one is a HUGE Java shop), LinkedIn (oh yeah, oh yeaa...), Square, Twitter, etc.

I can go on but let's just stop there.

Try LeetCode and see how many submissions are in Java :). I'd say it's probably the most-used programming language in whiteboard, data-structure-style interviews.


AVX (https://en.wikipedia.org/wiki/Advanced_Vector_Extensions).

It's weird to bring up big companies, as they use every language under the sun, and of course they want beginner-friendly languages for a big chunk of their work.

But I do think you underestimate the number of times they reach out for systems languages like C++, heck Fortran beats Java in numerical computing.

I haven't heard about LeetCode, so I cannot comment on that. A site for unemployed coders to find work?


Leetcode is the de facto, du jour place to practice your algorithm chops for interviews at the majority of Valley/Seattle/NYC companies (and maybe extend that to Austin as well; pretty much any big tech-hub city). Heck, even up north in Canada (Vancouver/Toronto), I've heard companies have adopted that model as well.

> It's weird to bring up big companies as they use every language under the sun and of course

Why is it weird? It's the reality, and they have a massive number of engineers who use Java/JavaScript.

For those who want a massive salary, a good-looking resume, and the potential to work on solving interesting problems (potential, not guaranteed), they will have to prepare (grind) for the technical interview on Leetcode. Some take months.

> But I do think you underestimate the number of times they reach out for systems languages like C++

I don't. Amazon + LinkedIn + Twitter are super massive Java shops.

Let me turn the table around...

If you haven't heard of Leetcode by now, then well... ¯\_(ツ)_/¯ (pardon me, I don't mean to attack you, but this, uhm, crazy whiteboarding/Leetcode movement started around 2016 and has been taking our industry by storm).

Maybe you should try the site and solve 30 problems ranging from EASY, MEDIUM, and HARD to get some idea.

Hi-tech companies these days expect fresh grads to be able to solve MEDIUM-level questions (tries, combinatorial algorithms, dynamic programming, backtracking, string algorithms) within 15-30 minutes.


I'm working in quant finance, and I'm using Rust. I suppose some of us are scared of punctuation, and some of us aren't.


You don't need to avoid the GC. Why not SML or the like? Why Rust?


It's not uncommon for up to 1/3 of usage, and therefore of the VM bill in the cloud, to be consumed by garbage collection. So if you can rewrite it without a garbage collector, you can save money. A great book on this topic is "The Beast Is Back" by JetBrains, advocating C++ in that case (written in 2015). If GC makes you more productive, that's good, but at some point rewriting things without GC makes sense.


> So if you can rewrite it without a garbage collector, you can save money.

Save money over what? It's usually a lot easier to optimize memory use than to rewrite code. Going by the 80/20 rule, 80% of the memory pressure on the GC will be created by 20% of the code. So amortize the price of the rewrite over the expected lifespan of the app, then compare this to, say, the cost of optimizing 10% of the app to eliminate 40% of the memory pressure. Then also factor in the likely rate of bugs introduced by a rewrite compared to by optimizing GC use, and the cost of incurring, finding, and fixing those bugs.

Going by this analysis, I would suspect that in some environments where rapid iteration is key, progress is fast, and apps have short lifespans, it might be better to optimize GC instead of rewrite to eliminate it. I'd also expect that in other cases, it is better to rewrite to eliminate GC.


This analysis assumes that not using GC costs something. However, in modern C++ code, as in Rust code, you can root around as long as you like and not find any code outside low-level, standard libraries that does any memory management. Avoiding GC costs exactly nothing in progress or in iteration time.

So, the analysis of GC overhead is always going to pit cost X against cost zero, no matter how low you manage to get X.


One point is that the rewrite itself costs developer time that could be spent on something else.

Aside from that, rewriting something in C++ based on code in a higher-level language with better abstractions might cost additional developer time, maintainability, and quality.


> One point is that the rewrite itself costs developer time that could be spent on something else.

But they're already advocating for a rewrite, just within the same language.


I thought someone was also advocating for rewriting something in a managed language with GC to a language like Rust.


> Avoiding GC costs exactly nothing in progress or in iteration time.

This might be true in new development. The specific context in this discussion was rewrites.

> So, the analysis of GC overhead is always going to pit cost X against cost zero, no matter how low you manage to get X.

Again, you're talking about new development. That's not going to fit everyone's situation.


OK, I get you.

The problem is that whatever level of GC overhead you start with, or achieve, it will be non-zero, and its actual magnitude (including the typically big cache-footprint knock-on effects that show up in perf results attributed to mainline processing) will be practically impossible to estimate reliably without comparing against a rewrite.

So, instead, you generally have to say: we compared against some similar(-ish) program Y that was rewritten, and the rewrite cut the number of server instances required to meet demand by 30%, 60%, or what-have-you. But, exactly for the reasons you cite, published comparisons are against performance under GC after that optimization has already been done, as much as was practical.


> including typically big cache-footprint knock-on effects that show up attributed, in perf results, to mainline processing, will be practically impossible to estimate reliably without comparing against a rewrite

It sounds like you're most familiar with, and arguing from, a situation where the cost/benefit tradeoff overwhelmingly makes efficiency king. So sure, in that situation, don't use GC; write code with those cost/benefit tradeoffs in mind. That's valid, probably even awesome and wonderful, in certain situations. However, that doesn't mean those particular cost/benefit tradeoffs rule everything, everywhere, and that everyone else should follow suit.

> that optimization has already been done, as much as was practical

Which could mean a lot of things. I think, for the sake of your argument, you're assuming it means "very close to maximum optimization." Sorry, but you can't make that assumption everywhere and always. It depends. As Matt Easton says: context!


> that doesn't mean that those particular cost/benefit tradeoffs rule over everything, everywhere, and everyone else should follow suit.

Indeed, a very great deal of code running on servers is in Python or Ruby. That's the low-hanging fruit before you start talking about GC overhead in a compiled language. But of course most of that is somebody else's code. People tend to be interested in what it costs to run their own code, not the next office over's.

If you believe your own argument, it would be foolish to start a rewrite while easy GC optimizations still await. Not everyone chooses wisely every time, but it's the charitable assumption.


How much of the usage is consumed by malloc/free or equivalent?


malloc/free are not used in modern C++.

The fraction of runtime involving allocation and deallocation, at the level where they happen, is typically negligible. In servers, after program startup, it is often (and deliberately) exactly zero.


> malloc/free are not used in modern C++.

I don’t think that was their point.

> The fraction of runtime involving allocation and deallocation, at the level where they happen, is typically negligible. In servers, after program startup, it is often (and deliberately) exactly zero.

This is at least slightly misleading, though.

Obviously, memory comes from somewhere. You can amortize costs by not needing to allocate new pages often.

Minimizing the amount of memory an application consumes dynamically is the only way to absolutely reduce the cost. There are lots of ways to do this, and plenty of C++ and Go software aims for "zero allocations." However, many programs with "zero allocations" still allocate, because they still use the stack.

For deeply concurrent applications, the stack ends up being a lot of memory. If you reduce the amount of stack memory per fiber, it reduces memory usage initially, but then fibers are more likely to hit the guard page and allocate more stack.

There’s strategies to reduce dynamic allocations pretty much all over (even in GC’d languages like Go.) The fact is, though, avoiding it is much akin to avoiding the GC. In Go, its actually identical to avoiding the GC.

(As a note in post, I acknowledge that not every concurrent application uses fiber style concurrency, but I believe with minor adaptations this point still stands for many classes of applications. Fully avoiding OS allocations is possible, but it definitely isn’t the “default” for C++ apps.)

——

This isn’t to say your point is not correct at least for some viewpoint, but it’s not actually that simple, which is absolutely worth noting.


They're not directly invoked, but that's still what's called under the hood.


In Rust as in C++. But the fact that the actual calls don't appear in your source code means you don't incur any programmer cost relying on them. Yet, where runtime cost would be a problem (typically, affecting latency on a hot path) it can be avoided entirely.


> malloc/free are not used in modern C++.

I think using the term "are abstracted away" is a better choice here. You still allocate memory; it doesn't matter whether it's malloc/free, mmap/munmap, or the compiler doing it for you. It costs time and space. Sometimes the cost is negligible, but not always; it depends on the application.


> The fraction of runtime involving allocation and deallocation, at the level where they happen, is typically negligible. In servers, after program startup, it is often (and deliberately) exactly zero.

In that case you don't use std::string, std::vector, or any containers at all? Or anything else that allocates, directly or indirectly?


C++ still needs to allocate and free memory too, and for all we know, naive C++ might spend more time in memory management code than a GC language.


There is a fair bit of C++ code in use. We have no need to guess.

And, the answer is that real programs in obligate-GC languages spend overwhelmingly more time in GC than C++ or Rust. Much of this time is spent waiting on cache misses, which are hard to track to the responsible bit of code.


Do you have evidence for this? Specifically, C++ and Rust tend to be written for applications where tight control over memory is necessary, so any sample like you're describing is going to be biased by those carefully tuned C++/Rust applications. Even the standard libraries for C++ and Rust differ considerably in allocation behavior from Java's or Python's; these languages are conventionally designed to allocate differently, but that doesn't mean the GC is the problem. Further, different programming paradigms allocate memory differently, and the distribution of those paradigms across GC and non-GC languages (or whatever terms you like) is almost certainly varied. There are lots of confounding variables to control for, and until you control for them, you're pretty much just guessing.


I think it's pretty obvious if you use these languages. A lot of things that require heap allocation in Java or Python (like classes!) have stack-allocated versions in Rust/C++. That means code which avoids slow heap allocation is much nicer than equivalent code in Java that must avoid high-level abstractions.

You're right that the GC isn't necessarily the issue. It's more the forced heap allocation which most GC languages come with.
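To make the "stack-allocated versions" point concrete, a small sketch (the Point type is made up for illustration):

```rust
// In Rust a struct value lives on the stack (or inline in its owner) by
// default; it only reaches the heap when you ask, e.g. via Box.
struct Point {
    x: f64,
    y: f64,
}

fn main() {
    let on_stack = Point { x: 1.0, y: 2.0 };          // no heap allocation
    let on_heap = Box::new(Point { x: 3.0, y: 4.0 }); // one explicit allocation
    println!("{} {}", on_stack.x + on_stack.y, on_heap.x + on_heap.y); // prints 3 7
    // In Java, every `new Point(..)` is a heap object unless the JIT's
    // escape analysis happens to elide it.
}
```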


Agreed, although to pick a nit (because it's an interesting nit, IMO): "classes" are orthogonal to allocation. In C++ you choose where to allocate your class, and in Java the escape analyzer chooses for you. C# and Go have a GC and (more or less) semantics for heap-vs-stack allocation.


That heap allocation is most likely either optimized away and allocated on the stack, or allocated in the minor heap, which is the same as bumping a pointer, just like the stack!


Only for Java, not Python. And in any case, the cache performance will still be worse, but that’s a more minor cost.


> ...for all we know naive C++ might spend more time in memory management code than a GC language.

That's actually true, as long as all other things are equal. GC can amortize memory management cost, of course at the cost of more jitter.

Unfortunately, many GC-language users tend to do way more allocations as well, diminishing the advantage and even turning it negative.


> we know naive C++ might spend more time in memory management code than a GC language.

That's not really true anymore after the introduction of smart pointers; C++ basically implements reference counting to manage memory as a result.
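For comparison, Rust spells the same mechanism out explicitly as Rc (or Arc for the thread-safe version), much like C++'s shared_ptr; a small sketch:

```rust
use std::rc::Rc;

// Rc keeps a reference count alongside the value and frees the value
// when the last handle is dropped.
fn main() {
    let a = Rc::new(String::from("shared"));
    println!("{}", Rc::strong_count(&a)); // prints 1
    {
        let b = Rc::clone(&a); // bumps the count; no deep copy of the String
        println!("{}", Rc::strong_count(&b)); // prints 2
    } // `b` is dropped here, so the count goes back down
    println!("{}", Rc::strong_count(&a)); // prints 1
}
```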


Smart pointers make no difference to the time spent allocating and freeing memory.

But we know that GC always imposes huge costs that point benchmarks uniformly fail to reveal. Often the costs are tolerable, or even negligible. At issue is the set of recourses available when they turn out not to be.


In what regard? GC just amortizes the cost. But memory allocation doesn't fundamentally work any differently.

In fact, it can be worse in Java because you have no control over whether an object lives on the stack instead of the heap.


No. GC imposes expense in addition to actually managing the memory in use, or newly not. If that were not true, there would be no reduction in hardware footprint after a rewrite.

Make no mistake, rewriting is a huge expense, rarely embarked upon without a readily demonstrated benefit. (Exceptions tend to be rewrites in Java for organizational/political reasons. But I digress.) The cost of spending that time not adding features, and of delaying new features until there is a place to put them, can dwarf the base cost of the rewrite itself. That rewrites are done frequently enough to be discussed tells you there are huge operational benefits available.


The OP mentioned nothing about organizational costs; I was talking purely about performance. The OP incorrectly claimed you'd spend more time allocating memory in C++, not less.


Your argument is puzzling. Reference counting is a particularly slow kind of GC. It makes you spend more time freeing memory, not less. And it does nothing to reduce the cost of allocations either. On the contrary, a generational GC can allocate short-lived objects virtually for free—much better than the system allocator. The advantage C++ does have is the ability to control memory such that you can write memory reclamation code that is optimized to your particular application, leveraging information that simply isn't available to a GC.


Well said, although that's not the only thing at issue; the prominent tradeoff for that generally smaller (or less powerful) set of recourses is that you have easy, automatic memory management for the default case. I.e., you don't need to make decisions about how memory is managed unless you need to make those decisions. In C++ and Rust, you have to constantly make those decisions (which kind of pointer to use, where will it allocate, what happens to ownership, how will this affect callers, etc).


"We know". Where is the evidence?


I thought about it (why use Rust if you do not need high performance and can make do with a GC), and the two good reasons for me would be the quality of the ecosystem (Cargo is great, and you can find solid libraries for most use cases) and the fact that its libraries have a strong focus on correctness that I have not found elsewhere (one that caught bugs in code I transcribed from F# to Rust).


> ...why use Rust if you do not need high performance...

Because you want better battery life? Reduce power and cooling costs? Reduce cloud service bill?


At the last Rust meeting I attended, people were complaining bitterly about Cargo. I didn't get what it was, exactly. But there was talk of a replacement in the wings.


What “rust meeting”? There’s certainly no plans of a replacement from the Rust teams.


Rust is not "more" safe than a GC language because it has a borrow checker.


Rust is more multi-threading safe than GC languages because of the borrow checker: https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.h...


I don't think this is a fact for _all_ GC'd languages. For example, Nim is a GC'd language and it's also multi-threading safe. It perhaps doesn't give as much freedom as Rust, but it does still achieve safety. The compiler restricts sharing of memory because each thread gets its own GC, and for locks the compiler verifies that variables are locked [1].

1 - https://nim-lang.org/docs/manual_experimental.html#guards-an...


No, but Rust provides this memory safety without GC, which is, AFAIK, unique in a mainstream language.

The type system knowing about ownership and borrowing also allows the elimination of race conditions (outside of `unsafe`, of course...). I don't know of a GC language that also does that, though obviously that doesn't mean there isn't one... ;)


> elimination of race conditions

Rust doesn't get rid of race conditions. It lets you write memory safe concurrent code. These are two separate things. You still have to be cognizant that code may not run in the order you expect.

This is still a great language feature.


Probably meant "data race" instead of "race condition". https://doc.rust-lang.org/nomicon/races.html


Yup, thanks. Will be more careful with that distinction in the future. =)


Data races shouldn't be happening in a GC'd language with a proper memory model; getting this right is important. Haskell has no data races if you don't use IORefs in safe code. AFAIK Java has a happens-before memory model that can be used to detect races.


To be fair, GC only collects memory, and Rust's ownership manages other resources like files and ports and stuff too, as I understand it!


More generally, it lets you tie resources to syntactic scopes.

An example in the standard library is MutexGuard - you get one when you lock a mutex, and when you drop it, that releases the mutex. You can't forget to release the mutex, and you also can't release it too early, because then you lose access to the data protected by the mutex. There are ways to defeat this, of course, but you have to go out of your way to do it.

I've used a library which uses a similar pattern for writing to a ring-buffered message queue: when you grab a slot in the queue to write to, you get a SlotGuard, and when you drop it, you lose access to the slot, but that commits the slot and lets other threads read it.
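A minimal sketch of the guard pattern described above (the names are made up for illustration and not from any particular library), using `std::sync::Mutex` from the standard library:

```rust
use std::sync::Mutex;

// The data lives *inside* the Mutex, so it cannot be touched
// without first acquiring the lock.
fn locked_increment(counter: &Mutex<i32>) -> i32 {
    let mut guard = counter.lock().unwrap(); // returns a MutexGuard<i32>
    *guard += 1;
    *guard
    // `guard` is dropped here, which releases the mutex automatically;
    // there is no unlock call to forget.
}

fn main() {
    let counter = Mutex::new(0);
    locked_increment(&counter);
    locked_increment(&counter);
    assert_eq!(*counter.lock().unwrap(), 2);
}
```

If you do want to release the lock before the end of scope, the idiomatic way is an explicit `drop(guard)`, which is still impossible to get wrong in the use-after-release direction: the data is unreachable once the guard is gone.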


It’s more safe than those languages because of its type system. Memory safety isn’t the only kind of safety, after all. :)

NOTE: maybe the parent edited his comment between my reply and yours?


I see this statement is oft repeated on HN, but no proof is offered in its support.

Rust can guarantee the elimination of race conditions in low-level multi-threaded code, but aside from that there's nothing really safer about its type system compared to, say, Java's. And one would anyway use a higher-level concurrency library in Java, or perhaps the concurrency annotations, making the aforementioned advantage barely relevant.

Memory safety is certainly not the only kind of safety, but it is alas the only kind of safety supported by Rust (and every other language except C and C++).


> Memory safety is certainly not the only kind of safety, but it is alas the only kind of safety supported by Rust (and every other language except C and C++).

I think the parent poster's entire point was that Rust does provide safety beyond memory safety. Perhaps not as a forced concept, ie: you do not have to express all structures as finite state machines, but you can very elegantly express your structures as finite state machines. And you can do so in a way that most other languages can't, because of affine types.


Ok. Where's the proof that finite state machines are safer? Or that expressing structures in a certain way is safer?


A type system that encodes FSMs makes it impossible to express invalid state transitions; hence the safety.

Anyway, I'm only clarifying that the parent poster was explicitly talking about safety outside of memory safety.


I don't see why such a minor thing is considered an important safety improvement.

Using optionals, result types and match would give me a nice feeling too, but I won't claim that my code's safer because of that. Those are super low-level tools.


I realized it was an important safety improvement when I wrote a medium-sized project in Rust. I put minimal effort into (manually) testing it (no automated tests), and then sat back and watched while it ran flawlessly for a week, and then a month, and then 2 years. Zero issues.

I've never had that experience in any other language. Usually unless you're careful and put effort into testing and making sure that you handle every possible exceptional case, you'll have several bugs and crashes. Of course these are possible in Rust, but in practice I'vr found they're mucb rarer. Most of these issues are caught by the compiler.


I didn't attempt to qualify "importance" of the method, only give one example where Rust provides a safety mechanism that other languages do not; the ability to encode finite state machines into the type system with no risk of reusing invalid states.

Whether you care about that or think it's valuable does not change that it is a way of encoding a model into your type system to make a class of errors impossible.
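A small illustration of that encoding (the `Closed`/`Open` connection states here are made up): each state is its own type, and transitions take `self` by value, so a consumed state value cannot be reused.

```rust
// Hypothetical typestate example: a connection that must be opened
// before use and cannot be used after closing.
struct Closed;
struct Open;

impl Closed {
    // Consumes the Closed state and yields the Open state.
    fn open(self) -> Open { Open }
}

impl Open {
    // Only available in the Open state.
    fn send(&self, msg: &str) -> usize { msg.len() }
    // Consumes the Open state; the connection can no longer send.
    fn close(self) -> Closed { Closed }
}

fn main() {
    let conn = Closed;
    let conn = conn.open();
    assert_eq!(conn.send("hello"), 5);
    let _closed = conn.close();
    // conn.send("again"); // compile error: `conn` was moved by `close`
}
```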


Quantifying the impact of the method is very relevant though, because if using FSMs makes code 0.001% safer, your statements that this makes a class of errors impossible and that Rust provides additional safety mechanisms are both true, and yet have no real bearing on the real-world safety of a program written in Rust.

And the lack of evidence in this area suggests to me that the safety benefit is indeed negligible.


Your original comment:

> I see this statement is oft repeated on HN, but no proof is offered in its support.

Where "this statement" refers to

> It’s more safe than those languages because of its type system. Memory safety isn’t the only kind of safety, after all. :)

To paraphrase, you did not have proof that Rust had features that enabled safety beyond memory safety. I gave you an example of such a feature, and explained how it provides a type of safety beyond memory safety.

> And the lack of evidence in this area

There's plenty of evidence. Abstractly, "program is in an invalid state but continues" is a huge class of errors in general.

More concretely, in terms of research,

https://www.microsoft.com/en-us/research/blog/p-programming-...

I could likely find thousands of other applications of FSMs (or papers on the subject) for safety critical work.

Here's one of the first results when I looked,

https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/201600...

> To this end, state machine analysis offers a method to support engineering and operational efforts, identify and avert undesirable or potentially hazardous system states, and evaluate system requirements.

I don't really feel the need to argue that state machines are a powerful tool for improving the correctness of code. I think I've already shown that Rust provides safety features that are:

a) Not easy to express in other mainstream languages

b) Not within the category of memory safety


I think we went far off topic from what I wanted to know and somehow ended up discussing the merits of FSMs, so I'd like to get back to what I consider the key point here:

* Rust provides guaranteed memory safety. Memory leaks aside.

* Rust provides freedom from race conditions. Deadlocks aside.

What other kinds of safety does it provide? I often see optional/result/match or generally "the type system" brought up and this is what I'm explicitly challenging.


> What other kinds of safety does it provide?

I'm confused because the entire thread is about answering this question and I even detailed it explicitly in the previous post.

It provides safety from invalid state transitions in the form of opt-in state-machine type encoding.

What else is there to discuss? I just gave you a concrete example, which you challenged on the basis that it was not impactful, so I provided you research showing otherwise.


> And the lack of evidence in this area suggests to me that the safety benefit is indeed negligible.

It should only suggest to you that it is prohibitively expensive to control for all of the different variables in order to provide a remotely reliable quantification.


> Of course these are possible in Rust, but in practice I'vr found they're mucb rarer.

Apparently rarer than the occasional typo (the above sentence contains two).


I program in Python, Rust and C, and I can assure you that my Rust programs certainly have the fewest bugs. Not because Rust is a "safe" language, but because certain types of silly mistakes I tend to make in Python and C just wouldn't compile.

Aside from the language, understanding the Rust way of things certainly improved my code in Python and C too. And by "the Rust way" I mean the way you need to structure things if you don't want to end up fighting the borrow checker all the time. The concepts they use there make a lot of sense even in languages that don't enforce them, and were known before Rust was ever invented.

Edit: so in essence, subjectively, it makes me find more mistakes more easily, it provides a beautiful environment for test-driven development, and working with it is a pleasure once you understand how things are meant to be done. None of this is real proof, but I certainly would feel more secure implementing critical code in Rust than I would doing the same thing in Python or C.


> because certain types of silly mistakes I tend to make in Python and C just wouldn't compile.

Python and C are similar in that regard: you can compile almost anything in them, and it will then crash at runtime if something is wrong, gracefully for Python, taking down the whole process for C.

Languages like Java or C# are different. Both are strongly typed, and a lot of things are checked by their compilers.

I believe C# is safer than Rust because the native code surface is much smaller. Safe Rust is fundamentally limited by the borrow checker. Read the source of the Rust standard library and you'll see tons of unsafe code everywhere. The Rust developers probably did a good job verifying their implementations of things like Vec and HashMap, but there are less-used parts of the standard library too. Non-standard third-party libraries also use unsafe.

.NET doesn't need any unsafe code for such things. Even fundamental parts of the standard library, like List and Dictionary, are 100% managed code, to be specific it's C#: https://github.com/dotnet/coreclr/blob/master/src/System.Pri... Same is true for majority of third-party libraries: unsafe is supported by the languages, but using it may complicate deployment.


The standard library has to use unsafe because of performance considerations; it is extremely well tested for that reason.

I wrote multiple projects in Rust and I never had to use unsafe even once. Arguably these projects weren't all that low-level, but why would I write something that somebody else has already implemented and tested? The beauty of Rust, after all, is that you can go to the lowest level if you like, but you can also stay high-level if you like, and you still end up with code that is surprisingly performant.


> has to use unsafe because of performance considerations

Java and C# solved this problem without unsafe, with JIT compilers.

> The beauty of Rust, after all is that you can go to the lowest level if you like to, but you can stay high level if you like

All sufficiently complex systems are written in more than one language. Operating systems are written in C and C++, with shell scripting running on top. Videogames are written in C++, with integrated script engines or VMs for higher-level stuff. Productivity software like MS Office or AutoCAD has script engines.

No single language does the job. It’s not just a performance-versus-safety tradeoff. These higher-level languages are much easier to learn, meaning developers are cheaper, and in many cases, like VBA, even non-developers can get stuff done with them. They often aren’t compiled or compile very fast, which makes iteration faster and allows interactive prototyping. They often run in sandboxes, solving security issues.

I don’t program in Rust, but I’ve read quite a lot about it. And I’ve programmed in C++ and C# for decades, often in the same software at the same time. IMO C++ is better at lower-level things: tooling, SIMD, manual control over RAM layout, creating fast data structures, OpenMP; and it has way better libraries for that work (e.g. in the current project I use Eigen a lot; it’s based on C++ metaprogramming, i.e. not portable to anything else). C# is better at everything else: it’s safer, has way better libraries included (e.g. async IO is awesome), is higher level, has been cross-platform and open source for the last couple of years, and is much easier to use.


The standard library has far more unsafe code than an average Rust project. One of the major reasons is that “needs a lot of unsafe to implement” was historically a criterion for “this belongs in the standard library”.


> One of the major reasons is that “needs a lot of unsafe to implement” was historically a criterion for “this belongs in the standard library”

Custom data structures often need a lot of unsafe code. Graphs, trees and linked lists are everywhere in system programming.

You can’t add everything to the standard library; there are too many variations.

For trees, in C++ and/or C# I’ve used red-black, B-tree, unordered, and multi-dimensional ones like BSP, k-d, and PK. Graphs are all different, depending on what you need to store (just the topology, per-node data, per-edge data), and which operations need to work fast on them. Even linked lists are different, sometimes you want single link, other times you want them to be circular instead of linear.

And there’re other pointer-based structures useful for some applications, like skip lists and tries.


Thanks to generics and the ease of publishing packages, those aren’t usually “custom” in Rust programs; you import a package that has them implemented for you. Just like you import ones from the standard library.

Even then, that’s still a very small amount of the overall code in a given project.


> those aren’t usually “custom” in Rust programs

On modern hardware, RAM latency is often the main reason why CPU-bound code is slow. The fix is often changing RAM layout, i.e. modifying data structures. And when the code uses SIMD to process the payload portion of these structures, alignment requirements cause them to become really custom pretty soon.

> that’s still a very small amount of the overall code in a given project

That’s not universal across all software. Lately, I’m working in the area of CAD/CAM/CAE. Before that I worked on videogames, HPC, multimedia. In many of these projects, data structures were not a small amount, even relatively speaking.


Rust's type system supports any type of resource, not just memory. The same isn't true for garbage collectors (and no, finalizers are not correct resource management). Also, real enum types (including Option) and pattern matching. I'm sure you can come close in Java using interfaces or something, but it's going to be kludgy and tedious in practice (especially if you have primitive variants).

To be clear, I'm only stating that Rust is safer, not that safety is the end-all-be-all for every single application and you should never use COBOL or Java or even that you should prefer Rust over them in the general case or anything of the sort.


And I'm stating that I don't see why it's safer than Java, even if Java doesn't have pattern matching (it does have Optional).

It is nice to be able to encode something in the type system, but one could just as well use design by contract or unit testing, and these actually do have supporting evidence in their favor. Is there even any supporting evidence that static typing results in better quality?

I think most would agree that it improves performance, makes refactoring and managing the code base easier.

Note: I'm also a fan of static type systems and of encoding relationships and requirements in them, but I have no proof that this actually improves quality. It makes things easier to reason about for me and maybe it eliminates some classes of errors, however it's not obvious that these classes of errors represent a noticeable amount of the total.


In Java, you can modify the same data from multiple threads without the compiler complaining at all. In Rust that would cause a compile-time error until you synchronize the data.
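A hedged sketch of what that looks like in practice (the `parallel_count` name is made up): sharing mutable state across threads only compiles once it is wrapped in synchronization.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` threads that each increment a shared counter. Without the
// Mutex (e.g. a bare `Arc<i32>` with mutation), the increment line
// would not compile: Rust rejects unsynchronized mutation of shared data.
fn parallel_count(n: usize) -> i32 {
    let data = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let data = Arc::clone(&data);
            thread::spawn(move || *data.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *data.lock().unwrap();
    total
}

fn main() {
    assert_eq!(parallel_count(4), 4);
}
```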


Not only multiple threads, a common mistake is to modify a collection while iterating it, which can happen within a single thread (and it's even worse when it happens conditionally, so it'll only fail sometimes).

The thing I love the most in Rust is the duality between "&mut" and "&": when you have a "&mut" reference to something, you know nobody else has any "&mut" or a "&" reference to it, so you can modify the data structure without worry; and when you have a "&" reference to something, you know nobody else has any "&mut" reference to it, so you can read the data structure without worry. I wish I had something like that in Java, instead of having to sprinkle "synchronized" everywhere just in case (which not only doesn't protect against a recursive call within the same thread, but also can lead to AB/BA deadlocks... which is made so much more fun by code called from executors which have a ThreadPoolExecutor.CallerRunsPolicy, meaning not only will it happen only in production, it will only happen in production when the system is overloaded. Took us weeks of debugging to reproduce one instance of that...).
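The `&mut`/`&` duality can be seen in a few lines; the commented-out loop below is the iterator-invalidation mistake described above, which the borrow checker rejects at compile time:

```rust
// Shared references allow many readers; a mutable reference is exclusive.
fn sum(v: &[i32]) -> i32 {
    v.iter().sum()
}

fn main() {
    let mut v = vec![1, 2, 3];

    // Fine: any number of `&` borrows can coexist.
    let a = &v;
    let b = &v;
    assert_eq!(sum(a) + sum(b), 12);

    // The classic single-threaded bug: mutating a collection while
    // iterating it. The borrow checker rejects it outright:
    // for x in &v {
    //     v.push(*x); // error: cannot borrow `v` as mutable
    // }               // because it is also borrowed as immutable

    v.push(4); // allowed here: the `&` borrows above have ended
    assert_eq!(v.len(), 4);
}
```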


In Rust, all the unsafety of C is just 1 small keyword away. It's used a lot in practice, both in standard and third-party libraries.

It's very hard to integrate Java and C. People don't normally do it unless they have to. Random third-party libraries from the internet usually don't.


> In Rust, all the unsafety of C is just 1 small keyword away.

I really don't understand why people constantly bring this up. What's supposed to be the problem in the first place?

Want a safe program? Just don't use "unsafe". Perfectly doable for the vast majority of programs.

Java has its "sun.misc.Unsafe", in addition to being unsafe by default when it comes to modifying the same data across threads. It doesn't enforce synchronization, so Java programs that are incorrect in this respect compile just fine.

> It's very hard to integrate Java and C. People don't normally do unless they have to. Random third-party libraries from internets usually don't.

I don't see how this is related to the discussion.


> What's supposed to be the problem in the first place?

Bugs like this one: https://medium.com/@shnatsel/how-rusts-standard-library-was-...

> Want a safe program? Just don't use "unsafe"

There’s a lot of unsafe code already in standard library, even more in third-party libraries.

For this reason, I think VM-based runtimes are safer in general. It’s not free like in Rust, costs runtime performance especially startup time. But for Rust, we have to trust authors of standard library, and all other libraries we consume. That’s too many people to trust, and too much code to verify.

> I don't see how this is related to the discussion.

The Java standard library doesn’t use unsafe code; it’s all pure Java, e.g. https://zgrepcode.com/java/openjdk/10.0.2/java.base/java/uti... Same with the vast majority of third-party libraries.

The security boundary is lower in the stack, and the attack surface is much smaller at that level. We only have to trust/verify the JVM code. Everything that runs on top of it, including the standard library, runs in a strong sandbox.


Fair point. I guess you could say Rust is just as secure as the weakest library routine you call from your code.

That said, in C++, unsafe is just, for example, a harmless-looking pointer cast or array [] indexing away.

At least with Rust you know where the dragons lie. Just grep for "unsafe".


Lack of exceptions and forced error handling are a big deal. As is explicitness around null.
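Both points in a short sketch (`double_parsed` is a made-up example function): fallible operations return `Result` that the caller must handle or propagate with `?`, and "maybe absent" is an explicit `Option<T>` rather than a null that can slip through.

```rust
use std::num::ParseIntError;

// No exceptions: the error is part of the return type, and the caller
// cannot use the value without dealing with the failure case.
fn double_parsed(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?; // `?` early-returns the error to the caller
    Ok(n * 2)
}

fn main() {
    assert_eq!(double_parsed("21"), Ok(42));
    assert!(double_parsed("twenty-one").is_err());

    // No null: absence is an explicit Option, which must be
    // matched or unwrapped before the value can be used.
    let v = [1, 2, 3];
    let first: Option<&i32> = v.first();
    assert_eq!(first, Some(&1));
}
```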


enterprise software, quant and "normal" finances, taxes, and governance, the areas now completely dominated by COBOL and Java.

So Java really is the new COBOL?


Yes. Except, of course, that COBOL is also the new COBOL. There may be more new code being written in COBOL now than ever before.


FAANG still uses Java heavily.

Android will keep Java relevant for a long time, even after the introduction of Kotlin.


Well, good luck with that, I guess.

But being realistic, I don't think any non-GC'ed language has a chance to become the next corporate darling the way Java is now.

In that sense, Go, Swift, C#, etc. are much better fits to become the next kings in this area, and even they will get there very, very slowly.

The scripting family of languages also doesn't make a good fit, given they tend to be bad at building big, performant, constantly evolving programs (JavaScript, Python, etc).

I guess languages which are AOT-compiled, GC'ed and mostly object-oriented with some mix of FP will probably be better fits to take the crown jewels of king Java.

The problem Rust will face in this scenario is that it requires people with a level of brain power that, unlike in "high tech", is not that available at the corporate level (and the ones who have it don't want to spend all of their lives behind a computer screen).

It is also a "productivity hog" compared to sophisticated languages like C# or Swift, as productivity in Rust is on par with C++, which is not good enough when compared even with the verbose and over-engineered Java codebase.

It's basically the same reason why C++ is not the preferred language there. And I don't think just using the magic word "safe" will have any charm for this kind of crowd, given that while Rust can use this trick against C and C++, it doesn't work against productivity-minded languages like Go, Kotlin, Swift and C#.

Mind you, modern GC'ed languages are pretty safe, and also pretty fast. So it will be very hard to convince the corporate layer to use a language where there will be a scarcity of labor, which means they would have to pay more and be able to hire fewer employees.

The smartphone revolution gave performant, resource-aware languages a renaissance, and Rust is surfing that wave. Rust has a lot of chances in smartphones, embedded, libraries and utilities, and even in specific scenarios on the server.

But even the "rewrite it all in Rust" mantra for C++ (a language Rust is competitive with) will stall, given that the Unix, C and C++ codebases are very influential and already have a lot of sophisticated and complex stuff out there.

Anyway, I don't understand this urge to use just one language for everything. I think it is unrealistic and illusory, and a better approach is to pick two or three languages with fitness in certain fields and just use them.

The "one language to rule them all" approach was already Java's since the nineties, and while that made it a very popular language, it was in a time when the competition wasn't that great. Even then, it proved not to work really well in all scenarios (and I think that's a good thing, because it means we are free from a programming-language monopoly, which would slow down evolution in the field).


I think Rust is probably too complex for the average enterprise developer who seldom thinks about code outside of work. Not that these developers aren't smart - they just have different life priorities than a lot of HN readers and Rust experts.


I am, so to speak, an industrial enterprise developer working with VxWorks on a medical device. This is exactly the kind of context where you don't mind if the compiler takes a bit longer or if you have to fight the borrow checker, in exchange for not having a data race or a memory leak with catastrophic consequences at the wrong time.

Today we use static and dynamic code checkers (which are not cheap), but it would be so much better if we simply could not create some kinds of issues in the first place.

I know that Rust is not a silver bullet either, but having some Rust support in VxWorks makes so much sense. This is basically my secret wish of the last few years coming true. If anyone at Wind River is reading this, thank you!

It does not mean we are going to rewrite everything tomorrow, but at least we can seriously consider introducing some Rust services for new features, likely in a few years.


Rust's complexity isn't accidental unless it's being used for a job where it's overkill. Achieving memory safety in non-GC languages (let alone the degree of other static safety Rust enables) is easier in Rust than C++.


The way I think about it is that Rust exposes all the hard stuff that's hidden in C++.

C++ makes things look simpler than they are and eventually you end up in one nightmare scenario after another, because of all the hidden complexity.

Rust just shoves all that complexity to the forefront, right in your face, so that you are forced to deal with it and don't end up in nightmare land later.


Honest question, what part of Rust is more complex than C++, smart pointers and all? Because it's not evident.


I'm not a full time Rust developer, so your milage will vary:

Rust is a more modern language, so there are a number of modern approaches that Rust uses that C++ can't due to backwards compatibility. And Rust isn't fundamentally an OOP language. I'd say Rust is a bit closer to C than C++, but that's a gross simplification.

But I think what Rust does is force the developer to address application complexity at compile time, instead of finding out about these problems at run time. Things like segfaults, race conditions, etc. are ferreted out by the compiler.

So, Rust might feel more complex, but in reality it's just forcing you to deal with those issues sooner rather than later.


I think Rust is closer to C++ in an important way. Rust and C++ provide abstractions in situations where C spells out the implementation explicitly: e.g. trait objects (virtual functions in C++), lambdas (same in C++), enums (no direct equivalent in C++). To accomplish similar things in C you'd specify how it will be represented at the machine level, e.g. a struct of function pointers; a function pointer plus a struct pointer; a union plus some way of identifying members. Generally this adds a lot of convenience to the higher-level languages compared to C, but it sacrifices flexibility in the special situations where you need that level of control (e.g. when implementing an efficient VM, you might want the tag of your object type to be more of a bitfield).
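To make the trait-object case concrete, here is a hedged sketch with made-up `Shape` types; the vtable the Rust compiler generates here is what you would hand-write in C as a struct of function pointers.

```rust
// In Rust, dynamic dispatch is declared with a trait...
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { s: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}
impl Shape for Square {
    fn area(&self) -> f64 { self.s * self.s }
}

// ...and `dyn Shape` dispatches through a compiler-built vtable.
// In C this would be an explicit `struct shape_vtable { double (*area)(void *); }`.
fn total_area(shapes: &[Box<dyn Shape>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box::new(Square { s: 2.0 }),
        Box::new(Circle { r: 1.0 }),
    ];
    // 4.0 for the square plus pi for the circle
    assert!((total_area(&shapes) - (4.0 + std::f64::consts::PI)).abs() < 1e-9);
}
```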


Those are great concrete comparisons.


Another gross simplification I like is: Rust is a memory safe C with type classes. (traits)


Turing-complete templates and inheritance make C++ far more complex, I'd say, even at compile time.


Lifetimes. In C++ you don't have to think about ownership to make the compiler happy; you only have to worry about that stuff if you want your program not to crash.


That's exactly why I use Rust: I can't be sure when I'll get confident enough with C++ to not make my program crash in unexpected ways.


Can you give an example? Smart pointers make you consider ownership. Or does Rust account for more than that?


Rust's ownership model tracks the safety of borrows at compile time in ways C++ can't. You can derive a reference from another reference, and the compiler won't let you do anything to the original object that would leave the derived reference dangling. In C++ there are two different conventions for that: use plain old pointers to represent the borrows, and be careful they don't outlive the owning `unique_ptr`; or use `shared_ptr` to do the bookkeeping at runtime (also available in Rust, as Rc/Arc).
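A tiny illustration of that bookkeeping (`first_word` is a made-up helper): the commented-out line is the dangling-reference mistake that the compiler refuses, because the derived `&str` still borrows from the owner.

```rust
// Returns a reference *derived from* the input string; the compiler
// tracks that `word` borrows from `s` for as long as `word` is used.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let mut s = String::from("hello world");
    let word = first_word(&s);
    // s.clear(); // error: cannot borrow `s` as mutable while `word`
    //            // (an immutable borrow of `s`) is still in use
    assert_eq!(word, "hello");
    s.clear(); // fine here: the borrow held by `word` has ended
    assert!(s.is_empty());
}
```

In C++, the equivalent `string_view` into a cleared `std::string` would compile silently and dangle at runtime.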


Got it, thanks for the explanation.

So C++ programmers can subvert the upfront complexity, then, by ignoring C++11 features and going straight to raw pointers.


This is spot-on. I've worked with enterprisey Java devs and seen their code. They couldn't give a shit about code quality.


Java exists to make shitty code less shitty. But in reality it will often hide the shittiness through countless layers of abstraction.

Java is a great glue language. But if the problem you are solving can't be fixed by glue, say performance, computationally heavy algorithms or something of that kind, then look elsewhere.


Someone has created a Rust crate called libra[1], which claims to be a "Simple fantasty Adventure MUD" (sic). It's currently just a "Hello, world!" program.

What are the chances it's a placeholder for malicious/trojan code intended to steal Libra coins? (cf. the event-stream backdoor[2])

[1] https://crates.io/crates/libra

[2] https://news.ycombinator.com/item?id=18534392


A global unqualified namespace is a classic mistake. It causes all sorts of silly "squatting" and drama issues.

Should really be:

    https://crates.io/crates/someorg/libra
    https://crates.io/crates/facebook/libra
People like unqualified names because it's slightly cute, though.


The argument I saw was that a global namespace ensured:

A) Packages can change owners without changing names (and breaking downstreamers)

B) It encourages creative names: rather than calling something myusername/http, you have to give the package a real name like hyper, express, curl, etc.


But A) you could still implement transferring ownership to another user without giving them your whole namespace, and B) why would you even want B?


If you give ownership without changing the organization that owns the package, it signals trust.

Usually, orgs won't be comfortable with this.

(B) is brilliant... We all know what EC2 and S3 are. If Amazon had just called these services "compute" and "storage", there wouldn't be any brand.

Branding makes things easier to search for and to talk about.

All this said, these downsides might be worth it.


npm install @aws/s3 works for me.

npm install burrito doesn't.


> B) It encourages creative names

Who wants this?


Empirically, libraries with unique names are more popular than ones that aren’t. It helps to have a unique name when searching, they’re more memorable, it’s easier to disambiguate between two libraries that do similar things, etc.


> Empirically, libraries with unique names are more popular than ones that aren’t.

Seems like this is survivorship bias. All of the common names get squatted by people who never push code, and then anyone who's actually going to build something will need to pick something unique.

Reqwest is more popular than request, for example. It's not exactly what most would expect, and if you search "request" you get the much less popular crate.


This is true even in ecosystems that have namespacing. And in hybrid systems, like npm’s, the non-namespaces versions are almost universally more popular than the namespaced ones.


Interesting, but I wonder how this works in purely namespaced systems - whether non-standard names are more popular in, say, Maven.

I don't know. I'm unconvinced that this is a meaningful metric, and I worry more about typosquatting/malicious dependencies, which, to my knowledge, crates.io does almost nothing to proactively deal with.


Namespacing doesn’t solve typosquatting or malicious dependencies. You typosquat the namespace instead. Any dependency can become malicious.


Yeah, you're right.


I have been enjoying how npm is addressing this with @organization/project. Makes using npm a little less awful these days.


Looks like the repo and github account were made today. [1] That’s pretty fishy to me.

Crates.io has all sorts of other problems where it assumes good faith.

[1]: https://github.com/kollaborator


Can't Facebook call it something like FB.Libra?


A dot is not allowed, but they could use a dash or underscore.


Note that Facebook has also been previously using Rust for https://github.com/facebookexperimental/mononoke; as well as for some smaller things. They've been sponsoring Rust conferences too.

I am not personally interested in cryptocurrency, but it's always great to see more big companies using Rust!


URL isn't clickable because of a semicolon. This should work: https://github.com/facebookexperimental/mononoke .


Ugh, thank you.

This is the second time this has happened to me today, you'd think I would have learned. Sigh.


But it's great that it is in your muscle memory, no linting errors at least :p


> We'll need to work together on challenges like tooling

If you wonder what is meant by that, I think I know of an example. They made the (quite unusual) choice not to include a Cargo.lock in their project because it gives a bad developer experience on merge conflicts [1]. They'd love to have a merge tool for that purpose; see more in [2].

[1]: https://github.com/libra/libra/issues/52

[2]: https://github.com/rust-lang/cargo/issues/1818


Thanks for linking to the repo. I just had a look. It's impressive how many github users want to be funny and make bullshit pull requests and nonsense issues. What is the point of doing all that :-(?


Can cargo report the test coverage yet?


https://github.com/mozilla/grcov though it requires a nightly rustc for now since it uses some unstable RUSTFLAGS.


That is an improvement since the last time I looked. Thanks!

My ideal would be "cargo test --coverage=report/".


Bitcoin (Bitcoin Core) is implemented in C++. At that time, Go and Rust weren't born yet.

The most popular Ethereum implementation (Geth) is written in Go. The second most popular Ethereum implementation (Parity) is written in Rust.

Now, Libra is written in Rust.

So I guess the moral of the story is, if you want to write a greenfield high performance application, Go or Rust should be your choice compared to C++.


It really depends on the type of application. Rust's memory safety model works well for most software.

Some types of software require memory safety models that Rust was not designed to express e.g. safety models when mutability of references and ownership of objects are not observable at compile-time. High-performance database engines tend to have this challenge due to pervasive DMA-ed I/O and opaque page-structured memory. The OS hides this from most applications that aren't high-performance systems code. In C++ (which also looks askance at these memory models) this is addressed by designing an execution scheduler to dynamically guarantee safety; the scheduler understands the memory model and never schedules or executes a sequence of concurrent operations across a set of mutable references that would violate the safety model -- the operations are oblivious. In modern designs this requires no locks and negligible state.

Rust can deal with these types of memory models too, minimizing the amount of unsafe code given enough layers of indirection; it just comes at a high performance cost.
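To make that concrete, here's a minimal sketch (names are mine, not from any real database engine) of one such layer of indirection: when the compiler can't see who mutates what, `RefCell` moves the borrow rules to runtime, and that dynamic bookkeeping is part of the performance cost.

```rust
use std::cell::RefCell;

// RefCell enforces the aliasing rules dynamically: a write is refused
// (or, with borrow_mut(), panics) while any other borrow is live,
// instead of silently racing. This check costs a runtime flag per access.
fn checked_write(page: &RefCell<Vec<u8>>, idx: usize, val: u8) -> bool {
    match page.try_borrow_mut() {
        Ok(mut w) => {
            w[idx] = val;
            true
        }
        Err(_) => false, // another borrow is live; refuse rather than race
    }
}

fn main() {
    let page = RefCell::new(vec![0u8; 4]);
    assert!(checked_write(&page, 0, 42)); // no other borrows: succeeds

    let reader = page.borrow(); // a shared borrow is now live
    assert!(!checked_write(&page, 1, 7)); // runtime check rejects the write
    assert_eq!(reader[0], 42);
    drop(reader);

    assert!(checked_write(&page, 1, 7)); // fine again once the borrow ends
}
```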


> So I guess the moral of the story is, if you want to write a greenfield high performance application, Go or Rust should be your choice compared to C++.

I don't see how this generalization follows from the three examples you've cited?


Wishful thinking would be my guess.

Rust in production use tends to be found in circumstances where there is no need for mature tooling, and no expectation of a need to hire experienced coders. Of course there are exceptions.


It is not that hard for C++ developers to pick up Rust. The borrow checker makes explicit the safety requirements that already exist, implicitly, in C++.

The tooling does need to be more mature but for many projects the pioneer tax paid to improve the tooling may be worth it. Besides, it's not as if C++ dependency management is very mature either — most people build custom tooling for that anyway.
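As a minimal illustration (example mine) of one such requirement made explicit: in C++, mutating a vector while holding a reference into it can reallocate the buffer and leave the reference dangling; the borrow checker turns that implicit rule into a compile error.

```rust
// Don't mutate a container while a reference into it is still live -- in
// C++ this rule exists too, but push_back will happily dangle the
// reference; in Rust the compiler rejects the program instead.
fn first_word(words: &[String]) -> &str {
    &words[0]
}

fn main() {
    let mut words = vec!["hello".to_string()];
    let first = first_word(&words);
    // words.push("world".to_string()); // rejected: cannot borrow `words`
    //                                  // as mutable while `first` is live
    assert_eq!(first, "hello");
    words.push("world".to_string()); // fine once `first` is no longer used
    assert_eq!(words.len(), 2);
}
```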


> If you want to write a greenfield high performance application, Go or Rust should be your choice compared to C++.

Go is not in the same performance category as Rust and C++. It has a more limited set of use cases where it works well, due to its mandatory runtime. Rust and C++ are more general purpose.


Performance is one reason to use Rust, but it's not the only one. It enforces correctness at compile-time, eliminating a ton of easy-to-miss bugs. If your application is multi-threaded, it also enforces guarantees about thread safety, eliminating another class of bugs. If you care about correctness, that is another reason to use Rust.
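A small sketch (function name is mine) of the thread-safety guarantee: anything crossing a thread boundary must be Send + Sync, so swapping `Arc<Mutex<_>>` for `Rc<RefCell<_>>` here would be a compile error rather than a data race found in production.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Shared mutable state across threads must be wrapped in thread-safe
// types; the compiler enforces this, eliminating data races by construction.
fn parallel_count(threads: usize, per_thread: usize) -> usize {
    let total = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let total = Arc::clone(&total);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *total.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *total.lock().unwrap();
    n
}

fn main() {
    assert_eq!(parallel_count(4, 1000), 4000);
}
```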


When it comes to performance, Go is more comparable to Java than Rust.


Grin (mimblewimble) is also written in Rust https://github.com/mimblewimble/grin


Surprised it's not OCaml; they've invested highly in OCaml.


In their frontend, yes. The Flow JS typechecker, and (kind of) Reason.

Backend and infrastructure sees more Rust though. Eg the Mercurial server implementation.


How’s the OCaml multi core rollout going these days? IIRC shared memory parallelism requires FFI still.


The primary reason they stated is memory safety as this is gonna be really critical stuff dealing with money. Also no GC.


OCaml is more memory safe than Rust (as the "unsafe" parts are harder to access). But it does have GC, so GC is likely the primary reason.


I wouldn't label Rust as being less safe than OCaml. You have to explicitly opt in to any unsafe operations. More often than not, unsafe blocks are needed because the compiler cannot check all of the invariants involved.


> You have to explicitly opt-in to any unsafe operations

Many of the unsafe operations don't exist in OCaml. Often because the GC makes them unnecessary.


You can implement all of the unsafe operations in OCaml using the unsafe Obj.magic cast: http://caml.inria.fr/pub/docs/manual-ocaml/libref/Obj.html


Does OCaml even have a way of shedding memory safety?


OCaml is an "obligate-GC" language.

Most languages that offer optional GC, including Rust (with its Rc and Arc smart pointers) and C++ (with std::shared_ptr), implement it mainly via reference counting.
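A minimal Rust sketch (function name mine) of that opt-in reference counting, analogous to C++'s std::shared_ptr:

```rust
use std::rc::Rc;

// Rc tracks shared ownership at runtime. Cloning bumps a count rather
// than deep-copying; the value is freed when the count reaches zero.
// Returns the strong count while a second handle is alive, then after
// it is dropped.
fn counts_while_shared() -> (usize, usize) {
    let a = Rc::new(String::from("shared"));
    let b = Rc::clone(&a); // bumps the count; no deep copy
    let during = Rc::strong_count(&a);
    drop(b);
    (during, Rc::strong_count(&a))
}

fn main() {
    assert_eq!(counts_while_shared(), (2, 1));
}
```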


Sure. Obj.magic is a bit cast.


Yes, the `Unsafe` module (well, at least used to have it... fairly undocumented... maybe it's been removed?).


Oops, I misremembered the name. @pcwalton is right, it's called `Obj`.


I mean, you can just use #[forbid(unsafe_code)] if you really want to. You do have to audit your dependencies, but that's true with OCaml as well (call Obj.magic and all bets are off).
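A minimal sketch of what that looks like at the crate root (note the inner-attribute `#![...]` form there):

```rust
// Crate-level lint: any `unsafe` block in this crate (dependencies
// excluded) becomes a hard compile error, and unlike `deny`, `forbid`
// cannot be overridden with an `allow` further down.
#![forbid(unsafe_code)]

fn double(x: i32) -> i32 {
    // unsafe { std::mem::transmute::<i32, i32>(x) } // would not compile
    x * 2
}

fn main() {
    assert_eq!(double(21), 42);
}
```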



And Cardano is already working closely with the Haskell community.


Great. I think this is the kind of booster Rust sorely needed.


When security holes in Libra implementations surface, that will reflect badly on Rust.

There will of course be explanations why the holes are really not Rust's fault, and that bugs and security failings are possible in any language, but it will force a more realistic assessment of Rust's strengths.


There have already been such assessments. For instance, in https://hacks.mozilla.org/2019/02/rewriting-a-browser-compon... , Mozilla estimated that had a given browser component (specifically the CSS engine) been written in Rust, 51 of its 69 security bugs (73.9%) wouldn't have happened. The other 18 were things that the implementation language doesn't inherently help with.


The problem with Mozilla's C++ code is that it was developed in a time where memory-safety was an afterthought and they're still paying for that. Rewriting it in anything will offer a great opportunity of improving it.

For example, the real fix for their vulnerable code example is automatic bounds checking, just as Rust does it. Replacing [] with at() eliminates the vulnerability. Most C++ programmers are allergic to at(), so Mozilla could have created some container wrappers that do this by default. GCC, and I think also clang, has a macro which adds automatic bounds checking to the STL containers.


> Replacing [] with at, eliminates the vulnerability.

And for decades of C and C++ programmers, [] is still the most obvious way to index an array. And the most obvious way creates potential security vulnerabilities.

See Rusty Russell's classic "easy to use versus hard to misuse" posts: http://ozlabs.org/~rusty/index.cgi/tech/2008-03-30.html and https://ozlabs.org/~rusty/index.cgi/tech/2008-04-01.html .


Historically, C and C++ programmers didn't care much about safety, especially if they had to give up performance to reach it. This is changing in the newest C++ standards and tooling (e.g. clang's -Wlifetimes).

Bounds checking is slower, so that's the main reason why it's shunned. Rust made it mandatory, but from what I've seen there are people disabling that to squeeze some drops of performance nevertheless. Luckily, I think Rust also has an option of replacing indexing with iteration, thereby avoiding the performance hit. I think it currently does this a bit better than C++, but I might be wrong.

Anyway, my critique still stands, that blog post is really low effort and doesn't prove much. I've looked at some of the fixes for Firefox security bugs and the code reminds one of the typical older C++ projects with raw pointers everywhere. Perhaps implementing a browser requires all sorts of unsafe access patterns.


The Rust compiler can eliminate bounds checking anywhere it can prove that it doesn't need it. And beyond that, you always have the option of writing an unsafe block, and people sometimes do in carefully selected cases driven by profiling. (That also encourages keeping such code carefully contained and well-scrutinized.)

I'm not arguing that bounds-checking should be mandatory. I'm arguing that bounds-checking should be the default, and you should have to knowingly bypass it, rather than the other way around.
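A sketch (helper names mine) of checked-by-default with a knowing opt-out: `v[i]` panics out of range, `v.get(i)` returns an Option, and skipping the check requires writing `unsafe` where reviewers (and grep) will see it.

```rust
// Checked access: never panics, never reads out of bounds.
fn lookup(v: &[i32], i: usize) -> Option<i32> {
    v.get(i).copied()
}

// The explicit opt-out: caller must guarantee i < v.len().
fn lookup_unchecked(v: &[i32], i: usize) -> i32 {
    unsafe { *v.get_unchecked(i) }
}

fn main() {
    let v = [10, 20, 30];
    assert_eq!(lookup(&v, 1), Some(20));
    assert_eq!(lookup(&v, 9), None); // in C++, v[9] here would be UB
    assert_eq!(lookup_unchecked(&v, 2), 30);
}
```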


Meh. There are probably very few C++ codebases that have received more security attention than those of browsers like Firefox. Bug bounties, fuzzing, tons of hardening work.

It's really just not accurate to say "they weren't writing good C++" or "it's a legacy codebase". Those could both be true things, but it's totally glossing over the hundreds of millions of dollars invested in fixing that codebase.


Tons of hardening work to plug an endless sieve.

If browsers had remained sane, allowing one to download HTML, CSS and pictures instead of running so-called applications, they would have been secured a long time ago and we would have had a decent internet instead of this spyware and ads infested mess.

With the rate of change that they're subjected to, they will never be secure. In 10 years Firefox will perhaps be 50% Rust and still full of vulnerabilities.


> If browsers had remained sane

This has no bearing on the issue. Yes, browsers are fast moving codebases. No one is arguing otherwise.

> In 10 years Firefox will perhaps be 50% Rust and still full of vulnerabilities.

Someone has already cited a case study from Mozilla showing a significant improvement in memory safety for components moved over to Rust, so all evidence at this point is to the contrary.


If browsers had refused to run applications, we'd have a lot more native applications that don't have any sandbox at all.


Native applications do have sandboxes, macOS definitely has this, Windows has this, iOS, Android...

App stores can also perform various checks that will never happen on web sites. The web is really the wild west; why do you think browsers are the main attack vector nowadays?


I wrote "more realistic" deliberately.

C++ is coded quite differently today than when Gecko was written -- at least, in places where the official Coding Standard does not forbid it. Mozilla and Google's do.

After the more-realistic assessment, claims about Rust's inherent safety will need to be walked back some. It will still be an enormous improvement on C.


Right. And I think it is reasonable thing to happen. Languages can't mature without big public successes and failures.


Tangential:

Is there an easy way to map all links of form www.reddit.com/* to old.reddit.com/* on all my devices? (I assume /etc/hosts would work for linux?)


https://addons.mozilla.org/en-US/firefox/addon/old-reddit-re...

https://chrome.google.com/webstore/detail/old-reddit-redirec...

In order to redirect it on all your devices, install Firefox. It can install extensions on Android, too. Should solve it for everything except smart TVs (Firefox for Amazon TV can't install extensions) and iOS.


Well that's perfect. Thank you.


> (I assume /etc/hosts would work for linux?)

No, because both www.reddit.com and old.reddit.com (in fact, *.reddit.com) already map to the same IP address. They are distinguished through the Host HTTP header. The only way to do that mapping would be within the browser (or with a MITM proxy, which I wouldn't recommend).


Thank you


Log in. Then you can set a preference to always show old Reddit.


When possible, I try not to log in. I dislike targeted content or targeted ads, I hope clearing cookies helps with avoiding it.


Yes, you could write a proxy auto-config file[1], host it somewhere and have it rewrite the urls you want.

I think that's how an ad-blocker worked on iOS before they enabled the functionality.

[1] https://en.wikipedia.org/wiki/Proxy_auto-config


I wonder if they even considered Ada/SPARK 2014, or are programming languages like fashion trends, and Rust has the runway right now? I like Rust, but I would think the amount of work that has been put into Ada/SPARK 2014 would make it the better choice for this problem set.


I would imagine that Rust’s advantage over Ada here is in the second paragraph. I don’t know if they directly considered it or not.


Steve, I would agree; you are a large part of that momentum. Community is important, and the SPARK 2014/Ada community has proven itself to be very helpful, and has a long history behind it. I know big business decisions are sometimes made by advocates or evangelists, but you would hope somebody did their pros/cons matrix on top of that one criterion. SPARK 2014 has really surprised me in terms of what it brings to the table aside from Ada's legacy, which is also still evolving.


off-topic: do you think it's safe to assume consent to a site's cookie policy from clicking/scrolling, like on https://libra.org/ ? I actually like it because these banners are just annoying, but I'd be afraid to do something like this (the first time I checked out that site, I only noticed that something disappeared).


The language of Libre "Move" is inspired by logic. It makes sense that they would pick a language that is also inspired by linear logic.


Libra I thought?


They want to be trusted.


Bruh. It's Li𝘣𝘳𝘢.


I hate Facebook, but Rust is probably the best possible choice.



