Regarding the OP, I'll quote a comment from reddit: https://www.reddit.com/r/rust/comments/8g6mam/krust_a_formal...
"It looks like they wrote an interpreter for a small subset of Rust in a particular formal system. But the subset is extremely small, and it doesn't look like they derived any novel results from this. The examples they give, of proving the running time of a GCD algorithm, would work in pretty much any procedural language; it's pretty much just a syntax modification away from the same example in any language.
They implement a very simple model of borrow checking, but their subset doesn't seem to support references in type signatures, so it doesn't seem like they can even model any inter-procedural effects of the borrow checker, just some purely local restrictions on aliasing.
I suppose that this is a good first step if you want to write a formal semantics of Rust in 𝕂, but it seems like there's not really enough here yet to be of interest."
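For context on how small the quoted example is: the GCD algorithm mentioned is presumably the classic Euclidean algorithm, which in Rust might look something like this (a sketch; the paper's exact subset and syntax may differ):

```rust
// Euclid's algorithm. The property being proved in the paper is about
// running time; the number of loop iterations is O(log min(a, b)).
fn gcd(mut a: u64, mut b: u64) -> u64 {
    while b != 0 {
        let t = b;
        b = a % b;
        a = t;
    }
    a
}

fn main() {
    assert_eq!(gcd(48, 18), 6);
    assert_eq!(gcd(270, 192), 6);
    println!("gcd(48, 18) = {}", gcd(48, 18));
}
```

As the quoted comment says, nothing here is Rust-specific: the same loop transliterates almost character-for-character into C, Java, or Python.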
I think the work is interesting.
Modeling the whole language would be a huge endeavor, so you need people to help with that. But how would anyone know that someone has started a project like that if they don't publish something about it?
In my opinion the whole story about only accepting "novel" and "readily applicable" results in academia isn't quite helpful. A "Publish early, publish often" strategy could produce more and better results in the long term I think.
If a result isn't useful or novel on its own, that's no issue imo. It can still become an important building block of some later innovation! Reusable work done is work done, regardless of its "novelty" or direct applicability.
Also the 𝕂 Framework is amazing for formal reasoning about languages so this work is indeed an important first step.
That (at least the "publish early" part) is already the case for a lot of programming language research in things like semantics or program analysis. Published papers often treat a toy language (no arrays, no dynamic memory allocation, etc.) and leave the rest for "future work". Sometimes that future work does materialize, often it doesn't.
It's a difficult situation for academia: If the basic system is too hard to extend to a full language, you will end up without final results, and from the point of view of your funders you have just wasted a lot of time and money. If it's too easy to extend to a realistic language, you might build a tool but end up doing "only engineering" without having new breakthroughs to publish, again seen in academia as wasting research funds.
For many people in academia, the incentives for building systems really applicable to real-world languages are just not there.
I fail to see how publishing repeats of the same old conclusions that boil down to applying the same basic and well-known techniques to specific subsets of a trendy programming language, and in the process failing to provide anything meaningful, does anything to advance science.
In fact, what does the article add to the understanding of the problem?
Exactly what I’d hoped someone would do. Well, the first step anyway. The second step was doing a Rust semantics in terms of the C semantics already done. That allows quite a few things like Rust-to-C compilers, defeating Karger-Thompson attack, and potentially verification of interactions of Rust and C FFI code. On my end down the line, also use of C tooling for verification of equivalent Rust code.
EDIT: TIL this is fixed in Go 1.10 which came out in February: https://golang.org/doc/go1.10#runtime
If rust continues to improve ergonomics and library support, I don't see anything stopping it from replacing C except in niche embedded systems that don't get LLVM support.
It is a network effect, regardless of how one might think about Go.
The fact that Docker and K8s are written in Go doesn't provide anything for anyone considering writing new container projects...
Trying to avoid Go in such scenarios is futile.
E.g. maybe they want to use some e.g. Docker provider libs that are mainly/only available as go packages?
Else, why would a Docker/K8s deployment impose Go usage for anything else regarding dev-ops?
In other words what I'm asking is: isn't their use of Go just an implementation detail of Docker/K8s, as far as dev-ops are concerned?
And you might happen to work in one of those companies that have IT defining what programming languages are available on devenv images.
I expect rust to encroach on those quite a lot, as well as less system-y C code (such as utils).
It's still early days. Rust is showing a lot of promise, for me personally, when writing high performance web services.
If having a high profile industrial backer makes a language not niche then OCaml is not a niche language. (I wish this were the case.)
I used to write a lot of C++; every team I worked on always had a problem finding people who could write native code well, and I'm pretty sure that hasn't changed in ~3 years or so. Why does something have to be wildly popular to be considered successful? I've found most of this industry is more about using the right tool for the job than trying to fit one language into every domain space.
That said I think there are a few places where you'll see Rust really succeed in the next 3-8 years:
1. Domains that have hard memory or performance constraints. Rust matches C/C++ for performance (and I say this as a hardcore C++ perf nut) but without the segfaults/corruption (unless you step into unsafe land).
2. Products that need to go to multiple platforms but can't afford a GC. Rust has by far the best cross-platform development story I've seen in a natively compiled language. I've personally built things that run on WASM/Win32/OSX/Android/Linux across a wide range of hardware, and embedded in C#/Java/C++/C.
3. Places where safety/memory corruption is unacceptable. You just get fewer of these bugs from Rust, full stop. Yes, you can do it in C/C++, but you'd better be willing to have a QA org larger than your dev team.
Firefox is a perfect example of how these various concerns intersect. I used to be a serious C++ developer working on high performance, memory constrained platforms. In the past two years anything green field for me has been Rust based and I've been 100% satisfied. In both personal projects and some professional work Rust has never come up short.
Is it hard to write? Sure.
Do I spend less time debugging/tracking down mem corruption/dealing with poor ownership?
>What makes you think that the languages which Rust displaces aren't also niche?
Any definition of "niche" that includes C or C++ is meaningless.
Really, you might want to check your echo chamber a bit. Native programming is not the norm in the industry, most fresh grads have touched C or C++ once or twice and know the theory but none of the practice. Heck back when I was in college we had people coming to our community college to take the C course there since UCSD didn't offer it as part of their curriculum.
Now, does the majority of software run on C/C++ based systems? Totally.
There's an inverted pyramid in software. The lower down the stack you go, the more specialized and domain-specific you get. Component/app/framework reuse means that you need fewer people at the bottom than you do at the top; otherwise we'd all be fullstack web/frontend/backend/driver/kernel/vhdl developers.
Either way I think Rust has a rich and vibrant community which isn't going away anytime soon. Would it be nicer to see wider adoption? Sure, but I'm not worried about it either.
Thus I can easily see rust replacing C/C++ as the default language for writing 'high performance libraries' in, for use in Ruby/Python/etc. It's usually more similar to what higher level programmers are used to too. You get to use functional code for a start.
But also exactly as in Firefox - libraries / modules can be written in Rust, and then brought in to replace existing C++ code. And I'm sure the ergonomics of this process will improve too.
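As a sketch of what such a library boundary can look like (function and symbol names here are hypothetical, not from any particular project): a Rust function exported with the C ABI can be compiled as a cdylib and loaded from Python via ctypes or from Ruby via fiddle/ffi, exactly like a C extension would be.

```rust
// Exported with the C ABI so Python (ctypes/cffi), Ruby (fiddle/ffi),
// or any C caller can load it from the compiled shared library.
#[no_mangle]
pub extern "C" fn dot(a: *const f64, b: *const f64, len: usize) -> f64 {
    // Safety: the caller must pass valid pointers to `len` elements each.
    let a = unsafe { std::slice::from_raw_parts(a, len) };
    let b = unsafe { std::slice::from_raw_parts(b, len) };
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn main() {
    // Exercising the exported function from Rust itself.
    let a = [1.0, 2.0, 3.0];
    let b = [4.0, 5.0, 6.0];
    assert_eq!(dot(a.as_ptr(), b.as_ptr(), 3), 32.0);
}
```

The unsafe pointer handling stays confined to the FFI edge; everything behind it is ordinary safe Rust, which is the ergonomic win over a C extension.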
If a C++ project lead / manager can say, in all honesty:
"The x component needs to be re-written for whatever reason - either major refactoring for a desired feature, or performance, or to split out to use in another product. We can either do it in C++ or Rust, and Rust will guarantee no new segfaults, we can trivially parallelise a lot of it for improved performance, etc. etc., with no major disadvantages..." then Rust will inevitably grow.
If you want a pure function? Just don't pass/use anything that's mut.
Need to break that contract for performance? You still have single-ownership which gives you 90% of the above but with no performance penalty.
You still get logic errors/etc but it's a great no-compromise solution when you can afford the time to get your ownership right.
Not quite. There are still mutable globals (they just have to have the appropriate "I'm MT-safe" marker traits); and there's nothing preventing any function from performing any I/O that it desires. Even better, once specialization lands, generic trait-functions will be able to behave differently when called on different types, even if the special behavior isn't valid given only the public type-signature's assumptions.
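To illustrate the mutable-globals point (a contrived sketch): an atomic static is `Sync`, so no `unsafe` is needed, and a function whose signature looks pure can still mutate it.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// A mutable global: AtomicU64 is Sync ("I'm MT-safe"), so safe Rust
// allows it as a static without any unsafe blocks.
static CALLS: AtomicU64 = AtomicU64::new(0);

// The signature takes no &mut and looks referentially transparent,
// yet the function mutates global state (and could just as well do I/O).
fn add(a: u64, b: u64) -> u64 {
    CALLS.fetch_add(1, Ordering::Relaxed);
    a + b
}

fn main() {
    assert_eq!(add(2, 3), 5);
    assert_eq!(add(1, 1), 2);
    println!("add was called {} times", CALLS.load(Ordering::Relaxed));
}
```

So the absence of `mut` in a signature rules out mutation of the arguments, not all side effects.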
Heck, if it is that small, but has that big an impact, that’s saying something good too, isn’t it?
I don't believe that Go or Swift are growing because of their technical merits. Swift is essentially forced by Apple upon any developer who needs to develop applications that run on Apple products, and Go, in spite of its pedigree, can't be sold to newcomers without waving the Google brand.
> Widely displacing C is a pipe dream.
For old projects, sure. Yet for greenfield projects, given the choice, Rust does offer quite a lot of sought-after features that C is sorely lacking.
It's also a very good language, on par with Rust, with the creator of LLVM behind it, and the creator of Rust (and many other Rust people) working at Apple on Swift.
>Go, in spite of its pedigree, can't be sold to newcomers without waving the Google brand.
And yet, Dart never got anywhere, despite the "Google brand" (and far more support initially, including official branding, an IDE, and having their #1 compiler guy working on it).
Displacing C for new projects though, it's not.
And it has increasingly done it for major projects from major companies (not just Mozilla).
This is already great for such a new language.
It is surprising to me, since academia often trails by many years.
Two classes off the top of my head from this spring:
Hearing rumbles of some more in the future...
Only current one I could find is
For Haskell, the best I've heard is Facebook for some custom dev tools and Standard Chartered. I'm sure there's more, but...
And lots of big companies use Haskell, just like Rust - Facebook has used it for a while now.
It's important to remember that while rust may dominate the HN headlines, it's still a very niche language. That said, it's also showing a lot of promise in a world that largely dismisses programming languages as being irrelevant.
But as phrased in the abstract, I first thought that you passed all of the Rust tests, or at least a significant majority of them. If I were a reviewer of this paper, I would complain about this loudly, including to the PC chair/editor. As a fellow researcher, I urge you to change this wording in the abstract to something less misleading.