In my opinion, Rust is about doing things right. It may have been about safety at first, but I think it is more than that, given the work of the community.
Yes, I know there's a right tool for every job and it's impossible to cover all use cases, but IMO Rust is striving for iPhone-like usability.
I have never seen a more disciplined and balanced community approach to creating a PL. Everything seems to be carefully thought out and iterated on. There is a lot to be said for this (although, ironically, I suppose one could call that safe)!
A PL is more than the language. It is the works, the community, and the mindshare around it.
If Rust were concerned only with safety, I don't think so much work would go into making it consumable for everyone, with continuous improvements to compiler error messages, easier syntax, and better documentation.
Rust is one of the first languages in a long time that makes you think different.
If it is just safety... safety is one overloaded word.
The importance of Rust is that it has raised the baseline for low-level languages in the modern age. If any future systems language emerges that doesn't feature memory safety, that will have to be a deliberate choice to be defended, rather than just an implicit assumption about how the world works.
With that said, can you recommend a good source to really understand best practices and patterns for ownership and borrowing? I feel that's the biggest hurdle to using Rust (at least in my case).
The rise of "it demos well so we should use it" is in a lot of ways troubling. The inflection point for productivity doesn't need to be "five minutes in" to be worthwhile if you're doing something that is, itself, worthwhile.
How about http://rust-lang.github.io/book/ ?
As someone who has yet to do more than minimal dabbling in the language, this is a very positive message. It expresses that there is some work to learning this, so don't be put off if you experience it, it's normal, and the work required to learn it pays off in the end.
That's probably a more appropriate message than claiming it's not hard. I don't think it's appropriate to tell people that learning pointers in C and C++ is "easy". It's not "hard", but it's not necessarily easy for some people. It requires a specific mental model, and depending on how they learned to program, it may be more or less easy for them to wrap their heads around. Afterwards, it's easy and makes sense. I assume the borrow checker presents a similar learning hurdle. That doesn't mean we should forget what it's like before we've learned it, though (and at this point, there are probably a lot of people in the midst of learning Rust who haven't quite fully internalized the borrow checker).
That said, I still occasionally have problems where I feel like I'm doing something "dirty" or "hacky" just to satisfy the borrow checker. It's easy to program yourself into a corner and then find yourself calling `clone()` (the situation, I've been told, has gotten much better in recent releases with improvements to the borrow checker, but alas I haven't had a chance to play much with Rust in nearly a year).
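A minimal sketch (all names made up) of the kind of corner described above: we want to read an element of a `Vec` and then push to the same `Vec`, and `clone()` is the easy way out.

```rust
// Hypothetical illustration of cloning to end a borrow early.
fn add_name(names: &mut Vec<String>) -> String {
    // Writing `let first = &names[0];` here would keep `names`
    // borrowed immutably, and the `push` below (a mutable borrow)
    // would be rejected. Cloning produces an owned String, so the
    // borrow ends on this line, at the cost of an extra allocation.
    let first = names[0].clone();
    names.push(String::from("bob"));
    first
}

fn main() {
    let mut names = vec![String::from("alice")];
    let first = add_name(&mut names);
    println!("{} {}", first, names.len()); // alice 2
}
```

With non-lexical lifetimes, many (though not all) patterns like this no longer need the clone, which is the borrow-checker improvement the comment alludes to.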
Another thing that I still find difficult is dealing with multiple lifetimes in structs, to the point that I usually just say "to hell with it" and wrap everything in an `Rc<T>`. And sometimes there's simply no safe way (afaict) to do some mutation that I want to do without risking a panic at runtime (typically involving a mutable borrow and a recursive call), which leads to a deep re-thinking of some algorithm I'm trying to implement. That's not Rust's fault, though—it's a real, theoretical problem that arises in the face of mutation. In time, I'm sure there will be well-understood patterns for handling such cases.
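A hedged sketch of that `Rc<T>` escape hatch (types and names invented for illustration): `Rc` gives shared ownership, so the struct needs no lifetime parameters, and `RefCell` moves borrow checking to runtime, which is where the panic risk comes from.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A value that several owners may point at.
struct Counter {
    value: i32,
}

fn bump(shared: &Rc<RefCell<Counter>>) {
    // borrow_mut() panics at runtime if another borrow is still
    // active; that is exactly the risk described above when a
    // mutable borrow is alive across a recursive call.
    shared.borrow_mut().value += 1;
}

fn main() {
    let a = Rc::new(RefCell::new(Counter { value: 0 }));
    let b = Rc::clone(&a); // a second owner, no lifetimes involved
    bump(&a);
    bump(&b);
    println!("{}", a.borrow().value); // 2
}
```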
If you look at Rust from a systems programmer perspective and compare it with the systems languages OP lists then, yes, safety is THE most radical feature.
But Rust can compete on so many more levels: web services and user-facing applications, for example. Languages competing in that space usually bring memory safety, so it's kind of a non-issue there. Safety enables Rust to be a viable choice for these tasks, but it needs more than that to be on par with the other languages. And Rust has plenty of things going for it, so there's nothing wrong with no longer leading with the safety card (since that is expected anyway) and instead painting Rust as a language that is actually fun to work with.
How? There are languages with more expressive, higher-level type systems (Haskell/OCaml, presumably). There are languages with much more mature libraries, ecosystems, and tooling (C#/Java). There are languages with both (F#/Scala).
What is it that makes Rust a good applications programming language? You said it yourself: GC doesn't really matter much in this space, GC-based languages are simply more elegant, and their tooling is far more mature. A runtime also doesn't matter much here, and with the recent changes to .NET it can be avoided anyway.
This Rust fanboyism is turning into the new Node hype: "use Node for everything, Node is web-scale fast because it uses event-loop I/O instead of thread-based I/O"; "use Rust for everything because the type safety is literally the best and it's the only language with a strong type system out there." I get it: it's new and shiny, and I like it too. It has a strong argument to make in systems programming (the designers are doing a good job of making trade-offs that let you retain low-level control while still having memory safety), but those trade-offs are just that, and they come at the expense of higher-level features. Higher-level languages don't need to make them because they don't pretend to be systems programming languages.
I have to agree that the "Rust all the things!" game is getting old.
Clojure/Clojurescript is used for long running processes and web development. So backend/frontend stuff. I also use it for fun experiments, the REPL is great at that.
Rust is used when performance matters, when I want the simplicity of running a native executable with no heavy dependency, and when I need to write low level components.
Python is my go-to scripting language for quick scripts, hacks, and messing around. It also serves my scientific-computing needs: data mining, visualization, etc. It's also the best language I've found for coding interviews or practice; Python is so easy to whiteboard.
F# is good for simply knowing an ML language and when I want more functional flair in the .Net world.
C# and Java are used to pay the bills, as they are easiest to find jobs for.
There are valid alternatives for most of those categories, but I highly recommend everyone invest in knowing one language for each.
I haven't switched to Rust yet (I still deal with C/C++ at work on "legacy" code, and I'm having too much fun with Lua for my own stuff) but I don't expect to pick it up "quickly" and I'm sure I'll be trying to write C in Rust for some time. It comes with the territory.
If what you are suggesting is true, isn't the biggest problem by far the fanboyism posing as knowledge? Wouldn't complaining about Rust be like living in the age of alchemy and complaining about someone's particular potion? Isn't the epistemological squishiness of the entire field the biggest problem by far?
I actually wish Rust would embrace its systems niche even more: move the stdlib to crates and make no_std the default mode. Personally, I see no reason to market Rust for web apps or GUI stuff; it can't/won't compete with Rails/Qt there for years to come, if ever.
Why should Cargo be any different? It is solving a problem I don't have (Debian, Ubuntu, OpenBSD, and even illumos all have acceptable package management) and creating a massive new problem (there is a whole thread below this one talking about Rust dependency hell between nightly and stable, and the thread links to other HN articles!). From my perspective, all this work is wasted just because some developers somewhere use an OS that doesn't support apt or ports.
Sorry this is so ranty, but I really want to know if anyone has had luck using Rust with their native package manager.
Let's say I want to build a piece of software that depends on some software library written in C at version 1.0.1. It's distributed through my system package manager, so I sudo apt-get install libfoo.
~~ some time later ~~
Now let's say I want to build a different piece of software that also depends on foo, but at version 1.2.4. I notice that libfoo is already installed on my system, but the build fails. After a quick sudo apt-get install --only-upgrade libfoo, this piece of software builds.
~~ Even later ~~
When I revisit the first project to rebuild it, the build fails, because this project hasn't been updated to use the newer version yet.
I'm fairly inexperienced with system package managers, but this is the wall I always hit. How should I proceed?
Anyway, Debian/Ubuntu has multiple fallbacks for this situation:
b. parallel versions for libraries that break API compatibility (libfoo-1.0...deb, libfoo-1.2...deb that can coexist).
c. install non-current libfoo to ~/lib, and point one package at it (not really debian-specific)
d. debootstrap (install a chroot as a last resort -- this is better than "versioning packages per-project" from an upgrade / security update point of view, but worse from a usability perspective -- you need to manage chroots / dockers / etc).
I suspect the per-project versioning system is doing b or d under the hood. b is clearly preferable, but hard to get right, so you get stuff like python virtual environments, which do d, and are a reliability nightmare (I have 10 machines. The network is down. All but one of my scripts run on all but one of the machines...)
A long time ago, I decided that I don't have time for either of the following two things:
- libraries that frequently break API compatibility
- application developers that choose to use libraries with unstable APIs that also choose not to keep their stuff up to date.
This has saved me immeasurable time, as long as I stick to languages with strong system package manager support.
Usually, when I hit issues like the one you describe, it is in my own software, so I just break the dependency on libfoo the third time it happens.
When I absolutely have to deal with one package that conflicts with current (== shipped by os vendor), I usually do the ~/lib thing. autotools support something like ./configure --with-foo=~/lib. So does cmake, and every hand-written makefile I've seen.
No, what that thread is talking about is that somebody wrote a library to exercise unstable features in the nightly branch of the Rust compiler, and that inspired somebody else to write a sky-is-falling blogpost claiming that nightly Rust was out of control and presented a dozen incorrect facts in support of that claim, so now we have to bother refuting the idea that nightly Rust is somehow a threat to the language.
As for the package manager criticism, the overlooked point is that OS package managers serve a different audience than language package managers. The former are optimized for end users, the latter for developers. The idea that they can be unified successfully is as yet unproven, and making a programming language is already a hard enough task that attempting to solve that problem too is just a distraction.
Anyway, it sounds like I stepped on a FUD landmine. Sorry.
It sounds like you work in this space. From my perspective, debian successfully unified the developer and end-user centric package manager in the '90s, and it supports many languages, some of which don't seem to have popular language-specific package managers.
What's missing? Is it just cross-platform support? I can't imagine anything I'd want beyond apt-get build-dep and apt-get source.
That's the problem with FUD, it gets everywhere and takes forever to clean up. :)
> I got the impression that it is not trivial to backport packages from the nightly tree to the stable tree
Let's be clear: stable is a strict subset of nightly. And I mean strict. All stable code runs on nightly, and if it didn't, that would mean that we broke backwards compatibility somehow. And even if you're on the nightly compiler, you have to be very explicit if you want to use unstable features (they're all opt-in).

Furthermore, there's no ironclad reason that any given library must be on nightly, in that boring old every-language-is-Turing-complete way; people use unstable features because they either make their code faster or make their APIs nicer to use. You can "backport" them by removing those optimizations or changing your API, and though that seems harsh, note that people tend to clamor for stable solutions to their problems, so if you don't want to do it then somebody else will fork your library, do it, and steal your users.

There are strong incentives to being on stable: since stable code works on both nightly and stable releases, stable libraries have strictly larger markets and therefore mindshare/userbase; and since stable code doesn't break, library maintainers have much less work to do.

At the same time, the Rust developers actively monitor the community to find the places where relatively large numbers of people are biting the bullet and accepting a nightly lib for speed or ergonomics, and they then actively prioritize those unstable features (hence why deriving will be stable in the February release, which will get Serde and Diesel exclusively on stable, which together represent the clear plurality of reasons-to-be-on-nightly in the wild).
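To illustrate the opt-in rule described above: even on a nightly compiler, unstable features must be requested explicitly with a crate attribute (the feature name in the comment below is hypothetical). Without such an attribute, only stable APIs are available, so the same code compiles identically on stable and nightly.

```rust
// Even on nightly, unstable features require an explicit opt-in
// at the top of the crate, something like:
//
//     #![feature(some_unstable_feature)]   // hypothetical name
//
// Absent that attribute, only stable APIs are in play, so this
// builds the same way on both channels:
fn stable_sum() -> i32 {
    (1..=4).sum() // stable iterator API only
}

fn main() {
    println!("{}", stable_sum()); // 10
}
```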
> What's missing?
I've already typed enough, but yes, cross-platform support is a colossal reason for developers favoring language-specific package managers. Another is rapid iteration: it's way, way easier to push a new version of a lib to Rubygems.org than it is to upstream it into Debian. Another is recency: if you want to use the most recent version of a given package rather than whatever Debian's got in stock, then you have to throw away a lot of the niceties of the system package manager anyway. But these are all things users don't want; they don't want to be bleeding-edge, they don't want code that hasn't seen any vetting, and they really don't care if the code they're running isn't portable to other operating systems.
I think a more accurate assessment is that both Red Hat and Debian extended their repositories to cover enough packages that developers often opt for the easy solution and use distribution packages instead of language-package-manager-provided ones, because it's easy, and there are some additional benefits if you are mainly targeting the same platform (and, to some degree, distribution) that you are developing on.
Unfortunately, you then have to deal with the fact that some modules or libraries invariably get used by parts of the distribution itself, making their upgrade problematic (APIs change, behavior changes, etc.). This becomes especially painful when using or targeting a platform or distribution that provides long-term support, where you could conceivably have to deal with 5+-year-old libraries and modules that are still in use. Supporting different versions sometimes necessitates packaging multiple versions of a module or library, but that's a pain for package maintainers, so they tend to do it only for very popular items.
For a real, concrete example of how bad this can get, consider Perl. Perl 5.10 was included in RHEL/CentOS 5, released in early 2007. CentOS 5 doesn't go end of life until March 2017 (10 years, and that's prior to extended support). Perl is used by some distribution tools, so upgrading it for the system in general is problematic and needs to be handled specially if all provided packages are expected to work (a lot of things include small Perl scripts, since just about every distro includes Perl).

This creates a situation where new Perl language features can't be used on these systems, because the older Perl doesn't support them. That means module authors avoid the new features if they hope to have their module usable on these production systems. Authoring modules is a pain because you have to program as if your language hasn't changed in the last decade if you want to actually reach all your users.

Some subset of module authors decide they don't care; they'll just modernize and ignore those older systems. The package maintainers notice that newer versions of these modules don't work on the older systems, so core package refreshes (and third-party repositories that package the modules) don't include the updates, and possibly not the security fixes either, if it's a third-party repository without the resources to backport a fix. If the module you need isn't super popular, you might be SOL for a prepackaged solution.
You know the solution enterprise clients take for this? They either create their own local package repository, package their own modules, and add that to their system package manager, or they deploy every application with all dependencies included so it's guaranteed to be self-sufficient. The former makes rolling out changes and system management easier, but the latter provides a more stable application and developer experience. Neither is perfect.
Being bundled with the system is good for exposure, but it can be fairly detrimental to keeping your user base up to date. It's much less of a problem for a compiled language, but it still shows up, to a lesser degree, as library API changes.
Which is all just a really long-winded way of saying the problem was never really solved, and definitely not in the '90s. What happened is that the problem was largely reduced by the increasing irrelevancy of Perl (which, I believe, was greatly accelerated by this). Besides Python, none of the other dynamic languages (which, of course, are more susceptible to this) have ever reached the ubiquity Perl did in core distributions. Python learned somewhat from Perl in this regard (while suffering from it at the same time), but it also has its own situation (2->3), which largely overshadows this, so it mostly goes unnoticed.
I'm of the opinion that the problem can't really be solved without very close interaction between the project and the distribution, as with .NET and Microsoft. But that comes to the detriment of other distributions, and it still isn't the easiest thing to pull off. In the end, we'll always have a tug-of-war between what's easiest for the sysadmins/users and what's easiest for the "I want to deploy this thing elsewhere" developers.
I think the language-specific ones will win for developer-oriented library management for platform-agnostic language environments.
My theory is that each language community thinks it will save them time to have one package manager to rule them all instead of just packaging everything up for 4-5 different targets.
The bad thing about this is that it transfers the burden of dealing with yet another package manager to the (hopefully) tens or hundreds of thousands of developers who consume the libraries, so now we've wasted developer-centuries reading docs and learning the new package manager.
Next, the whole platform agnostic thing falls apart the second someone tries to write a GUI or interface with low-level OS APIs (like async I/O), and the package manager ends up growing a bunch of OS-specific warts/bugs so you end up losing on both ends.
Finally, most package-manager developers don't seem to realize that they need to handle dependencies and anti-dependencies (which leads you into NP-complete territory fast), or that they're building mission-critical infrastructure that needs bulletproof security. This gets back to that "reinvent dpkg poorly" comment I made above.
In my own work I do my best to minimize dependencies. When that doesn't work out, I just pick a target LTS release of an OS, and either use that on bare metal or in a VM.
Also, I wait for languages to be baked enough to have reasonable OS-level package manager support. (I'm typing this on a devuan box, so maybe I'm an outlier.)
Is there anyone out there saying "builds only when connected to the internet so it can blindly download unauthenticated software... SIGN ME UP!"?
On the other hand, there is quite a dark cloud on the horizon with the stable vs. nightly split. You can't run infrastructure on nightly builds, or add nightly builds to distributions.
Rocket is a recent one. I talked with the owner of Rocket, and one of their goals was to help push the boundaries of Rust by playing with nightly features. With that sort of meta-goal, using nightly is sort of a prerequisite. Meh.
You can use almost all of the code-generation libs on stable via a build script. It's a tiny bit more annoying, but if it's a dependency nobody cares. A common pattern is to use nightly for local development (so you get clippy and nicer autocompletion) and make the library still work on stable via syntex, so that when it's used as a dependency it just works.
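A rough sketch of that dual-support pattern (the "nightly" Cargo feature name below is made up): the same function gets one of two bodies depending on a feature flag, so nightly users can opt into the unstable path while everyone else gets a portable stable fallback.

```rust
// Sketch of the stable/nightly dual-support pattern. With the
// hypothetical "nightly" Cargo feature off (the default), only
// the stable fallback is compiled, so the crate builds on stable.

#[cfg(feature = "nightly")]
fn checksum(xs: &[u32]) -> u32 {
    // a nightly-only optimized implementation would live here
    xs.iter().fold(0, |acc, x| acc.wrapping_add(*x))
}

#[cfg(not(feature = "nightly"))]
fn checksum(xs: &[u32]) -> u32 {
    // portable fallback for stable toolchains
    xs.iter().fold(0, |acc, x| acc.wrapping_add(*x))
}

fn main() {
    println!("{}", checksum(&[1, 2, 3])); // 6
}
```

The syntex approach mentioned above works similarly in spirit: a build script expands the code generation ahead of time on stable, while nightly users get compiler-native expansion.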
The most used part of the code generation stuff will stabilize in 1.15 so it's mostly not even a problem.
The looming dark cloud of stable vs. nightly only looks like a dark cloud to those outside the Rust community.
The article that made its way up Hacker News a while ago (https://news.ycombinator.com/item?id=13251729) got pretty much no traction whatsoever in the Rust community.
I have been extremely pleased with the rust community and the rust maintainers.
And no I was not paid by them to say this... :)
In 2017 we'll get Tokio and lots of Tokio-ready libraries. Some of them already work and compile with stable Rust. And maybe at the end of the year we can take a proper look at what we can do with Rocket, or Diesel...
Diesel does work on stable today, but its nightly features will be on stable in five weeks with 1.15.
Are you talking about impl Trait or some other language changes?
That said, if you want to play with nightly Rust, it's pretty trivial. Rustup https://www.rustup.rs/ makes it easy to install multiple Rust toolchains and switch between them, in a fashion similar to rvm and rbenv.
`rustup run nightly cargo build --release`
You were saying? In any case, there's no such thing as a stable vs. nightly split. There are pretty much zero libraries that require nightly, and the few that have a nightly option for optional features will no longer require it after the macros update lands.