Rust seems like a fine choice for this use case. However, the decision-making process is a bit odd. They considered Java, Lua, and Python, but not Go? I think Go is far more "Rust-like" than those other three, i.e. for people who want something one notch nicer than C++.
While I don’t disagree about the consideration of Go, calling Go Rust-like is really strange.
I think of Go as more Java-like than Rust-like. But even that’s a hard comparison. Really, Go is C-like with a garbage collector for memory safety.
Rust is Java- or C++-like, with a strongly typed compiler, support for generics (unlike Go), lifetimes for memory management and no GC (closer to C++), and a type system that enforces some constraints around concurrency. I guess they are similar in that they both (by default) create static binaries, but so do C and C++ (if you want).
Rust and Go seem to get lumped together because they both came on the scene around the same time, but that is pretty much where the comparison ends.
> Rust and Go seem to get lumped together because they both came on the scene around the same time
That, and that the golang authors at that time claimed that it was a "systems language". They would later go back on that by attempting to redefine what "systems language" meant.
I'm not anti-GC :) The language itself doesn't lend itself well to larger, complex designs, though, which I feel precludes it from more interesting projects, and we can see that in the fact that it's mainly getting used for "devops" kinds of work. Compare that to Rust or Java (e.g. GraalVM) or C#; the latter two weren't really advertised as "systems" languages, yet were used there anyway (maybe real-time Java is something else, though).
> Rust and Go seem to get lumped together because they both came on the scene around the same time, but that is pretty much where the comparison ends.
I don't think it has to end right there. It's definitely true that Rust has a much stronger type system and is more suitable for some kinds of applications (super-low-level code, libraries, things where we try to avoid allocations, etc.).
However, there is a certain set of applications for which both languages are well suited and potentially more appealing than the other mentioned languages (e.g. Java, C#, Python, etc.): applications where we want a low memory footprint and ahead-of-time compilation (to improve startup times and to avoid having to ship a big VM as well). Command-line applications and daemons fall into this category. And we can definitely see both Rust and Go gaining traction in those domains.
Real-time is a huge field. There is hard real-time, with < 1 ms deadlines. And there is soft real-time, which is often just "fast enough that the user doesn't complain", maybe somewhere between 100 ms and 1 s for various applications. For a cloud-connected device I assume it's in the latter category, since most transport protocols won't provide much better guarantees anyway. For those use cases most languages will be fine.
Both Java and Python have (limited) implementations for time-sensitive microcontrollers: chances are high your SIM card or your chip card is running Java, and I believe Python (MicroPython) is also in active use on some SoCs.
Python is mostly refcounted, and I think Python is useful to put in the table because it's a common language and provides a good baseline for user-friendly scripting language performance.
I guess I misunderstood the benchmark chart. The Java line shows it taking 500 MB, which is absurd, but I assumed that they were running on the target platform and that it therefore had 500 MB of RAM.
It's not clear what the chart is actually demonstrating, or why it had a "Source" link that just goes to a 4-year-old blog post about concurrency in Rust that contains no benchmarks.
My guess is the benchmark chart is just that: a benchmark. They didn't actually write their target program in 6 different languages. They probably just found a set of benchmarks that they believed to be at least somewhat representative. I really wish the article would actually explain this though.
Lua 5.1 has LuaJIT, which is quite fast. Performance was obviously a priority, and since LuaJIT doesn't support anything newer than 5.1, that's probably why they used 5.1 as the point of comparison.
There used to be some great benchmark comparisons including luajit on the programming language benchmark site[1], but I swear that site has gotten worse as time has gone on.
That would be one reason: you could compare more language runtimes. I think the core of my issue is that it used to provide more data points and give the user more power. I would think that the site is primarily targeted toward developers, and that we could handle a more data-heavy UI like it used to have.
* There used to be more languages and runtimes shown.
* The site makes decisions for you about which languages you want to compare, whereas before it used to give the user more power. It used to be easier to compare arbitrary languages with each other. I think you could see the "How many times slower?" graph on individual language pages, but it has been a long time since that was on there.
* Related to the above, it seems like it changed to a more mobile-friendly format. I'm not inherently against that, but if it means I get fewer points of comparison, I am.
* You used to be able to see examples of run-times on multiple machines. It's less relevant now since everything is multi-core, but it did mean that there would be single and multi-thread implementations of some programs, and you could see how they were written and performed differently.
There are other reasons. Most of these are related to each other, and to be clear, I still like that the site exists, and that it provides useful metrics.
> It used to be easier to compare arbitrary languages with each other.
True; and a couple of years of Google Analytics showed that there were one or two people who did. The massive majority made the same comparisons, so it was made easier for them.
> … "How many times slower?" graph on individual language pages…
True; and intentionally removed to force a slower look at the measurements.
> … single and multi-thread implementations of some programs…
There still are — look at "busy" and "cpu load" instead of elapsed "secs".
I was fully aware that my criticisms were unlikely to be taken seriously. Otherwise, why would the site have changed from what it was before?
> > … single and multi-thread implementations of some programs…
> There still are — look at "busy" and "cpu load" instead of elapsed "secs".
I do want to clarify this point. I'm aware of the "cpu load" section of the results. That is not at all the same thing. I don't think this was necessarily even intentional before, but the fact that both systems existed meant that contributors could explicitly optimize for either single-threaded or multi-threaded execution.
A single-threaded algorithm will likely smoke a locking multi-threaded algorithm when run on a single-core. It will also probably look completely different.
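This is easy to reproduce; a rough Python sketch (CPython, so the GIL makes it an extreme case, but the per-increment lock round trip alone illustrates the point):

```python
import threading
import time

N = 200_000

def count_single(n):
    total = 0
    for _ in range(n):
        total += 1
    return total

def count_locked(n, workers=4):
    total = 0
    lock = threading.Lock()
    def worker(chunk):
        nonlocal total
        for _ in range(chunk):
            with lock:          # every increment pays for a lock acquire/release
                total += 1
    threads = [threading.Thread(target=worker, args=(n // workers,))
               for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total

t0 = time.perf_counter(); count_single(N); single_secs = time.perf_counter() - t0
t0 = time.perf_counter(); count_locked(N); locked_secs = time.perf_counter() - t0
# On one core the locked version is typically several times slower,
# even though both do exactly the same amount of "real" work.
```

The two versions also look structurally different, which was the point about contributors writing distinct single- and multi-threaded implementations.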
I did take your comments seriously. Have you taken my point-by-point response seriously?
I acknowledged your "criticisms" that "there used to be more languages and runtimes shown" and that "you used to be able to see examples of run-times on multiple machines".
Do you acknowledge that those "criticisms" sound a lot like someone saying "give me more free stuff"?
> …contributors could explicitly optimize for either single-threaded or multi-threaded…
They still can; have you noticed that the Chapel programs are optimized for program size?
> …single-threaded … will likely smoke a locking multi-threaded…
When dealing with such embedded systems, there's often not a whole lot of help online for arcane issues. Device documentation and the like usually makes default assumptions about C/C++. Drivers and other tools, same. This is usually a big concern.
If there isn't much software involved, then the natural choice would be that which most easily integrates with the platform, because at that level, there's always trouble.
Given limited information, I probably would have chosen C/C++, then done some experiments with Rust to see whether it was worthwhile, and then possibly ported over during an appropriate iteration.
This post feels like hype marketing or something. They're telling me what I already know/believe about Rust, and it doesn't help that it's a guest post.
They're choosing Rust for an embedded use case; they could have gone into detail about embedded Rust and whether they consider the ecosystem to be mature/sufficient for their needs.
If they didn't consider this (seeing that this feels like a decision made over theoretical discussions and previous benchmarks), this could come back to bite them.
Off topic nitpick: I wish Python stopped being added to these kinds of performance lists. Python isn't a poor tool for those kinds of problems, it's the wrong tool. And I don't mean, "oops you locked yourself into the wrong language." I mean that valid Python is Python that calls into a language like C or Rust when there's performance needs. Numpy and Scipy are, I'm going to bet, mostly C.
Imagine I did a comparison of the fastest way to cut a board lengthwise. And my comparison was using a table saw, a jigsaw, a circular saw, and a planer. Actually, that sounds dumb enough that I kind of want to do that. All the worst ways to rip a board.
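The "valid Python defers to C" point can be shown without NumPy, using only the stdlib: the builtin `sum` runs its loop in C, while an explicit Python-level loop pays interpreter overhead on every iteration. A rough illustration:

```python
import timeit

data = list(range(100_000))

def py_sum(xs):
    total = 0
    for x in xs:                # each step goes through the bytecode interpreter
        total += x
    return total

t_python = timeit.timeit(lambda: py_sum(data), number=20)
t_c_loop = timeit.timeit(lambda: sum(data), number=20)   # this loop runs in C

assert py_sum(data) == sum(data) == 4_999_950_000
# sum() is usually several times faster; NumPy widens the gap further by
# also laying the numbers out in contiguous C arrays.
```

So a benchmark of pure-Python numeric loops measures exactly the code idiomatic Python tells you not to write.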
> Numpy and Scipy are, I'm going to bet, mostly C.
You'd be surprised; about 25% of SciPy (much of the numerical computing portion) is in Fortran. It turns out that decades of compiler and numerical computing research have made Fortran compilers generate numerical code that runs faster than what GCC/Clang produce today.
Most people use the GCC Fortran front-end. I think the main reason you get speed from Fortran is because it provides more hints to the compiler, but also because generations of PhD students have been optimising compilers for Fortran-specific performance gains.
Maybe, but most people who are using numpy are probably using their Linux distro's package for it (or the pip package) which means it's being compiled using GCC.
MicroPython is a wholesale rewrite of Python to be suitable for microcontrollers. I imagine this is the reason it was included in a comparison for languages to be used for an SoC. From memory, the ESA (European Space Agency) uses it for some of their projects.
It even shows Ash Moosa (an Atlassian employee) as the author of all the guest posts when you're on the main blog page (https://bitbucket.org/blog/), and you can only see who the actual author is by going into the post individually and looking for the "guest post by" section at the top of the text (and below another section showing Ash Moosa as the author).
Really bad design if they're planning to put up a lot of guest posts like this.
I'm surprised to see that Go wasn't considered. While not as lean and mean as Rust, it also has excellent support for parallelism and safety, while being easier to get into.
Edit: For clarity, I'm not saying Go is the best choice. But when Lua, Python, and Java are in the running, it's surprising that Go wasn't. Especially when the article calls out the learning curve of Rust as a downside.
Well, Go has excellent support for parallelism and excellent support for safety - whereas Rust actually supports parallelism with safety. Rust is also gaining lots of useful "tweaks" that will make it quite a bit easier to get into in the future - and the syntax alone is a big draw to former Ruby, Python etc. programmers, while you really can't say the same about Go. Now, of course Rust is not for everything (some things really can't be done without efficient GC, and Go gives you that), but it looks like these Bitbucket users made a very solid choice here.
concurrency, not parallelism, and even there it doesn't have anything similar to immutable data structures or java.util.concurrent or System.Collections.Concurrent
> support for safety
Not more than any other available GC'd languages.
> some things really can't be done without efficient GC, and Go gives you that
Only if you care about pause times at the expense of throughput, and you don't have to deal with very large heaps
SoC also describes a modern phone, a Raspberry Pi, probably even some modern laptops/netbooks. I think it's generally agreed that these are not embedded devices, but the line used to be much blurrier.
It sounds like they evaluated their options, added in a few that they were never really going to consider anyway, came to the conclusion that C++ was best, but then made up some reasons to use Rust anyway.
It's not any safer than other GC'd languages available today, and likely quite a bit less safe than languages with immutability and proper persistent data structures (Scala, Java, F#, etc.).
It's at the discretion of the runtime though. With native threads you have more control. And you can have several goroutines running on the same processor concurrently.
If you're gathering data from multiple sensors on one end and pushing it out to the network on the other, the performance of the 4-core ARM CPU should be the least of your concerns. You are never going to reach 100% CPU utilization unless you're doing something very wrong. Contemporary sensor protocols are slow, to say the least. Same for the network end. You'll saturate at least one of those two ends before running out of CPU power. Use Python. It is easy, fast, and has a ton of library support for numerical analysis. Stop buying into the hype, please.
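The claim is essentially that the workload is I/O-bound. A toy producer/consumer sketch (`read_sensor` here is a made-up stand-in for a slow sensor bus; a real one would block on I2C/SPI or a socket):

```python
import queue
import threading
import time

def read_sensor():
    time.sleep(0.01)            # pretend the bus delivers samples at ~100 Hz
    return 42

def producer(q, n):
    for _ in range(n):
        q.put(read_sensor())
    q.put(None)                 # sentinel: no more samples

def consumer(q, out):
    while (sample := q.get()) is not None:
        out.append(sample)      # stand-in for serializing and pushing to the network

q, out = queue.Queue(), []
threading.Thread(target=producer, args=(q, 20)).start()
consumer(q, out)
# The process spends almost all of its wall time asleep in read_sensor();
# CPU speed is nowhere near the bottleneck.
```

Whether that assumption holds depends on the actual sample rates and any on-device processing, which the article doesn't spell out.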
They're just linking to an old blog post from 2015 there. Back then Rust relied on a custom memory allocator which introduced a lot of RAM overhead; this has been fixed for quite some time now. Don't get me wrong, I like that these folks are using Rust, but this post is leaving me with a bit of a weird feeling - it just does not seem very impressive or meaningful.
That's probably just the Rust binary being a bit larger. I wonder how the memory measurements are done.
Java is suspiciously close to 512 MiB, which could mean they just looked at the RSS of the process, but Java does not automatically give memory back to the system, especially if the min heap is set to something like 512M.
With Java being in the same order of magnitude of performance, they might be abandoning the Java ecosystem too quickly, considering how mature its tooling and library ecosystem are.
Sure, but that benchmark is a single point on a manifold. That point may tell you little about your program which likely differs from the benchmark along several axes.
Sure, but the post went "we need a language with great performance, so we chose the second most performant one according to this test." It was a bit odd.
It would be nice if some details could be given about the project so we could put 'performance' etc. into context.
There are not that many applications that really need the extra performance boost of going from the JVM to C++; usually it would be a platform issue, i.e. the device can't support the JVM or whatever.
For smaller projects (we're dealing with sensors, does this mean it's small?), especially hardware-related ones, the issues are going to be integration, libraries, and supportability for those tricky things; in such scenarios one's hand is often pushed to C, etc.
The advantages of Rust, it would seem, would have to be rather significant and noteworthy to justify the jump to a nonstandard platform, for so many reasons.