It seems to have more libraries and the ones it has are more advanced than what would be expected from a language this young.
That being said, the Rust community has done a lot of amazing work. There are people who are not just talking shit, but actually making attempts to port or rewrite some major things in Rust.
I was thinking about using it for a project recently, but I found the GTK documentation to still be lacking.
Documentation is the biggest factor in language survival. I hated Python around 2002/2003, but today the documentation is orders of magnitude better. I really got into PHP back then thanks to its documentation, not realizing how awful a language it was under the covers (although PHP7 seems to be making some huge strides to get it away from whale-guts status: an actual AST, namespaces, things that are slightly more sane).
So long as there are more tutorials, Stack Overflow questions, and library projects for Rust, it will grow. I have pretty high hopes for it becoming a major language for embedded systems. Who knows, maybe in five years we'll have production-ready Rust-based kernels that can run existing Docker containers.
That's a nice oversimplification, even disinformation. More like ALGOL60-68 was designed with prevention of common issues in mind. It didn't run on the PDP-11, so they ported & modded BCPL into C to write UNIX. All kinds of stuff then got written in that language, with tons of flaws that were mostly the same ones, and preventable since ALGOL's day. There were many calls to use alternatives (eg Wirth's languages, Ada) that prevented them by default, with switches to turn each check off only when necessary. One was a clever C variant called Cyclone with pointer and memory management tricks to knock out errors. Inspired by it and some others, the Mozilla team created Rust to achieve safety/security objectives that had existed since the 1960's with ALGOL.
So, it's not a fad or gimmick. It's recognizing that certain things regularly trip up systems programmers. The basic techniques for stopping them were deployed and field-proven as far back as 1961 in mainframes. Many others have come since, like those in Rust. The author, contrary to your assertion, is simply applying that wisdom to reduce flaws instead of ignoring it. Rust and DOOM were probably chosen because they were more fun for the author than rewriting djbdns, OpenSSH, or Nginx in Ada2012/SPARK2014 w/ full prover use. Entirely understandable, even if not ideal.
So, while it is true that the success of C primarily derives from it being used to implement UNIX, it is also true that for a long time (and, arguably, still today), GC'ed languages were perceived to be inadequate for system programming.
Garbage collection is not mentioned in the ALGOL68 report at all. There's a link to it on the Wikipedia page if you want to check. Not ALGOL60, either. It wasn't in Pascal, Modula-2, or the original Ada 1983 rationale either. PL/0 of MULTICS, PL/1 & PL/S of IBM, ESPOL/NEWP of Burroughs, and Mesa at Xerox didn't have it either. All had various enhancements for safety in quite a few forms. So, I have no idea where you're getting that information from. Disinformation, most likely, probably given to you by those who didn't study history and strawmanned up reasons safer, non-GC languages couldn't exist. ;)
Note: Dynamic memory allocation generally had to be done carefully since there was no GC. Yet, the other safety features of the language helped reduce the number of problems you'd run into. Especially those that discouraged directly working with pointers.
Now, let's look at memory safety. C was a language designed specifically for the PDP-11 & its model. Burroughs' B5000 (1961) was a machine custom-designed for ALGOL and its philosophy. It had stack overflow checking, array/pointer bounds checking, pointer protection, HW checking of argument types during function calls (made possible by an OS written in ALGOL w/ strong types), and tags to prevent execution of data. You can bet your ass that ALGOL on a machine designed for it was much more memory safe than C on a PDP-11. Or even C on a mainframe. Doing a pentest on that architecture with today's knowledge, I found very few points of ingress, thanks to all those checks implemented for the safe-by-default language.
"it is also true that for a long time (and, arguably, still today), GC'ed languages where perceived to be inadequate for system programming."
That would be true. Thing is, that had nothing to do with the safe languages and approaches that didn't use GC's and predated C. Further, Modula-2 showed one could match C-like simplicity/efficiency with better safety with about no effort. Taken further, Hansen developed a safe-by-design language (with 5 keywords lol) and OS called Edison on the PDP-11. At that point, it was clear that Thompson and Ritchie's preferences were the only reason their PDP-11 language was that insecure and hard to analyze. :)
"Rust is possibly the first non-GC'ed safe language that seems to have a chance to reach mainstream adoption."
This, plus claim about recent advances, I agree with. It has a lot of potential. It's why I promote it and help people posting here with Rust projects. Also tried to help them with their docs.
"Garbage collection is not mentioned in ALGOL68 report at all."
Possibly it is not mentioned explicitly, but it might be implied. Did any conforming ALGOL implementation ship without a GC? Wikipedia explicitly lists 'Cambridge ALGOL 68C' as an extension omitting GC. As far as I know, it took a while to get a properly conforming Algol 68 compiler, as the spec specifies behaviour, not implementation (cf. Knuth's 'Man or Boy Test').
Also, as you pointed out, many of those languages relied on hardware support for safety. It is at least plausible that the progressive CPU integration of the '80s, which led to the rise to dominance of simpler and faster architectures (RISCs and even x86), left languages and OSs that relied on more complex hardware support at a disadvantage compared to C and UNIX.
It is around that time that the Lisp Machine was discontinued and the iAPX 432 failed.
You nailed it. The spec specifies how the language is to behave rather than dictating its implementation. That kind of thinking was critical with hardware as diverse as it was back then. You can add a GC if you want, but it's not assumed. You can do that with C, too, as many have.
"Also, as you pointed out, many of those languages relied on hardware support for safety. "
It was often used but not required. The older languages established safety by including strong typing, bounds-checks, and some interface checks by default. These knock out tons of errors. Modern languages actually have them too. Some went further with custom hardware accelerating it, esp. Burroughs, but that wasn't the norm.
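To make that concrete, here's a minimal sketch of what one of those default checks, bounds-checking, looks like in a modern language like Rust: an out-of-bounds index is caught at runtime (as a panic) instead of silently corrupting memory the way an unchecked C access can. (`black_box` is only used here to keep the compiler from rejecting an obviously-constant bad index at compile time.)

```rust
fn main() {
    let buf = [10, 20, 30];

    // In-bounds access works as expected.
    println!("{}", buf[1]);

    // Hide the index from the optimizer so the check happens at runtime.
    let i = std::hint::black_box(5usize);

    // The out-of-bounds access panics instead of reading past the buffer;
    // catch_unwind just lets us observe the panic and keep going.
    let caught = std::panic::catch_unwind(move || buf[i]);
    assert!(caught.is_err());
    println!("out-of-bounds access was caught");
}
```

The same access in C on a PDP-11 would have simply read whatever happened to sit past the array.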
"It is at least plausible that the progressive cpu intergration of the '80s which lead to the rise to dominance of simpler and faster architectures (RISCs and even x86) left languages and OSs that realied on more complex hardware support at a disavantage compared to C and UNIX."
It's the best hypothesis. Even Burroughs, now called Unisys, got rid of their custom CPUs for MCP/ALGOL since customers only cared about price/speed. The AS/400 did the same with the transition to POWER, based on customer demand. GCOS did the same thing. As you said, LispM's and the i432 (and BiiN's i960) died since they did the opposite. Java machines exist, with Azul's Vega3's being friggin' awesome, but they largely didn't pan out. Azul now recommends software solutions on regular CPUs.
Far as I see it, the market drove development along just a few variables that severely disadvantaged safe HW and SW stacks. This was probably because software engineering took a while to develop, and the market took a while to learn that other things (eg maintenance, security) mattered. The damage was done, though, with IBM mainframes, Wintel PC's, and Wintel/UNIX servers dominating.
For UNIX, open source and simplicity also contributed to its rise. Another aid to various products was backward compatibility with prior software or languages that are shit, for lack of a better word. Trends like that feed into the hardware demand trend and vice versa. So, it wasn't any one thing, but price/performance was a huge factor given that all people looked at were MIPS, VUPS, MHz/GHz, FLOPS, and so on.
Regarding hardware safety, I believe we might see a resurgence of built-in support for safety features. CPU designers have more transistors available than they know what to do with: after adding yet another execution unit and widening the vector length again, they are reaching a point of diminishing returns, so they might switch to adding back these safety features.
And in fact it is already happening: W^X and virtualization can be considered as belonging in this area, and more recently Intel added MPX and MPK that are a more direct attempt at userspace security features.
New architectures are being designed with security in mind, like the vaporware^Wupcoming Mill.
I always look for a SQL driver, and then check whether it has connection pool support. If a language passes this second test, the language is ready ;)
It's also bad for images: for some reason they thought the scroll wheel should zoom in and out instead of scroll, and the only way to scroll is to click and drag. It's like their UI devs are on crack.
If you're bored this afternoon, here's a 10-year-old open bug against Firefox that will certainly entertain and possibly infuriate you:
On the one hand, overriding keybindings enabled me to create an interface a while back where you could drag and drop MIDI files onto an on-screen keyboard that corresponded to your physical keyboard, creating a musical instrument that could be played by typing.
On the other hand, more and more I think those things are better done as desktop applications, and that web applications are bad for users.
I don't know. I wouldn't be very sad either way, whether overriding key bindings is allowed or not. I'm much more concerned that programmers not override keybindings to create really bad interfaces like this one.
My instinct when I hit the site was to use my mousewheel to scroll down, because I didn't immediately realize it was a slide deck. So my mousewheel advanced the deck about a dozen slides and wrecked my back button.
Are you seriously suggesting that a workaround is just as good as having a good UI?
> As for the back button - works fine for me and takes me to the previous slide.
So after I scroll down and literally go through the entire slideshow with a small wave of the hand, I have to click back how many times to get back to hacker news?
As someone who spends a lot of time in OpenGL it's a really solid, rusty API that's quite a joy to work with.
Can someone explain a bit about this? I'm not familiar with Rust, but with C you have to run GL calls from one and the same thread or you're gonna have a bad day.
Bonus question: Anyone here who was/is a C programmer (not C++) with opinions on Rust?
>I won’t get into much detail about threading, but imagine how the OpenGL skynet-state-machine interacts with multiple threads. GLium ensures only a thread-specific OpenGL context is used on any particular thread.
>By making everything neither Send nor Sync, it prevents you from using resources created by one thread in another, enforcing OpenGL semantics at compile-time.
Basically, any type without the Send + Sync traits will not work with existing threading APIs (since those require combinations of Send + Sync based on their threading semantics), forcing API calls to be done on the right thread.
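A minimal sketch of the trick (`GlContext` here is a hypothetical stand-in, not glium's actual type): embedding a raw pointer makes a type neither Send nor Sync, and since `thread::spawn` requires a Send closure, the compiler rejects any attempt to move the context to another thread.

```rust
use std::marker::PhantomData;
use std::thread;

// Hypothetical stand-in for an OpenGL context handle. The raw-pointer
// PhantomData makes the type neither Send nor Sync, so it cannot be
// moved to, or shared with, another thread.
struct GlContext {
    _not_send_sync: PhantomData<*mut ()>,
}

impl GlContext {
    fn new() -> GlContext {
        GlContext { _not_send_sync: PhantomData }
    }
    fn draw(&self) -> &'static str {
        "drawn on the thread that owns the context"
    }
}

fn main() {
    let ctx = GlContext::new();
    println!("{}", ctx.draw());

    // This would fail to compile: the closure captures `ctx`, which is
    // !Send, and `thread::spawn` requires a `Send` closure.
    // thread::spawn(move || ctx.draw());

    // Ordinary data that is Send crosses threads just fine:
    let handle = thread::spawn(|| 2 + 2);
    println!("{}", handle.join().unwrap());
}
```

So "enforcing OpenGL semantics at compile-time" means exactly this: the wrong-thread call simply isn't a program the compiler will accept.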
Rust tackles all the things that are hardest about programming: Correctness, threading and memory management, and validates that you've done it right at compile time.
Think of it as an extensive suite of compile-time unit tests. Definitely forces you to think differently though. Nice side-benefit is that I learned Swift way, way faster (and IMO more idiomatically) than if I'd gone direct from ObjC->Swift.
*: Not that it's right, just that it's what I intended.
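A tiny illustration of that compile-time validation, using ownership rather than an actual unit test (the commented-out line is exactly the kind of mistake the compiler rejects):

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3];

    // `move` hands ownership of `data` to the spawned thread; the compiler
    // requires this because the thread may outlive the current scope.
    let handle = thread::spawn(move || data.iter().sum::<i32>());

    // Touching `data` here would be a use-after-move, caught at compile
    // time rather than at runtime:
    // println!("{:?}", data);

    let total = handle.join().unwrap();
    println!("sum computed on worker thread: {}", total);
}
```

Where a C program with the same mistake would compile and crash (or worse) at runtime, here the broken version never compiles at all.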