Hacker Newsnew | comments | show | ask | jobs | submit | przemoc's comments login

Now that Rust has reached v1.0, I may finally look into it without the fear of wasting time learning stuff that will change within the next few months. (It's quite sad, though, that the recently released Debian Jessie won't have it in its main repositories.)

Allow me to ask some questions, because I cannot easily find the answers and they're IMHO crucial for wider Rust adoption:

1. What CPU architectures/platforms does Rust currently support?

2. What's the state of cross-compilation support?

Most devs have workstations on i386/amd64, but some of us may work with ARM, PowerPC and other architectures. Rust's memory safety seems appealing for embedded solutions, but can it be used there already?

BTW, the install page is not well-thought-out.


Unless I hover over the 32-bit/64-bit links and read the addresses they point to, there's no way to tell whether it's for x86 or ARM, for instance. And installing by piping a downloaded script into a shell is something that shouldn't be present at all.

> installation via piping script that is being downloaded to shell is something that shouldn't be present at all

The alternative is you download a binary and run it, at which point that binary can do whatever the shell script could have done.

(The other alternative is you download source, at which point the Makefile or any other piece of the build that you execute can do whatever the shell script would have done.)

As long as the script is available via HTTPS, the security is equivalent to the alternatives.

> The other alternative is you download source, at which point the Makefile or any other piece of the build that you execute can do whatever the shell script would have done

Part of the problem with this is that since it's a bootstrapped compiler, and the only one for the language so far, "downloading source" means you need a binary to compile it with, which devolves to the same problem.

Rust was bootstrapped with an OCaml-based compiler. Alas, I don't think it has been kept up to date, so you won't be able to use it to compile the v1.0 source. I'm not sure how many generations between the last OCaml-compilable Rust and the current Rust you'd need to compile to bootstrap, but probably quite a few.

A while back, somebody got Cargo running on an unsupported platform, but bootstrapping was a major problem. The compiler had to bootstrap newer versions of itself tens of times, and that was only for a few weeks of breaking changes …

Very, very, very many. https://github.com/rust-lang/rust/blob/master/src/snapshots....

I've still been thinking about doing it.

1. In a sense, anything LLVM supports. Some stuff is more first-class than others, of course. We test every commit on x86, x86_64, and ARM.

2. It exists, but isn't particularly easy. The biggest hurdle is getting a copy of the standard library for the target platform; after that it's pretty easy. It's gonna be a focus in the near term.

Alas, "ARM" describes a family of mutually incompatible related ISAs. Which ARM(s) is Rust tested on? V7-A, V6, V7-M, V4T?

Works for me on ARMv8 (aarch64): https://github.com/dpc/titanos , and I expect it to work on anything that LLVM supports.

Ah, so you know what: I thought we tested Android on ARM, but apparently it's x86 and x86_64 too: http://buildbot.rust-lang.org/buildslaves/linux-64-x-android...

There's a community member that keeps the iOS build green, I forget which ARM that is.

We do test Android on ARM. That's an x86_64 buildbot which builds a cross-compiler targeting Android on ARM and runs the full test suite on an Android emulator using remote testing support, which even includes running gdb remotely.

The officially tested support is for V7-A, but V6 support is also in tree.

Thank you for such a prompt answer.


Quick googling revealed that Rust doesn't work with musl libc yet. It will be really nice when that is fixed.

There is actually preliminary support: https://github.com/rust-lang/rust/pull/24777

Making this work well is a very high priority.

Thanks for mentioning Super User's BSD Cross Reference (bxr.su)! Apparently there are many more OpenGrok installations nowadays than there were 5 years ago.


EDIT: Not all of them are working, though.


Jakub Jelinek's announcement on ML:


And the same one for gmane lovers (like me):



I am not a Mac user, so I cannot tell whether it's really "especially on the Mac", but I totally agree that most FLOSS with a GUI has a bad look and often bad UX too. It's easier to find well-thought-out, polished CLI/TUI tools than GUI ones (at least on Linux). And no, making terminal apps is not always easier than making windowed ones.

Why are open source GUIs bad or inferior to TUIs? I think it's a good question, and I dare say it's definitely not a lack of skills within the open source community. Maybe it's mindset-related?

I noticed that most activity in open source is backend-related, if you know what I mean. People solving "real problems" in kernels, servers, daemons, agents and whatnot may see graphical frontends as unimportant, often bringing additional complexity that's possibly not worth the trouble.

I could call myself a backend guy too, and I think I like working on backends more than frontends, especially GUI frontends (well, I haven't touched GUI frontend matters for some time already, at least not any sane ones). I believe (maybe I'm wrong?) I am able to do a decent GUI, but somehow I never do, nor do I really need to.

I very much appreciate a lot of the open source work out there. I find it truly amazing how people find the time, energy and motivation to work on something pro publico bono in their spare time. I like the idea and want to be more open-sourcy myself (meaning contributing to open source), but I always struggle to squeeze out time and/or energy after work to really do it. (Even when I do manage some OS activities, I don't have much time, so I then fiddle more with my old pet projects covered in dust than anything else, because becoming productive in other software needs much more time. Well, I do file bug reports or send fixes sometimes, but it's not more than a few to several a year.)

At the same time I always think, and tell others, that open source shouldn't be held to special standards just because the people working on it do so voluntarily and are not paid. We should always aim for the best possible, not mostly-working/ok-ish things. Telling devs that the UI/UX of their project sucks, for instance, doesn't mean we diss those developers. As a developer you should never take critique of the projects you're involved in personally. And constructive critique is always a great way to improve our own views, because we're all biased, especially creators toward their own child projects. So while saying that a UI sucks may not be constructive, following it with a list of the problems it has becomes constructive. There are also those rare cases when we feel that something is clunky and out of place, but cannot pinpoint what exactly is wrong...

OTOH users do not always understand that the GUI is usually the tip of the iceberg, and even if the tip is massive in some apps, it's still connected to stuff under the hood, and some refactoring may be needed to be able to present a decent and responsive GUI to replace the one previously available. We may criticize devs for not doing their job properly if refactoring is needed for "tiny" GUI improvements, and we may be right to some extent, but it's impossible to think everything out beforehand.

One last note regarding special standards. In fact, a lot of open source backend stuff out there is better than its proprietary counterparts, so the special standard I mentioned before can also be meant positively. But it's also true that many such successful open source projects do have paid developers after all.

EDIT: typos


XP was RTMed on August 24, 2001 and GA was October 25, 2001.

But I can understand why you wrote what you wrote. The early XP experience for many remains a repressed memory, because it was quite buggy (be it the OS itself or the drivers delivered with it; BSODs were the norm), far from the stable 2000 SP4, which I kept using for a long time. XP only became usable around SP2 (August 25, 2004).


Obligatory archived copies of the above-mentioned pages:

[1] https://archive.today/Pw2gr

[2] https://archive.today/2lHs7

[3] https://archive.today/gc1Gu

[4] https://archive.today/knFJg

[5] https://archive.today/8BFxh

[6] https://archive.today/9EkxY


I didn't know about the use of the static keyword in array parameter declarations, and I dare say a lot of senior C programmers are unaware of this C99 feature.

It's nice to be able to specify a function's expectations at that level, yet it looks like only clang (tested with 3.5.0) makes use of it, while gcc (tested with 4.9.1) seems oblivious to it. Be it NULL or a string literal shorter than expected, gcc with -Wall -Wextra -pedantic -std=c99 emits nothing. Both mistakes are detected by clang.

Sadly, even clang doesn't warn us when fun(int len, char str[static len+1]) is called as fun(5, "test").

But I'm not sure that I agree with the rule "Don't use NULL". In any sane C environment NULL is defined as follows (unless __cplusplus is defined, because then it's defined as 0 or 0L):

    #define NULL ((void*)0)

and IMHO there is nothing wrong with that.

Distinguishing the kind of 0 we're dealing with (even if it's not strongly guarded by the compiler) is often important for readability and eases maintenance of the code (0 vs '\0' vs NULL). While comparing a pointer with NULL (writing p == NULL or p != NULL instead of simply !p or p) may seem superfluous (yet I have nothing against programmers doing so), calling a function with pointer parameters and passing 0 as an argument instead of NULL seems less clear to me.

> if you really want to emphasize that the value is a pointer use the magic token sequence (void *)0 directly.

I don't buy it.


One of the musl libc guys wrote a quite convincing article about NULL: http://ewontfix.com/11/

There was also a discussion on the musl mailing list (I don't know if this is the best link to it): http://www.openwall.com/lists/musl/2013/01/09/1


The topic was modern C, and in a modern C environment NULL is defined as

    (void *)0

There is no point in writing the longer form, and it's still clearer and safer than 0 alone.

C++ is another story, with its

    void* hate

built-in. In this land you rather write

    (T *)0

(or

    static_cast<T *>(0)

for extra purists), but as you're already denoting the pointer type in this notation, there is not much gain in using NULL instead of 0 (well, besides greppability).

In many cases you can be done with 0 alone in C++, that's true, and in such cases NULL at least signals some intent, but if you're not careful enough, you may end up putting NULL alone (without a pointer-to-type cast) in some variadic function and things start to blow up all of a sudden (that is, if your NULL's integer width isn't the same as the pointer width). That's why having a habit of writing

    (T *)NULL

is a good thing in this land.

Regarding musl, check also:


In short, musl's stddef.h has the following lines:

    #ifdef __cplusplus
    #define NULL 0L
    #else
    #define NULL ((void*)0)
    #endif

NULL defined as 0L for C++ is a nice workaround, but it works only on LP64 platforms.

In the same vein, for the Windows x64 C++ environment you would need NULL to be 0LL, as it is an LLP64 platform.


Looks nice. I was going to perform a shameless plug by mentioning my simple Linux OSD nanoproject (for those wanting to use some other recording tools, but still see the keystrokes on the screen):


but I just remembered that I still haven't fixed a bug I noticed on my computer at work, where I had Gnome back then. Nowadays I have awesome there too (just like on my laptop), so I possibly won't reproduce it, but the notes I left should be enough to do the fix one day. ;)


You are not. I hate how browsers nowadays, especially browsers on smartphones, are unusable without access to the Internet. Sure, there is Pocket, for instance, but IMHO there shouldn't be a need for such an app. And while I'm ranting at Pocket - there is still no automated login for LWN.net. (I know I can go the manual way, but still...)

P.S. I'm thinking about making a nice dedicated cross-platform LWN.net articles & comments reader one day (well, maybe more), but it's hard to squeeze out enough time for that kind of fiddling (unless it's really a grave matter, and it isn't here).


Opera Mobile (the "classic" one, before they threw it all away) let you save pages for offline reading. Not perfect, but better than nothing. Sadly, it did not cache content through restarts, which is annoying on mobile, where apps get killed a lot. But if I recall correctly, at least navigating back and forward was instant, like on desktop, with no network traffic.


What about the appcache manifest, service workers in Chrome, and hood.ie? There are ways to make the web work offline.


I bought my T430 (N1T56PB) in August 2013. 10 months later the "Tab" key fell off. 2 months later the "A" key fell off, and a few months ago another one - "S". Currently "E" is the one that behaves a bit differently and will surely be the next one to fall off.

At the beginning I thought that these island-style keyboards weren't that bad, but after a year I'm sure they are total crap, which shouldn't be put in stuff that costs $1500+.

Previously I owned an R61 (NF55WPB), which was surely a lower-end laptop, and I didn't have any problems with its keyboard for the 3 years I spent using it (until the NVS 140m exhibited its factory problem and I could no longer see anything on the screen).


I think the keyboard on my T430s is the best keyboard I have ever used. It feels perfect to use. I love it and I get annoyed with anyone else's laptop. I haven't had any of your problems.


