
Maybe I'm misunderstanding, but there is a pretty serious effort to rewrite all the GNU coreutils in Rust:

https://github.com/uutils/coreutils




That is sort of addressed by Theo:

>Such ecosystems come with incredible costs. For instance, rust cannot even compile itself on i386 at present time because it exhausts the address space.


He has a point though... As we get more CPU/RAM, we as programmers don't even bother to check how many resources we are using. Personally, I don't know whether it's a good thing or a bad thing. Also, in terms of systems languages, I believe only Rust has some potential to truly replace C, although a large part of C usage still takes place in the embedded world, where Rust has yet to be ported to many embedded processors. Go's GC makes it a show stopper for use in many places, and as for Haskell, I would be very interested to see more low-level embedded programming going on in it.


> He has a point though…

Does he? He "stat[es as] fact" that

> There has been no attempt to move the smallest parts of the ecosystem, to provide replacements for base POSIX utilities.

which as xwvvvvwx notes is categorically wrong, then points out that rustc can't compile itself on i386, which is relevant… how?


> then points out that rustc can't compile itself on i386, which is relevant… how?

Remember he is speaking as the leader of an operating system project. As in, a basic part of the project functioning normally is compiling the whole thing from scratch. If something needs cross-compilation to even get started it won't end up in OpenBSD base.

I seem to recall that back when it supported more architectures, they made a public point of not cross-compiling even when targeting wimpy/sluggish machines, because recompiling the OS was a good stress test for the kernel itself.


And thus they lock themselves into the lowest common denominator as they target smaller systems. This is ridiculous. OpenBSD looks like performance art, literal security theater.


> And thus they lock themselves into the lowest common denominator as they target smaller systems. This is ridiculous. OpenBSD looks like performance art, literal security theater.

Those lowest common denominator systems tend to find bugs not present elsewhere.

And stating that openbsd looks like performance art and security theater seems to indicate you haven't looked at what openbsd has done for security.


OpenBSD can be security theater and still do great things. But at some point sticking to a '90s Unix/C aesthetic looks deeply anachronistic.

Yes, bugs are found at boundaries and interfaces, large or small. Something has to be different for the output to be different.

If someone writes a bug-proof TLS implementation while at the bottom of the pool wearing SCUBA gear, it is still security theater.


Do you use it? I have had it on one machine or another since around 2000, and I have by and large been pretty happy with it. Not every personality type will like it, and sometimes they do make odd decisions, but it is pretty well put together.

Another point about the *BSDs: something doesn't have to be in base for you to use it. The base system is supposed to be small and not have a lot of dependencies. You are free to use things in ports and packages or compile them yourself. So this is not the same as never being able to use rust.


> then points out that rustc can't compile itself on i386, which is relevant… how?

I'm actually gobsmacked this is the case. There used to be a saying, only half-joking, that a language that can't host/compile/bootstrap itself is nothing more than a toy. As others have more eloquently pointed out, it shouldn't have to be explained why people who write operating systems and compilers would consider that a no-go.


Eh, 64-bit desktops were becoming common a decade ago. Considering the small minority of developers using a 32-bit machine for development, I don't see that it is worthwhile to spend effort on that.

Note that Rust can (and does) target various 32-bit platforms (ARM and maybe RISC-V, not sure) via cross-compilation. Not being able to self-host on a 32-bit platform is such a minor drawback these days. 64-bit ARM processors are becoming more common as well.


There are still plenty of 32-bit machines out there. I personally have maintained and still support an Intel 4004 knockoff used for controlling a big industrial machine, and of course more modern variants of the same machine with an 8008. Some of these couldn't even run DOS, but they're still supported.

32-bit is not even close to being that old, and lots of machines, applications, and platforms require it and need maintenance. OpenBSD in particular is a system that I, as a Linux fan, recognize for supporting old hardware much better than Linux does.

And it's not only 32-bit x86; there are plenty of other architectures limited to 32 bits, mostly from the ARM sector, but I believe some older MIPS and other even more exotic arches are supported.

If Rust can't self-host on x86, how can we expect it to self-host on the more exotic 32-bit hardware that BSD supports?


Oh, I wasn't implying that OBSD or NBSD should adopt Rust as a development language. Not at all.

My point is more that I wouldn't want the Rust developers to spend time on making it able to self-host on 32-bit platforms. I'd prefer they spend their time elsewhere, that's all.


> I'd prefer they spend their time elsewhere, that's all.

And I'm sure the OpenBSD developers would prefer to spend their time on OpenBSD instead of working around Rust's lack of support for a platform that OpenBSD supports. I'm still kind of surprised that when the question "why doesn't OpenBSD switch to Rust?" is answered with "because Rust doesn't self-host on a platform we support", the response has been "well, drop that platform." How about: no. How about: if someone wants Rust to be a viable option, then they have to adjust Rust to be a viable option, not ask other projects to massively constrain their currently working support of a platform.

I hate to sound like the old geezer, but I get the impression that many people here have no clue about software beyond desktop and mobile. It's like they don't even realize that firmware and operating systems have to be written and maintained on older hardware. There is a lot of software out there running on legacy hardware that the world depends upon which you never see.


> I'm actually gobsmacked this is the case.

So gobsmacked you apparently couldn't even begin to attempt answering the question, but felt you just had to go on a rant as irrelevant as you believe it is righteous, huh?

> There used to be a saying, only half-joking, that a language that can't host/compile/bootstrap itself is nothing more than a toy. As others have more eloquently pointed out, it shouldn't have to be explained why people who write operating systems and compilers would consider that a no-go.

Rust has been self-hosted for almost as long as it's existed. The bootstrapping OCaml compiler was left behind back in 2011.


> Rust has been self-hosted for almost as long as it's existed. The bootstrapping OCaml compiler was left behind back in 2011.

Not on x86, which is what this whole conversation is talking about. So if OpenBSD used rust in base, they would have to drop support for x86.


We haven't even gotten to alpha, hppa, loongson, luna88k, macppc, octeon, sgi, or that backwards beauty of big-endianness, sparc64. But hey, it compiles on amd64! That should be good enough, right?


He's talking specifically about OpenBSD base. Unless you can point to a rust binary in OpenBSD that Theo forgot about, he's not wrong.

i386 is relevant because OpenBSD supports i386.


> He's talking specifically about OpenBSD base.

No, he is very explicitly saying that

> There has been no attempt […] provide replacements for base POSIX utilities.

Which once again is categorically false; a GitHub repository purporting to do exactly that has been provided.

> i386 is relevant because OpenBSD supports i386.

i386 is supported, the issue is compiling the compiler on i386.


> i386 is supported, the issue is compiling the compiler on i386.

which is required for the system to be self hosting

seriously - what is being said is this:

" oh hey let's throw away the functional and perfectly good entire base set of utilities for this 1/2 complete project on github using a language that doesn't even natively build on all of our supported platforms and wouldn't even remove the need for a C compiler in base, and further complicate the base toolchain, not to mention breaking all kinds of other builds which use shell utilities expecting certain behavior, etc, etc, etc, because someone thought it would be 'neat' to do this. And whyyyy aren't you taking me seriously??? "

every few days (hours?) some noobish person decides to ask some fantasy question about whatever topic of interest they are noobing about on openbsd (and other OS) discussion lists, and then gets whiny when they are called out for being 'green' about life itself. this is another of those cases, and I have no idea why it got crossposted here or upvoted.


Thank you for saving me the trouble of writing that.


> every few days (hours?) some noobish person decides to ask some fantasy question about whatever topic of interest they are noobing about on openbsd (and other OS) discussion lists, and then gets whiny when they are called out for being 'green' about life itself. this is another of those cases, and I have no idea why it got crossposted here or upvoted.

this sort of attitude is astoundingly hostile and toxic for an open source community to hold.


I agree, but understand how tiresome it can get when people who -- understandably -- don't know any better do a drive-by of your project and suggest things, things which have often already been discussed to death, or don't even need to be discussed because anyone knowledgeable about the project would immediately see there's no need for discussion.

Now, that doesn't mean that a hostile rebuff is required or good policy, but random people who actually do not know what they are talking about, and haven't taken the time to learn enough to know what they're talking about, don't really deserve a long, in-depth, drawn out rebuttal or discussion.


> There has been no attempt to move the smallest parts of the ECOSYSTEM to provide replacements for base POSIX utilities.

You deliberately cut out the part which states he's talking about the ecosystem of OpenBSD, in an OpenBSD mailing list. That is an extremely disingenuous and uncharitable cherry-picked interpretation. He was categorically talking about efforts to port OpenBSD utilities to such a language and merge them into the project (i.e. the OpenBSD ecosystem). What you're suggesting he said is just plain FUD.


And if the end user is unable to compile everything, i386 would be only half-supported (or maybe, for the austere OpenBSD maintainers, not supported at all).


I think it is important for us to think more about the resources that the programs we write consume: when they are compiling, when they are executing, and when they are just "dormant" on persistent storage. Experience and history show that the bigger a program is, the more resources it needs to function and the more bugs it has, which consequently makes it more difficult to debug and more prone to failure.


I run full Windows 10 on a tablet (dual core, 2GB RAM), and it's pretty amazing to me how many websites that have no reason to run slow completely fail on it.

I can only imagine it works fine on dev machines with much faster quad+ cores and 64GB of RAM or whatever.

Just as an aside, it's done a lot to have the tablet be my primary "fiddle-at-home" machine: keeps me really conscious of resource limits, including ones I normally don't think of like screen size. (Most websites render terribly in landscape on a 10" tablet.)


Win10 just doesn't work well on 2GiB. Compressed pages are nice but not enough to prevent swapping. It also doesn't help that MS prevents you from running 32-bit on modern hardware to alleviate some of the memory pressure.

The solution is to install 32-bit Linux on it. Then it won't suck.


Windows works great; the issue is the applications, and indirectly the developers of said applications, some of whom are on HN.

Most applications request hundreds of megabytes if not entire gigabytes; the system will swap to death after you open an app and a browser tab on Facebook.

I remember a friend who bought a 2GB netbook; the thing froze to death whenever he opened just Eclipse, and he had to return it.


Or i386 OpenBSD.


Rust fell into the same trap that killed many, many gamedev companies:

Performance matters. Even more than features.


It depends on what kind of performance. The only performance they are "lacking" seems to be compilation speed (an annoyance, but they're working on it) and compilation memory usage (rarely a problem, considering we also have C++, Java and C# around :) ).


The performance of the compiled code is great, and the performance of the compiler is something they have identified as a major issue to fix, and are working to solve it.


So did the now-dead game companies.

Performance is like money: it's easy to squander and hard to acquire.


> Performance is like money: it's easy to squander and hard to acquire.

In this case, there were several decisions to not care about performance right now in order to emphasize correctness and shipping faster, while making sure there are no technical obstacles to making compilation faster in the future. The main time sink in Rust compilation is that all the abstractions in the Rust code get compiled into the initial bytecode representation passed to the LLVM side, and they are only reduced there, instead of cutting down on the hierarchies on the Rust side. This costs performance in several places -- the creation of all the bytecode, the copying of it, and then LLVM parsing it all in. The upside of doing it this way is that it makes the Rust-specific compiler much simpler and easier to implement, and that the optimizations that remove the towers of abstraction on the LLVM side are extremely well tested.

As Rust matures, optimizations that reduce the complexity of the initial LLVM bytecode can be, and probably will be, done on the Rust side.
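To make the "towers of abstraction" point concrete, here's a tiny illustrative sketch of my own (not anything from the Rust team): both functions below compute the same thing, but the iterator version is built out of nested generic adapters that rustc lowers more or less as-is, and it's LLVM's optimizer that collapses them into the same loop as the hand-written version. Comparing the output of `--emit=llvm-ir` with and without optimizations makes the difference in IR size visible.

  // Illustration only: the iterator chain expands into nested adapter types
  // (roughly Map<Filter<std::slice::Iter<'_, u64>, _>, _>); unoptimized
  // builds carry all that plumbing into the emitted LLVM IR, and the
  // optimizer is what reduces it to the same code as the manual loop.
  //
  // Compare:  rustc --emit=llvm-ir sum.rs   vs.   rustc -O --emit=llvm-ir sum.rs
  fn sum_even_squares_iter(xs: &[u64]) -> u64 {
      xs.iter()
          .filter(|&&x| x % 2 == 0)
          .map(|&x| x * x)
          .sum()
  }

  fn sum_even_squares_loop(xs: &[u64]) -> u64 {
      let mut total = 0;
      for &x in xs {
          if x % 2 == 0 {
              total += x * x;
          }
      }
      total
  }

  fn main() {
      let data = [1u64, 2, 3, 4, 5, 6];
      assert_eq!(sum_even_squares_iter(&data), sum_even_squares_loop(&data));
      println!("{}", sum_even_squares_iter(&data)); // prints 56
  }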


Really? I'm pretty sure for most game companies, shipping is king and everything else is secondary.

Also, have you seen gameplay footage of in-development titles, particularly older ones before the days of Unity and UE4? Usually they're choppy as hell because the engine is under development at the same time as the game, and devs prioritize getting a golden but slow codepath working first so that the artists have something to go off of, and all the optimizations are shoved in to recover framerate in the months right before release.


So...which game companies?


You are equating two non-comparable things, Rust the language and gamedev companies, via an abstract concept (performance). It could be the lede for an opinion piece or an essay, but it isn't an argument.


Is that still true? PUBG is probably the worst-optimized game ever written, barely getting 60fps on a setup that can run Overwatch at 200+ fps. And yet, it's the most-played game on Steam.


As someone who plays PUBG, I think it succeeds in spite of its performance, not because performance is irrelevant.


Are you implying that Rust isn't performant?


If compilation exhausts the IA32 address space, then I'd say it's not adequately performant as a whole, regardless of how "efficient" the resulting binaries might be.


Hard disagree. Compilation is like encoding a video— it's a price you pay once, and if you know the resulting binary will be run millions of times, it's totally worthwhile spending a lot of compute and memory upfront to get that binary as fast as possible.


Right, but the problem is that - in the OpenBSD world - it's a price that's paid much more than once. Sure, that binary might be run millions of times, or it might be run only tens or hundreds or thousands of times before an update comes around. And that's just for one platform; OpenBSD supports lots of platforms, both 32-bit and 64-bit, and all of them are expected to be fully usable for (among other things) developing OpenBSD (which includes, you know, actually compiling OpenBSD).

To rephrase that a bit: OpenBSD is designed to be a system where any user on any platform can contribute to OpenBSD using the tools included in OpenBSD's default install. Deviations from that will almost certainly receive a cold reception at best.


What languages do you come from? What is fast compilation for you?

The compilation phase can take hours in C++. Up to a day when compiling huge projects with all the optimization flags.

Live with that for some time and it will quickly show you that you were wrong. Compilation time matters.


Fast compilation: less than a second (feels like not waiting at all)

Slow compilation: more than a minute (makes me start browsing HN, miss the end of the build, and thus lose even more time)

To have fast compilation even with big projects is hard. Go, C, and D are usually fast. Scala is usually slow.

I care about development builds primarily. The edit-compile-test loop must be really really fast. Optimization flags are irrelevant, because if performance matters you often must have them enabled for development as well.


This is off topic a bit, but there is a solution for this:

> Slow compilation: more than a minute (makes to start browsing HN, missing the end, thus losing even more time)

See the thread here: https://askubuntu.com/questions/409611/desktop-notification-...

TL;DR install undistract-me, add 2 lines to bashrc, and you will get a desktop popup when a command that takes longer than 10 seconds to complete is finished.

Fedora does this by default on install and I have found it so handy. Kick off a compile etc, then just browse HN/reddit til I get the popup.


Yes, this helps a little. As a fish user, I had to write more than two lines [0] though. A second monitor is detrimental, because the notification is too far away sometimes.

[0] https://github.com/qznc/dot/blob/master/config/fish/config.f...


I don't think people are saying compilation time doesn't matter. Certainly, I would consider C++ to be a language at the high-performance end of the spectrum. High-performance, high-level languages like C++, Rust, Ada, Haskell, OCaml, and Swift have relatively long compile times, but I would classify them as languages suitable for applications requiring high performance. Go is an interesting exception in that it produces pretty high-performance results without long compile times.

But you do have a point. Things are so much better now than when I started programming 50 years ago. Machines and languages are so much better. Programming is a dream compared to back then.


A fast edit-compile-run cycle makes development a lot more efficient in my experience.


Right, definitely! But in that case, it's really incremental build time that's the important thing. Not that overall/first build time isn't important too, but in general I'd rather see my incremental build go down by 80% than my first build go down by 20%, and I think this is reflected in where the Rust team has historically applied their perf efforts, eg: https://blog.rust-lang.org/2016/09/08/incremental.html

(Appreciating as well that most incremental build gains come from avoiding unnecessary work, so they're as much the domain of the build system as they are of the compiler.)


So, you say Python is totally missing the point and is wrong? Even when encoding videos, performance matters.

Also, that optimizing compilation is important is no reason to not work on i386. This is a point Rust needs to fix. And not only i386 support, but also other architecture families, as host.


Python is slow as fuck.

All the heavy API and computation libraries are wrappers around C binaries that are optimized to death.


This seems like a bit of a trite point unless many Rust developers are actually working on i386 machines. Though the compiler itself might not work very well on i386.

Not many people are whining about our C compiler toolchains not fitting into our microcontrollers.


Every port being self-hosting is a fundamental project value in OpenBSD. The reason being that they believe every port should be useful and functional, not just a novelty. And requiring that every port be self-hosting is a way to enforce this.

For example, the NetBSD project has a dreamcast port, but like most of their ports, it is crosscompiled. The last time I tried the port it would crash when put under high load for a while and would kernel panic when you tried to play audio. The netbsd dreamcast port is not functional in the sense that openbsd would like to enforce, something which is not relevant to netbsd and in no way denigrates them, but merely serves as an example.


Thanks for the context, that wasn't clear to me.


> This seems like a bit of a trite point unless many rust developers are actually working inside i386. Though the compiler itself might not work very well in i386.

Trite point? OpenBSD supports i386[1]. If they pull a rust compiler into base and start rewriting things in rust, then they can't support i386. Dropping a supported platform is not a "trite point".

[1]: https://www.openbsd.org/plat.html


At this point i386 is legacy for most of the world. OpenBSD is an ultra-conservative, orthodox project, therefore they will probably support i386 for years into the future - I mean, they supported VAX until 2016.

That is a choice they are entitled to make, the trade-off being it would appear to make most modern technologies a poor fit for adoption in OpenBSD - that's the price they have to pay. It is a problem of their own making.


“i386” is OpenBSD's label for the 32-bit Intel architecture (they don't actually support the 80386). Intel still sells these.


They support 486's though, which is quite rare for 2017.


486s and 586s are still sold for usage as embedded systems. They are well understood and some of them managed to pass certification decades ago.


You can still buy 32-bit Xeons, even. Sometimes srs bsns requires stability, too.

https://ark.intel.com/Search/FeatureFilter?productType=proce...


not to mention, 386 or not, there are still other 'odd' 32-bit platforms which still have huge use for embedded things where openbsd would work great (hello mips/arm32, for starters)


There's a 32-bit x86 processor in every PC with the Intel Management Engine version 11 or later.


> OpenBSD is an ultra-conservative, orthodox project

...which is exactly the kind of a system one needs for production environments.


If OBSD feels it can make use of older arches then so be it; many users will find less intensive jobs for the respective hardware, and it saves it from going to waste/recycling.


Not to mention plenty of people in the "developed" world (let alone "developing") can't afford to buy a new computer, and thus are going to use 32-bit "legacy" desktops and laptops for a very long time.


Unfortunately those sorts of people tend to be less informed about IT, and that eventually leads to their NetBurst, Pentium M and Atom machines performing even slower than they would if they were maintained properly. They probably waste more power waiting for the machines to accomplish their tasks in an under-maintained state.


So... my main point was that "I have to cross-compile the rust compiler from another arch" is not equivalent to "rustc is not usable on i386". And even if rustc wasn't usable on i386, the binary it produces would be.

So you could have a scenario where the i386 binaries would have to be cross compiled from a 64-bit machine but the end result would work just fine on those machines.

I wasn't aware that OpenBSD's objectives included having each arch be able to build itself. Totally reasonable goal. It's harder to do "dreamcast port"s in those scenarios, though.


How do you even find an i386 processor these days?


AMD still sells Opteron CPUs [0]. You can buy new servers with them [1].

[0] https://www.amd.com/en-us/products/server/opteron

[1] https://www.thinkmate.com/systems/servers/rax/amd#browse


In the OpenBSD world i386 == x86. And it's pretty easy to find an x86 processor nowadays.


… such as? You're merely restating the claim, without providing any proof.

Consumer machines, AFAICT, are all amd64 (or x86_64, if you prefer that name). I understood the original post to mean i386 == x86, and I agree — where do you even find an x86 today (for sale, in a non-niche use case, i.e., "pretty easy")?


OpenBSD has as a goal to run on much more than just standard currently-sold consumer hardware. You can certainly disagree with that goal, but that doesn't make it go away.


Oh I totally get this—but what is this hardware that people are still using OpenBSD with that they haven't upgraded in 30 years? Targeting i386 as opposed to, say, i486 or i686 seems like an exercise in idealistic masochism.


I think you're misunderstanding: "i386" is just shorthand for "32-bit Intel". They're not specifically talking about the 80386 chip. The same problem exists when you consider newer 32-bit chip families in that ISA.


I have at least 3 Pentium machines still lying around, plus one netbook which is 32-bit only. These machines are still widely used, especially where replacement is expensive or even impossible.


Linux and BSD ran for a long time on 32bit systems. 4GB of memory is an ocean in my mind. Those systems should be able to compile their own programs and tools.

On a related note, we will eventually be running development tools on microcontrollers. Not that little 16bit parts will run the tools, but that 16bit parts are going away. In price sensitive areas this will not happen, but for things with a larger budget why not run the tools right on the target? If your controller is an RPi why not use it for development?


They don't mean "x86" (the 32bit instruction set), but i386 aka Intel 80386, a processor introduced in 1985: https://en.wikipedia.org/wiki/Intel_80386


https://en.wikipedia.org/wiki/IA-32

It includes 486, Pentium, etc.


From the article you linked:

> the 80386 instruction set, programming model, and binary encodings are still the common denominator for all 32-bit x86 processors, which is termed the i386-architecture, x86, or IA-32, depending on context.


It’s not a trite point for BSD but if those are the kinds of objections holding back adoption by Linux and bsd, then I think the end result will just be a new Rust-first operating system.


> It’s not a trite point for BSD but if those are the kinds of objections holding back adoption by Linux and bsd, then I think the end result will just be a new Rust-first operating system.

That would be great! Get back to me when it exists, instead of trying to dissuade an already established project from one of its primary goals and handwaving said goals as "outdated" and "unimportant". Heck, it's open source, so if someone feels like it, they can just fork and go hog wild!


People are still using i386? I'd assume even if they are, it's such a tiny minority that it shouldn't be an excuse to hold everyone else back.


OpenBSD supports a wide variety of hardware platforms, including machines with Alpha, PA-RISC, and SPARC64 processors. On each of them, the base system is able to compile itself.

If rustc cannot even build itself on i386, what kind of support can we expect for other platforms with an even smaller user base?

On a project such as OpenBSD they cannot suddenly drop platforms and only support amd64, as portability is one of their main "selling" points. That would also mean losing the developers who are interested in these alternative platforms and probably chose OpenBSD because of the platform support. Furthermore, such developers usually contribute not only to platform-specific parts, but also to system utilities and ports.

For the full list of platforms, see https://www.openbsd.org/plat.html


> OpenBSD supports a wide variety of hardware platforms, including machines with Alpha

Can confirm, spent many happy hours hacking on an AlphaStation running OpenBSD!


> On a project such as OpenBSD they cannot suddenly drop platforms and only support amd64 as portability

They… don't need to? That rustc can't compile itself on i386 doesn't mean you can't ship a rustc for i386, it just means you have to cross-compile it.


I see it as part of portability that you do not need to use any external system for bootstrapping.

Imagine that, as a developer who compiles base from source, you had to find another system just to compile rustc and then transfer it to your machine. And you would not have to do this only once, but for every compiler bug fix, on top of the overall rapid evolution of Rust. I think many in the OpenBSD community would oppose such an approach, even before considering other aspects such as the security implications.


This is the case IIRC with Android - even though I want to build it for a 32-bit ARM architecture, I can't build it on anything but 64-bit. I guess the vast majority of Android systems cannot even compile Android!

https://source.android.com/setup/requirements


> I see it as part of portability that you do not need to use any external system for bootstrapping.

You always need an external system for bootstrapping; you're not hand-assembling the base C compiler with which you're compiling everything else. At some point you need to obtain a compiler from somewhere else.


...And the point is that the OpenBSD project has a policy of self-hosting. The bootstrap compiler for base (along with everything else needed to build base) must exist in base.


Addressed in the article.

> In OpenBSD there is a strict requirement that base builds base.


One of OpenBSD's sweet spots is turning old hardware into useful, secure, reliable network infrastructure.

i386 might not be as popular as it once was for 'normal' OSes, but I wouldn't be surprised if OpenBSD had a lot of people still using it.


OpenBSD is a perfectly serviceable operating system on an old Atom board I have (nice router!), and on i386-only Core Duo iMacs/MacBooks long abandoned by Apple. These will be perfectly good machines to use for a long time, and it's really great that you can get up-to-date support on these systems from this great project. Obsolete means different things to different people.

I'm a big fan of what Rust promises, but the solution is not that OpenBSD changes its policies or that OpenBSD drops i386. Rust should become self hosting on i386.


Theo would say, if you think so then get to writing code instead of commenting about it.

These aren't just philosophical comments; the implication for him and the OBSD devs is that they would have to spend massive amounts of time developing these things. Dropping a supported platform and taking on a huge investment of effort needs serious justification.


Linux supported i386 until 2012. I think in this context it is more likely to refer to pre-Pentium x86 CPUs, though (I'm not certain on that).

I've seen i386 used multiple times to also refer to any x86 arch.


i386 in this context refers to the 32-bit Intel x86 CPU architecture in general and to generic PC compatibles specifically. OpenBSD currently runs on 486s and better:

> All CPUs compatible with the Intel 80486 or better,[0]

[0] https://www.openbsd.org/i386.html


There's significant infrastructure in place that uses i386. It used to be fairly popular, I'm sure you can google it.


Yep:

  $ uname -a
  OpenBSD hostname 5.9 GENERIC.MP#6 i386


>Such ecosystems come with incredible costs. For instance, rust cannot even compile itself on i386 at present time because it exhausts the address space.

Is cargo supported on i386 platforms? Also, Rust compiles itself; AFAIK there is no way to compile Rust/Cargo except with a previous version of it. If one of the past builds of Rust was backdoored, every version between then and now is backdoored. The language is safe; the environment is only as safe as it has never been compromised. OpenBSD compiles everywhere C code works; Rust/Cargo works only where it's supported, and it will take decades to catch up on some architectures.


Okay, there are a lot of misunderstandings here. Theo is right about some things, and wrong about some things. And people are misunderstanding which things he's right about. The things he's wrong about are very minor.

Rust absolutely works on 32-bit platforms, though we often use the i686 target rather than an i386 one. Platform support list is here: https://forge.rust-lang.org/platform-support.html

Theo is talking about building rustc, not compiling most Rust programs. That's the first distinction that it seems like many people are missing.

However, apparently the compilation process OOMs when building on an i386 box. I don't use those platforms, but I'd believe it. The Rust compiler is large. However, I thought (and looking at our CI, this seems to be true https://travis-ci.org/rust-lang/rust/jobs/311223817) that we do compile with an i686-unknown-linux-gnu host (for this build), so I dunno. Maybe it was a fluke, maybe I'm misunderstanding, I'm not sure.

We often provide artifacts via cross-compiling, but this is unacceptable to OpenBSD. That's totally okay. They have good reasons for doing this.


If compilation of rustc runs out of memory, it represents an upper limit on the complexity of actually viable Rust programs, and given how much software is larger than a compiler, it is a discouraging performance level.


Rustc is more than just a Rust program; we compile LLVM from scratch, for example. Is the OOM in the Rust code, or in the LLVM code, or in the final linking, or what? It's not clear.

The compiler is one of the largest Rust programs that exist. Last I checked, it was three quarters of a million lines of Rust, but a quick loc shows 1.5 million lines of Rust, 2.3 million lines of C++, and 900,000 lines of C (again, mostly LLVM and jemalloc).

Servo is also very large, and they don't report having OOMs, though I'm not sure if they build on 32-bit or just cross-compile.


FWIW, I've compiled llvm/rust with 2G. I never ran out of RAM compiling Rust, but in LLVM gnu-ld would run out of memory; using the gold linker fixes that, as does the configuration flag for enabling separate debug info, -DLLVM_USE_SPLIT_DWARF=ON.


Agree. Moreover, the Rust compiler contains some dark areas which nobody wants to deal with. See https://github.com/rust-lang/rust/issues/38528 for example. Basically it means that the Rust compiler can suddenly take exponential time and space to compile.

That bug really bites hard on any code heavy on iterators (an often-praised Rust feature!). It has a reliable reproduction test case, but it's already a year old and was down-prioritized!

Hard to believe anybody uses Rust for a real large project given so little attention to crucial details.
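To give a sense of what "code heavy on iterators" means here, a purely illustrative sketch (this is not the reproducer from the linked issue): each adaptor wraps the previous one in another generic type, so even a short chain hands the compiler a nested tower of generics to instantiate and optimize, and that cost is paid again for every such chain in a program.

  // Illustrative only. The return type written out in full would be roughly
  // Filter<Map<Filter<std::slice::Iter<'a, u32>, C1>, C2>, C3>,
  // where C1..C3 are three distinct closure types.
  fn evens_squared_over_ten<'a>(xs: &'a [u32]) -> impl Iterator<Item = u32> + 'a {
      xs.iter()
          .filter(|&&x| x % 2 == 0) // C1
          .map(|&x| x * x)          // C2
          .filter(|&x| x > 10)      // C3
  }

  fn main() {
      let v: Vec<u32> = evens_squared_over_ten(&[1, 2, 3, 4, 5, 6]).collect();
      println!("{:?}", v); // [16, 36]
  }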


I mean, that thread has a comment less than a day ago, and Niko says:

> I'm going to lower this from P-high to reflect reality. I'm still eager to investigate but haven't had time, and making it P-high is not helping =)

P-high means someone is actively assigned and working on it, so yeah in some sense this is a down-prioritization, but only from "Someone is working on this, so put your work somewhere else" to "this is open to work on"; the de-prioritization may lead to it getting fixed sooner, as Niko is a busy guy.

So, "nobody wants to deal with" feels like a mischaracterization to me here.


Well, yes. The last comment says the issue is still there :) I mean, this bug alone in fact nullifies the entire incremental compilation effort. It's kind of weird.

> The de-prioritization may lead to it getting fixed sooner, as Niko is a busy guy

And Niko set it to "medium" priority a month ago :)


True, but we're talking about 32-bit systems. Chrome stopped being able to compile on 32-bit systems years ago, as a C++ program. I think Firefox is in the same situation.

It does sound kind of bad for Rust, but the competition isn't doing much better :p (weak excuse, I know)


There is a really, really substantial difference in the "required for basic credibility/usage" qualifications of a web browser versus an operating system. Operating system instances can run for decades, executing arbitrary tasks, without needing to be restarted. Web browsers need to be updated incredibly frequently just to remain functional for parts of the internet.

That's not to say either one is "worse" or "better", but comparing the two on an axis like platform support is like comparing tomahawk missiles and tall ships. Totally different requirements and use cases.


Travis is running containers, so even if you use an i686 rustc, you still benefit from a 64-bit kernel, meaning processes still have the full 4GB of address space. On an actual i386 Linux kernel, this would be limited to 3GB. Maybe it's even less on OpenBSD (I don't know, but technically it could be as low as 2GB). That could explain why it works for you and not for them.


Ah, right. Thanks.

I still thought that we generally kept it down to around 2GB of space, but maybe that's wrong.


I'm pretty sure the attack you describe is mentioned in the literature as essentially undefeatable. I really wish I could remember exactly what it was called; the gist is, there has to be a first compiler somewhere. If at any point in the chain the compiler is infected with a self-propagating virus that hides itself in the byte code of the binary, it can ensure that the exploit is in every future version of the compiler.

I may be remembering the details a bit wrong, but it was a good read.


You're looking for "Reflections on Trusting Trust" by Ken Thompson, one of the original co-authors of Unix:

https://dl.acm.org/citation.cfm?id=358210


There is a possible defense: https://www.dwheeler.com/trusting-trust/


... and the hope is that https://github.com/thepowersgang/mrustc will let us do this for Rust.


I think you are referring to “Reflections on Trusting Trust” by Ken Thompson.

https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...


It's not undefeatable. The guy who invented it in the 1970's, Paul Karger, told people the concepts for how to defeat it right afterward. Their advice for building systems in a way that catches a lot of subversion was encoded in the first standards for information security. I included most of those methods in my security framework here:

https://pastebin.com/y3PufJ0V

https://en.wikipedia.org/wiki/Trusted_Computer_System_Evalua...

One compiler made to the highest standard in development assurance is CompCert.

http://compcert.inria.fr/

It has specs of everything it does, proofs that it does it in a tool with a minimalist checker, extracts to ML that can be compared against those specs in various ways (e.g. visually), can optionally be compiled with a mini-ML compiler or a Scheme you homebrew yourself, and passed exhaustive testing by a third party with only a few bugs found in the specs (not the code). There's another compiler using formal specs, called KCC, which could be ported to something bootstrappable like META II or Scheme.

The other requirement from TCSEC was that source be supplied to the customer to build with their on-site, trusted tools. I even looked into having compilers done in Perl since it's already widely deployed. David A. Wheeler made the brilliant suggestion of either bash or awk. I have put tools for those and more on rain1's bootstrapping page. rain1 or someone there called the concept "Ubiquitous Implementations." Note we've focused on technical content, not presentation, on that one, being busy folks. Looks rough. :)

http://bootstrapping.miraheze.org/

You also need repo security to protect that source with it either cryptographically sealed and/or sent over secure transport. Link below on repo security from David A. Wheeler. Quite a few forms of transport security now.

https://www.dwheeler.com/essays/scm-security.html

After Thompson wrote on Karger's attack in the 1980's, it took on a life of its own among people who continue to mostly ignore the prior solutions. It's a problem absolutely solved to death, starting with the person who discovered it in the MULTICS Security Evaluation. As far as the state of the art, the current path of research is exploring how to integrate solutions for many languages, models, or levels of detail into one picture of a system with no abstraction-gap attacks, with proof of that for all inputs. That's a bit more complex, but just an imperative language to assembly, delivered and bootstrapped? Straightforward but tedious, time-consuming work the first time it's done. :) Also, expensive if you buy CompCert, which is only free for GPL stuff. Two of us have eyeballed CakeML's lowest-level languages as a cheat around that for verified bootstrapping.

http://cakeml.org/

EDIT: Btw, all that is technical discussion and argument. For fun, you might enjoy the story "Coding Machines", which is about the only coding-related story I started reading and couldn't put down. Probably took an hour to read. It covers the discovery of a Karger-Thompson-style attack along with how people might respond, mentally and in terms of solutions. Some other stuff in that one.

http://www.teamten.com/lawrence/writings/coding-machines/


You want the "trusting trust" keyword.


A talk by Ken Thompson, "trusting trust".


Technically, his Turing award lecture.


And Go: https://github.com/ericlagergren/go-coreutils

Does he really not know this or is he ignoring them to make a point?


It uses remote dependencies. The first compilation in a single-run environment will be very slow, and there won't be a second compilation. The Go compiler is fast when you add the `-i` flag; without it, it takes a couple of seconds to compile a few hundred lines, and a few more minutes when you have to `go get` packages. Now GitHub goes down, and your build is broken for that time.


Not sure what the "it" is in your first sentence, but allow me to address the rest of your points. "First compilation ... will be very slow": the go compiler is in fact quite fast, orders of magnitude faster than C++ or Rust compilers. The fact that there's a noticeable pause when you want to compile thousands of files does not mean that it's slow. "There won't be a second compilation": not for each top-level tool, but there certainly could be shared packages that don't need to be recompiled. "Go compiler is fast when you add the `-i` flag": ok, do that then. "Now, github goes down, your build is broken": you only need to depend on external github references if you want to always build against the latest version of your referenced code. I can't imagine anyone interested in stability wants this. There are lots of options for vendoring your dependencies in tree.


> "Go compiler is fast when you add the `-i` flag", ok, do that then

Only useful when you have a mutable environment; most build environments don't have one because it's insecure. So it's useless for big projects with external dependencies: you HAVE TO download them on each and every build.

> "Now, github goes down, your build is broken": you only need to depend on external github references

Go projects use not only GitHub repos; there are gopkg, GitLab, and some others which I don't remember. All of them must be online and work fast; any lag will delay the whole build system, which in many cases is pipelined. I can't imagine anyone interested in stability wants this.


>Not sure what the "it" is in your first sentence, but allow me to address the rest of your points.

The project in linked repository.


go-coreutils is abandoned, and not POSIX compliant. It was meant to be a proof-of-concept, and it kind of was, in a negative way.


What was the problem?


I have no idea. I have been watching 5-6 similar Go projects (some with the same name) whose authors lost interest (some stopped after the first commit); they tried to create their own versions of coreutils (incompatible), never saw adoption, testing, or support, and they all tanked.


Some would consider Go to not be as safe as Rust, for example. But his argument still holds for POSIX compliance; it is a loveless task, so it gets done at a snail's pace.


This doesn't seem to be more than a toy project [1]. Not to mention, it's GPL-licensed, so it's basically useless for OpenBSD [2].

[1] - https://github.com/ericlagergren/go-coreutils/blob/master/xx...

[2] - https://www.openbsd.org/policy.html C-f GPL


This project is very incomplete (most commands are not implemented) and hasn't had a commit in almost 6 months.



