> Such ecosystems come with incredible costs. For instance, rust cannot even compile itself on i386 at present time because it exhausts the […]
Does he? He states as fact that
> There has been no attempt to move the smallest parts of the ecosystem, to provide replacements for base POSIX utilities.
which as xwvvvvwx notes is categorically wrong, then points out that rustc can't compile itself on i386, which is relevant… how?
Remember he is speaking as the leader of an operating system
project. As in, a basic part of the project functioning normally is compiling the whole thing from scratch. If something needs cross-compilation to even get started it won't end up in OpenBSD base.
I seem to recall that when it supported more architectures, they made a public point that they weren't going to cross-compile even for wimpy/sluggish target machines, because recompiling the OS was a good stress test for the kernel itself.
Those lowest common denominator systems tend to find bugs not present elsewhere.
And stating that openbsd looks like performance art and security theater seems to indicate you haven't looked at what openbsd has done for security.
Yes, bugs are found at boundaries and interfaces, large or small. Something has to be different for the output to be different.
If someone writes a bug proof TLS implementation while at the bottom of the pool wearing SCUBA gear, it is still security theater.
Another point about the *BSDs, something doesn't have to be in base for you to use it. The base system is supposed to be small and not have a lot of dependencies. You are free to use things in ports and packages or compile them yourself. So this is not the same as never being able to use rust.
I'm actually gobsmacked this is the case. There used to be a saying, only half-joking, that a language that can't host/compile/bootstrap itself is nothing more than a toy. As others have more eloquently pointed out, it shouldn't have to be explained why people who write operating systems and compilers would consider that a no-go.
Note that Rust can (and does) target various 32-bit platforms (ARM, and maybe RISC-V, I'm not sure) via cross-compilation. Self-hosting on a 32-bit platform is such a minor drawback these days, and 64-bit ARM processors are becoming more common as well.
32-bit is not even close to being that old; lots of machines, applications, and platforms require it, and they need maintenance. OpenBSD especially is a system that I, as a Linux fan, recognize for supporting old hardware much better than Linux does.
And it's not only 32-bit x86; there are plenty of other architectures limited to 32 bits, mostly from the ARM sector, but I believe some older MIPS and other even more exotic archs are supported.
If Rust can't self-host on x86, how can we expect it to self-host on the more exotic 32-bit hardware that BSD supports?
My point is more that I wouldn't want the Rust developers to spend time on making it able to self-host on 32-bit platforms. I'd prefer they spend their time elsewhere, that's all.
And I'm sure the OpenBSD developers would prefer to spend their time on OpenBSD instead of working around Rust's lack of support for a platform that OpenBSD supports. I'm still kind of surprised that when the question of "why doesn't OpenBSD switch to Rust?" and the answer was "because Rust doesn't self host on a platform we support" that the response has been "well drop that platform." How about: no. How about: if someone wants Rust to be a viable option, then they have to adjust Rust to be a viable option, not ask other projects to massively constrain their currently working support of a platform.
I hate to sound like the old geezer, but I get the impression that many people here have no clue about software beyond desktop and mobile. It's like they don't even realize that firmware and operating systems have to be written and maintained on older hardware. There is a lot of software out there running on legacy hardware that the world depends upon which you never see.
So gobsmacked you apparently couldn't even begin to attempt an answer to the question, but felt you just had to go on a rant as irrelevant as you believe it is righteous, huh?
> There used to be a saying, only half-joking, that a language that can't host/compile/bootstrap itself is nothing more than a toy. As others have more eloquently pointed out, it shouldn't have to be explained why people who write operating systems and compilers would consider that a no-go.
Rust has been self-hosted for almost as long as it has existed. The bootstrapping OCaml compiler was left behind back in 2011.
Not on 32-bit x86, which is what this whole conversation is about. So if OpenBSD used Rust in base, they would have to drop support for i386.
i386 is relevant because OpenBSD supports i386.
No, he is very explicitly saying that
> There has been no attempt […] provide replacements for base POSIX utilities.
Which once again is categorically false, a github repository purporting to do exactly that has been provided.
> i386 is relevant because OpenBSD supports i386.
i386 is supported, the issue is compiling the compiler on i386.
which is required for the system to be self-hosting
seriously - what is being said is this:
oh hey, let's throw away the functional and perfectly good entire base set of utilities for this half-complete project on GitHub, using a language that doesn't even natively build on all of our supported platforms, wouldn't even remove the need for a C compiler in base, would further complicate the base toolchain, not to mention breaking all kinds of other builds which use shell utilities expecting certain behavior, etc., etc., etc., because someone thought it would be 'neat' to do this. And whyyyy aren't you taking me seriously???
every few days (hours?) some noobish person decides to ask some fantasy question about whatever topic of interest they are noobing about on openbsd (and other OS) discussion lists, and then gets whiny when they are called out for being 'green' about life itself. this is another of those cases, and I have no idea why it got crossposted here or upvoted.
this sort of attitude is astoundingly hostile and toxic for an open source community to hold.
Now, that doesn't mean that a hostile rebuff is required or good policy, but random people who actually do not know what they are talking about, and haven't taken the time to learn enough to know what they're talking about, don't really deserve a long, in-depth, drawn out rebuttal or discussion.
You deliberately cut out the part which states he's talking about the ecosystem of OpenBSD, in an OpenBSD mailing list. That is an extremely disingenuous and uncharitable cherry-picked interpretation. He was categorically talking about efforts to port OpenBSD utilities to such a language and merge them into the project (i.e. the OpenBSD ecosystem). What you're suggesting he said is just plain FUD.
I can only imagine it works fine on dev machines with much faster quad+ cores and 64GB of RAM or whatever.
Just as an aside, it's done a lot to have the tablet be my primary "fiddle-at-home" machine: keeps me really conscious of resource limits, including ones I normally don't think of like screen size. (Most websites render terribly in landscape on a 10" tablet.)
The solution is to install 32-bit Linux on it. Then it won't suck.
Most applications request hundreds of megabytes if not entire gigabytes; the system will swap to death after you open an app and a browser tab on Facebook.
I remember a friend who bought a 2GB netbook; the thing froze to death whenever he opened just Eclipse. He had to return it.
Performance matters. Even more than features.
Performance is like money: it's easy to squander and hard to acquire.
In this case, there were several decisions not to care about performance right now in order to emphasize correctness and shipping faster, while making sure there are no technical obstacles to making compilation faster in the future. The main time sink in Rust compilation is that all the abstractions in the Rust code get compiled into the initial LLVM IR passed to the LLVM side, and they are only reduced there, instead of cutting down on the hierarchies on the Rust side. This costs performance in several places -- the creation of all the IR, the copying of it, and then LLVM parsing it all in. The upside of doing it this way is that it makes the Rust-specific part of the compiler much simpler and easier to implement, and that the optimizations that remove the towers of abstraction on the LLVM side are extremely well tested.
As Rust matures, optimizations that reduce the complexity of the initial LLVM IR can be, and probably will be, done on the Rust side.
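You can see this cost for yourself by asking rustc to dump the LLVM IR it hands over and comparing its size to the source. Just a sketch, assuming rustc is on your PATH; `demo.rs` is a made-up file name:

```shell
# A tiny iterator chain that monomorphizes into a surprising amount of
# generic machinery before LLVM ever sees it.
cat > demo.rs <<'EOF'
fn main() {
    let n: u32 = (0..100).filter(|x| x % 3 == 0).map(|x| x * 2).sum();
    println!("{}", n);
}
EOF

# Emit the unoptimized LLVM IR that rustc passes to LLVM; its size
# relative to the source hints at how much abstraction LLVM is asked
# to boil away.
rustc --emit=llvm-ir demo.rs
wc -l demo.rs demo.ll
```

The `.ll` file is typically many times longer than the source, which is exactly the "towers of abstraction" handed to LLVM described above.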
Also, have you seen gameplay footage of in-development titles, particularly older ones before the days of Unity and UE4? Usually they're choppy as hell because the engine is under development at the same time as the game, and devs prioritize getting a golden but slow codepath working first so that the artists have something to go off of, and all the optimizations are shoved in to recover framerate in the months right before release.
To rephrase that a bit: OpenBSD is designed to be a system where any user on any platform can contribute to OpenBSD using the tools included in OpenBSD's default install. Deviations from that will almost certainly receive a cold reception at best.
The compilation phase can take hours in C++, up to a day when compiling huge projects with all the optimization flags.
Live with that for some time and it will quickly prove you wrong. Compilation time matters.
Slow compilation: more than a minute (tempts me to start browsing HN, miss the end of the build, and thus lose even more time)
To have fast compilation even with big projects is hard. Go, C, and D are usually fast. Scala is usually slow.
I care about development builds primarily. The edit-compile-test loop must be really really fast. Optimization flags are irrelevant, because if performance matters you often must have them enabled for development as well.
> Slow compilation: more than a minute (tempts me to start browsing HN, miss the end of the build, and thus lose even more time)
See the thread here: https://askubuntu.com/questions/409611/desktop-notification-...
TL;DR install undistract-me, add 2 lines to bashrc, and you will get a desktop popup when a command that takes longer than 10 seconds to complete is finished.
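For reference, the two lines look something like this. This is a sketch based on the Ubuntu/Debian package; the install path may differ on your system:

```shell
# ~/.bashrc -- load undistract-me's prompt hook and enable notifications
source /usr/share/undistract-me/long-running.bash
notify_when_long_running_commands_finish_install

# Optional: raise the notification threshold from the 10-second default
LONG_RUNNING_COMMAND_TIMEOUT=30
```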
Fedora does this by default on install and I have found it so handy. Kick off a compile etc, then just browse HN/reddit til I get the popup.
But you do have a point. Things are so much better now than when I started programming 50 years ago. Machines and languages are so much better. Programming is a dream compared to back then.
(Appreciating as well that most incremental build gains come from avoiding unnecessary work, so they're as much the domain of the build system as they are of the compiler.)
Also, the importance of optimizing compilation is no reason not to work on i386. This is a point Rust needs to fix, and not only i386 support, but other architecture families as hosts too.
All the heavy API and computation libraries are wrappers around C binaries that are optimized to death.
Not many people are whining about our C compiler toolchains not fitting into our microcontrollers.
For example, the NetBSD project has a Dreamcast port, but like most of their ports, it is cross-compiled. The last time I tried the port, it would crash when put under high load for a while and would kernel panic when you tried to play audio. The NetBSD Dreamcast port is not functional in the sense that OpenBSD would like to enforce; that is not relevant to NetBSD and in no way denigrates them, but it serves as an example.
Trite point? OpenBSD supports i386. If they pull a Rust compiler into base and start rewriting things in Rust, then they can't support i386. Dropping a supported platform is not a "trite point".
That is a choice they are entitled to make, the trade-off being it would appear to make most modern technologies a poor fit for adoption in OpenBSD - that's the price they have to pay. It is a problem of their own making.
...which is exactly the kind of a system one needs for production environments.
So you could have a scenario where the i386 binaries would have to be cross compiled from a 64-bit machine but the end result would work just fine on those machines.
I wasn't aware that OpenBSD's objectives included having each arch be able to build itself. Totally reasonable goal. It's harder to do "dreamcast port"s in those scenarios, though.
Consumer machines, AFAICT, are all amd64 (or x86_64, if you prefer that name). I understood the original post to mean i386 == x86, and I agree — where do you even find an x86 today (for sale, in a non-niche use case, i.e., "pretty easy")?
On a related note, we will eventually be running development tools on microcontrollers. Not that little 16-bit parts will run the tools, but that 16-bit parts are going away. In price-sensitive areas this will not happen, but for things with a larger budget, why not run the tools right on the target? If your controller is an RPi, why not use it for development?
It includes 486, Pentium, etc.
> the 80386 instruction set, programming model, and binary encodings are still the common denominator for all 32-bit x86 processors, which is termed the i386-architecture, x86, or IA-32, depending on context.
That would be great! Get back to me when it exists, instead of trying to dissuade an already established project from one of its primary goals and handwaving said goals as "outdated" and "unimportant". Heck, it's open source, so if someone feels like it, they can just fork and go hog wild!
If rustc cannot even build itself on i386, what kind of support can we expect for other platforms with an even smaller user base?
On a project such as OpenBSD they cannot suddenly drop platforms and support only amd64, as portability is one of their main "selling" points. Furthermore, that would also mean losing the developers who are interested in these alternative platforms and probably chose OpenBSD because of its platform support. Such developers usually contribute not only to platform-specific parts, but also to system utilities and ports.
For the full list of platforms, see https://www.openbsd.org/plat.html
Can confirm, spent many happy hours hacking on an AlphaStation running OpenBSD!
They… don't need to? That rustc can't compile itself on i386 doesn't mean you can't ship a rustc for i386, it just means you have to cross-compile it.
Imagine, as a developer who compiles base from source, that you had to find another system just to compile rustc and then transfer it to your machine. And you would have to do this not just once, but for every compiler bug fix, on top of the overall rapid evolution of Rust. I think many in the OpenBSD community would oppose such an approach, even before considering other aspects such as the security implications.
You always need an external system for bootstrapping; you're not hand-assembling the base C compiler with which you're compiling everything else. At some point you need to obtain a compiler from somewhere else.
> In OpenBSD there is a strict requirement that base builds base.
i386 might not be as popular as it was on 'normal' OS, but I wouldn't be surprised if OpenBSD had a lot of people still using it.
I'm a big fan of what Rust promises, but the solution is not for OpenBSD to change its policies or drop i386. Rust should become self-hosting on i386.
These aren't just philosophical comments; the implication for him and OpenBSD devs is spending massive time developing these things. Dropping a supported platform and taking on a huge investment of effort needs serious justification.
I've seen i386 used multiple times to refer to any x86 arch.
> All CPUs compatible with the Intel 80486 or better,
$ uname -a
OpenBSD hostname 5.9 GENERIC.MP#6 i386
Is cargo supported on i386 platforms? Also, Rust compiles itself; afaik there is no way to compile Rust/Cargo except with a previous version of it. If one of the past builds of Rust was backdoored, every version between then and now is backdoored. The language is safe; the environment is only as safe as it has never been compromised. OpenBSD compiles everywhere C code works; Rust/Cargo works where it's supported, and it will take decades to catch up on some architectures.
Rust absolutely works on 32-bit platforms, though we often use the i686 target rather than an i386 one. Platform support list is here: https://forge.rust-lang.org/platform-support.html
Theo is talking about building rustc, not compiling most Rust programs. That's the first distinction that it seems like many people are missing.
However, apparently the compilation process OOMs when building on an i386 box. I don't use those platforms, but I'd believe it. The Rust compiler is large. However, I thought (and looking at our CI, this seems to be true https://travis-ci.org/rust-lang/rust/jobs/311223817) we do compile with an i686-unknown-linux-gnu host (for this build), so I dunno. Maybe it was a fluke, maybe I'm misunderstanding, I'm not sure.
We often provide artifacts via cross-compiling, but this is unacceptable to OpenBSD. That's totally okay. They have good reasons for doing this.
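To make the distinction concrete: cross-compiling an ordinary Rust program for 32-bit x86 is routine. A sketch, assuming rustup and an existing Cargo project, with the target name taken from the platform-support list linked above:

```shell
# One-time: install the 32-bit x86 standard library for cross-compiling
rustup target add i686-unknown-linux-gnu

# Build the crate for the 32-bit target; the compiler itself runs on
# the 64-bit host, only the produced artifact is 32-bit
cargo build --release --target i686-unknown-linux-gnu
```

Building rustc itself with a 32-bit host (roughly `./x.py build` in the rust-lang/rust tree with an i686 host triple) is the step that reportedly runs out of memory, and it's the one OpenBSD's base-builds-base policy requires.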
The compiler is one of the largest Rust programs that exist. Last I checked, it was three quarters of a million lines of Rust, but a quick loc shows 1.5 million lines of Rust, 2.3 million lines of C++, and 900,000 lines of C (again, mostly LLVM and jemalloc).
Servo is also very large, and they don't report having OOMs, though I'm not sure if they build on 32-bit or just cross-compile.
That bug bites hard on any code heavy on iterators (an often-praised Rust feature!). It has a reliable reproduction test case, but it's already a year old and was down-prioritized!
Hard to believe anybody uses Rust for a real large project given so little attention to crucial details.
> I'm going to lower this from P-high to reflect reality. I'm still eager to investigate but haven't had time, and making it P-high is not helping =)
P-high means someone is actively assigned and working on it, so yeah in some sense this is a down-prioritization, but only from "Someone is working on this, so put your work somewhere else" to "this is open to work on"; the de-prioritization may lead to it getting fixed sooner, as Niko is a busy guy.
So, "nobody wants to deal with" feels like a mischaracterization to me here.
> The de-prioritization may lead to it getting fixed sooner, as Niko is a busy guy
And Niko set it to "medium" priority a month ago :)
It does sound kind of bad for Rust, but the competition isn't doing much better :p (weak excuse, I know)
That's not to say either one is "worse" or "better", but comparing the two on an axis like platform support is like comparing tomahawk missiles and tall ships. Totally different requirements and use cases.
I still thought that we generally kept it down to around 2GB of space, but maybe that's wrong.
I may be remembering the details a bit wrong, but it was a good read.
One compiler made to the highest standard in development assurance is CompCert.
It has specs of everything it does, proofs in a tool with a minimalist checker that it does what they say, extracts to ML that can be compared against those specs in various ways (e.g. visually), can optionally be compiled with a mini-ML compiler or a Scheme you homebrew yourself, and passed exhaustive testing by a third party with only a few bugs found in specs (not code). There's another using formal specs called KCC which could be ported to something bootstrappable like META II or Scheme.
The other requirement from TCSEC was that source be supplied to the customer to build with their own onsite, trusted tools. I even looked into having compilers done in Perl since it's already widely deployed. David A. Wheeler made the brilliant suggestion of either bash or awk. I have put tools for those and more on rain1's bootstrapping page. rain1 or someone there called the concept "Ubiquitous Implementations." Note we've focused on technical content, not presentation, on that one, being busy folks. Looks rough. :)
You also need repo security to protect that source with it either cryptographically sealed and/or sent over secure transport. Link below on repo security from David A. Wheeler. Quite a few forms of transport security now.
After Thompson wrote on Karger's attack in the 1980's, it took on a life of its own among people who continue to mostly ignore the prior solutions. It's a problem absolutely solved to death, starting with the person who discovered it in the MULTICS Security Evaluation. As far as state-of-the-art goes, the current path of research is exploring how to integrate solutions for many languages, models, or levels of detail into one picture of a system with no abstraction-gap attacks, with proof of that for all inputs. That's a bit more complex, but just an imperative language to assembly, delivered and bootstrapped? Straightforward but tedious, time-consuming work the first time it's done. :) Also, expensive if you buy CompCert, which is only free for GPL stuff. Two of us have eyeballed CakeML's lowest-level languages as a cheat around that for verified bootstrapping.
EDIT: Btw, all that is technical discussion and argument. For fun, you might enjoy the story "Coding Machines", which is about the only coding-related story I started reading and couldn't put down. Probably took an hour to read. It covers the discovery of a Karger-Thompson-style attack along with how people might respond mentally and in terms of solutions. Some other stuff in that one too.
Does he really not know this or is he ignoring them to make a point?
Only useful when you have a mutable environment; most build spaces don't have one because it's insecure. So it's useless for big projects with external dependencies: you HAVE TO download them on each and every build.
> "your build is broken": you only need to depend on external github references
Go projects use not only GitHub repos; there are gopkg.in, GitLab, and some others I don't remember. All of them must be online and fast, since any lag will delay the whole build system, which in many cases is pipelined. I can't imagine anyone interested in stability wants this.
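For what it's worth, vendoring is the usual answer to this. With modern Go modules (a later addition than the tooling this comment describes; the sketch assumes a modules-based project) it is two commands:

```shell
# Snapshot every remote dependency into ./vendor, committed with the repo
go mod vendor

# Build entirely offline from the vendored copies
go build -mod=vendor ./...
```

After that, builds never touch github.com, gopkg.in, or any other remote host.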
The project is in the linked repository.
 - https://github.com/ericlagergren/go-coreutils/blob/master/xx...
 - https://www.openbsd.org/policy.html C-f GPL