Weird architectures weren’t supported to begin with (2021) (yossarian.net)
107 points by dbaupp on Feb 18, 2023 | 93 comments



I never agreed with this. Who defines what counts as "weird"? Should every project declare up front that it is only designed to work on:

- x86_64 and aarch64
- Linux, Darwin, and Windows
- glibc (no musl)
- binutils x.xx
- ...

... where do you stop before you're defining an exact linux distribution?

If it weren't for people building and using packages on "unsupported" architectures, there would be no good userspace support for ARM or aarch64 today. These too were once "weird architectures" that only have good support now because people built for (and used) them without permission, then shook all the bugs out. The same thing is happening right now for RISC-V.

I have to contrast this attitude with CPython's: while I'm sure its maintainers never originally said the code was designed to run on s390 (or RISC-V, or your architecture of choice), they would not go ahead with a change that froze out those users.

I also don't think the "we're just a poor bunch of small-time package maintainers whose package accidentally became popular" angle quite works, because they call themselves the "Python Cryptographic Authority" and use the prominent package name `cryptography`, which certainly appears to imply some sort of official status. I wouldn't blame people for inferring that its support policies were somewhat aligned with the Python Software Foundation's.


I can't be bothered to support you if my choice of programming language doesn't work on your washing machine/Itanium server/IBM mainframe, and I don't see why others should be. Maybe if you donated the necessary hardware to test the software against and paid for the extra effort required to make sure the software works, but only if the maintainers are interested in spending their time that way in the first place.

All of these ports exist because volunteers or companies decide to put effort into porting these projects. They're usually outside the control of the original maintainers.

If you want software to work on your machines, submit patches to make it work, pay someone to make it work for you, or pay for a support contract with someone who can make it work.

You're entitled to the terms stated in the contract you've signed, which is usually nothing. If using a modern language is incompatible with your ISA of choice, ask the ISA developers to port the necessary compilers so it does work.

In this instance, the language weird architectures don't support is Rust, which has a GCC frontend, assuming your operating system is at least somewhat up to date.

This "if you call it open source you must make your code work for my niche use case" mentality is just one of the many ways of being entitled that make maintaining open source software suck. If you absolutely need software to keep working for you, buy hardware that's actually supported in the first place.


"support" means two things. On the one extreme end, it means "provide commerical support for a product", on the other end it means "permit it to happen". I wouldn't support anything in the first sense, but if I would maintain a popular package I would strive to support as much as possible in the second sense.

In my experience, keeping code general and cross-platform is a great way to ensure correctness, especially in C(++). I can't count how many bugs I've caught by cross-compiling Linux code on Windows, or with different compilers, or on different architectures. Usually it's the kind of sleeping bug, due to undefined behavior or wrong assumptions, that kind of works but might blow up a long time later.

In most projects, if somebody provides a small patch to make my code work for their use case, and it's not too much trouble for me, and my tests pass, then I'll gladly include it. I'll not "support-support" it, but I think one should support it. At least if I open-sourced it because I want it to be useful to other people.


Many (most?) projects bypass this by simply not specifying which platforms are supported, and making decisions about breaking support on an ad hoc basis. Of course OSS developers are entitled to do this if they think it's best, but that's a good way to either create a burden for yourself when people start using your package in unintended ways, or to annoy people when something suddenly stops working because you never intended to support it.

Rust is doing it right by explicitly stating which targets are supported and to what extent. You can use it for other targets (I've written a lot of code for AVR which isn't supported at all) but if something breaks they stated up front that it wasn't supported.


> I have to contrast this attitude with CPython's: while I'm sure its maintainers never originally said the code was designed to run on s390 (or RISC-V, or your architecture of choice), they would not go ahead with a change that froze out those users.

You're misremembering history. CPython has removed support for several platforms, including formerly major ones like Windows XP and Vista.

> Attached PR removes code to support Windows Vista.

https://bugs.python.org/issue32592

And Python 3.9+ doesn't work anymore on Windows 7:

https://discuss.python.org/t/windows-7-support-for-python-3-...


Perl might be a better stand-in for the argument. It has supported oddball platforms for its entire history, with pretty extensive build support that checks for lots of platform-specific settings.


“Congratulations! You aren’t running Eunice!” From the early Perl configuration script.


I'm not counting platforms removed because of deprecation.


Surely Hewlett Packard’s 1990s CPUs are at least as deprecated as Windows 7?

Or IBM’s System 390 mainframes, which were discontinued in 2004?

What is the difference?


> I also don't think the "we're just a poor bunch of small-time package maintainers whose package accidentally became popular" angle quite works, because they call themselves the "Python Cryptographic Authority" and use the prominent package name `cryptography`, which certainly appears to imply some sort of official status.

Author here; I think you might have misunderstood this point. It's not about official status, blessing, size, or funding: it's the fact that cryptography (and many other things) is hard to do safely and reliably across multiple platforms, when the common layer of interaction is fundamentally unreliable.

The problem here isn't your own initiative (by all means, compile anything on any architecture you please), but whether it's reasonable to expect projects to paper over those unreliabilities (and, in doing so, accept responsibility for platforms they didn't want to support in the first place).

My intuition is that it isn’t reasonable on its own, and that expecting maintainers to do this (particularly when so much of it boils down to free corporate support) is unfair and a security hazard.


> The problem here isn't your own initiative (by all means, compile anything on any architecture you please), but whether it's reasonable to expect projects to paper over those unreliabilities (and, in doing so, accept responsibility for platforms they didn't want to support in the first place).

Borderline agree, though the debate around this post always expanded to cover projects in general (which is not your fault).


Yep -- I don't want this post to be read as a dictum, but as an identification of a conflict between two principles the OSS community generally tries to uphold: we all generally agree (1) that we should be able to do whatever we like with free and open software, and (2) that maintainers are not responsible for what we do, or for fixing our problems.

Open source ate the world, which has meant a heavy tilt towards (1): we see people go out and do stuff without questioning the underlying design and system invariants that made a piece of open source software secure and desirable in the first place. That ends up eroding (2); I'd like to see us rebalance a bit back towards maintainers' interests.


I'm a blogger who wrote a post about this too. [1]

I disagree with you somewhat.

> expecting maintainers to do this (particularly when so much of it boils down to free corporate support) is unfair and a security hazard.

This would be true if its premise was true, but its premise is not quite true.

One of the maintainers works for Red Hat Security Engineering. Another has a computer security company.

If the latter wanted to charge for work on the library, he certainly could. The former probably gets paid to work on it already.

So yeah, I'd expect something better from them.

[1]: https://gavinhoward.com/2021/02/rust-zig-and-the-futility-of...


> One of the maintainers works for Red Hat Security Engineering. Another has a computer security company.

I don't know where you got this from. I know Cryptography's maintainers well, and neither works at Red Hat nor owns a security company.

I know for a fact that neither gets paid to work on Cryptography. Even if they did, there is no guarantee that having a job in a company grants you the mandate (or latitude) to maintain support for random platforms, especially ones used by other companies.


aarch64 and RISC-V got ports and support primarily because they have major industry backing, not because of hobbyists.


Do you think it's hobbyists trying to run cryptography on s390?


Not sure, but I’d love to know more about their specific usage, because it seems something fixable.


I once had to deal with a 12-bit OS, because in the early Cold War the Navy had to do some targeting calculation that required exactly 12 bits. So they ordered a computer with 12-bit words.


> The security of a program is a function of its own design and testing, as well as the design, testing, and basic correctness of its underlying platform: everything from the userspace, to the kernel, to the compilers themselves.

You don't even have to worry about bugs. Many cryptographic primitives require constant-time execution to guarantee security, which means the latency of each machine code instruction can be vital to security. Latency differences between vendors within the same architecture are already a concern.


This is a good point! But to be maximally precise: variations between each ISA implementation's latencies are not particularly important; what's important is that instructions on data-dependent paths are not themselves data-dependent in timing.

In other words: it's okay if AMD64 takes 9 cycles to execute an instruction while IA32e takes 7; the problem arises when AMD64 takes 9 + k cycles for k bytes of input while cryptographic engineers have tested on IA32e and assumed a constant overhead of 7 cycles.


> the problem arises when AMD64 takes 9 + k cycles for k bytes of input while cryptographic engineers have tested on IA32e and assumed a constant overhead of 7 cycles.

Why is that a problem? Doesn't that just leak something you could determine by just looking at the CPU type?


That also leaks how many bytes of data are being processed. This sometimes matters.

When checking for a password match you have to check all characters in the string; otherwise you leak where the mismatch was. Even in a properly salted and hashed scheme, that makes breaking the password easier.
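To make that concrete, here's a minimal Python sketch; the function names are made up for illustration, while hmac.compare_digest is the real standard-library tool:

    import hmac

    # Naive check: returns at the first mismatch, so the response time
    # leaks how many leading bytes of the guess were correct.
    def leaky_equals(a: bytes, b: bytes) -> bool:
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False
        return True

    # Constant-time check: touches every byte no matter where the first
    # mismatch is. The stdlib already provides this; don't roll your own.
    def safe_equals(a: bytes, b: bytes) -> bool:
        return hmac.compare_digest(a, b)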


These side channel attacks allow perfectly secure algorithms to leak plaintext or even complete keys.


In an ideal world, a cryptography library wouldn't have to care about these kinds of details. It would be written in a completely cross-platform, mathematically pure fashion, and it would be the job of the platform to prevent information leaks (constant time, constant power consumption, ...).

I wonder if there is a CPU that you can switch to "constant resource mode" where it runs everything in fixed time, albeit slower? Or maybe you could run cryptographic operations in a black box (like an extra chip or a networked node) that replies on a fixed schedule?


> I wonder if there is a CPU that you can switch to "constant resource mode" where it runs everything in fixed time, albeit slower? Or maybe you could run cryptographic operations in a black box (like an extra chip or a networked node) that replies on a fixed schedule?

I believe you can disable the CPU cache on x86, which at least gets closer.

However, “slower” in this case means thousands of times slower.


No problem.. I'll just hang out here until you come up with a general way to transform an O(n) operation, for arbitrary n, into a 1000x-slower constant-time one.


Bounded time. If you have some operation that could leak information, like checking whether a key can decrypt something, take the longest possible time, add some margin, and only return after that time has elapsed. If it takes 50 ms per request then so be it; a sketch follows below.

Edit in response to sibling: Imagine it doesn't just take 1s but also makes a clicking noise like a relay. You could still do 10000 requests/second if you install 10000 of them :-D. I wonder if for certain use cases it would be worth the trade-off, if you had proof of being absolutely side-channel free.
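A hedged sketch of the bounded-time idea in Python; the 50 ms budget and the function name are just illustrative, and note this only hides timing, not cache or power side channels:

    import time

    DEADLINE = 0.050  # 50 ms budget; must exceed the worst-case duration

    def bounded_time(operation):
        start = time.monotonic()
        result = operation()
        remaining = DEADLINE - (time.monotonic() - start)
        if remaining < 0:
            # The budget was wrong; failing loudly beats leaking timing.
            raise RuntimeError("operation exceeded the time budget")
        time.sleep(remaining)
        return result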


Sleep until 1s total has passed. If the operation took more than that, panic.

I don't think that's very practical, though. :V


There’s always been a weird double standard in open source software where it has been seen as ‘reasonable’ to expect the same code to compile and work on everything from an Arduino to an IBM mainframe, PDP11 to a RISC-V.

..But getting it to work on Windows? What? Do you expect the developers to fork out for Windows licenses? Don’t be ridiculous. Heck, some software is still downright snooty about the absurd idea that it should run cleanly on a Mac.


What's always seemed weird about that, to me, is that I've never been surprised when some lone hacker -- who is not the developer of the tool in question -- volunteers to use the PA-RISC machine they have access to in order to make something work in that environment. Contrast that with the Windows world, where maybe two-thirds of all developers on the planet use Windows, yet somehow they still can't find one bloke who'll do the same work there. How can the Windows world be so rich with talented professionals yet never seem to produce one who'll contribute a patch?


> How can the Windows world be so rich with talented professionals yet never seem to produce one who'll contribute a patch?

The obvious explanation is that the set of Windows users that are also competent and willing to work on these projects is almost empty.


“Why will nobody from the Windows community contribute to our software? Is it because I said that my baseline assumption is that they’re all incompetent?”


I know plenty of extremely competent software developers who are Windows users, but, it seems, they use different software. The intersection I've found is people inclined to help who are forced by corporate policy to use Windows in environments where WSL doesn't work well (I'm one of those).


Well, it’s often not just a ‘patch’, is it?

If the software is written in ways that deeply embed assumptions of POSIX-derived concepts then it’s not just popping in a few extra #ifdef WINDOWS blocks.


Most software shouldn’t be looking to the OS that closely anyway and anything that has to interact closely with the platform in non-portable ways should be, in an ideal world, separated into its own library so it never needs to be ported to anything other than the platform it calls home.


Playing devil's advocate: since the operating system mostly abstracts the underlying hardware, when programming with a high-level language, an IBM mainframe running Linux and a MIPS router running Linux are both closer to an AMD64 desktop running Linux than the same AMD64 desktop running Windows. And Windows is a particularly annoying case, since its native API is so different from the Linux native API (and details leak even when using API wrappers, for instance removing a recently-created file can fail on Windows because some other process like an antivirus grabbed a handle to it).

But yes, I agree that a nontrivial fraction of the objections to making software run on Windows (or Mac) is for ideological reasons.


Everyone is welcome to report issues in my projects. I'm allowed to pick which ones I work on. I use Linux, so issues reproducible on Linux have priority. RISC-V sounds fun; I'd probably take a look if I could reproduce the issue with qemu. I don't find Windows fun, and I doubt I would fix any Windows-specific issue myself (unless it's something trivial, like newline handling).

Pull requests are a different beast: Windows users are always welcome to fix the issues they face and submit their improvements back.


Obviously I get that something that is written in ‘portable C’ for Linux-flavored POSIX plus GNU tools is going to be harder to get working on Windows than on m68k Linux. That’s not in dispute.

It’s more the way that people act like targeting Linux-flavored POSIX with ‘portable C’ both A) is the be-all and end-all of portability and B) entitles people who have got Linux-flavored POSIX environments and C tool chains up and running in arbitrarily weird contexts to full support.

There are other environments. Some of them are very widely used. Acting bemused by the idea someone would want to run your software in somewhere other than a Linux context, while being unsurprised that people want to take that Linux context and run it in arbitrarily weird ways is the double standard I am confused by.

Obviously it’s not universal - there are plenty of open source projects that take different paths and philosophies towards portability - think of Python, say, or Firefox.

But there is a persistent attitude that because Linux can run ‘anywhere’, supporting Linux is portability.


To be fair, software written against Unix-flavored POSIX runs on almost every commercially supported operating system out there, from Linux and QNX to macOS, all the way to z/OS, which (incredibly) is a certified UNIX operating system.

The only significantly used OS that’s not there is Windows. Also, IBMi, Unisys’s MCP and Atos’s GCOS and GECOS (surprisingly still supported). And OpenVMS too.


… you’re making my point for me.

"Why are you complaining that you can't use this software on Windows? It's portable -- you can run it on a zMachine! Heck, it even runs on BSDs!"

Windows is not some obscure platform nobody uses. Wanting to run software on it is reasonable.

And I feel I should point out I am saying this as a lifelong Mac user, not a Microsoft shill. Parts of the Linux-centric community’s blind obstinate insistence that Windows just isn’t relevant because it’s non-POSIX just looks petulant.


> Windows is not some obscure platform nobody uses. Wanting to run software on it is reasonable.

If the maintainers don’t use Windows, it’s kind of not really their problem. If the user wants to use Windows and wants the software to run on Windows (and not under WSL), then they can fix it. Patches are usually welcome.

On the Mac side, it’s kind of in a sweet spot: the GUI is good, lots of excellent software for it, AND it’s a good Unix. When working on a Mac, I very rarely need to resort to a virtual Linux environment the way I do on Windows.


Microsoft has too many billions of dollars to count. Maybe they should be the ones to be held accountable for upholding community standards, not the free volunteers.


Which ‘community standards’ do you mean?

POSIX?


POSIX is a corporate standard.

Community standards are usually not formalized.


> The C abstract machine, despite looking a lot like a PDP-11, leaks the underlying memory and ordering semantics of the architecture being targeted. The result is that even seasoned C programmers regularly rely on architecture-specific assumptions when writing ostensibly cross-platform code: assumptions about the atomicity of reads and writes, operation ordering, coherence and visibility in self-modifying code, the safety and performance of unaligned accesses, and so forth. Each of these, apart from being a potential source of unsafety, are impossible to detect statically in the general case: they are, after all, perfectly correct (and frequently intended!) on the programmer’s host architecture.

I don't disagree with his general point that C isn't really cross-platform, but these are bad examples:

- The C memory model addresses the issues regarding atomicity, memory ordering, and the safety of unaligned accesses (although users might not like the answer for the last one).

- Coherence within a C program is guaranteed by the language, so issues with coherence only arise when interacting with an incoherent external agent. How could any language solve this?

- Are there any examples of self-modifying code that don't already make assumptions about the underlying ISA? If you're already doing that, dealing with data/instruction memory consistency doesn't seem particularly onerous. Even when only targeting AArch64, it isn't possible to write platform-independent userspace assembly code that does this because instruction cache maintenance instructions might not be enabled for EL0.


A lot of the problem is that GCC hasn't supported Rust, and LLVM supports far fewer platforms than GCC. Since this article was written (2021), there's been work to implement Rust fully in GCC (gccrs), as well as to let rustc emit code through the GCC backend (which supports far more architectures). So there's a reasonable expectation that some of this problem will be fixed in the long term.

IBM mainframes write most paychecks, and companies do pay some OSS developers to support some of the architectures the author doesn't like. Making it impossible is different.


I think the frustrating thing is that, while Rust may consider one of these architectures "tier 2" or unsupported or whatever, there's a Python interpreter for it that presumably works, and whoever is using the Python cryptography package expects it to work everywhere the rest of their code does. And very likely it's some dependency several layers deep. If my code is going to end up architecture-dependent, then why bother with an interpreted language in the first place?


> why bother with an interpreted language in the first place

+1

As someone who has used Python professionally for 12 years, I usually try to avoid it as much as I can. When you have a Python problem, you usually have a C, C++, Rust, libc, or architecture problem that you don't realize you have. Most of the useful parts of Python are written in C, C++, Fortran, or Rust, so when you try to deploy to some less frequently used platform it can burst into flames in the worst kind of ways. Try deploying to AWS Lambda with Python 3.9 and see the lolz. I have spent more hours trying to get some Python lib to work on a platform than I spent learning Rust.

I think interpreted languages are a dead end, especially combined with bad practices. I make my living writing Python, but it is a misery in every direction. It is not an accident that Rust is continuously the most loved language: it just works. Anything I try to do in it works as expected, and I am not walking through a minefield.

Let me give you a simple example of how Python can surprise you. Let's create an app in Python that uses a lib called X. You build your project and everything works locally; unit tests are OK, integration tests too, and so on. Now deploy this code to AWS Lambda (not your choice; your employer decided to go with that). You package everything up on a Linux that matches the architecture of the target (let's say x64). If you are not familiar with setting the target Python version with pip (and most Lambda documentation does not mention it), you deploy your code and try to invoke it. Fail. You have X.311.so in your deployment package and Lambda tries to load X.39.so. Now you need to figure out how to set the version, or have a build env that matches the CPU AND the Python version of the target system.
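For anyone who hits the same wall: pip can be told explicitly which platform and interpreter to fetch wheels for. A hedged sketch -- these are real pip flags, but the package and directory names are just this example's:

    # Fetch wheels for the *target* environment, not the build host:
    pip download cryptography \
        --platform manylinux2014_x86_64 \
        --python-version 3.9 \
        --implementation cp \
        --only-binary=:all: \
        --dest ./lambda-deps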

I could continue down this rabbit hole for a while, but the point is that you can't talk about Python alone; you need to pull in the Cartesian product of libc, libXX, C, C++, Fortran, all the compilers for these, CPU architectures, and Python versions. On a lucky day you might have a working system.

With Rust, everything just worked the first time we tried to use it. I could not believe how easy it was to put out a working system on the first try. Not only does it beat Python by an order of magnitude in performance, the amount of effort it took us to deploy it was also much less.


> Most of the useful parts of Python are written in C, C++, Fortran, or Rust, so when you try to deploy to some less frequently used platform it can burst into flames in the worst kind of ways.

You can always write it in python.

This is the attitude that prevails these days: C is to blame because someone wrote Python in C.

What stops those people from implementing the POSIX standard in Rust or Python? Or the X window system, or (for clairvoyants) Wayland, in Python or Rust?

Or even better, they can make their own OS written in these languages (and even call it MULTICS).


> You can always write it in python.

Except you can't. Python has horrendous performance. You can write it in C and pretend it is Python.

> What stops those people from implementing the POSIX standard in Rust or Python? Or the X window system, or (for clairvoyants) Wayland, in Python or Rust?

> Or even better, they can make their own OS written in these languages (and even call it MULTICS).

This is exactly what is happening in the industry, with really nice progress. We are entering an era in which bad practices and subpar performance are not acceptable anymore. I am really hoping that Rust takes over devops and data at the very least. It has started to enter the IoT space and some OS development (Linux supports it).

I am really hoping that this trend continues and we start to see more and more device drivers in Rust, and other safety- and security-critical systems too.

As far as Python goes, I would be totally happy if Python would become the interpreted language that I could use on the top of Rust and I had to deal with only Rust problems.


It's still easier to get a cryptography implementation working there than to do that for all your code, all your libraries, AND cryptography.


I've been trying to officially mark some open source software I work on as not supporting big endian. We don't test it, and I think it's very likely there are some subtle bugs.

Sometimes someone will ask for support, but (unsurprisingly, I don't blame them) they don't want to put the work into testing up to the standard we achieve on ARM and x64.


IBM zSeries runs Linux in big-endian mode and is still a mainstream product.

If your code doesn’t work on big-endian, it’s usually poorly designed.


> > I've been trying to officially mark some open source software I work on as not supporting big endian. We don't test it, and I think it's very likely there are some subtle bugs.

> IBM zSeries runs Linux in big-endian mode and is still a mainstream product.

For most hobbyist open source software developers, it's a niche product in practice. I can easily test on 32-bit and 64-bit x86 (most desktops and laptops can run it natively); I can easily test on 32-bit and 64-bit ARM (a Raspberry Pi 3B or newer is a common enough device, and using Termux on a phone is also an option); but how would one get access to an IBM zSeries?

> If your code doesn’t work on big-endian, it’s usually poorly designed.

The parent comment mentioned "subtle bugs". It's not hard to accidentally introduce byte-order dependencies; one simple example (though from the big-endian side) would be forgetting to use "htons" on a port number. Since more and more protocols and file formats are natively little-endian nowadays, it's easy to miss a conversion between "little endian" and "native endian", since everything will still work perfectly on little-endian architectures.
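A minimal Python illustration of how such a bug hides on little-endian hosts (the payload is made up):

    import struct

    payload = bytes([0x01, 0x00, 0x00, 0x00])  # the value 1, little-endian on the wire

    # Explicit byte order: behaves identically on x86_64 and s390x.
    (value,) = struct.unpack("<I", payload)     # "<" forces little-endian
    value2 = int.from_bytes(payload, "little")  # equivalent

    # Native byte order: silently right on little-endian hosts, silently
    # wrong on big-endian ones -- exactly the "subtle bug" above.
    (native,) = struct.unpack("=I", payload)    # 1 on x86_64, 16777216 on s390x

    assert value == value2 == 1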


> but how would one get access to an IBM zSeries?

qemu-system-s390x -- I run Linux in it for some automated tests; it finds some funny bugs occasionally.
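If a full system emulator is more than you need, user-mode emulation works too. A hedged one-liner, assuming Docker with qemu binfmt handlers installed (e.g. via qemu-user-static):

    # Run a big-endian userland on an x86_64 host; prints "big".
    docker run --rm --platform linux/s390x python:3.11 \
        python -c "import sys; print(sys.byteorder)"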


Also an option on Travis CI. Another option is getting Hercules, which can emulate a 390x, but, IIRC, falls short on some post z14 features more recent Linuxes rely upon. If you need time on an actual machine, there is the LinuxONE community cloud (Linux under z/VM) and IBM provides hardened (and expensive for 24x7 operation) Linux on Z instances (under KVM).

The IBM mainframe developer relations team is unusually friendly and approachable as well.


Depending on what your code is doing, even if there are good endian-agnostic abstractions, endianness bugs are still likely to slip in from time to time unless you're defending against them by having your CI run on a big-endian platform. QEMU does this, and although CI-fails-on-BE is rare, it happens often enough that I wouldn't want to lose the coverage. (On the theme of the original article, IBM actively support this by giving us CI resource via the IBM LinuxONE Community Cloud machines, as well as being active in upstream development.)


Eh, I think it's a stretch to say that any possible dependency on little-endian can be rounded down to "poor design". If you're operating with a format or protocol that's unconditionally little-endian, then it takes strictly less work to operate on the data structures in place, or on direct bitwise copies of the data structures, rather than conditionally transforming the values to native-endian, operating on them, and conditionally transforming them back to little-endian. You can argue that everyone's code ought to take the longer route due to the existence of big-endian systems, but that's what this blog post is arguing against.


Any code which cares about splitting ints, pointers or floats into individual bytes, or doing network traffic, will have to care about endianness.

Now, there are of course ways to handle this, but in C and C++ it's easy to make mistakes, and it's hard to test.

I'd be happy supporting big-endian if someone wants to buy me a z series equivalent in power to a mid-level AMD laptop, and also get GitHub to add them as a standard target for GitHub actions.


Or you could set up a Raspberry Pi with NetBSD/aarch64eb and just let it take its time.


Poorly designed... or you just don't want to have to think about endianness to support some weird niche architecture that only a handful of enormous corporations use?

I really don't think there's anything wrong with assuming little endian in this age. It's like assuming 8 bit bytes, or 2s complement signed numbers.


That's an interesting take. It seems like a modern slant on the old trope of "all the world's a VAX", then "who cares if it's not i386?", and now, "of course it's portable - we support both kinds - amd64 AND aarch64".

But it's a bit disingenuous. Are developers being overwhelmed by problem reports for things that hardly matter? Looking at NetBSD's pkgsrc, which supports more OSes than just NetBSD, we see about 64,000 patch files for about 26,000 packages. While many packages don't require any patches, some require many.

This is because for many programs, upstream just don't care. It's not just about odd architectures - they just don't care about NetBSD support, or support for different, alternative OSes. Perhaps that's fine, but consider this - the patches already exist and are tested, and they're maintained in pkgsrc.

There are times people become indignant about being told how to fix their software for use on systems other than Windows, common Linux, and macOS, on CPUs other than amd64 and aarch64. Yes, indignant. Some Python folks don't want to consider that anyone would want to use Python on systems (like embedded) that don't have full IEEE floating-point emulation. postgresql would rather mark platforms as broken than just let them compile the standard C portions of their code.

People like to say, "they don't want to spend the time and energy", as if it adds work. No: there are examples where people choose the broken path when the non-broken path is just as easy, if not easier. Nobody is obligated to add or patch anything, even if the patches are freely given and well tested. But the fact that some people actively fight against portability is disturbing, and should be viewed with suspicion.

The author of the article seems to have forgotten some history. They mention downsides of the c ecosystem, like the lack of standardized ways to build, the lack of consistent package management, and so on. But in every instance where those things have been imposed, the imposition has been problematic, has it not? Have we not seen security issues with PyPi? Have we not seen the dependency hell which is Ruby?

The point is that nobody can tell anybody else how to do open source, so we'll always have a hundred different ways to do things. We also can't help that some people will be gatekeepers. But we personally can ignore the gatekeepers and help make things portable.

Making excuses for why portability is somehow extra work is only encouraging gatekeeping. We don't need to do that - gatekeepers already have enough energy on their own.


One more example that security cannot be abstracted, packaged and bought in a box.

The same goes for performance (though that one can at least sometimes be packaged).


> As someone who likes C: this is all C’s fault. Really.

I think I agree with almost everything in this post except this sentiment.

Blaming the tool is silly. You may as well blame assembly language, and if you're doing that you may as well blame CPUs and electricity and you know what, physics is harmful, we should rewrite it in Rust.


If C had a standard way to test, build, and distribute packages, then many of the author's concerns would be resolved. But it doesn't, and other languages do.


> Imagine, for a moment, that you’re a maintainer of a popular project. [...] You’ve also got a CI/CD pipeline that produces canonical releases of your project on tested architectures; [...] Because your project is popular, others also distribute it: Linux distributions, third-party package managers, and corporations seeking to deploy their own controlled builds.

> You don’t know about any of the above until the bug reports start rolling in: users will report bugs that have already been fixed, bugs that you explicitly document as caused by unsupported configurations, bugs that don’t make any sense whatsoever.

If 3rd-party build recipients are sending their bugs directly to you, that's a failure of the 3rd-party builders to take responsibility for their packages. They should be telling you to submit your bugs to them, so they can check their packaging, and then the packagers should talk to upstream only if there are issues to be resolved there.

> You struggle to debug your users’ reports, since you don’t have access to the niche hardware, environments, or corporate systems that they’re running on.

Yes, C supports lots of environments, and Rust supports quite a few (and hopefully more, soon), but I think it's perfectly fine for an upstream author to support only a subset of those. The benefit of Free Software is that if you want to get some software running on platforms the original author doesn't support but the language does, you can do that.

Investigate the bug yourself. Figure out if it's in the app, or a 3rd-party library, or even the toolchain. Submit a patch.

A good proportion of authors will happily accept patches for systems they themselves can't test on, if it's not too intrusive, and doesn't cause regressions on the platforms the author does support. They might be willing to entertain an intrusive patch series that allows for better cross-arch support, if you're willing to work with them on that.

And if an author is not interested in helping you scratch your itch (ew! :-), create a fork. That should be easier than ever these days with the version control tools we have now, so much more so than when Free Software was first envisioned. The author may even be willing to point other people who want to use their software on your platform your way (e.g. in their README), if only to get those users off their back!


> The benefit of Free Software is that if you want to get some software running on platforms the original author doesn't support but the language does, you can do that.

The exact cause of this entire controversy is that the software in question is switching to a language that supports fewer platforms.


And? They don't (and never have) promised you guarantees that future versions would work in your use case. It's open source - you are welcome to use the last version that worked for your case, and if you want to change it and get new features, you can fork it.

You got a hand from someone who built a thing you can use and gave it to you for free. This doesn't obligate them to continue to help you; there's no contract (implicit or explicit), and there's no warranty. It's your choice (or maybe obligation) to make your software work for your case; you agreed to that. If you made that agreement based on the assumption that someone else would keep giving you effort for free, the only problem when they say "no" is your bad assumption.


Huh. I didn't read the article as being about the pyca/cryptography-on-rust controversy.

I read it as using the pyca/cryptography-on-rust controversy as a jumping off point to discuss the difference between platforms-supported-by-the-toolchain, and platforms-supported-by-upstream-software, and whether users should expect to consider all platforms-supported-by-the-toolchain as being supported by all software written for that toolchain. Or not. And how far authors should go to help users on "niche" platforms they don't have access to.

(Might be the 2.5 rewrites making it easy for different readers to come away with different conclusions?)


> Give up on weird ISAs and platforms

NEVER!


I didn’t interpret it as “say goodbye to niche arch support entirely” but rather as “it’s about time we stop externalizing to the OSS volunteer community the huge burden of supporting weird platforms, let’s rather shift that labor and cost onto the actual stakeholders of those architectures.”

That doesn’t imply end users of niche architectures are supposed to lose their favorite apps.


If you look at the market shares, you’ll quickly realize that this would mean abandoning all the BSDs, Haiku, Hurd, all the obscure Linux distributions and so on.

Just support macOS, RHEL/SLES, Debian, Ubuntu and Windows.

Everything else gets tagged with “UNSUPPORTED/WONTFIX”.

But please don’t complain when your favorite operating system is not supported.

Or, you know, we could just stop trying to tell others what targets to use.

And, FWIW, Rust is actually getting support for more architectures thanks to the GCC codegen and GCC frontend.


> Or, you know, we could just stop trying to tell others what targets to use.

If it's not ok to tell others what targets to use, then it's also not ok to tell others what targets to support.


You're welcome to do it that way for your software. Others are allowed to choose some subset or superset of your choice. If they don't like your choice, they are allowed to fork the software to support what they want.

Case in point: the GCC projects for Rust are not officially supported by the Rust project. They are alternative Rust compilers; their authors have explicitly stated they will follow the features (etc.) of the official rustc, and maybe the Rust language team will consider those other compilers when designing new features, but it's not on the language team to do so.

Another case in point - many projects don't support most linux distros. The job of the linux distro is maintain a fork (usually in the form of patches against upstream in the source package) of the software that works well with their system.


Why so many architectures? Ensure your software can only run on Windows: https://gs.statcounter.com/os-market-share/desktop/worldwide

:-).

Vive la difference.


If updates are assuming they don’t have to support whatever those peculiarities are, won’t the support burden be massive? As opposed to starting in a supported state and enforcing a policy of not breaking existing support (until it’s so disused no one complains).

Who decides what degree of “popularity” is sufficient to maintain default support?


The problem is, I find I never get lovely clean patches. I just get GitHub PRs that say "hey, your code doesn't work on my weird CPU", or a patch that fixes KettleBSD but breaks Cygwin on Windows (which we care much more about supporting).

I don't think anyone is complaining about well written fully functioning patches, that break no existing features or OSes.


I think part of this could be fixed if Github, which seems to be the most used, lowest common denominator, had bug report mechanisms that were the equivalent of:

"This bug report has been reviewed and can't continue without more information. This report is now considered inactive unless you do these prescribed actions and report back with this specific information. Click here to submit this specific information."


Some projects seem to be assigning labels such as `needs-response` or `awaiting-user-response`, and have a bot in place that closes the issue if it gets stale.


> I don't think anyone is complaining about well written fully functioning patches, that break no existing features or OSes.

While most wouldn’t be complaining, I think any maintainer should absolutely feel free to refuse a well-written, fully working patch for any reason. For example, that patch may be too large or too complex, thus might become a liability at some point in the future. Maybe managing user expectations is a priority for the maintainer. Maybe they wish to reduce support burden because they’d like spend more time with their family. All that should be deemed normal and acceptable.

It can feel disappointing (numerous pull requests that I wrote have been denied or ignored), but I think it’s important that we, as a community, normalize that.


> Who decides what degree of “popularity” is sufficient to maintain default support?

Each upstream project and upstream distro is responsible for deciding on such a policy for themselves.

Many upstream distros do just that already with their packaging. For example, you may not find a Linux kernel image that will work on your Pentium III CPU on Debian’s main repositories. Or an OpenSSL binary package.

Such decisions are highly arbitrary and at the discretion of the people who do the maintenance work and pay the hosting bills.

Being able to draw the line somewhere is a feature. It can help keep maintainers from burning out or giving up under the load of support requests.

Given the situation of a specific project, its individual goals, and the people involved, I would expect their decision to take into account many more factors than just the “so disused that no one complains” axis.


> That doesn’t imply end users of niche architectures are supposed to lose their favorite apps.

In this case it does, for everyone out there on a RISC-V system who wants to use one of the over 400,000 programs which depend on cryptography.


Firstly, unless I’m missing something, RISC-V seems to be on Rust’s Tier 2 with Host Tools, which is still way high up in Rust’s support pyramid.

Secondly, let's assume, in favor of your argument, that for some reason it's still not working for RISC-V users. Then what keeps them from volunteering their time, labor, or money to help make the thing fly and keep it that way? Niche or single-purpose distributions (such as Arch Linux ARM or AmberElec) have been doing just that: those people regularly maintain special patches to support their specific target, no matter whether the upstream projects are even interested in their platform or accepting their patches.


Yeah why should we even care about s390 for some things?

https://github.com/solvespace/solvespace/issues/1264

I don't think big commercial customers are designing airplanes with it.


That github issue is about the 64-bit s390x. The "weird architecture" being talked about here is the older 31-bit s390.


Personally, C for Arduino programming drives me up the wall. With 32 general-purpose registers, in assembly language you can often keep all the variables in your inner loop in registers and still have a few left over for the interrupt handler. In C, on the other hand, you know it is moving the stack pointer around meaninglessly just so it can support recursive functions, which aren't really appropriate for embedded systems.

I still write C for it because, as beautiful as AVR-8 is, it is a dead end, and if I need more capability I can take my C program to an ARM and maybe someday a RISC-V board. (I dream of embedding a soft AVR-8 on an FPGA with some special logic, but the engineering students I know tell me that the Verilog class was like getting mauled by a bear.) At least gcc meets me halfway and has 24-bit ints on AVR-8.

For that matter I just got a VisionFive 2 board that I need to put in a case and bring up with Linux. It has been a long time since I had to mess with other peoples C programs and bring them up on a new architecture but I think I’ll be doing it again.


Are there architectures which LLVM doesn't support, which aren't retrocomputing or 16-bit microcontrollers?


One example is the Xtensa architecture, used in the 32-bit ESP8266/ESP32 microcontrollers that are fairly popular with hobbyists (though Espressif seems to be moving to RISC-V for their newer offerings).


Yeah, the RISC-V version seems to be supported: https://lib.rs/crates/esp8266


That crate requires a fork of the Rust compiler to use.


Mainline LLVM still doesn't support Xtensa or ARC AFAIK. ESP32 was Xtensa until their recent RISC-V based designs.


Title needs (2021).


If only we had stuck with Ada.




