Weird architectures weren't supported to begin with (yossarian.net)
353 points by woodruffw 10 months ago | 270 comments



This is definitely the most reasonable case I've heard made for this argument, and it's changed my stance on the issue.

That said, I feel the cryptography project handled this poorly. I encountered the issue when one day, installing a Python Ansible module the same way I did the day before required a compiler for a different language that I've never used before and that is hardly used, if at all, in the rest of my stack. The Ansible project seemed to be taken by surprise, and eventually I found the root upstream issue, where the initial attitude was "too bad". Some people making those comments worked for Red Hat / IBM, Ansible is heavily backed by that. What company cares more about Linux on S390 than Red Hat / IBM? I would suggest none. So the fact that they had this attitude and community reaction to me suggests the problem is not one of expecting free work for a corporation on a non-free architecture. It was IMHO a combination of a lack of forethought, communication, and yes, perhaps a change that is overdue and will just be a little painful. The suggestion to maintain an LTS release while people figure out how to proceed is the right move.


> Some people making those comments worked for Red Hat / IBM, Ansible is heavily backed by that.

Okay, so, I don't mean to pick on you here, but I've seen this sentiment cropping up a few times. Not everyone can be familiar with everything, but I would caution you against immediately assuming corporate politics are at the root here.

Anyone familiar with Alex Gaynor and his work over the last few years would know that he cares about Python, and memory safety. That has remained consistent regardless of his employer. Immediately assuming that this has something to do with company politics, rather than just a tireless open source contributor working to improve the things that he cares about, for years, is making a bit of a category error, in my opinion.


I agree - I don't feel picked on :D I guess my point is that it's overly dismissive to say "no free work for companies on niche architectures" when projects backed by the same company were a bit blind-sided by this and a bunch of people lost time chasing down what happened and why. To me that's a sign that this isn't just a mismatch of ideology or someone wanting free work: it was a failure in communication and mismatched expectations. If it had just been on a major version bump, I wonder if we'd even be having this discussion.


Glad to hear it :)

I think it gets really hard the larger the company you talk about. Before I worked at bigger places, I assumed a lot more coherence than is actually the case.


Sure, but the message is that if it's a problem for Red Hat's Ansible team or Red Hat's mainframe Linux team, they should be doing the work needed to make it not a problem.

Ansible Tower subscriptions aren't exactly cheap, and neither is RHEL s390x. If there's not the fat to uplift the core infrastructure needed to run RHEL or Ansible on Red Hat's products, that's most likely a choice.


I think corporate politics are at the root here, even if they're not Alex Gaynor's corporate politics. This blog post amounts to blessing amd64 and aarch64 as the only sustainable instruction set architectures.

Rust is currently the apex of a tall stack of hundreds of millions of lines of code, once you account for LLVM etc. Using it as the basis for other software means only processors with sufficient market penetration are 'worthy' candidates. In the long run, if Rust is as successful as lots of us hope it will be, this will kill what innovation is left in the hardware space. If a company is sufficiently motivated to care, it will almost certainly be cheaper to fork the code back to using C than it would be to forklift LLVM to a new architecture.

Is anyone aware of a direct Rust compiler project? Even discounting GCC, Go was bootstrapped from relatively simple (and naive) C compilers until it became self-hosting. I think a basic non-optimizing Rust compiler would go a long way toward leaving the door open for onboarding older -- and more importantly, novel -- architectures into the ecosystem.


I didn’t compare HEADs because I’m on my phone, but I found reports that in 2019, LLVM was 7 million LOC and gcc was 15 million. Both are C++ compiler projects. Why the double standard?

I also don’t know why re-writing an entire project and creating a compiler for it is somehow easier than writing an LLVM backend.


The question has probably been asked a thousand times before, but wouldn't all those bootstrapping problems for niche platforms be solved if Rust had a C backend? Are there technical issues which would prevent compiling LLVM bitcode to a C blob, similar to what wasm2c does for WASM bytecode?


I think, like majewsky says, it is a bit more complicated than it may initially seem. However, even if we assume that it is trivial, there are other problems. Sure, maybe it would. But who is going to do that work? We're an open source project. Effort is not fungible. On some level, we can only get stuff done when there's a sufficient need for it, and while there have been some folks talking about this in the last week or so, historically, it just hasn't been a massive issue. If it is a massive problem for someone, they should solve it! The Rust project's stance has been open to new platforms, and will continue to be so. But we need experts in those areas to help us help them.


Having a C backend does not solve the hard issues. Because of undefined behavior in the C specification, sometimes there just is no way to write down a particular expression in a portable manner in C without treading through undefined territory. This may not be as big of a deal for boring application code, but we're talking about cryptographic code here, which needs to work hard to avoid memory corruption, integer overflows, timing side channels, etc.


I'd imagine the generated code could be hardened similar to the code generated by clang ASAN, UBSAN and TSAN, and also wouldn't generate code that depends on undefined behaviour in the first place. Or you could do a little detour through WASM:

https://kripken.github.io/talks/2020/universal.html#/

In any case, that's better than shrugging off esoteric platforms, IMHO.


What double standard? I wouldn't recommend a language tie itself to GCC any more than I would recommend LLVM. The point is that the Rust compiler catalog is unhealthily small, and that introduces problems like this one.

To your other point, writing an LLVM backend is one thing. Getting it upstreamed is another, and maintaining it is another still. Then you have to navigate the politics of two foundations, both of whose boards of directors are basically the Who's Who of competing interests. I've watched more than one project fail to navigate those waters.

Anyway, the cryptography package going from Python + C to Python + C + C++ + Rust is a Cambrian explosion of build-time complexity, and in my work we found it simpler to just get rid of the Python and the cryptography package, so it's mostly academic to me.


> I wouldn't recommend a language tie itself to GCC

Okay! I misunderstood you then, sorry. I think one of the hardest parts about this conversation is that there are so many different people with various, but overlapping, opinions. A lot of folks do think this way, and I thought that's what you were saying. My bad.

> Getting it upstreamed is another

You do not need it to be upstreamed in order to build Rust; we build with our own modified LLVM by default, so using it is quite easy.


Maybe this alternative Rust compiler?

https://github.com/thepowersgang/mrustc


The switch hasn't been sudden. It's just that many levels of disconnect between the project authors and downstream users meant it was basically impossible to communicate to all the affected users — nobody looks at their deps-of-deps until they break.

And they've released Rust as an optional component you can disable, precisely because nobody paid attention until it actually shipped.


> The switch hasn't been sudden. It's just that many levels of disconnect between the project authors and downstream users meant it was basically impossible to communicate to all the affected users — nobody looks at their deps-of-deps until they break.

Exactly, from the original shitstorm issue:

> Rust bindings kicked off in July 2020 with #5357. Alex started a thread on the cryptography developer mailing list in December to get feedback from packagers and users. The FAQ also contains instructions on how you can disable Rust bindings.

> Do you have constructive suggestions how to communicate changes additionally to project mailing lists, Github issue tracker, project IRC channel, documentation (changelog, FAQ), and Twitter channels?

At some point there's sadly not much the project can do and still make progress.


> > Do you have constructive suggestions how to communicate changes

That one kinda got me, when Python intentionally has a runtime developer-to-developer communication system, per se:

https://docs.python.org/3/library/warnings.html
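
For example (package name and message invented for illustration), a dependency could surface a heads-up at import time with something like:

    import warnings

    warnings.warn(
        "somepackage 3.4 will require a Rust toolchain to build from source; "
        "see the changelog for details",
        FutureWarning,  # shown by default, unlike DeprecationWarning outside __main__
        stacklevel=2,   # attribute the warning to the importing caller
    )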


Warnings are a funny thing. Lots of projects like to turn all warnings into failing errors in CI, sometimes even at package build time; they think it's a best practice ... but it means that nobody else can use warnings to communicate things, or else everything breaks, nullifying the utility of warnings.

see also https://lwn.net/Articles/740804/
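
(Concretely, that practice is often a single line in a test harness or CI bootstrap:)

    import warnings

    # typical CI/test-suite setting: promote every warning into an exception,
    # which is exactly why a dependency's deprecation notice can "break" builds
    warnings.simplefilter("error")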


The easy answer to this is "warnings should be warnings" but the hard question is "how do we get people to stop treating them as errors?"


The easy answer is to stop treating errors as warnings! I’m working on a project where for the first time in my career I’ve made my environment/workflow/tooling less shouty about problems.

I’m writing for my blog, so I installed a spell check extension. But its dictionary stinks. So I just turn off its warnings before a final post pass.

Most of the time when I see yellow in my editor it’s stuff I would expect to be red. Even from my primary language (TypeScript) which officially doesn’t even have warnings.

More things should be errors and treated as such! And more things that legitimately qualify as warnings should error in final checks to ensure they were addressed somehow, even with just an explicit dismissal.


"Sudden" is relative to the rate of propagation. Maybe we can say that the difficulty of communicating to all stakeholders of a package is egregious, and even endemic to the ecosystem as a whole, but it still sounds like a better communication effort could have been made, even if doing it perfectly is impossible


There was advocacy and pressure to adopt pyca/cryptography in other popular python packages. Don't roll your own crypto, don't use old unmaintained crypto libraries (pycrypto ... but these days there's pycryptodome) etc. If pyca/cryptography was instead advertised as "python crypto library for rust lovers" and other python developers knew that a _gigantic_ and _obnoxious_ rust toolchain dependency was involved, I think many would have avoided cryptography, or made it optional. This was a bait-and-switch.

This is obnoxious for people who want to start with a tiny standard distro image with just a C toolchain, and build their app and all dependencies from source. It is also obnoxious for distros that want to build all packages from source. rust depends on llvm, itself a huge complicated finicky package ... but not just that, it depends on a custom-patched llvm! ...but not just that, it also depends on an extremely recent version of rust! this problem is recursive! And there are many other niche cases ... and "niche cases" of "unauthorized" porting/modification/substitution is how much of the open source ecosystem we love got its start.

This is really about issues of paranoia and control in the "security" realm. They consider it irresponsible to enable users to do anything they can't guarantee "safe". "Your organization should be paying for a $$$ corporate support contract if you want to do anything we don't officially support." Maybe you could have fixed a portability bug in some python or C code (I've fixed a couple when bringing up a little-endian mips64 system a decade ago), but no, you'll need rust+llvm experts to port that whole crazy toolchain. It's for your own good.


What you're saying makes sense in any domain except cryptographic primitives. With those, the basic correctness of the code will depend on details of the compiler and processor architecture, and it is extremely likely that the code will be fundamentally incorrect when used on other architectures or compilers.

And this is not just a matter of fixing some little-endian assumption, it's a matter of understanding microoptimizations, instruction reordering, prefetch behavior, branch prediction etc. Ensuring code has predictable runtime in the presence of an optimizing compiler and out of order processor is extremely difficult.

It's probably just as smart to roll your own crypto instead of trying to use a known library on a new platform, because the hardest parts of rolling your own will have to be done anyway.


Here you seem to be demanding that the authors and maintainers of their own cryptography library lighten up about security, because you find the way they manage their own package to be controlling. You see some irony in that, right?


I don't read /u/ploxiln as demanding anything. They seem frustrated with the changing dependency profile of the dependency, and a little frustrated with the way some people like to shout "security" as if it's a trump card, ignoring that security is a spectrum and that the only perfectly secure software does nothing (perfectly).


This is a security library. In fact, it's a security library that was created specifically to harden and user-proof less secure alternatives. If you don't care as much as they do about security, use a different library (or just keep using the pre-Rust version of this one).


> If pyca/cryptography was instead advertised as "python crypto library for rust lovers"

I feel like this really only makes sense if pyca/cryptography had planned on adding the Rust dependency from the very beginning (or from very early on). Is there any indication that was the case?

> but not just that, it depends on a custom-patched llvm!

This doesn't seem to be true [0, 1].

[0]: https://rustc-dev-guide.rust-lang.org/backend/updating-llvm....

[1]: https://news.ycombinator.com/item?id=26217182


>> Strongly prefer to upstream all patches to LLVM before including them in rustc.

> That is, this is already the case. We don't like maintaining a fork. We try to upstream as much as we can.

> But, at the same time, even when you do this, it takes tons of time. A contributor was talking about exactly this on Twitter earlier today, and estimated that, even if the patch was written and accepted as fast as possible, it would still take roughly a year for that patch to make it into the release used by Rust. This is the opposite side of the whole "move slow and only use old versions of things" tradeoff: that would take even longer to get into, say, the LLVM in Debian stable, as suggested in another comment chain.

> So our approach is, upstream the patches, keep them in our fork, and then remove them when they inevitably make it back downstream.

... so there are always some outstanding patches rust applies to the llvm codebase.


For Rust's LLVM fork, sure. But as Steve Klabnik noted in the first comment I linked, unmodified LLVM is supported.

In addition, later down in the comment chain:

cycloptic:

> Can't there be a build option to not use the LLVM submodule, and instead use the system LLVM?

steveklabnik:

> There is. We even test that it builds as part of our CI, to make sure it works, IIRC.

For a more concrete example, Fedora supports choosing between system and bundled LLVM when building Rust [0, 1].

[0]: https://news.ycombinator.com/item?id=26222190

[1]: https://src.fedoraproject.org/rpms/rust//blob/rawhide/f/rust...


> I feel like this really only makes sense if pyca/cryptography had planned on adding the Rust dependency from the very beginning (or from very early on). Is there any indication that was the case?

I am sure this idea surfaced several times in IRC or possibly in the mailing lists. Certainly, the authors have been toying with handling ASN.1 in rust since 2015 [1], which I guess will be the next logical step.

I do agree that this is mostly a political stance. pyca/cryptography is a wrapper sandwiched between a gigantic runtime written in C (CPython/PyPy) and a gigantic library written in C (openssl).

The addition of Rust as dependency enables the inclusion of just 90 lines of Rust [2] where the only part that really couldn't be implemented in pure Python is a line copied from OpenSSL [3] (i.e. it was already available), and which is purely algebraic, therefore not mitigating any real memory issue at all (the reason to use rust in the first place).

The change in this wrapper (pyca/cryptography) does not move the needle of security in any significant way, and it is really only meant to send the signal that adding Rust to all other Python packages, and especially to the runtime itself, will now come at no (political) cost.

[1] https://github.com/alex/rust-asn1

[2] https://github.com/pyca/cryptography/blob/main/src/rust/src/...

[3] https://github.com/openssl/openssl/blob/OpenSSL_1_1_1i/inclu...


> The addition of Rust as dependency enables the inclusion of just 90 lines of Rust

My understanding is that this is just the beginning, and the whole reason it's only a small amount is precisely to do it in small steps, correctly, rather than re-writing the entire world in one go.


And why should they re-write the entire world? It's just a wrapper library around openssl, and it was marketed heavily to the community (at the beginning) as one where the maintainers would follow good practices and not try to write security-sensitive code themselves, as they are not security experts, but would just rely on openssl so that all the focus goes into the same place.

So either that's still valid (and not too much of rust will come in to make a real difference security-wise) or they have revisited their position and will re-implement a bunch of openssl logic (in rust apparently though, and not in Python as it would be more logical, and as golang does successfully). And in the latter case, why not just focus on wrapping rustls instead?

In either case, the hand is being forced: it's a small amount, and I don't see how it would have been an excessive burden to maintain such a small piece of logic as an opt-in for a period. It makes much more sense to read this as a move to forcefully push rust into the python ecosystem.


> So either that's still valid (and not too much of rust will come in to make a real difference security-wise) or they have revisited their position and will re-implement a bunch of openssl logic (in rust apparently though

Looks like it's the former. From the initial (?) GitHub issue discussing the move to Rust [0]:

> We do not plan to move any of the crypto under the scope of FIPS-140 from OpenSSL to cryptography. We do expect to move a) our own code that's written in C (e.g. unpadding), b) ASN.1 parsing. Neither of those are in scope for FIPS-140.

> and not in Python as it would be more logical

Is Python actually suitable for cryptographic code, especially if constant-time operations are needed?

> I don't see how it would have been an excessive burden to maintain such a small piece of logic as an opt-in for a period.

And then when said logic stops being opt-in, why wouldn't the same problem arise?

[0]: https://github.com/pyca/cryptography/issues/5381#issuecommen...


> Is Python actually suitable for cryptographic code, especially if constant-time operations are needed?

No, but as seen in the first chunk of rust code added, the amount of logic that needs to be constant-time is a) very, very limited to a few primitives only b) algebraic in nature (put differently, it's not where memory bugs will pop up, so using rust over C doesn't even buy you much).
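
(To make the constant-time point concrete - a minimal, self-contained illustration, not code from the library: a naive equality check can return as soon as a byte differs, which is exactly the timing leak that the stdlib's hmac.compare_digest exists to avoid.)

    import hmac

    def naive_check(expected: bytes, provided: bytes) -> bool:
        # may bail out at the first mismatching byte, so the runtime can
        # leak how long a matching prefix the attacker has guessed
        return expected == provided

    def constant_time_check(expected: bytes, provided: bytes) -> bool:
        # examines every byte regardless of where the mismatch occurs
        return hmac.compare_digest(expected, provided)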

For instance, ASN.1 parsing doesn't need to be done in constant time in the vast majority of cases (and in all cases I am aware of).

> And then when said logic stops being opt-in, why wouldn't the same problem arise?

Some friction will certainly remain, but it will be nothing compared to the current breakage.


> No, but as seen in the first chunk of rust code added, the amount of logic that needs to be constant-time is a) very, very limited to a few primitives only b) algebraic in nature (put differently, it's not where memory bugs will pop up, so using rust over C doesn't even buy you much).

That's fair, though if some other part of the codebase is ported to Rust anyways sticking to C doesn't save you, unfortunately.

> For instance, ASN.1 parsing doesn't need to be done in constant time in the vast majority of cases (and in all cases I am aware of).

I'm curious why this has to be done in C/Rust. Performance?

> Some friction will certainly remain, but it will be nothing compared to the current breakage.

What would be different then compared to now that would reduce breakage to such an extent?


> That's fair, though if some other part of the codebase is ported to Rust anyways sticking to C doesn't save you, unfortunately.

This should be turned around. There is no evidence that parts of the codebase really need to be ported, or that Rust makes a real difference for them.

> I'm curious why this has to be done in C/Rust. Performance?

ASN.1 is only used during handshakes and I/O of files (the chunk of rust added covers loading of PKCS#7 files which are typically quite small, and not typically dealt with in massive numbers). I doubt the performance hit would be so high. Also, the package wbond/asn1crypto has shown that doing ASN.1 in pure Python can be quite fast.

> What would be different then compared to now that would reduce breakage to such an extent?

The ability to try the feature out and still have a plan B if it doesn't work out, and the possibility to have a smooth, long, relaxed upgrade plan with good warnings and more time to prepare. But to be honest, while writing this whole chain of comments, I really doubt there was a real need to add rust into the mix (other than the political angle I already mentioned).


> This should be turned around. There is no evidence that parts of the codebase really need to be ported, or that Rust makes a real difference for them.

I was thinking that if the maintainers were planning on porting some other larger part of the codebase, then starting with something small/relatively inconsequential would be a good first step, and keeping it in C wouldn't provide much benefit once said other larger part were ported.

> Also, the package wbond/asn1crypto has shown that doing ASN.1 in pure Python can be quite fast.

Interesting! I'm curious why the maintainers didn't opt for that approach instead.

> The ability to try the feature out and still have a plan B if it doesn't work out

Would people have tried this feature out if it were made opt-in? It's clear that the initial announcements reached far fewer people than one might like, and I honestly have no idea how many people would see a build-time warning.


> rust depends on llvm, itself a huge complicated finicky package

gcc is also a "huge complicated finicky package"... Any decently sized compiler is.


> installing a Python Ansible module

It's funny that you mentioned having to install Rust to use some Python because, for us non-Python users, everything needs Python :). I don't use Python at all, but I need both versions installed.

Just pointing out that many people are already wearing that shoe.

Python is the Lingua Franca of getting shit done. Rust is seemingly becoming the Lingua Franca of [more] secure code.


Red Hat doesn't care at all about Linux on 32-bit s390, and neither does IBM as far as I know (except possibly the consulting group, which is interested in anything that makes them money).

s390x has no problem with switching to Rust/LLVM. Red Hat and IBM both employ engineers working on LLVM, specifically for s390x in IBM's case.


There's a phenomenon in lots of domains where "a change that is itself neutral or good can be bad if it happens too suddenly". I think that was the case here; we can say that this is a shift that reasonably can or should happen, but can still have caused needless disruption by catching a bunch of people off-guard and not giving them time to adapt to it


Why is it necessarily Cryptography’s fault that Python Ansible was taken by surprise? Or any of the other affected parties along the chain? That Cryptography was starting to include Rust in the project was announced on the mailing list by Gaynor last summer. And that email said exactly what future (at that time) release would require a Rust toolchain if the project was to be built from source.

The reply to that by the Gentoo guy (who started the Github issue) was that package maintainers cannot follow every single mailing list of every dependency. That is debatable (e.g. maybe you should make an exception for security applications), but let’s take that as a given for now. In that case, what is Cryptography to do? Where should they announce such things in a way that orgs like Gentoo will see it? And also notice it and not just mentally gloss over it as some kind of “spam”? If the Gentoo guy didn’t see it half a year ago, would he have seen an announcement (or the reminders) if it was made five years ago?


I think making a change like this warrants a major version bump. It won't eliminate all the surprise for everyone, and I do have sympathy for the people who did go out of their way to talk about this on the mailing list and still ended up surprising everyone. But it's common to pin yourself to a minor or maintenance release line to automatically pick up security fixes, etc. I expect breakage when changing the major (or even minor), and that's almost always a manual upgrade. And that's when I do read all the release notes, run tests, etc. before committing to the change.


What is a “major version bump”? Before you answer, consider that the library doesn’t use semantic versioning. Before this all blew up, the versioning scheme was this[1]:

> Given a version cryptography X.Y.Z,

> - X.Y is a decimal number that is incremented for potentially-backwards-incompatible releases.

> - - This increases like a standard decimal. In other words, 0.9 is the ninth release, and 1.0 is the tenth (not 0.10). The dividing decimal point can effectively be ignored.

> - Z is an integer that is incremented for backward-compatible releases.

The system has since changed, but it continues not to be semantic versioning. (It’s effectively the same, in fact, but protects against dependents who think it is semantic.)

By that scheme, it was already a “major” (signifying potential backwards-incompatibility) release.

[1]: https://cryptography.io/en/latest/api-stability.html#previou...


This reads to me like an argument for semantic versioning, because otherwise I need to internalize the rules of every package and know that some will break compatibility on the Y, some on the Z, ... Etc.


Even SemVer doesn't help here, as changing the build toolchain isn't generally considered an API breakage (after all, the resulting binaries are API compatible)


Changing the build toolchain/requirements in such a significant way does seem like a major version break, as it could break downstream consumers attempting to install the package.

Because the build toolchain is "visible", that is, pip isn't just downloading a prebuilt binary every time, I think breaking changes that could cause CI systems or user installs to fail are part of the API contract. Think of what major distributions or software packages do when they want to deprecate support for certain platforms - those changes typically only occur on incrementing the most significant component of the version.

Hypothetical: suppose the authors changed setup.py so that it only built on Red Hat Enterprise Linux(tm) version 6. Again, they could do that, and it wouldn't change the runtime API. And on all other distributions or installers, it would error.

Would that be a major semver change? Of course it would be. The API contract has to include everything from packaging to use.
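
(A toy sketch of that hypothetical, with made-up names, just to make the point concrete - the runtime API is untouched, yet every install outside the blessed platform fails:)

    # hypothetical setup.py fragment
    import sys
    from setuptools import setup

    def _looks_like_rhel6() -> bool:
        try:
            with open("/etc/redhat-release") as f:
                return "release 6" in f.read()
        except OSError:
            return False

    if not _looks_like_rhel6():
        sys.exit("This package only builds on Red Hat Enterprise Linux 6.")

    setup(name="example", version="1.0")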


You know, I had that thought while writing it.

I wouldn’t want to force any particular versioning scheme on any particular developer, but maybe the “SemVer façade” versioning scheme they switched to is the best compromise. It has defensive value, at least.

Then again, PEP 440 has nothing to say about the semantics of versioning, only requiring:

  [N!]N(.N)*[{a|b|rc}N][.postN][.devN]

PyPA themselves describe various expected versioning schemes, but list Semantic as preferred[1]. If I squint, I can fit `cryptography`’s previous scheme into “Hybrid”. The biggest lesson I take from this is that if your version scheme isn’t SemVer, work hard to make it look obviously different from SemVer.

[1]: https://packaging.python.org/guides/distributing-packages-us...


Would SemVer even strictly require a jump here? They didn't change what's usually thought of as the API of the library (i.e. if I don't compile it myself I don't really notice the change?), and that's what SemVer uses: "MAJOR version when they make incompatible API changes,"


Also a great point!

I can’t think of a clean alternative other than coming full-circle back to the “developer advocacy” solution, with its clear problems. Someone smarter than I am probably has it in the palm of their hand, though.



On the other hand, from what I understand they do ship precompiled wheels for many platforms, just missed one lots of people use in their CI setups (Alpine, which uses musl and thus isn't compatible with other Linux wheels - personally I think that's an odd choice but whatever, people do it)? Easy to imagine that many more people compiled it than they expected.


> The reply to that by the Gentoo guy (who started the Github issue) was that package maintainers cannot follow every single mailing list of every dependency.

It seems reasonable that, if you are the packager for critical packages, you follow critical dependencies?

If the problem is that the distro is supporting so many things that the folks working on it can't keep up - well, that's precisely the author's point: stop pretending that you can support HPPA and MIPS or whatever as well as you can support x86_64. But you don't get to tell a million people that they have to have a less secure Python because 3 people have a toy in a closet they want treated as a first class citizen.


The number of people in the original thread who didn’t appear to be version pinning, and then got upset that a package they directly relied upon automatically upgraded, is eye-watering.


> stop pretending that you can support HPPA and MIPS or whatever as well as you can support x86_64

And then the corresponding uptightness will be “FOO_PROJECT is aligned with the Intel monopoly”, and just as many people will be unhappy. You can see this in many recent threads about Apple not providing free access to M1 documentation for alternative OSes they’re under no obligation to support.


That was, of course, a complaint about Linux back in the day. It turned out nobody cared enough to stop Linux development.


That’s pretty much verbatim the reply I had when people on here were prognosticating it for the M1, other than adding that they also promoted Linux virtualization in the announcement.


Alright, let's do some digging...

On 2013-03-21, urllib3 added an optional dependency to pyopenssl for SNI support on python2 - https://github.com/urllib3/urllib3/pull/156

On 2013-12-29, pyopenssl switched from opentls to cryptography - https://github.com/pyca/pyopenssl/commit/6037d073

On 2016-07-19, urllib3 started to depend on a new pyopenssl version that requires cryptography - https://github.com/urllib3/urllib3/commit/c5f393ae3

On 2016-11-15, requests started to depend on a new urllib3 version that now indirectly requires cryptography - https://github.com/psf/requests/commit/99fa7bec

On 2018-01-30, portage started to enable the +rsync-verify USE flag by default, which relies on the gemato python library maintained by mgorny himself, and gemato depended on requests. So 5-6 levels of indirection at this point? I lost count.

On 2020-01-01, python2 was sunset. A painful year to remember, and a painful migration to forget. And just when the year was about to end...

On 2020-12-22, cryptography started to integrate rust in the build process, and all hell broke loose - https://github.com/pyca/cryptography/commit/c84d6ee0

Ultimately, I think mgorny only has himself to blame here, by injecting his own library into the critical path of gentoo, without carefully taking care of its direct and indirect dependencies. (But of course it is also fair game to blame it on the 2to3 migration)

In comparison, a few months before this, the librsvg package went through a similar change where it started to depend on rust, and it was swift and painless without much drama - https://bugs.gentoo.org/739820 and https://wiki.gentoo.org/wiki/Project:GNOME/3.36-notes


Folks with an eye towards backwards compatibility typically don't implement breaking changes without a year of deprecation warnings emitted by the software. Contrast that to a notice posted in a sub-basement 6 months before instituting a breaking change. The shock and alarm do seem warranted IMO.


How do you do a deprecation warning for a new language being a build dependency? What does it look like? "your build environment is being deprecated"?


In C you can implement a "your build environment is being deprecated" message like this:

    #ifndef I_UNDERSTAND_THAT_SOON_RUST_WILL_BE_MANDATORY
    #error "Starting with version xxx this package will need Rust to compile. Recompile with -DI_UNDERSTAND_THAT_SOON_RUST_WILL_BE_MANDATORY to acknowledge that you understand this warning."
    #endif
Anyone building from source would get notified in a way that's impossible to miss but easy to turn off.


And thousands upon thousands of automated CI jobs and docker container builds fail. You're basically causing massive developer stress to anyone who automatically compiles your package. Most would consider that a bad tradeoff to help a tiny fraction of your users who'd be impacted and also refuse to follow your mailing list.


If the change is transparent to all but a tiny fraction, you can of course guard the above with

    #if !(defined(__AMD64__) || defined(__AARCH64__))
(or whatever the accepted names of those platform macros are).

For whatever it's worth, if the C code in the cryptography package didn't already contain the above check, along with some hurdle requiring the user to compile with -DNOT_OFFICIALLY_SUPPORTED_TARGET, then the article's point is false: If you don't prevent users from compiling your security-relevant, reliant-on-exact-memory-semantics software on System/390, then you are implicitly supporting System/390.

And for that matter, the Rust code should probably contain a similar check: Even if someone ported Rust to System/390, the cryptography library shouldn't magically start working there, unless the developers actually test there.


> And thousands upon thousands of automated CI jobs and docker container builds fail. You're basically causing massive developer stress to anyone who automatically compiles your package.

This isn’t a major source of stress: you include migration instructions and even tooling to automate it. The thing that’s stressful is when your build fails with completely unexpected errors and no indication of what went wrong.

Loudly announcing breaking changes is disruptive to some extent, but not doing so either means more disruption or nothing can ever change at all.


Isn't this basically equivalent to what they did? Their actual action was to include a dummy rust requirement just to break/warn people who wouldn't be prepared for it as an actual dependency.


You're going to do that a few versions down the line anyway. Why not ahead of time?

The only downstreams this affects more are the ones who decide to suppress the warning for now and ... put it off until you release the change that actually required breakage.


These developers cause themselves stress by building against the latest library without a care in the world. Lock your dependencies, regularly upgrade them while looking at patchnotes, and it won't be a problem.


CI builds should be locked to exact versions anyways, for the sake of reproducibility.


This is for all intents and purposes exactly what the cryptography maintainers did, except they did one better: they made sure this release is capable of validating the new configuration. They made rust an optional, but on-by-default, dependency. That is the equivalent of #error plus validation. They made it possible to turn this dependency off. That is the equivalent of -DI_UNDERSTAND.
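
(Roughly the shape of it in setup.py terms - a simplified sketch, not the project's actual build code; IIRC the real escape hatch was an environment variable named CRYPTOGRAPHY_DONT_BUILD_RUST:)

    import os
    from setuptools import setup

    setup_kwargs = {"name": "example", "version": "1.0"}

    # Rust on by default, with an explicit environment-variable opt-out
    if "CRYPTOGRAPHY_DONT_BUILD_RUST" not in os.environ:
        from setuptools_rust import RustExtension  # helper package for building Rust extensions
        setup_kwargs["rust_extensions"] = [
            RustExtension("example._rust", "src/rust/Cargo.toml")
        ]

    setup(**setup_kwargs)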


True, that's possible, but quite brutal, and I'd expect that would merely have led to shouting and people going wild over their builds breaking a year earlier.


Yes. It's pretty simple to insert a stub module which is imported in a try/catch, where failure to load results in a warning.
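
(Roughly, in the package's __init__.py - names invented for illustration:)

    import warnings

    try:
        # tiny do-nothing compiled module whose only purpose is to prove
        # that the new toolchain works in this build environment
        from . import _rust_canary  # noqa: F401
    except ImportError:
        warnings.warn(
            "the optional compiled component failed to build or load; "
            "a future release will require it",
            UserWarning,
            stacklevel=2,
        )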


Hm yeah, that sounds like a bit of a hassle to set up (have it build where possible but not fail the build), but otherwise an elegant solution, since it would pass cleanly for people that have a suitable environment.


> project mailing lists, Github issue tracker, project IRC channel, documentation (changelog, FAQ), and Twitter channels

It's hardly a notice posted in a sub-basement 6 months before instituting a breaking change. They communicated via numerous mechanisms.


It's a sub-basement from the perspective of folks for whom this is a second-order dependency. And it sounds like there are many more of them than there are folks that caught wind of this.

Regardless. The common practice is a year of emitting warnings from the software, that the end-user will see. That's prominent and allows package maintainers enough time to work around the upcoming breakage. Six months notice on a mailing list is simply not prominent enough, and it's half the standard year which puts undue strain downstream.


I do think that we're going to see more of this. This is just a relatively early example. Rust and LLVM bring things to the table that make them inevitable if we as an ecosystem, on the whole, value privacy and security. This is where the rubber of the ethos hits the road. C, for all its good work and history, is a leaky mess and the source of so many zero-days, especially the bad, state-actor-level ones.

If we are to move to a more abstracted and safer system, the ideas behind LLVM are just going to be a fact of life in years to come. The solution to this is to either fund greater LLVM integration (or similar that isn't on the scene yet) or accept the status-quo. I choose the former, but I sincerely hope the future direction leaves as few out in the cold as possible through the effort of smart people. But protecting hobbyists is a stretch goal in my mind compared to improving the security and privacy of the global interconnected world we're in.


Don't know about crypto. But any open source package, ported to a new environment (one not explicitly tested by the maintainer), is going to have issues. That's the nature of software.

E.g. I just added pnglib to an embedded assembly. Has a display among other things. Wanted to put compressed PNG images into (limited) flash, then decompress into (plentiful) RAM at boot.

Of course pnglib didn't build for my environment. Never mind it has a C library. There are 50+ compile switches in pnglib, and I was setting them in a pattern that wasn't part of the tested configurations. It didn't even compile. Once it compiled (after some source changes) it didn't run. More source changes and it would run until it hit some endianness issues. Fix those, and it would do (just) what I wanted, which was to decompress certain PNG image formats with limited features, once.

No problem. That was my goal, achieved. But at no time did I blame the maintainers for not anticipating my use case.

I would say this: maintainers, flag compile-time options that aren't tried both ways in your test environments. To give me some chance of estimating how hard my job is going to be.


The software in this story is working as intended in its new configuration. It's not reasonable to ask for features to be flagged off on the off chance that someone is running your code on an S/390.


If your test cases don't try it, I'd recommend either removing the code or documenting that the flag is not optional? For industrial code that would be an ordinary, expected process. But open source gets a lot of passes on process.


Problem is that ARM & AArch64 were considered a "weird architecture" by the non-Android linux stack until really very recently, and the migration to a new architecture is still not plain-sailing in many peoples' experience. Without the assumption that most open source package authors are implicitly trying to be architecture-independent to some degree, we will literally all be stuck on x86_64 for the rest of our lives (the migration to amd64 itself I remember as being a number of years of people working on "unsupported" stuff FWIW).

For users on "weird architectures" to be petitioning that a move the pyca authors are making is causing inconvenience to them is perfectly reasonable in my eyes.


I don't think it's a good comparison. The architectures mentioned in the pyca/cryptography repo are:

- Alpha: introduced 1992, discontinued in the early 2000s

- HP/PA: introduced 1986, discontinued in 2008

- Itanium: introduced 2001, end of life 2021

- s390: introduced 1990, discontinued in ~1998

- m68k: introduced 1979, still in use in some embedded systems but not developed at Motorola since 1994.

ARM was once not as popular as it is nowadays but it was never moribund and in my experience has always had decent tooling and compiler support. Furthermore I'm sure that if tomorrow HP/PA makes a comeback for some reason, LLVM will add support for it. Out of the list I'd argue that the only two who may be worth supporting are Motorola 68k and maybe Itanium but even then it's ultra niche.

I personally maintain software that runs on old/niche DSPs and I like emulation, so I can definitely feel the pain of people who find new release of software breaking on some of the niche arch they use (I tried running Rust on MIPS-I but couldn't get it to work properly because of lack of support in LLVM for instance). These architectures are dead or dying, not up-and-coming like, say, RISC-V which has gained some momentum lately.

But while I sympathize with people who are concerned by this sort of breakage, it's simply not reasonable to expect these open source projects to maintain backward compatibility with CPUs that haven't been manufactured in decades. As TFA points out it's a huge maintenance burden: you need to regression test against these architectures you may know nothing about, you may not have an easy way to fix the bugs that arise etc...

> open source groups should not be unconditionally supporting the ecosystem for a large corporation’s hardware and/or platforms.

Preach. Intel is dropping Itanium, HP dropped HP/PA a long time ago. Why should volunteers be expected to provide support for free instead?

It's like users who complain that software drops support for Windows 7 when MS themselves don't support the OS anymore.


SystemZ, which is what s390x/390 became, seems relatively well supported by IBM and Red Hat in my experience.


See footnote 6 in the original article:

"That’s the original S/390, mind you, not the 64-bit “s390x” (also known as z/Architecture). Think about your own C projects for a minute: are you willing to bet that they perform correctly on a 31-bit architecture that even Linux doesn’t support anymore?"


Is it not reasonable to throw this back at the CPU makers? If you want to bring out a new cpu architecture, port all the compilers to it before you start selling it.


The problem is that chipmakers have historically made their development environments closed-source and, often, not very pleasant to work with. Maybe this is more of a problem with demonstration boards meant primarily for embedded systems people, but if you rely on TI, for example, to provide a compiler, they'll give you a closed-source IDE for their own C compiler which may or may not be especially standards-compliant.

I hesitate to imagine what it would take to get a hardware maker to contribute patches to LLVM.


And as we have seen, that is to the detriment of the chip maker. It’s said that Atmel chips became way more popular than PIC because of AVRDUDE and cheap knock-off programmer boards on eBay. A modern-day architecture would be competing with these already established open source toolchains, so they would either remain obscure like FPGAs are now, open source their stuff, or be on the scale of Apple or Microsoft where they are able to outcompete the open source stuff (for what purpose, though?)


If new CPU makers are expected to update all existing compilers, wouldn't the counterpoint be that new compiler writers are expected to support all existing CPUs?


IMO it depends who stands to gain from it. If you make a new compiler, you need to make sure x86 and ARM work because that’s what most of your users will be using. There is almost no gain in adding support to some ancient cpu that no one uses anymore.

On the other side, if you make a new cpu architecture, all of your users (people buying the chip) will gain from porting compilers.

No one is expected to do anything (unless they are being paid). It’s just logical for people to work this way.


Sure, but that wouldn't have helped in this case since 68K, Alpha, PA-RISC, and S/390 were not "existing" CPUs at the time Rust was invented.


FWIW, s390 wasn't really discontinued in 1998. There are still new s390 chips being designed and used.


s390 is the 31-bit-only variant, that has been discontinued for some time. Modern variants are 64-bit based, and still supported.

All that being said, it's quite worthwhile to include these "dead" architectures in LLVM and Rust, if only for educational reasons. That need not imply the high level of support one would expect for, e.g. RISC-V.


Two architectures currently being added to LLVM are m68k and csky. I don't think either is that new (I thought csky was, but it was explained to me by Linux kernel architecture folks that it has old roots from Motorola, with folks from Alibaba using that for 32b but moving to RISC-V for 64b).


Yes, csky is an mcore derivative. It's not entirely compatible, much like m68k and ColdFire.


Lots of 32 bit code still gets run on these machines.


Could you expand on that? Are you saying that s390x can run binaries compiled for s390 and that today binaries are being compiled to s390 for the purpose of being run on s390x?


Yes to both (at least for user mode code, or "problem mode" in IBM parlance. Kernel and hypervisor code is 64-bit only on newer chips). There's something like a 30% average memory savings for 32-bit code, so if your program fits in 2GB, it's a win on these massive machines that'll be running 1000s of VMs at close to 100% load. Nice for your caches too.


But it is unreasonable, for one of two reasons:

1) If the architecture is in active production, there is someone somewhere trying to make money by selling it. If they are intent on only supporting proprietary compilers, they need to accept the consequences of that decision: users won't use their hardware because they can't use the software that they want to use. If they want the architecture to be widely used, they have a fiduciary obligation to ensure that they have reliable and well tested backends to major compilers.

2) If the user is using old architectures that are no longer in production or no longer supported, there isn't ever any reasonable expectation of continuing software support. You're stuck with old software, full stop.

In the case of your objection, AArch64 and ARM manufacturers have the obligation to develop openly available backends for their architectures. And they've taken that seriously, as should any newcomer architectures.


> If the user is using old architectures that are no longer in production or no longer supported, there isn't ever any reasonable expectation of continuing software support. You're stuck with old software, full stop.

That's not a very reasonable POV. Many of these architectures are very well understood and very easily supported via emulation. There's no need to run them on actual hardware, especially if you aren't dealing with anything close to bare-metal quirks.


Incidentally, aarch64-unknown-linux-gnu became a Tier 1 supported platform in Rust recently, in part because of the support of ARM themselves.

(My day job involves a lot of Tier 2 ARM work, and I don't personally run into any more bugs than Tier 1 platforms. YMMV.)


> Problem is that ARM & AArch64 were considered a "weird architecture"

The ARM world is a blizzard of proprietary, undocumented implementations with limited support for the upstream kernels, often can boot only a vendor-specific distro that is quickly abandoned, and full of boards that blink in and out of existence at the drop of a hat. It absolutely is a weird architecture.

> For users on "weird architectures" to be petitioning that a move the pyca authors are making is causing inconvenience to them is perfectly reasonable in my eyes.

Yes, this is exactly the sense of entitlement that the author is talking about when he describes the destruction of people's interest in working on open source.


That may be true for peripherals, but all a compiler has to care about is the core ISA. The board zoo is very much not relevant.


But to meaningfully support a thing - per the original author - "compiles on a version of the ISA" (and there are many of those for ARM) and "actually works as intended" are different things: details like "this ARM core runs these extensions. This ARM core is really just a coprocessor to a binary blob processor. This ARM core is buggy as fuck but no longer supported by the vendor" matter. Where's your source of randomness for crypto, just as a starting point?

People want - to borrow from the BSD world - FreeBSD levels of support for specific chipsets and features, with OpenBSD levels of support for security, and NetBSD levels of portability. These are not compatible outcomes, and folks should stop pretending that they are.


> this is exactly the sense of entitlement

Now come on with your "entitlement". It's not as though we're talking about some random people who made some little package for their own use and decided to make it available in case anyone else found it useful, and now the community demands from them are becoming too much and are something they never asked for. This is a group that have named themselves the Python Cryptographic Authority and have chosen the prominent pypi package name of just "cryptography". They couldn't have done any more to encourage the broader community to depend on it and make it a core part of their stack.

In comparison, I couldn't imagine the python core team (also largely unpaid) doing this with one of their stdlib modules and then dismissing those objecting as "entitled".

(FWIW I'm not particularly interested in taking a side in this issue, but think your labelling as "entitled" is unhelpful)


I agree with the idea that it is entitled. Hell, Python itself is only directly supported on a couple of architectures and operating systems. It has even fewer "tier 1" targets than Rust does! It is made available in source format only for other packagers to use as they see fit, but it is not python's responsibility to support it. Why should a library maintainer feel any obligation to support platforms that the language doesn't provide first class support for?


But ARM/AArch64 did always have good compiler support. ARM was the second arch added to llvm.


If I fire up an Alpha CPU today, I'm not expecting that the latest versions of all my favorite free software are going to run on it. Asking maintainers to "fix it for me" then would be unreasonable. Part of running hardware that far out of date is hunting for the last versions of anything that supported it; whether that was FOSS or commercial, it's still the way the world is and has been.

It'll be interesting to see how the Rust community responds to this; are they so eager to absorb everything that they'll put effort into supporting niches to get more users, or will they take the opportunity to be "opinionated" and exclusive and shed the effort of catering to obsolescence?


> If I fire up an Alpha CPU today, I'm not expecting that the latest versions of all my favorite free software are going to run on it. Asking maintainers to "fix it for me" then would be unreasonable.

No one does that. What people complain about is that code that used to be perfectly portable for years suddenly becomes locked to a very limited set of targets with the argument that memory safety is more important than anything else.

> It'll be interesting to see how the Rust community responds to this; are they so eager to absorb everything that they'll put effort into supporting niches to get more users, or will they take the opportunity to be "opinionated" and exclusive and shed the effort of catering to obsolescence?

Rust just needs to help get one of the several alternative Rust implementations based on gcc officially supported, similar to gccgo.

Then the portability issue will have been fixed once and for all and Rust will be chosen even for code on obscure targets such as Elbrus 2000, Sunway or Tricore.


The code that used to be perfectly portable still exists, fork it and keep using it.

But if you're asking for a project to be maintained, then it means you want maintainers to put more work to keep new code working on an old platform. Constraints of old/niche platforms cause extra work for developers when adding new features or improving security.


> What people complain about is that code that used to be perfectly portable for years suddenly becomes locked to a very limited set of targets with the argument that memory safety is more important than anything else

As TFA points out, this is a mistaken understanding of the situation. What we have here is code that gave the illusion of being “perfectly portable” (while not actually being written to target or tested against the peculiarities of niche architectures like Itanium and PA-RISC that it happened to successfully compile on) being replaced with a new version that only builds on machines its authors have actually given any consideration to the security properties of.

That this inconveniences people is obvious. Why they imagine this is a net security loss for them is less obvious – the older C versions still exist, and any concerns that they’re missing out on new security updates are swamped out by the fact that the older versions may well never have behaved securely because nobody from the project was ever writing the code with PA-RISC’s memory and instruction ordering properties in mind to begin with.


> Then the portability issue will have been fixed once and for all

That’s just not true. These targets may make different assumptions about various low level things such as memory ordering, byte-width, behavior on overflow, etc. While C might be okay to defer to the architecture on these questions, Rust is more strict.

I personally think you’ll just end up with a bunch of broken binaries.


> What people complain about is that code that used to be perfectly portable

Literally the whole point of the article is that this assertion is bullshit. It was never perfectly portable. It merely happened to compile, and maybe work, and maybe actually was secure.

> Rust just needs to help get one of the several alternative Rust implementations based on gcc officially supported, similar to gccgo.

Who is "Rust"? Why would they pour money and effort into this? Would the gcc community finally ignore rms' demands to make gcc as hostile as possible to implementing new front ends?

> Then the portability issue will have been fixed once and for all and Rust will be chosen even for code on obscure targets such as Elbrus 2000, Sunway or Tricore.

This does not solve the problem the author describes.


I think the point this article is making is that the portability was a sham and a mirage from the get-go.

Yes, it might compile on an oddball architecture, but a lot of that would be the autotools build system and the C compiler fudging around important details that could easily leave you with something that kinda-sorta runs but is a wide open security flaw. Or that runs for a while and then breaks in ways nobody could have predicted because you aren't meant to try to run cryptographic software on something out of the museum of historical computing.


What if, instead of Rust, they had adopted C99 or newer, which is not supported on all obscure platforms? Should they be shamed for using the quality-of-life improvements that make it easier to maintain the project because someone used their project on an unexpected platform?


They can pin an older version and have it work.


Your question makes me wonder what niche Rust is trying to fill. Anything that can't bootstrap literally everything will never replace C.


I think the primary goal is Mac/Windows/Linux/BSD/iOS/Android on x86/ARM (32bit and 64bit) variants. That covers 99% of consumer and server computing. And aside from anything else: making that secure would be a huge win.

But it's not like Rust doesn't have wide platform support. For example, it's already possible to run Rust on RISC-V. And it's improving all the time.


The niche of software that doesn't take over your computer with malware because you made an off-by-one error.


Wonder if rust (or llvm) could just have a C backend as a fallback for unsupported architectures. Perhaps some stuff would be slow, but likely still faster than flat-out emulation.


llvm had a C backend at one point, but my understanding is that it bitrotted and was removed. I think there's been some work to bring it back? Not 100% sure.



Yeah, this is a great project, but different than what I was talking about :)


Indeed; I meant it as a continuation of my:

> Wonder if rust (or llvm) ...


I take no position on Rust other than interested observer; thus the question. There are several factions there. I think the "let's make a better language and spread it everywhere so it gets used" faction is going to be opposed by the "opinionated zealots of Correct Thinking", and I wonder which gets steamrolled.


What's the "Correct Thinking"?


It seems like the easiest answer is to fork the cryptography library:

- current maintainers and those who are on supported architectures can use the Rust implementation. The current maintainers no longer want to maintain the C implementation, and that is their prerogative, as this article describes.

- new maintainers and those on unsupported architectures can continue to use the C implementation. Not everyone in the current user base (to include some distribution maintainers) is able to use the new Rust implementation at this time, but they still need the library.

It's not ideal, but it seems like the only practical way ahead that meets everyone's needs.


> It seems like the easiest answer is to fork the cryptography library

I would suggest that not only isn't the easiest answer, it's not an answer at all. Because...

> new maintainers and those on unsupported architectures can continue to use the C implementation

...there will be no new maintainers. This came up a few times three weeks ago, and so far, a (small!) number of people have hopefully suggested that it would be nice if "someone" volunteered, none of them have actually followed through, and as far as I can tell, interest has only waned since.


There is a high chance that there are no pyca/cryptography users on the niche platforms that Gentoo is trying to support. If there are any, they should pay for support, not expect maintainers to support their fridge platform.


If Gentoo doesn't work on fridges, what's up with this:

> The Gentoo/s390 Project works to keep Gentoo the most up to date and fastest s390 distribution available.

That's a declaration of support, and if that's not what they mean, they could list some limitations on their wiki. [1]

A lot of system distributions declare some platforms as supported and others as best effort and still others as probably not working, but you're welcome to try. That's reasonable, of course, but it's nice if you're upfront about it.

[1] https://wiki.gentoo.org/wiki/Project:S390


Then it is up to Gentoo and IBM to support the latest versions on a platform discontinued in 1998! It is unreasonable to expect pyca/cryptography maintainers to support platforms that are not even supported by Python itself.


Sounds great! I look forward to Gentoo and IBM stepping up to the plate to maintain support.


pyca/cryptography was an indirect dependency of Gentoo's package manager, Portage. Portage is written in Python. So by definition, there were users.

"was", because after this incident, careful review revealed that it isn't necessary, and dependency got removed. So yes, probably no users now.


> support their fridge platform.

I love this typo in context of s390!


Speaking of fridge platforms, does LLVM target any PIC ISAs?


No free work for platforms that only corporations are using. No, this doesn’t violate the open-source ethos; nothing about OSS says that you have to bend over backwards to support a corporate platform that you didn’t care about in the first place.


> Companies should be paying for this directly: if pyca/cryptography actually broke on HPPA or IA-64, then HP or Intel or whoever should be forking over money to get it fixed or using their own horde of engineers to fix it themselves.

This about sums it up.


I thought very hard about this problem as I've developed Virgil [https://github.com/titzer/virgil] over the years. Bootstrapping any system, not least a new programming language, is a hard problem.

Virgil has an (almost) hermetic build. The compiler binaries for a stable version are checked into the repository. At any given revision, that stable compiler can compile the source code in the repo to produce a new compiler binary, for any of the supported platforms. That stable binary is therefore a cross-compiler. There are stable binaries for each of the supported stable platforms (x86-darwin, x86-linux, JVM), and there are more platforms that are in the works (x86-64-linux, wasm), but don't have stable binaries.

What do you need to run one of the stable binaries?

1. JVM: any Java 5 compliant JVM

2. x86-linux: a 32-bit Linux kernel

3. x86-darwin: a 32-bit Darwin kernel*

[*] sadly, no longer supported past Mavericks, thanks Apple

The (native) compiler binaries are statically-linked, so they don't need any runtime libraries, DLLs, .so, etc.

Also, nothing depends on having a compiler for any other language, or even much of a shell. There is test code in C, but no runtime system or other services. The entire system is self-hosted.

I think this is a decent solution, but it has limitations. For one, since stable executables absolutely need to be checked in, it's not good to rev stable too often, since it will bloat the git repo. Also, checking in binaries that are all cross-compilers for every platform grows like O(n^2). It would be better to check in just one binary per platform, that contains an interpreter capable of running the compiler from source to bootstrap itself. I guess I'll get to that at platform #4.


I’m wondering if bootstrapping from WebAssembly would make sense someday, under the assumption that everyone has a browser? (Though a stand-alone interpreter is preferable.)


That's not a bad long-term plan (if there is a lightweight standalone Wasm interpreter), but Wasm is not quite ubiquitous enough. Hopefully!


I think it would be much better to do Bootstrappable Builds instead of checking generated files into the repo. If no one else can reproduce the builds of those files, it will be hard to trust them.

http://bootstrappable.org/ http://reproducible-builds.org/


Question from a point of ignorance: why would you target 32-bit for something that is being actively developed?

Are we not at a point where 64-bit should be the expected target?


WebAssembly, for instance, is a 32-bit target. So are most ARM hobby boards (even if the hardware is 64-bit, they often ship with 32-bit OSes).


A lot of "IoT" devices are 32bit. And there's no reason for them not to be. (There's 8bit ones too.)


I bootstrapped on the JVM first, and then the first native bootstrap was around 2011. I am almost finished with my x86-64-linux port.


This is yet another example of why gift-economy members need to understand that you don't owe anyone support. If you publish as-is, thank you. If you promise to support the OS/hardware in your lab, thank you. If you accept patches from users with weird use cases, thank you.

If you get too much bug spam, you need to set up filters and auto replies and volunteer helpers to help you find the reports you care about.

You don't owe anyone support.


So, we have to choose between...

- A tiny community of hobbyists willing to support niche architectures that have zero relevance to mainstream computing,

OR

- Embrace a newer, stricter ecosystem with more guarantees and clearly communicated support tiers, one that's constantly improved by a large number of dedicated volunteers who donate their time and effort, as well as paid professionals. The only tradeoff: it supports fewer architectures. For now.

Am I understanding the article correctly?

If so, I am definitely in favor of the latter, and I think many others are as well.


It's interesting that the community praising multiculturalism favors the hegemony of only a few computer architectures. Portability used to be a paramount design goal...


Everybody can do whatever they like with their time. But if they want their exotic CPU architectures then they can support them themselves, no?

However, demanding that mainstream tools lag behind because of said exotic architectures is unrealistic. At some point we all want to progress and advance our craft, especially the craft that pays our bills.

I'm not against multiculturalism. But we can't have it at the expense of everybody outside your small bubble having their tooling hampered and/or lagging behind on features that a lot of us need for commercial work (and not only commercial, I'd argue).

Backwards compatibility is like everything else: it can't be treated as an absolute value, everything else be damned.


> However, demanding that mainstream tools lag behind because of said exotic architectures is unrealistic

Rust is mainstream on HN, not in the real world. Just because "Uber" or whatever unicorn uses it doesn't make it mainstream.

https://madnight.github.io/githut/#/pull_requests/2020/4 - Rust is barely reaching 1%.


I strongly disagree with those metrics and would question how well their coverage reflects the real world out there.

Rust is getting more and more prevalent, and I'm saying that as a person who has barely worked in SV (only one SV company in the last 5 years).

I'm working outside the mainstream companies and I'm still seeing Rust gathering mindshare all the time wherever I go. And I'm not even hired for Rust positions.

Anecdotal for sure, I'll agree, but your observation is no less anecdotal than mine.

Rust brings very real advantages to the table, and seeing people rebel against it purely on principle (and not on merit) is getting increasingly baffling. It feels like an emotional rebellion rather than resistance based on facts and merit.


Show me a metric which makes Rust a relevant language.

> Anecdotal for sure, I'll agree, but your observation is no less anecdotal than mine.

I haven't seen Rust in any "top X language" news. Prove me wrong.


Do you participate in every language popularity study that you stumble upon? I know I don't. And 99% of all the colleagues I've ever had don't either.

Even if I found a study that corroborated my observation I'd still not trust it. I don't make a habit out of supporting dubious studies only because they support my point of view.

From what I've seen over a 19-year career, most working programmers refuse to participate in such studies.

Hence I don't trust them either way. They work with a non-representative sample of the population. Not a big enough sample for the study to be valid.


Instead of shooting the messengers, please provide me a non-anecdotal reference about Rust being relevant.


Have you seen the Stack Overflow survey over the last few years? It has some interesting data.

https://insights.stackoverflow.com/survey/2020#technology-mo...


This will not go anywhere. :)

I prefer to look around -- this has always been giving me much more objective info throughout my entire career.

I get your skepticism but you are not arguing in good faith. I already asserted that to me those language popularity contests are dubious and non-representative.

If you disagree with that premise then we have zero common ground and can't discuss the topic. ¯\_(ツ)_/¯


> This will not go anywhere. :)

Of course, you don't have any argument, so you're posturing and running away. Very immature.


Most CPU architectures are "fake diversity"; for example both Alpha and ARM64 are 64-bit little-endian with a weak memory model. Sure, S/390 is 31-bit and supports BCD while in PA-RISC the stack grows upwards and IA-64 is VLIW, but these are trivia that are not comparable to the diversity of human cultures. For decades programmers have wasted their time porting software to different-but-not-better architectures, mostly for the benefit of the vendors who fragmented the market in the first place instead of standardizing.


> For decades programmers have wasted their time porting software to different-but-not-better architectures

Microarchitecture, register layout, and ABI also constitute differences which have real-world uses, not to mention sheer competition to avoid architecture rot. Only targeting Intel and ARM opens you up to problems, cf. the upcoming Nvidia hell ARM is about to experience.

Just because Rust-preaching (without practicing it, of course), Starbucks-sipping average HN readers don't know about it doesn't make it nonexistent or necessarily wrongthink.


> Microarchitecture, register layout, and ABI also constitute differences which have real-world uses

They do. I started coding on a 6502-based machine and used machine code to find prime numbers, some 27 years ago. I've used 1-2 other non-mainstream CPUs (whose names I don't even know) before diving neck-deep into the mainstream. It was fun, absolutely. It has potential, absolutely. Was it realized? Nope.

However, I can't resist asking: if those things do have their uses, then why didn't the hobbyists support them through patches to GCC / clang and LLVM?

Don't get me wrong. If you tell me we are stuck in a local maximum in CPU architectures, I'll immediately agree with you! But what would you have the entire industry do, exactly? Businesses pay our salaries and they need results in reasonable timeframes. Can you tell the guy who is paying you: "I need 5 years to integrate this old CPU arch with LLVM so we can have this feature you wanted last month", with a straight face?

> Just because Rust-preaching (without practicing it, of course), Starbucks-sipping average HN readers don't know about it doesn't make it nonexistent or necessarily wrongthink.

That is just being obnoxious and not arguing in good faith. Example: I do use Rust, although not 100% of my work time.

You should try the Rust language and tooling -- and I mean work actively with it for a year -- and then you could have an informed opinion. It would make for a more interesting discussion.

Do I like how verbose Rust can be? No, it's irritating.

Do I like how cryptic it can look? No, and mentally parsing it wastes time (but it does get better with time, so 50/50 here).

Does it get stuff done mega-quickly and safer than C (and most C++)? Yes.

Does it have amazing tooling? Yes.

Does it get developed more and more and serve many needs? Yes.

Does it reduce security incidents? I'd argue yes although I have no direct experience. Memory safety is definitely one of the largest elephants in the room when security is involved.

---

You have a very wrong idea about the average Rust user, IMO. I don't like parts of the whole thing, but it has helped me a lot several times already -- and it gave me peace of mind. And I've witnessed people migrating legacy systems to it and showing graphs in meetings demonstrating that alarm and error analytics percentages plunged to 0.2% - 2% (they were always 7% - 15% before).

Just resisting something because it's going mainstream is a teenage-rebellion level of attitude, and it's not productive. Do use Rust yourself a bit. Then you can say "I dislike Rust because of $REASON" and we can have a much more interesting discussion.


> However, I can't resist asking: if those things do have their uses, then why didn't the hobbyists support them through patches to GCC / clang and LLVM?

They didn't decide to create a whole new language and make everything dependent on it.

At some point, when you reach a critical mass, you have to spend more on seemingly "irrelevant" tasks, like supporting other architectures. Don't shift the problem away by ridiculing it; own your shortcomings.


Okay, that's a more fair and balanced point of view.

However, let's not forget one of the main points of the original article: nobody promised those people that their dependency's dependencies would never change. The crypto authors made a decision to go with Rust. If dependents want to continue using it, they have to adapt or stop using it.

As I've said above: backwards compatibility is an admirable goal but it doesn't override everything.


> As I've said above: backwards compatibility is an admirable goal but it doesn't override everything.

You'll never get a job at Microsoft, or in any systems job where backward compatibility is paramount for millions, if not billions, of users (say, the Linux kernel). Just going the Apple "fuck you" way is arrogant at best, delusional at worst, especially when you're an irrelevant language.


I don't think the discussion will ever get anywhere if we only compare polar opposites.

I'm not advocating for either extreme; what about you?


Backward compatibility is, by definition, an all-or-nothing, binary deal. You can't have it otherwise.

[and yes, this is gonna be unpopular in a post-modernist era where everything gets constantly redefined and where there is no such thing as "meaning".]


Sorry that your work has made you so frustrated. It sounds stressful. IMO you should consider exiting your current company or area. Judging by your comments, you are pretty jaded (and set in your ways).

I am not interested in discussing extremes, as mentioned in two separate sub-threads now, but you do sound like you need a break. Good luck, man.

