That said, I feel the cryptography project handled this poorly. I encountered the issue when, one day, installing a Python Ansible module the same way I had the day before suddenly required a compiler for a different language, one I'd never used and that is hardly used, if at all, in the rest of my stack. The Ansible project seemed to be taken by surprise, and eventually I found the root upstream issue, where the initial attitude was "too bad". Some of the people making those comments worked for Red Hat / IBM, which heavily backs Ansible. What company cares more about Linux on S390 than Red Hat / IBM? I would suggest none. So the fact that they had this attitude and community reaction suggests to me the problem is not one of expecting free work for a corporation on a non-free architecture. It was IMHO a combination of a lack of forethought, a lack of communication, and yes, perhaps a change that is overdue and will just be a little painful. The suggestion to maintain an LTS release while people figure out how to proceed is the right move.
Okay, so, I don't mean to pick on you here, but I've seen this sentiment cropping up a few times. Not everyone can be familiar with everything, but I would caution you against immediately assuming corporate politics are at the root here.
Anyone familiar with Alex Gaynor and his work over the last few years would know that he cares about Python, and memory safety. That has remained consistent regardless of his employer. Immediately assuming that this has something to do with company politics, rather than just a tireless open source contributor working to improve the things that he cares about, for years, is making a bit of a category error, in my opinion.
I think it gets really hard the larger the company you talk about. Before I worked at bigger places, I assumed a lot more coherence than is actually the case.
Ansible Tower subscriptions aren't exactly cheap, and neither is RHEL s390x. If there's not the fat to be uplifting the core infrastructure needed to run RHEL or Ansible on Red Hat's products, that's most likely a choice.
Rust is currently the apex of a tall stack of hundreds of millions of lines of code, once you account for LLVM etc. Using it as the basis for other software means only processors with sufficient market penetration are 'worthy' candidates. In the long run, if Rust is as successful as lots of us hope it will be, this will kill what innovation is left in the hardware space. If a company is sufficiently motivated to care, it will almost certainly be cheaper to fork the code back to using C than it would be to forklift LLVM to a new architecture.
Is anyone aware of a direct Rust compiler project? Even discounting GCC, Go was bootstrapped from relatively simple (and naive) C compilers until it became self-hosting. I think a basic non-optimizing Rust compiler would go a long way toward leaving the door open for onboarding older -- and more importantly, novel -- architectures into the ecosystem.
I also don’t know why re-writing an entire project and creating a compiler for it is somehow easier than writing an LLVM backend.
In any case, that's better than shrugging off esoteric platforms, IMHO.
To your other point, writing an LLVM backend is one thing. Getting it upstreamed is another, and maintaining it is another still. Then you have to navigate the politics of two foundations, both of whose boards of directors are basically the Who's Who of competing interests. I've watched more than one project fail to navigate those waters.
Anyway, the cryptography package going from Python + C to Python + C + C++ + Rust is a Cambrian explosion of build-time complexity, and in my work we found it simpler to just get rid of the Python and the cryptography package, so it's mostly academic to me.
Okay! I misunderstood you then, sorry. I think one of the hardest parts about this conversation is that there are so many different people with various, but overlapping, opinions. A lot of folks do think this way, and I thought that's what you were saying. My bad.
> Getting it upstreamed is another
You do not need it to be upstreamed in order to build Rust, we build with our own modified LLVM by default, so using it is quite easy.
And they've released Rust as an optional component you can disable, precisely because nobody paid attention until it actually shipped.
Exactly, from the original shitstorm issue:
> Rust bindings kicked off in July 2020 with #5357. Alex started a thread on the cryptography developer mailing list in December to get feedback from packagers and users. The FAQ also contains instructions on how you can disable Rust bindings.
> Do you have constructive suggestions how to communicate changes additionally to project mailing lists, Github issue tracker, project IRC channel, documentation (changelog, FAQ), and Twitter channels?
At some point there's sadly not much the project can do and still make progress.
That one kinda got me, given that Python intentionally has a runtime developer-to-developer communication system, per se:
see also https://lwn.net/Articles/740804/
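For the curious, a minimal sketch of that mechanism (the warnings module and DeprecationWarning, which is what the LWN article above discusses; the function and message here are hypothetical, not anything pyca/cryptography actually emits):

    import warnings

    def build_without_rust():
        # DeprecationWarning is aimed at developers: it is filtered out by
        # default for end users (and shown again for code running directly in
        # __main__ since Python 3.7), which is exactly the
        # developer-to-developer property mentioned above.
        warnings.warn(
            "building without Rust is deprecated and will stop working in a "
            "future release; see the project FAQ for how to prepare",
            DeprecationWarning,
            stacklevel=2,
        )

Whether enough packagers actually run with warnings enabled is, of course, the other half of the problem.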
I’m writing for my blog, so I installed a spell check extension. But its dictionary stinks. So I just turn off its warnings before a final post pass.
Most of the time when I see yellow in my editor it’s stuff I would expect to be red. Even from my primary language (TypeScript), which officially doesn’t even have warnings.
More things should be errors and treated as such! And more things that legitimately qualify as warnings should error in final checks to ensure they were addressed somehow, even with just an explicit dismissal.
This is obnoxious for people who want to start with a tiny standard distro image with just a C toolchain, and build their app and all dependencies from source. It is also obnoxious for distros that want to build all packages from source. rust depends on llvm, itself a huge complicated finicky package ... but not just that, it depends on a custom-patched llvm! ...but not just that, it also depends on an extremely recent version of rust! This problem is recursive! And there are many other niche cases ... and "niche cases" of "unauthorized" porting/modification/substitution are how much of the open source ecosystem we love got its start.
This is really about issues of paranoia and control in the "security" realm. They consider it irresponsible to enable users to do anything they can't guarantee is "safe". "Your organization should be paying for a $$$ corporate support contract if you want to do anything we don't officially support." Maybe you could have fixed a portability bug in some Python or C code (I've fixed a couple when bringing up a little-endian mips64 system a decade ago), but no, now you'll need Rust+LLVM experts to port that whole crazy toolchain. It's for your own good.
And this is not just a matter of fixing some little-endian assumption, it's a matter of understanding microoptimizations, instruction reordering, prefetch behavior, branch prediction etc. Ensuring code has predictable runtime in the presence of an optimizing compiler and out of order processor is extremely difficult.
It's probably just as smart to roll your own crypto instead of trying to use a known library on a new platform, because the hardest parts of rolling your own will have to be done anyway.
I feel like this really only makes sense if pyca/cryptography had planned on adding the Rust dependency from the very beginning (or from very early on). Is there any indication that was the case?
> but not just that, it depends on a custom-patched llvm!
This doesn't seem to be true [0, 1].
> That is, this is already the case. We don't like maintaining a fork. We try to upstream as much as we can.
> But, at the same time, even when you do this, it takes tons of time. A contributor was talking about exactly this on Twitter earlier today, and estimated that, even if the patch was written and accepted as fast as possible, it would still take roughly a year for that patch to make it into the release used by Rust. This is the opposite side of the whole "move slow and only use old versions of things" tradeoff: that would take even longer to get into, say, the LLVM in Debian stable, as suggested in another comment chain.
> So our approach is, upstream the patches, keep them in our fork, and then remove them when they inevitably make it back downstream.
... so there are always some outstanding patches rust applies to the llvm codebase.
In addition, later down in the comment chain:
> Can't there be a build option to not use the LLVM submodule, and instead use the system LLVM?
> There is. We even test that it builds as part of our CI, to make sure it works, IIRC.
For a more concrete example, Fedora supports choosing between system and bundled LLVM when building Rust [0, 1].
I am sure this idea surfaced several times in IRC or possibly on the mailing lists. Certainly, the authors have been toying with handling ASN.1 in Rust since 2015, which I guess will be the next logical step.
I do agree that this is mostly a political stance. pyca/cryptography is a wrapper sandwiched between a gigantic runtime written in C (CPython/PyPy) and a gigantic library written in C (openssl).
The addition of Rust as a dependency enables the inclusion of just 90 lines of Rust, where the only part that really couldn't be implemented in pure Python is a line copied from OpenSSL (i.e. it was already available), and which is purely algebraic, therefore not mitigating any real memory issue at all (the reason to use Rust in the first place).
The change in this wrapper (pyca/cryptography) does not move the needle on security in any significant way; it is really only meant to send the signal that adding Rust in all other Python packages, and especially in the runtime itself, will now come at no (political) cost.
My understanding is that this is just the beginning, and the whole reason it's only a small amount is precisely to do it in small steps, correctly, rather than re-writing the entire world in one go.
So either that's still valid (and not enough rust will come in to make a real difference security-wise), or they have revisited their position and will re-implement a bunch of openssl logic (in rust apparently, though, and not in Python as would be more logical, and as golang does successfully). And in the latter case, why not just focus on wrapping rustls instead?
In either case, the hand is being forced: it's a small amount of code, and I don't see how it would have been an excessive burden to maintain such a small piece of logic as an opt-in for a period. It makes much more sense to read this as a move to forcefully push rust into the python ecosystem.
Looks like it's the former. From the initial (?) GitHub issue discussing the move to Rust:
> We do not plan to move any of the crypto under the scope of FIPS-140 from OpenSSL to cryptography. We do expect to move a) our own code that's written in C (e.g. unpadding), b) ASN.1 parsing. Neither of those are in scope for FIPS-140.
> and not in Python as it would be more logical
Is Python actually suitable for cryptographic code, especially if constant-time operations are needed?
> I don't see how it would have been an excessive burden to maintain such a small piece of logic as an opt-in for a period.
And then when said logic stops being opt-in, why wouldn't the same problem arise?
No, but as seen in the first chunk of rust code added, the amount of logic that needs to be constant-time is a) very, very limited, to a few primitives only, and b) algebraic in nature (put differently, it's not where memory bugs will pop up, so using rust over C doesn't even buy you much).
For instance, ASN.1 parsing doesn't need to be done in constant time in the vast majority of cases (and in all cases I am aware of).
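(For what it's worth, the comparison side of "constant time" is already covered by Python's standard library; a minimal sketch, not the actual pyca code, just to illustrate how narrow that surface is:)

    import hmac

    def tags_match(expected: bytes, received: bytes) -> bool:
        # hmac.compare_digest avoids short-circuiting on the first differing
        # byte, sidestepping the timing side channel of a plain ==.
        # It only covers comparisons, though: general constant-time
        # arithmetic in pure Python is much harder to guarantee, since the
        # interpreter makes no timing promises for int operations.
        return hmac.compare_digest(expected, received)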
> And then when said logic stops being opt-in, why wouldn't the same problem arise?
Some friction will certainly remain, but it will be nothing compared to the current breakage.
That's fair, though if some other part of the codebase is ported to Rust anyway, sticking to C doesn't save you, unfortunately.
> For instance, ASN.1 parsing doesn't need to be done in constant time in the vast majority of cases (and in all cases I am aware of).
I'm curious why this has to be done in C/Rust. Performance?
> Some friction will certainly remain, but it will be nothing compared to the current breakage.
What would be different then compared to now that would reduce breakage to such an extent?
This should be turned around. There is no evidence that any part of the codebase really needs to be ported, or that Rust makes a real difference for it.
> I'm curious why this has to be done in C/Rust. Performance?
ASN.1 is only used during handshakes and I/O of files (the chunk of rust added covers loading of PKCS#7 files which are typically quite small, and not typically dealt with in massive numbers). I doubt the performance hit would be so high.
Also, the package wbond/asn1crypto has shown that doing ASN.1 in pure Python can be quite fast.
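For illustration, parsing a certificate with it looks roughly like this (a sketch from memory of asn1crypto's documented API; the file name is made up):

    from asn1crypto import pem, x509

    with open("server.crt", "rb") as f:
        data = f.read()

    # asn1crypto accepts DER directly; unarmor() strips a PEM wrapper if present.
    if pem.detect(data):
        _, _, data = pem.unarmor(data)

    cert = x509.Certificate.load(data)   # pure-Python DER parsing
    print(cert.subject.native)           # subject name fields as a plain dict
    print(cert["tbs_certificate"]["validity"]["not_after"].native)  # expiry datetime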
> What would be different then compared to now that would reduce breakage to such an extent?
The ability to try the feature out and still have a plan B if it doesn't work out, and the possibility to have a smooth, long, relaxed upgrade plan with good warnings and more time to prepare.
But to be honest, while writing this whole chain of comments, I've come to really doubt there was a real need to add rust into the mix (other than the political angle I already mentioned).
I was thinking that if the maintainers were planning on porting some other larger part of the codebase, then starting with something small/relatively inconsequential would be a good first step, and keeping it in C wouldn't provide much benefit once said other larger part were ported.
> Also, the package wbond/asn1crypto has shown that doing ASN.1 in pure Python can be quite fast.
Interesting! I'm curious why the maintainers didn't opt for that approach instead.
> The ability to try the feature out and still have a plan B if it doesn't work out
Would people have tried this feature out if it were made opt-in? It's clear that the initial announcements reached far fewer people than one might like, and I honestly have no idea how many people would see a build-time warning.
gcc is also a "huge complicated finicky package"... Any decently sized compiler is.
It's funny that you mentioned having to install Rust to use some Python because, for us non-Python users, everything needs Python :). I don't use Python at all, but I need both versions installed.
Just pointing out that many people are already wearing that shoe.
Python is the Lingua Franca of getting shit done. Rust is seemingly becoming the Lingua Franca of [more] secure code.
s390x has no problem with switching to Rust/LLVM. Red Hat and IBM both employ engineers working on LLVM, specifically for s390x in IBM's case.
The reply to that by the Gentoo guy (who started the Github issue) was that package maintainers cannot follow every single mailing list of every dependency. That is debatable (e.g. maybe you should make an exception for security applications), but let’s take that as a given for now. In that case, what is Cryptography to do? Where should they announce such things in a way that orgs like Gentoo will see it? And also notice it and not just mentally gloss over it as some kind of “spam”? If the Gentoo guy didn’t see it half a year ago, would he have seen an announcement (or the reminders) if it was made five years ago?
> Given a version cryptography X.Y.Z,
> - X.Y is a decimal number that is incremented for potentially-backwards-incompatible releases.
> - - This increases like a standard decimal. In other words, 0.9 is the ninth release, and 1.0 is the tenth (not 0.10). The dividing decimal point can effectively be ignored.
> - Z is an integer that is incremented for backward-compatible releases.
The system has since changed, but it continues not to be semantic versioning. (It’s effectively the same, in fact, but protects against dependents who think it is semantic.)
By that scheme, it was already a “major” (signifying potential backwards-incompatibility) release.
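In practical terms, a dependent who took that scheme at face value would pin on the second component rather than the first; something like this hypothetical requirements.txt line:

    # cryptography's X.Y is the "major" under its own scheme, so cap it:
    cryptography>=3.3,<3.4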
Because the build toolchain is "visible" (that is, pip isn't just downloading a prebuilt binary every time), I think breaking changes that could cause CI systems or user installs to fail are part of the API contract. Think of what major distributions or software packages do when they want to deprecate support for certain platforms: such changes typically only land when the most significant component of the version is incremented.
Hypothetical: suppose the authors changed setup.py so that it only built on Red Hat Enterprise Linux(tm) version 6. Again, they could do that, and it wouldn't change the runtime API. And on all other distributions or installers, it would error.
Would that be a major semver change? Of course it would be. The API contract has to include everything from packaging to use.
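To make the hypothetical concrete, here's a sketch of what such a gate could look like in a setup.py (entirely hypothetical, not something cryptography ships; the names are made up):

    # setup.py (hypothetical): refuse to build anywhere but one blessed platform.
    import sys
    from setuptools import setup

    def check_platform():
        # Stand-in for "only builds on RHEL 6". From the installer's point of
        # view this failure is part of the package's contract, even though the
        # runtime API is untouched.
        try:
            release = open("/etc/redhat-release").read()
        except OSError:
            release = ""
        if "release 6" not in release:
            sys.exit("error: this package only builds on Red Hat Enterprise Linux 6")

    check_platform()
    setup(name="example-pkg", version="1.0", py_modules=["example"])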
I wouldn’t want to force any particular versioning scheme on any particular developer, but maybe the “SemVer façade” versioning scheme they switched to is the best compromise. It has defensive value, at least.
Then again, PEP 440 has nothing to say about the semantics of versioning; it only constrains the format.
I can’t think of a clean alternative other than coming full-circle back to the “developer advocacy” solution, with its clear problems. Someone smarter than I am probably has it in the palm of their hand, though.
It seems reasonable that, if you are the packager for critical packages, you follow critical dependencies?
If the problem is that the distro is supporting so many things that the folks working on it can't keep up - well, that's precisely the author's point: stop pretending that you can support HPPA and MIPS or whatever as well as you can support x86_64. But you don't get to tell a million people that they have to have a less secure Python because 3 people have a toy in a closet they want treated as a first class citizen.
And then the corresponding uptightness will be “FOO_PROJECT is aligned with the Intel monopoly”, and just as many people will be unhappy. You can see this in many recent threads about Apple not providing free access to M1 documentation for alternative OSes they’re under no obligation to support.
On 2013-03-21, urllib3 added an optional dependency to pyopenssl for SNI support on python2 - https://github.com/urllib3/urllib3/pull/156
On 2013-12-29, pyopenssl switched from opentls to cryptography - https://github.com/pyca/pyopenssl/commit/6037d073
On 2016-07-19, urllib3 started to depend on a new pyopenssl version that requires cryptography - https://github.com/urllib3/urllib3/commit/c5f393ae3
On 2016-11-15, requests started to depend on a new urllib3 version that now indirectly requires cryptography - https://github.com/psf/requests/commit/99fa7bec
On 2018-01-30, portage started to enable the +rsync-verify USE flag by default, which relies on the gemato python library maintained by mgorny himself, and gemato depended on requests. So 5-6 levels of indirection at this point? I lost count.
On 2020-01-01, python2 was sunset. A painful year to remember, and a painful migration to forget. And just when the year was about to end...
On 2020-12-22, cryptography started to integrate rust in the build process, and all hell broke loose - https://github.com/pyca/cryptography/commit/c84d6ee0
Ultimately, I think mgorny only has himself to blame here, by injecting his own library into the critical path of gentoo, without carefully taking care of its direct and indirect dependencies. (But of course it is also fair game to blame it on the 2to3 migration)
In comparison, a few months before this, the librsvg package went through a similar change where it started to depend on rust, and it was swift and painless without much drama - https://bugs.gentoo.org/739820 and https://wiki.gentoo.org/wiki/Project:GNOME/3.36-notes
#error "Starting with version xxx this package will need Rust to compile. Recompile with -DI_UNDERSTAND_THAT_SOON_RUST_WILL_BE_MANDATORY to acknowledge that you understand this warning."
#if !(defined(__AMD64__) || defined(__AARCH64__))
For whatever it's worth, if the C code in the cryptography package didn't already contain the above check, along with some hurdle requiring the user to compile with -DNOT_OFFICIALLY_SUPPORTED_TARGET, then the article's point is false: if you don't prevent users from compiling your security-relevant, reliant-on-exact-memory-semantics software on System/390, then you are implicitly supporting System/390.
And for that matter, the Rust code should probably contain a similar check: Even if someone ported Rust to System/390, the cryptography library shouldn't magically start working there, unless the developers actually test there.
This isn’t a major source of stress: you include migration instructions and even tooling to automate it. The thing that’s stressful is when your build fails with completely unexpected errors and no indication of what went wrong.
Loudly announcing breaking changes is disruptive to some extent, but not doing so either means more disruption or nothing can ever change at all.
The only downstreams this affects more are the ones who decide to suppress the warning for now and ... put it off until you release the change that actually required breakage.
It's hardly a notice posted in a sub-basement 6 months before instituting a breaking change. They communicated via numerous mechanisms.
Regardless. The common practice is a year of emitting warnings from the software that the end user will see. That's prominent and allows package maintainers enough time to work around the upcoming breakage. Six months' notice on a mailing list is simply not prominent enough, and it's half the standard year, which puts undue strain downstream.
If we are to move to a more abstracted and safer system, the ideas behind LLVM are just going to be a fact of life in years to come. The solution to this is to either fund greater LLVM integration (or similar that isn't on the scene yet) or accept the status-quo. I choose the former, but I sincerely hope the future direction leaves as few out in the cold as possible through the effort of smart people. But protecting hobbyists is a stretch goal in my mind compared to improving the security and privacy of the global interconnected world we're in.
E.g. I just added pnglib to an embedded assembly. Has a display among other things. Wanted to put compressed PNG images into (limited) flash, then decompress into (plentiful) RAM at boot.
Of course pnglib didn't build for my environment, never mind that it's a C library. There are 50+ compile switches in pnglib, and I was setting them in a pattern that wasn't part of the tested configurations. It didn't even compile. Once it compiled (after some source changes) it didn't run. More source changes and it would run until it hit some endianness issues. Fix those, and it would do (just) what I wanted, which was to decompress certain PNG image formats with limited features, once.
No problem. That was my goal, achieved. But at no time did I blame the maintainers for not anticipating my use case.
I would say this: maintainers, flag compile-time options that aren't tried both ways in your test environments, to give me some chance of estimating how hard my job is going to be.
For users on "weird architectures" to petition that a move the pyca authors are making is causing them inconvenience is perfectly reasonable in my eyes.
- Alpha: introduced 1992, discontinued in the early 2000s
- HP/PA: introduced 1986, discontinued in 2008
- Itanium: introduced 2001, end of life 2021
- s390: introduced 1990, discontinued in ~1998
- m68k: introduced 1979, still in use in some embedded systems but not developed at Motorola since 1994.
ARM was once not as popular as it is nowadays but it was never moribund and in my experience has always had decent tooling and compiler support.
Furthermore I'm sure that if tomorrow HP/PA makes a comeback for some reason, LLVM will add support for it. Out of the list I'd argue that the only two that may be worth supporting are Motorola 68k and maybe Itanium, but even then it's a stretch.
I personally maintain software that runs on old/niche DSPs and I like emulation, so I can definitely feel the pain of people who find new releases of software breaking on some of the niche archs they use (I tried running Rust on MIPS-I, for instance, but couldn't get it to work properly because of lack of support in LLVM). These architectures are dead or dying, not up-and-coming like, say, RISC-V, which has gained some momentum lately.
But while I sympathize with people who are concerned by this sort of breakage, it's simply not reasonable to expect these open source projects to maintain backward compatibility with CPUs that haven't been manufactured in decades. As TFA points out it's a huge maintenance burden: you need to regression test against these architectures you may know nothing about, you may not have an easy way to fix the bugs that arise, etc...
> open source groups should not be unconditionally supporting the ecosystem for a large corporation's hardware and/or platforms.
Preach. Intel is dropping Itanium, HP dropped HP/PA a long time ago. Why should volunteers be expected to provide support for free instead?
It's like users who complain that software drops support for Windows 7 when MS themselves don't support the OS anymore.
"That’s the original S/390, mind you, not the 64-bit “s390x” (also known as z/Architecture). Think about your own C projects for a minute: are you willing to bet that they perform correctly on a 31-bit architecture that even Linux doesn’t support anymore?"
I hesitate to imagine what it would take to get a hardware maker to contribute patches to LLVM.
On the other side, if you make a new cpu architecture, all of your users (people buying the chip) will gain from porting compilers.
No one is expected to do anything (unless they are being paid). It’s just logical for people to work this way.
All that being said, it's quite worthwhile to include these "dead" architectures in LLVM and Rust, if only for educational reasons. That need not imply the high level of support one would expect for, e.g. RISC-V.
1) If the architecture is in active production, there is someone somewhere trying to make money by selling it. If they are intent on only supporting proprietary compilers, they need to accept the consequences of that decision: users won't use their hardware because they can't use the software that they want to use. If they want the architecture to be widely used, they have a fiduciary obligation to ensure that they have reliable and well tested backends to major compilers.
2) If the user is using old architectures that are no longer in production or no longer supported, there isn't ever any reasonable expectation of continuing software support. You're stuck with old software, full stop.
In the case of your objection, AArch64 and ARM manufacturers have the obligation to develop openly available backends for their architectures. And they've taken that seriously, as should any newcomer architectures.
That's not a very reasonable POV. Many of these architectures are very well understood and very easily supported via emulation. There's no need to run them on actual hardware, especially if you aren't dealing with anything close to bare-metal quirks.
(My day job involves a lot of Tier 2 ARM work, and I don't personally run into any more bugs than Tier 1 platforms. YMMV.)
The ARM world is a blizzard of proprietary, undocumented implementations with limited support for the upstream kernels, often can boot only a vendor-specific distro that is quickly abandoned, and full of boards that blink in and out of existence at the drop of a hat. It absolutely is a weird architecture.
> For users on "weird architectures" to petition that a move the pyca authors are making is causing them inconvenience is perfectly reasonable in my eyes.
Yes, this is exactly the sense of entitlement that the author is talking about when he describes the destruction of people's interest in working on open source.
People want - to borrow from the BSD world - FreeBSD levels of support for specific chipsets and features, with OpenBSD levels of support for security, and NetBSD levels of portability. These are not compatible outcomes, and folks should stop pretending that they are.
Now come on with your "entitlement". It's not as though we're talking about some random people who made some little package for their own use and decided to make it available in case anyone else found it useful, and now the community demands from them are becoming too much and are something they never asked for. This is a group that have named themselves the Python Cryptographic Authority and have chosen the prominent pypi package name of just "cryptography". They couldn't have done any more to encourage the broader community to depend on it and make it a core part of their stack.
In comparison, I couldn't imagine the python core team (also largely unpaid) doing this with one of their stdlib modules and then dismissing those objecting as "entitled".
(FWIW I'm not particularly interested in taking a side in this issue, but I think your labelling as "entitled" is unhelpful)
It'll be interesting to see how the Rust community responds to this; are they so eager to absorb everything that they'll put effort into supporting niches to get more users, or will they take the opportunity to be "opinionated" and exclusive and shed the effort of catering to obsolescence?
No one does that. What people complain about is that code that used to be perfectly portable for years suddenly becomes locked to a very limited set of targets with the argument that memory safety is more important than anything else.
> It'll be interesting to see how the Rust community responds to this; are they so eager to absorb everything that they'll put effort into supporting niches to get more users, or will they take the opportunity to be "opinionated" and exclusive and shed the effort of catering to obsolescence?
Rust just needs to help get one of the several alternative Rust implementations based on gcc officially supported, similar to gccgo.
Then the portability issue will have been fixed once and for all and Rust will be chosen even for code on obscure targets such as Elbrus 2000, Sunway or Tricore.
But if you're asking for a project to be maintained, then it means you want maintainers to put in more work to keep new code working on an old platform. Constraints of old/niche platforms cause extra work for developers when adding new features or improving security.
As TFA points out, this is a mistaken understanding of the situation. What we have here is code that gave the illusion of being “perfectly portable” (while not actually being written to target, or tested against, the peculiarities of niche architectures like Itanium and PA-RISC that it happened to successfully compile on) being replaced with a new version that only builds on machines whose security properties its authors have actually given some consideration to.
That this inconveniences people is obvious. Why they imagine this is a net security loss for them is less obvious – the older C versions still exist, and any concerns that they’re missing out on new security updates are swamped out by the fact that the older versions may well never have behaved securely because nobody from the project was ever writing the code with PA-RISC’s memory and instruction ordering properties in mind to begin with.
That’s just not true. These targets may make different assumptions about various low level things such as memory ordering, byte-width, behavior on overflow, etc. While C might be okay to defer to the architecture on these questions, Rust is more strict.
I personally think you’ll just end up with a bunch of broken binaries.
Literally the whole point of the article is that this assertion is bullshit. It was never perfectly portable. It merely happened to compile, and maybe work, and maybe actually was secure.
> Rust just needs to help get one of the several alternative Rust implementations based on gcc officially supported, similar to gccgo.
Who is "Rust"? Why would they pour money and effort into this? Would the gcc community finally ignore rms' demands to make gcc as hostile as possible to implementing new front ends?
> Then the portability issue will have been fixed once and for all and Rust will be chosen even for code on obscure targets such as Elbrus 2000, Sunway or Tricore.
This does not solve the problem the author describes.
Yes, it might compile on an oddball architecture, but a lot of that would be the autotools build system and the C compiler fudging around important details that could easily leave you with something that kinda-sorta runs but is a wide open security flaw. Or that runs for a while and then breaks in ways nobody could have predicted because you aren't meant to try to run cryptographic software on something out of the museum of historical computing.
But it's not like Rust doesn't have wide platform support. For example, it's already possible to run Rust on RISC-V. And it's improving all the time.
> Wonder if rust (or llvm) ...
- current maintainers and those who are on supported architectures can use the Rust implementation. The current maintainers no longer want to maintain the C implementation and that is their prerogative as this article describes.
- new maintainers and those on unsupported architectures can continue to use the C implementation. Not everyone in the current user base (to include some distribution maintainers) is able to use the new Rust implementation at this time, but they still need the library.
It's not ideal, but it seems like the only practical way ahead that meets everyone's needs.
I would suggest that not only is that not the easiest answer, it's not an answer at all. Because...
> new maintainers and those on unsupported architectures can continue to use the C implementation
...there will be no new maintainers. This came up a few times three weeks ago, and so far a (small!) number of people have hopefully suggested that it would be nice if "someone" volunteered, none of them have actually followed through, and as far as I can tell, interest has only waned since.
> The Gentoo/s390 Project works to keep Gentoo the most up to date and fastest s390 distribution available.
That's a declaration of support, and if that's not what they mean, they could list some limitations on their wiki. 
A lot of system distributions declare some platforms as supported and others as best effort and still others as probably not working, but you're welcome to try. That's reasonable, of course, but it's nice if you're upfront about it.
"was", because after this incident, careful review revealed that it isn't necessary, and dependency got removed. So yes, probably no users now.
I love this typo in context of s390!
This about sums it up.
Virgil has an (almost) hermetic build. The compiler binaries for a stable version are checked into the repository. At any given revision, that stable compiler can compile the source code in the repo to produce a new compiler binary, for any of the supported platforms. That stable binary is therefore a cross-compiler. There are stable binaries for each of the supported stable platforms (x86-darwin, x86-linux, JVM), and there are more platforms that are in the works (x86-64-linux, wasm), but don't have stable binaries.
What do you need to run one of the stable binaries?
1. JVM: any Java 5 compliant JVM
2. x86-linux: a 32-bit Linux kernel
3. x86-darwin: a 32-bit Darwin kernel*
[*] sadly, no longer supported past Mavericks, thanks Apple
The (native) compiler binaries are statically-linked, so they don't need any runtime libraries, DLLs, .so, etc.
Also, nothing depends on having a compiler for any other language, or even much of a shell. There is test code in C, but no runtime system or other services. The entire system is self-hosted.
I think this is a decent solution, but it has limitations. For one, since stable executables absolutely need to be checked in, it's not good to rev stable too often, since it will bloat the git repo. Also, checking in binaries that are all cross-compilers for every platform grows like O(n^2). It would be better to check in just one binary per platform, that contains an interpreter capable of running the compiler from source to bootstrap itself. I guess I'll get to that at platform #4.
Are we not at a point where 64bit should be the expected target?
If you get too much bug spam, you need to set up filters and auto replies and volunteer helpers to help you find the reports you care about.
You don't owe anyone support.
- Cater to a tiny community of hobbyists willing to support niche architectures that have zero relevance to any mainstream computing, or
- Embrace a newer, stricter ecosystem with more guarantees and more clearly communicated support tiers, one that's also constantly improved upon by a big number of both dedicated volunteers who donate their time and effort, and paid professionals. The only tradeoff: it supports fewer architectures. For now.
Am I understanding the article correctly?
If so, I am definitely in favor of the latter, and I think many others are as well.
However, demanding that mainstream tools lag behind because of said exotic architectures is unrealistic. At some point we all want to progress and advance our craft, especially the one that pays our bills.
I'm not against multiculturalism. But we can't have it at the expense of everybody else outside your small bubble having their tooling hampered and/or lagging behind on features that a lot of us need for commercial work (and not only commercial, I'd argue).
Backwards compatibility is like everything else: it can't be praised as an absolute value, everything else be damned.
Rust is mainstream on HN, not in the real world. Just because "Uber" or whatever unicorns use it doesn't make it mainstream.
https://madnight.github.io/githut/#/pull_requests/2020/4 - Rust is barely reaching 1%.
Rust is getting more and more prevalent, and I'm saying that as a person who has worked at only one SV company, briefly, in the last 5 years.
I'm working outside the mainstream companies and I'm still seeing Rust gathering mindshare all the time wherever I go. And I'm not even hired for Rust positions.
Anecdotal for sure, I'll agree, but your observation is no less anecdotal than mine.
Rust brings very real advantages to the table and seeing people rebelling against it only on principle (and not on merit) is getting increasingly baffling. Feels like an emotional rebellion versus resistance based on facts and merit.
> Anecdotal for sure, I'll agree, but your observation is no less anecdotal than mine.
I haven't seen Rust in any "top X language" news. Prove me wrong.
Even if I found a study that corroborated my observation I'd still not trust it. I don't make a habit out of supporting dubious studies only because they support my point of view.
From what I've seen for 19 years of career, most working programmers refuse to participate in such studies.
Hence I don't trust them either way. They work with a non-representative sample of the population. Not a big enough sample for the study to be valid.
I prefer to look around -- this has always given me much more objective info throughout my entire career.
I get your skepticism but you are not arguing in good faith. I already asserted that to me those language popularity contests are dubious and non-representative.
If you disagree with that premise then we have zero common ground and can't discuss the topic. ¯\_(ツ)_/¯
Of course, you don't have any argument, so you're posturing and running away. Very immature.
Microarchitecture, register layout, and ABI also constitute differences which have real-world uses, not to mention sheer competition to avoid architecture rot. Only targeting Intel & ARM opens you up to problems, cf. the upcoming nVidia hell ARM is about to experience.
Just because Rust-preaching (without practicing it, of course), Starbucks-sipping average HN readers don't know about it doesn't make it nonexistent or necessarily wrong-think.
They do. I started coding on a 6502-based machine and used machine code to find prime numbers, some 27 years ago. I've used 1-2 other non-mainstream CPUs (whose names I don't even know) before diving neck-deep into the mainstream. It was fun, absolutely. It has potential, absolutely. Was it realized? Nope.
However, I can't resist but asking: if those things do have their uses then why didn't the hobbyists support them through patches to GCC / clang and LLVM?
Don't get me wrong. If you tell me we are stuck in a local maximum in CPU architectures, I'll immediately agree with you! But what would you have the entire industry do, exactly? Business pays our salaries and they need results in reasonable timeframes. Can you tell the guy who is paying you: "I need 5 years to integrate this old CPU arch with LLVM so we can have this feature you wanted last month", with a straight face?
> Just because Rust-preaching (without practicing it, of course) Starbucks-spipping average HN readers don't know about it doesn't make it inexistent or necessarily wrong-think.
That is just being obnoxious and not arguing in good faith. Example: I do use Rust, although not 100% of my work time.
You should try the Rust language and tooling -- and I mean work actively with it for a year -- and then you could have an informed opinion. It would make for a more interesting discussion.
Do I like how verbose Rust can be? No, it's irritating.
Do I like how cryptic it can look? No, and it wastes time mentally parsing it (but it does get better with time so 50/50 here).
Does it get stuff done mega-quickly and safer than C (and most C++)? Yes.
Does it have amazing tooling? Yes.
Does it get developed more and more and serve many needs? Yes.
Does it reduce security incidents? I'd argue yes although I have no direct experience. Memory safety is definitely one of the largest elephants in the room when security is involved.
You have a very wrong idea about the average Rust user IMO. I don't like parts of the whole thing but it has helped me a lot several times already -- and it has given me peace of mind. And I've witnessed people migrating legacy systems to it and showing graphs in meetings demonstrating that alarms and error analytics percentages plunged to 0.2% - 2% (when they were always 7% - 15% before).
Just resisting something because it starts going mainstream is a teenage-rebellion level of attitude and it's not productive. Do use Rust yourself a bit. Then you can say "I dislike Rust because of $REASON" and then we can have a much more interesting discussion.
They didn't decide to create a whole new language and make everything dependent on it.
At some point, when you reach a critical mass, you have to spend more on seemingly "irrelevant" tasks, like supporting other architectures. Don't shift the problem away by ridiculing it; own your shortcomings.
However, let's not forget one of the main points of original article: nobody promised those people that their dependency's dependencies will never change. The crypto authors made a decision to go with Rust. If dependents want to continue using it, they have to adapt or stop using it.
As I've said above: backwards compatibility is an admirable goal but it doesn't override everything.
You'll never get a job at Microsoft, or in any systems job where backward compatibility is paramount for millions, if not billions, of users (say, the Linux kernel). Just going the Apple "fuck you" way is arrogant at best, delusional at worst, especially when you're an irrelevant language.
I'm not advocating for either extremity, what about you?
[and yes, this is gonna be unpopular in a post-modernist era where everything get constantly redefined and where there is no such thing as "meaning".]
I am not interested in discussing extremes as mentioned in two separate sub-threads now but you do sound like you need a break. Good luck, man.