> It affected her social life too and Sophie would leave bars and restaurants early because of the "overwhelming noise".
This can also be attributed to enshittification. Restaurateurs and bar owners found that noisier places increase turnover and cause people to drink more.
Totally this. Acoustics in bars and restaurants have gone completely to shit. Even if they don't turn the music up too loud, it can be extremely hard to have a conversation without shouting, even when the place isn't crowded. I understand the need to not have people linger at a popular spot, but there has to be a better way than making the place actively unpleasant to be in.
It's also a kind of prisoner's dilemma or positive feedback loop, even without music: people in crowded bars talk loudly (especially with alcohol) which requires people nearby to talk louder to be heard, then it gets even louder and so on.
I'd be curious to see a "high-level structured assembly language" that gives the programmer actual control over things like register allocation, in a more systematic way than C's attempt. You might say "optimizers will do a better job", and you're right, but what I want to see is a language that isn't designed to be fed into an optimizing backend at all, but instead turned into machine code via simple, local transformations that a programmer can reliably predict. In other words, as high-level systems languages like C lean more and more heavily on optimizing backends and move away from being "portable assembly", I think that opens up a conceptual space somewhere below C yet still above assembly.
Jasmin is something like this. It is essentially a high-level assembler: it handles register allocation (but not spills) for you and has some basic control flow primitives that map 1-to-1 to assembly instructions. There is also an optional formal verification component to prove that some function is equivalent to its reference implementation, is side-channel free, etc.
You may think that’s too close to the hardware, but if you want “actual control over things like register allocation”, you will be writing your code for a specific register set size, specific types of (vector) registers, etc., so you’ll soon be targeting a specific CPU.
Also, was C ever “portable assembly”? There may have been a brief period when that was true, but it started as a language where programmers could fairly reliably predict what assembly the compiler would generate, yet that wasn’t portable; and once it had become portable, users wanted optimizations, so predicting what code it would generate became a lot harder.
> You might say "optimizers will do a better job", and you're right
That's probably why nothing good has been created to fill that space yet (that I know of). Any serious project is just going to opt for compiler optimizations.
The problem is that there are a handful of domains where optimizations need to be actively fought against, like low-level cryptography primitives. You can't write these in C reliably, so you need to drop into assembly to deliberately inhibit the optimizer, but that doesn't mean assembly is the ideal choice, only that it's the only choice.
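To make that concrete, here's a rough sketch (generic, not taken from any particular library) of the classic case: zeroing a key with a plain loop is a dead store that the optimizer is allowed to delete, so code like this reaches for volatile writes and a compiler fence instead.

```rust
// Generic sketch of "fighting the optimizer" when wiping secrets.
use core::ptr;
use core::sync::atomic::{compiler_fence, Ordering};

fn wipe(secret: &mut [u8]) {
    for byte in secret.iter_mut() {
        // A plain `*byte = 0` here is a dead store if `secret` is never read
        // again, and the optimizer may remove it. A volatile write is treated
        // as observable, so it has to stay.
        unsafe { ptr::write_volatile(byte, 0) };
    }
    // Keep later code from being reordered ahead of the wipe.
    compiler_fence(Ordering::SeqCst);
}

fn main() {
    let mut key = *b"0123456789abcdef";
    // ... derive something from `key` ...
    wipe(&mut key);
    // The zeroing survives even in optimized builds.
}
```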
It's nice to wish for the optimizer to do the [almost] perfect job, but sometimes that never arrives. Consider for example the AMD GPU ISA (GCN) code generated by LLVM: it's been so far from optimal for so long that one can lose hope it'll ever happen, and wish for a simple solution that works in the meantime.
Is the amdgcn GCC backend any better? (Or maybe final register allocation is not performed before llvm-mc is called—GCC reuses llvm-mc due to lack of binutils support.)
I would call Rust comparable to C when it comes to giving you the ability to have control over memory management and machine code. Which is to say, if you have a problem with how C does it, then I doubt that Rust will make you any happier in this regard. What pain points in C is this referring to specifically?
Note that the "runtime parts" in question here refers to initializing OS-level threads, the same as in C. It's not referring to any sort of userspace green thread runtime (which you would need to bring in yourself).
no_std disables the parts of the standard lib that rely on having an OS (e.g. threads) or having an allocator. (You can get the allocator parts back in a no_std program if you define your own allocator.)
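For illustration, a minimal sketch of what that can look like (the BumpAlloc/collect_squares names are made up, and a real allocator needs far more care): a no_std crate pulls in the alloc crate and registers its own #[global_allocator], after which Vec, Box, etc. work again.

```rust
#![no_std]
extern crate alloc; // Box, Vec, String, etc. live here, not in std

use alloc::vec::Vec;
use core::alloc::{GlobalAlloc, Layout};
use core::sync::atomic::{AtomicUsize, Ordering};

const HEAP_SIZE: usize = 64 * 1024;
static mut HEAP: [u8; HEAP_SIZE] = [0; HEAP_SIZE];
static NEXT: AtomicUsize = AtomicUsize::new(0);

// A toy bump allocator: hands out slices of a static buffer and never frees.
struct BumpAlloc;

unsafe impl GlobalAlloc for BumpAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let mut offset = 0;
        let claimed = NEXT.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |next| {
            // Round up to the requested alignment, then reserve `size` bytes.
            let aligned = (next + layout.align() - 1) & !(layout.align() - 1);
            let end = aligned.checked_add(layout.size())?;
            if end > HEAP_SIZE {
                return None;
            }
            offset = aligned;
            Some(end)
        });
        match claimed {
            Ok(_) => core::ptr::addr_of_mut!(HEAP).cast::<u8>().add(offset),
            Err(_) => core::ptr::null_mut(), // out of heap
        }
    }

    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {
        // Bump allocators don't free; fine for a demo, not for real use.
    }
}

#[global_allocator]
static ALLOCATOR: BumpAlloc = BumpAlloc;

// With the allocator in place, alloc-based collections work without std.
pub fn collect_squares(n: u32) -> Vec<u32> {
    (0..n).map(|i| i * i).collect()
}
```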
"Panics" aren't something you'd disable, a panic is just a mechanism for crashing the program in a controlled manner. Rust lets you decide whether or not you want panics to abort the program immediately or whether to unwind the program and run destructors, and you can configure this in any Rust codebase via a config key/compiler flag. (If you're using no_std then you can technically also define panics as being an infinite loop, rather than crashing.)
Someday in the unimaginable future, Microsoft will be a memory, Word will be lostech available only via running a cracked binary inside fifteen nested VMs, and a working copy of the LibreOffice source will still be kicking around on an FTP server somewhere and developed by users communicating over IRC.
Not even close to 100%. The reason it feels like every major C codebase in industry is pinned to some ancient compiler version is that upgrading to a new toolchain is fraught. The fact that most Rust users are successfully tracking relatively recent versions of the toolchain is a testament to how stable Rust actually is in practice (an upgrade might take you a few minutes per million lines of code).
Try following your favourite distro's bug tracker during a GCC upgrade. Practically every update breaks some packages, sometimes fewer, sometimes more (especially when GCC changes its default flags).
The LKML quote is alleging that the upstream language developers (as opposed to random users on Reddit) are opposed to the idea of multiple implementations, which is plainly false, as evidenced by the link to the official blog post celebrating gccrs. Ted Ts'o is speaking from ignorance here.
I think it’s more pointed towards people like me who do think that gccrs is harmful (I’m not a Rust compiler/language dev - just a random user of the language). I think multiple compiler backends are fine (e.g. I’m a huge fan of rustc_codegen_gcc), but having multiple frontends, I think, can only hurt the ecosystem, looking at how C/C++ have played out vs. other languages like Swift, TypeScript, etc. that have retained a single frontend. In the face of rustc_codegen_gcc, I simply see no substantial value add of gccrs to the Rust ecosystem, but I see a huge amount of risk in the long term.
> opposed to the idea of multiple implementations, which is plainly false, as evidenced by the link to the official blog post celebrating gccrs. Ted Ts'o is speaking from ignorance here.
Why use such strong words? Yes, there's clearly a misunderstanding here, but why do we need to use equally negative words towards them? Isn't it more interesting to discuss why they have this impression? Maybe there's something in the communication from the upstream language developers which hasn't been clear enough? The blog post is a few months old, so if that's the only signal, it's maybe not so strange that they've missed it?
Or maybe they are just actively lying because they have their own agenda. But I don't see how this kind of communication, assuming the worst of the other party, brings us any closer.
I'm not going to mince words here. Ted Ts'o should know better than to make these sorts of claims, and regardless of where he got the impression from, his confident assertion is trivially refutable. It's not the job of the Rust project to police whatever incorrect source he's been reading, and they have demonstrably been supportive of the idea of multiple implementations. This wouldn't even be the first alternative compiler! Several Rust compiler contributors have their own compilers that they work on.
The kernel community should demand better from someone in such a prominent position.
It's not insane, the author has been bitten by their poor experiences with dependencies in other languages and is misapplying that experience to Rust out of hand.
Listen, I'd be as happy as anyone to have random numbers in the Rust standard library. Compared to the Rust developers, I'm a believer in stdlib maximalism, downsides be damned. But all this recent hand-wringing about dependencies is a tiresome moral panic.
"moral panic" is a bit of a reach don't you think? Increasing dependencies is a real problem with real downsides. There are plenty of characters expressing unreasonable things, but that doesn't mean everyone expressing concern about dependencies is indulging in a moral panic. There is nuance!
If there weren't real costs to dependencies then I personally never would have published regex-lite.
The OP isn't addressing the real costs of dependencies, the moral panic in question is the automatic assertion that more dependencies is worse than fewer dependencies, which implies that e.g. all the work you have done to cleanly separate regex out into reusable regex-syntax and regex-automata crates has done a disservice to your users. There are real arguments to be made about wrangling one's trusted computing base, but this isn't making that argument, and by throwing the baby out with the bathwater it sets us back as a profession.
The older I get the less stock I put in merely pointing out flaws without offering solutions.
You might say "I don't need to be able to propose a solution in order to point out problems", and sure, but that's missing the point. Because by pointing out a problem, you are still implicitly asserting that some solution exists. And the counter to that is: no, no solution exists, and if you have no evidence in favor of your assertion that a solution exists, then I am allowed to counter with exactly as much evidence asserting that no solution exists.
Propose a solution if you want complaints to be taken seriously. More people pointing out the problems at this point contributes nothing; we all know everything is shit, what are you proposing we do about it?
Defining or clarifying the specifics of the problem is a critical step in solving (or not solving) it. We don't have a good understanding of all of the factors and how they contribute to this problem, so having more people take a stab at understanding the problem and sharing that is a net positive. You may think that "we all know it already", but we don't. I discover new and meaningful ways that systems and people are fucking up software just about every year, and have been for 25-30 years, so I take strong issue with your "we all know" when clearly we don't, and in fact very much still disagree on the details of that problem, the very things we need to understand in order to best solve it.
My rather broad solution has always been: let engineers own a part of a stack. Let an engineer own the UI for an app, own the database front-end. Let an engineer own the caching mechanism, let an engineer own the framework.
You give an engineer ownership and let them learn from their own mistakes, rise to the occasion when the stakes are high. This presumes they will have the last word on changes to that sandbox that they own. If they want to rewrite it — that's their call. In the end they'll create a codebase they're happy to maintain and we will all win.