Reaching parity with Linux's immense suite of device drivers is perhaps the single biggest hurdle.
That's one of the biggest reasons why alternative kernels either remain fringe or fail.
Initiatives in the spirit of NetBSD's rump kernels, but for Linux, may provide a bridge to Linux's drivers, though such bridges tend to be brittle. There's already LKL, the Linux Kernel Library project, with similar aims, but it hasn't gained much traction.
That's quite unlikely. What's more likely is the emergence of a better approach for incremental inclusion of Rust in the kernel. This policy is a decent stake in the ground.
An incremental approach is not going to work for someone who doesn't want to install any Rust tooling to compile their kernel, or to have to understand Rust in order to work in the kernel.
Possibly. What's more likely is that folks would want to ask why major corporations are willing to invest the dollars to commit to Rust and its take on memory safety, at scale.
Once that's internalised, those someones may either align or be the outliers that don't matter in the greater scheme.
Linux kernel work is being done by countless embedded outfits all around the globe. Rust is completely misaligned with the embedded skill set and attitude; there will be a clash and backlash. Embedded people don't want to be told 'you can't shove that value obtained here into there' because of some cockamamie language rule.
I assume you allude to Rust's borrow checker. If you are, your concern is misplaced, which is unfortunately a common occurrence when it comes to this topic. Note that most of the interaction with the borrow checker's rules is tackled by the interfaces between Rust and C that are being incrementally added to the kernel. By the time the 'end users' (the embedded Linux device driver authors you allude to) are involved, all they are doing is using safe Rust wrappers, for example for loads and stores to MMIO, where there is no fundamental interaction with the borrow checker (because those interactions happen at another level in the call graph).
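To make that concrete, here is a minimal sketch of what such a wrapper could look like. This is a hypothetical illustration, not the actual Rust-for-Linux API: the type and method names are invented, and a real kernel abstraction would be built on the kernel's own I/O accessors. The point is only that the unsafe volatile accesses sit behind a small interface once, and a driver author calling read32/write32 never wrestles with raw pointers or the borrow checker.

    // Hypothetical sketch of a bounds-checked MMIO wrapper (not the real
    // Rust-for-Linux API). The unsafe volatile accesses live here, once,
    // behind the interface.
    pub struct MmioRegion {
        base: *mut u8, // assumed valid, exclusively owned MMIO mapping
        len: usize,
    }

    impl MmioRegion {
        /// Safety: `base` must point to a live MMIO mapping of `len` bytes,
        /// owned exclusively by this wrapper for its lifetime.
        pub unsafe fn new(base: *mut u8, len: usize) -> Self {
            Self { base, len }
        }

        pub fn read32(&self, offset: usize) -> u32 {
            assert!(offset % 4 == 0 && offset + 4 <= self.len);
            // Safety: offset is aligned and in bounds; `new`'s contract
            // guarantees the mapping is valid.
            unsafe { core::ptr::read_volatile(self.base.add(offset) as *const u32) }
        }

        pub fn write32(&self, offset: usize, value: u32) {
            assert!(offset % 4 == 0 && offset + 4 <= self.len);
            // Safety: same argument as read32.
            unsafe { core::ptr::write_volatile(self.base.add(offset) as *mut u32, value) }
        }
    }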
That said: to appreciate the value Rust provides, some experience-driven learning will be needed, but the efforts underway should help.
Arm has been enabling server/data-center class SoCs for a while now (eg Amazon Graviton et al). This is only going to pick up further (eg Apple Private Cloud Compute).
Also, there's nothing fundamentally stopping chiplet adoption in traditional embedded domains; in fact, it's probably quite likely.
They have been "enabling" them but have not designed the best of them(°), and I'm not sure how serious they are about the top end, because their results are rather half-assed compared to Apple, AMD and Intel. As it stands, their bread and butter and main focus are still mobile and embedded chips.
(°) The best of them also seem to use barely any ARM standards except for the ISA itself.
Arm's definitely trying to push on the laptop, tablet, desktop, and server markets. The fastest cluster on the Top500 was Arm-based for several years, and most of the big clouds either have home-grown Arm servers (like Graviton) or will soon.
Arm doesn't only do the ISA. It essentially wrote the standards for the AMBA/AXI/ACE/CHI interconnect space, and standardizing chip-to-chip interconnects is very much in Arm's interest. It is a double-edged sword though, since chiplets will likely enable fine-grained modularity, allowing IP from other vendors to be stitched around Arm (eg a RISC-V IOMMU instead of Arm's SMMUv3, etc).
Surely that's an incredibly broad categorisation?
Learning Rust, like any other language, is a strategic investment that pays off with experience. Companies that are willing to invest benefit accordingly.
Evidently, several companies that care about memory safety and programmer productivity have invested in Rust and benefited from it.
Finally: this is subjective of course, but the borrow checker isn't something that necessarily needs fighting 'for a month or two'. There are so many excellent resources available now that learning how to deal with it is quite tractable.
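As a small illustration of what 'dealing with it' usually looks like in practice (this is just the textbook case, nothing kernel-specific): the most common complaint is holding a reference into a collection across a mutation, and the usual resolution is a small restructure rather than a fight.

    // The classic rejected pattern, shown as comments:
    //
    //     let first = &values[0];   // shared borrow of `values` starts here
    //     values.push(42);          // error[E0502]: cannot also borrow `*values` mutably
    //     println!("{first}");      // shared borrow is still live here
    //
    // The usual fix is to copy the value out so no borrow outlives the mutation:
    fn record_first(values: &mut Vec<i32>) {
        let first = values[0]; // i32 is Copy, so no borrow is held afterwards
        values.push(42);
        println!("first element was {first}");
    }

    fn main() {
        record_first(&mut vec![7, 8, 9]);
    }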
I think words matter, and it's important that we all agree on their meaning if we want to convey ideas. My approach may be pedantic, but it also saves me from wasting my time reading an article that may be full of idiosyncrasies. It's okay if other people take a stab at parsing the article and/or derive value from it, though.
Although not a hard and fast rule, it is commonplace to use the term cluster for a set of CPUs that share a common cache at some level (typically L2). This is quite prevalent in Arm designs, for example big.LITTLE compositions, where the big CPU cluster shares one L2 cache and the LITTLE cluster shares another, and system software sets up the cacheability and shareability domains (Arm parlance for the silos within which cache maintenance operations propagate) to reflect that topology.
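You can observe this on a running Linux system, since the kernel exposes the cache-sharing topology in sysfs. A quick sketch (standard Rust, Linux-only; the assumption that cache index2 is the shared L2 holds on many Arm systems but isn't guaranteed, so a robust version would check each index's `level` file):

    // Reads which CPUs share a given cache with each core, via sysfs cacheinfo.
    // CPUs that report the same shared_cpu_list for the L2 index sit in the
    // same cluster in the sense described above.
    use std::fs;

    fn main() {
        for cpu in 0..8 {
            // Assumption: index2 corresponds to the L2 cache on this system.
            let path = format!(
                "/sys/devices/system/cpu/cpu{cpu}/cache/index2/shared_cpu_list"
            );
            match fs::read_to_string(&path) {
                Ok(list) => println!("cpu{cpu}: shares this cache with CPUs {}", list.trim()),
                Err(_) => println!("cpu{cpu}: no such cache index (offline CPU or different topology)"),
            }
        }
    }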