Hm, I agree completely, even as someone who appreciates SLIME and Emacs. IntelliJ and even VS Code are excellent, even if heavy. Just use them on a beefy laptop and they won't feel slow and bloated at all. If you find them distracting, it's because you don't know which settings to use to make them just right for your taste. Both can behave like Notepad if you want.
There's not really a black community either, it's a demographic. There are many communities of black people, but we really need to stop equating demographics with communities (not just this case).
Yeah, IMO extended metadata attributes are fine for caching data that can be recovered via other means, but they generally violate the principle of least surprise. For them to be successful, a standardized transparent container format or something would be necessary, but at that point the FS abstraction is leaking.
A dataset can persist across multiple file systems. A UUID is a way to know that one dataset is equivalent (identical) to another.
Now you can cache, store-and-forward, archive and retrieve and know what you have.
UUIDs aren't very good for this use case; a sufficiently large CRC or cryptographic hash is better, because it's intrinsically tied to the data's value while a UUID is not.
UUIDs are necessary. It's possible for file contents to be identical (e.g. short configuration files may coincide purely by chance across time and space). Would the hash then be unique?
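A tiny Python sketch of the distinction being debated (the byte strings here are illustrative): a hash is a pure function of the content, so two independently created but identical files collide, while a random UUID gives each copy its own identity.

```python
import hashlib
import uuid

# Two independently created files that happen to hold the same bytes:
a = b"timeout = 30\n"
b = b"timeout = 30\n"

# A cryptographic hash identifies the *value*, not the dataset, so the
# two hashes are the same:
assert hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest()

# A random UUID is minted per dataset, independent of content, so each
# copy can carry a distinct identity:
id_a, id_b = uuid.uuid4(), uuid.uuid4()
assert id_a != id_b
```

So a hash answers "is this the same bytes?" while a UUID answers "is this the same dataset?", and the two questions can have different answers.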
Consider: you want a certain data object, with a given UUID. You can find it anywhere, even from a malicious server. Then look up the hash in a trusted database, verify it. Impossible then for the MITM to fool you. No more virus scanning executables.
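The verify-after-fetch idea above can be sketched in a few lines of Python. The "trusted database" is just a dict here, and `TRUSTED`, `fetch_and_verify`, and the key `"dataset-0001"` are all hypothetical names for illustration:

```python
import hashlib

# Hypothetical trusted database mapping dataset ID -> expected SHA-256.
TRUSTED = {
    "dataset-0001": hashlib.sha256(b"hello dataset\n").hexdigest(),
}

def fetch_and_verify(dataset_id: str, blob: bytes) -> bytes:
    """Accept `blob` (possibly fetched from an untrusted or malicious
    server) only if its hash matches the trusted record for this ID."""
    expected = TRUSTED[dataset_id]
    actual = hashlib.sha256(blob).hexdigest()
    if actual != expected:
        raise ValueError("hash mismatch: object may have been tampered with")
    return blob

# The genuine bytes pass verification regardless of where they came from;
# a tampered blob raises, so a MITM can't substitute its own payload.
fetch_and_verify("dataset-0001", b"hello dataset\n")
```

The design point is that the transport never has to be trusted, only the small database of hashes.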
We once again discover that Plan9 and UNIX were right. The most powerful, lowest common denominator interface is text files exposed over a file system. Now to get back to making 9p2026.
The article gets some fundamentals completely wrong, though: file systems are full graphs, not strict trees, and they are definitely not acyclic.
Plan9 doesn't really have a single killer feature beyond 9P and the universal consistency and simplicity of its APIs. It has a very clean syscall interface and takes "everything is a file" to its logical conclusion and does it well (IMO). Pretty much everything is a file(system) and it's all accessed via the 9P protocol.
You could sorta bolt these features on with FUSE, but to see real benefits you'd want something closer to Inferno, which is like an OS/application runtime that runs on top of another OS host.
In my mind, the security model is the closest thing to a killer feature it has. Because everything is a file(system), and the fork/rfork and bind syscalls let you precisely control what resources/files/services/etc. a child process has access to via easily understandable shell commands (or libc functions if you want), you don't need special APIs for namespacing (i.e. containers) and access controls. It's very clean. When a parent process forks or spawns a child process, it can choose whether that process inherits the namespace or gets a clean slate that it can then bind filesystems onto, controlling precisely what the child has access to.
The deeper magic is that the kernel interface is completely rewritten compared to *nix or even Linux. Most programs in plan9 are expected to issue requests through userspace-provided services, not bespoke syscalls.
I don't see how an API couldn't have full parity with a web interface; the API is how you actually trigger a state transition in the vast majority of cases.
I don't think most Linux package managers would fall under the scope of this law either as the vast majority require administrative privileges on the computer to run. The law could be made better by adding an administrator definition to distinguish between privileged and unprivileged accounts, but that might be asking too much of those who wrote the law.
There's clear liability put on the owner of the device, which cannot be a child, so it falls to the child's parent. The "Account Holder" definition and the subsequent penalties make that pretty clear. The parent is ultimately responsible for locking down the child's account and inputting the correct information.
What happens when the child downloads a Linux ISO and then live-boots or overwrites the install? I have a hard time understanding how this law does not purposefully set the foundation from which they can push for actual ID verification.
My contention is that I vastly prefer this to what is demonstrably already happening, which is every 3rd party webapp implementing or paying yet another 3rd party to collect my ID and face scan for the privilege of using their service.
Makes sense; according to Geekbench, the 9955HX has about a 25% lead in multi-core over the base M4, and about a 5% lead in multi-core over the base M5. And more cores, so better for parallel Rust compilation.
I'm comparing it to my M2 laptop, but in practice the 9955HX is substantially faster than even the M4 Pro I have in my Mac Mini: about 30% or so in wall-clock time for Rust compilation.
Yep, Pro only has 12 cores, and a third of those are efficiency cores. Even the Max loses some of its performance to efficiency cores. This is why I was so upset to see Intel replace a bunch of performance cores with efficiency cores. (Remember how Intel used to offer enthusiast chips with up to 18 full fucking cores? Now they think 8 full cores + 16 small useless cores is the answer? I am appalled. Even aside from HEDT they used to offer up to 10 full cores.) More, and more performant, hardware threads is almost always the path to faster Rust compilation. Lose a few of those to efficiency cores and even Apple can fall behind.
This is an experience that is 15 years out of date.