Hacker News | packetlost's comments

> Modern IDEs don’t improve the feedback loop much unfortunately, more often it’s quite the opposite. They are slow, bloated and distracting.

This is an experience that is 15 years out of date.


Hm, I agree completely, even as someone who appreciates SLIME and Emacs. IntelliJ and even VS Code are excellent, if heavy. Use them on a beefy laptop and they won't feel slow and bloated at all. If you find them distracting, it's because you don't know which settings will make them just right for your taste. Both can behave like Notepad if you want.

Even Neovim with an LSP can be a very good experience, if one doesn't mind configuring it.

Or Helix, if you want a TUI with modal editing, but you do mind configuring it.

It's fine to not like them, but calling them slow is just not really true for "modern" IDEs, that's a big part of what makes them modern.

There's not really a black community either; it's a demographic. There are many communities of black people, but we really need to stop equating demographics with communities (and not just in this case).

Yeah, IMO extended metadata attributes are fine for caching data that can be recovered via other means, but they generally violate the principle of least surprise. For them to be successful, a standardized transparent container format or something similar would be necessary, but at that point the FS abstraction is leaking.
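A minimal sketch of that caching pattern on Linux (the attribute name here is my own invention; user.* xattrs are rejected on filesystems that don't support them, which is exactly why only recomputable data belongs in them):

```python
import hashlib
import os
import tempfile

# Cache a recomputable value (here, the file's own SHA-256) in a
# user.* extended attribute. If a copy or an unsupporting filesystem
# drops the xattr, nothing is lost: the value can be recomputed.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

digest = hashlib.sha256(b"hello").hexdigest().encode()
if hasattr(os, "setxattr"):  # os.setxattr is a Linux-only API
    try:
        os.setxattr(path, "user.cached_sha256", digest)
        assert os.getxattr(path, "user.cached_sha256") == digest
    except OSError:
        pass  # filesystem without user xattr support: recompute instead
os.unlink(path)
```

The try/except is the point: any consumer must be prepared for the attribute to silently not be there.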

What about using the file name itself as the metadata storage?

I have used this approach with exiftool to add custom tags for “album”.

https://stackoverflow.com/a/68284922

Here is my source file for it. It was so long ago that I don't recall all the details, but you can retrieve this information using exif commands.

https://github.com/jmathai/elodie/blob/master/configs/ExifTo...


That works, but it has strict length limits and is visibly ugly. Fine for very limited cases, I guess.
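For what it's worth, a toy sketch of the filename approach discussed above (the `__` and `key=value` separators are my own convention, not any standard): pack tags into the name's stem and parse them back out. File-name limits (255 bytes on most filesystems) cap how much this can hold.

```python
from pathlib import Path

def encode(stem, tags, ext):
    # "photo" + {"album": "trip"} + ".jpg" -> "photo__album=trip.jpg"
    pairs = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    return f"{stem}__{pairs}{ext}"

def decode(name):
    # Recover the tag dict from a name produced by encode().
    stem = Path(name).stem
    if "__" not in stem:
        return {}
    _, _, pairs = stem.partition("__")
    return dict(p.split("=", 1) for p in pairs.split(",") if "=" in p)

name = encode("photo", {"album": "trip", "rating": "5"}, ".jpg")
assert decode(name) == {"album": "trip", "rating": "5"}
assert decode("plain.jpg") == {}
```

Values containing the separator characters would need escaping, which is another way the approach gets ugly fast.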

Files in most file systems are uniquely identified by inode and can be referenced by multiple directory entries (hard links). Why does everyone forget links?
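A quick demonstration on any POSIX system: two hard links are two names for the same inode, so anything tied to the inode (contents, permissions, xattrs) is shared by both names.

```python
import os
import tempfile

# Create a file and a second hard link to it in a throwaway directory.
d = tempfile.mkdtemp()
a = os.path.join(d, "a")
b = os.path.join(d, "b")
with open(a, "w") as f:
    f.write("data")
os.link(a, b)  # second directory entry for the same file

# Both names resolve to the same inode, and the link count reflects it.
assert os.stat(a).st_ino == os.stat(b).st_ino
assert os.stat(a).st_nlink == 2
```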

A dataset can persist across multiple file systems. A UUID is a way to know that one dataset is equivalent (identical) to another. Now you can cache, store-and-forward, archive and retrieve and know what you have.

UUIDs aren't very good for this use case, a sufficiently large CRC or cryptographic hash is better because it's intrinsically tied to the data's value while UUIDs are not

UUIDs are necessary. It's possible for file contents to be identical (e.g. short configuration files may coincide across time and space). Would the hash then be unique?

Consider: you want a certain data object, with a given UUID. You can find it anywhere, even from a malicious server. Then look up the hash in a trusted database, verify it. Impossible then for the MITM to fool you. No more virus scanning executables.
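A sketch of that lookup-then-verify flow (the `trusted_db` mapping stands in for the hypothetical trusted database): the UUID names the dataset, the hash binds the name to the actual bytes, so an untrusted mirror cannot substitute a tampered copy.

```python
import hashlib
import uuid

def verify(data, expected_sha256):
    # True iff the fetched bytes match the trusted hash.
    return hashlib.sha256(data).hexdigest() == expected_sha256

dataset_id = uuid.uuid4()  # stand-in for a published dataset UUID
payload = b"important dataset contents"

# The trusted database maps UUID -> known-good hash.
trusted_db = {dataset_id: hashlib.sha256(payload).hexdigest()}

# Later: fetch `payload` by UUID from anywhere, even a malicious server,
# then check it against the trusted record before using it.
assert verify(payload, trusted_db[dataset_id])
assert not verify(b"tampered bytes", trusted_db[dataset_id])
```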


We once again discover that Plan9 and UNIX were right. The most powerful, lowest common denominator interface is text files exposed over a file system. Now to get back to making 9p2026.

The article gets some fundamentals completely wrong, though: file systems are general graphs, not strict trees, and with hard links and symlinks they are definitely not acyclic.


So what are Plan 9's killer features, and can they be bolted on with FUSE or is there a deeper magic at play?

Plan9 doesn't really have a single killer feature beyond 9P and the universal consistency and simplicity of its APIs. It has a very clean syscall interface and takes "everything is a file" to its logical conclusion and does it well (IMO). Pretty much everything is a file(system) and it's all accessed via the 9P protocol.

You could sorta bolt these features on with FUSE, but to see real benefits you'd want something closer to Inferno, which is like an OS/application runtime that runs on top of another OS host.

In my mind, the security model is the closest thing to a killer feature it has. Because everything is a file(system), and the fork/rfork and bind syscalls let you precisely control what resources/files/services/etc. a child process has access to via easily understandable shell commands (or libc functions if you prefer), you don't need special APIs for namespacing (i.e. containers) and access controls. It's very clean. When a parent process forks or spawns a child process, it can choose whether that process inherits the namespace or gets a clean slate that it can then bind filesystems onto, controlling precisely what the child has access to.


FUSE is dog slow, and it's a bad hack compared to being able to mount anything anywhere in 9front without root permissions.

Also the API is much simpler than POSIX.

Hurd, with settrans, made things much easier than classical Unix, but it's still in alpha, and it still has to implement POSIX for convenience.


I agree. tbh I would immediately switch to something with the hardware support of Linux and the user space of Plan9/9front

The deeper magic is that the kernel interface is completely rewritten compared to *nix or even Linux. Most programs in plan9 are expected to issue requests through userspace-provided services, not bespoke syscalls.

Namespaces, and no difference between local and remote devices. You just bind directories. FUSE is a dog-slow toy in comparison.

You don't even need root permissions to do tons of stuff in 9front with filesystems. You don't even need a root account.


I don't see how an API couldn't have full parity with a web interface, the API is how you actually trigger a state transition in the vast majority of cases

I don't think most Linux package managers would fall under the scope of this law either, as the vast majority require administrative privileges on the computer to run. The law could be made better by adding an administrator definition to distinguish between privileged and unprivileged accounts, but that might be asking too much of those who wrote the law.

There's clear liability put on the owner of the device, which cannot be the child but rather the child's parent. The "Account Holder" definition and subsequent penalties make that pretty clear. The parent is ultimately responsible for locking down the child's account and inputting the correct information.

What happens when the child downloads a Linux ISO and then live-boots or overwrites the install? I have a hard time understanding how this law does not purposefully lay the foundation from which they can push for actual ID verification.

It's the parents' responsibility regardless: they own the device, and it's their child. This is exactly the correct way to do this, if you must do it at all.

My contention is that there is no reason to do this, and it shouldn't be done.

My contention is that I vastly prefer this to what is demonstrably already happening, which is every 3rd party webapp implementing or paying yet another 3rd party to collect my ID and face scan for the privilege of using their service.

DNS doesn't generally distribute applications, so no it doesn't apply.

but it facilitates the download.

If that’s your bar, then so do the power company and whoever manufactured your router.

thats-the-joke-meme.jpg

Facilitating the download is not sufficient; the service would have to both distribute and facilitate the download to satisfy the definition.

So a torrent tracker isn't in-scope because it doesn't distribute and only facilitates peer discovery?

What if DNS software has an RCE and a prosecutor thinks it satisfies the "facilitates the download" clause?

The clause says "distributes and facilitates the download", not "distributes or facilitates the download".

It's a bit slow, but still workable for Rust too. I prefer doing my daily work on a much more powerful 9955HX though.

Makes sense; according to Geekbench, the 9955HX has about a 25% multi-core lead over the base M4 and about a 5% multi-core lead over the base M5. And with more cores, it's better for parallel Rust compilation.

I'm comparing it to my M2 laptop, but in practice the 9955HX is substantially faster than even the M4 Pro I have in my Mac Mini, about 30%~ or so in wall clock time for Rust compilation.

Yep, the Pro only has 12 cores, and a third of those are efficiency cores. Even the Max loses some of its performance to efficiency cores. This is why I was so upset to see Intel replace a bunch of performance cores with efficiency cores. (Remember how Intel used to offer enthusiast chips with up to 18 full fucking cores? Now they think 8 full cores + 16 small useless cores is the answer? I am appalled. Even aside from HEDT, they used to offer up to 10 full cores.) More, and more performant, hardware threads are almost always the path to faster Rust compilation. Lose a few of those to efficiency cores and even Apple can fall behind.
