torginus's comments | Hacker News

Why do people keep reinventing OS features?

There's Docker, OverlayFS, FUSE, and ZFS or Btrfs snapshots...

Do you not trust your OS to do this correctly, or do you think you can do better?

A lot of this stuff existed 5, 10, 15 years ago...

Somehow there's been a trend for every effing program to grow and absorb the features and responsibilities of every other program.

Actually, I have a brilliant idea, what if we used nodejs, and added html display capabilities, and browser features? After all Cursor has already proven you can vibecode a browser, why not just do it?

I'm just tired at this point


This exact thing solves a huge problem with SEA binaries as he points out in his post. You can include complicated assets easily and skip an ugly unpack step entirely. This is very useful.

One of the worst is media players that all insist on grafting their own "library" on top of my already-working OS filesystem. So I can't just run the media player and play files. No, that would be too simple. I have to first "import" my media into a "library" abstraction and then store that library somewhere else on my filesystem. Terrible!

There's a legitimate problem they're trying to solve there: there are several ways to sort media that don't match up well with a hierarchical filesystem¹. They solve it badly. Good players maintain a database for efficient queries of media metadata, and periodically rescan the folders to update it. Shitty media players try to manage the files themselves, and still end up needing to maintain a database. The worst of these use the database to manage the contents of their storage files (or store the files themselves in the database), if something isn't in the database they delete the files. Adobe Lightroom Classic does this, if your database gets corrupted it deletes all your RAW files!

¹E.g. if you've got music, and it's sorted `artist/album/track<n>.extension`, and two artists collaborate on an album, which one gets the album in their folder? What if you want to sort all songs in the display by publication date? Even if they use the files on your filesystem without moving them, some sort of metadata database will be needed for efficient display & search.
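The footnote's collaboration case is exactly the many-to-many relationship a hierarchy can't express but a metadata database handles trivially. A minimal sketch with Python's sqlite3 (table and column names are illustrative, not any real player's schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE artists (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE albums  (id INTEGER PRIMARY KEY, title TEXT, published INTEGER);
-- junction table: an album can belong to several artists
CREATE TABLE album_artists (album_id INTEGER, artist_id INTEGER);
CREATE TABLE tracks (id INTEGER PRIMARY KEY, album_id INTEGER,
                     n INTEGER, path TEXT);  -- path points at the real file
""")
con.execute("INSERT INTO artists VALUES (1, 'Artist A'), (2, 'Artist B')")
con.execute("INSERT INTO albums VALUES (1, 'Collab Album', 2021)")
con.execute("INSERT INTO album_artists VALUES (1, 1), (1, 2)")

# The same album shows up under both artists; no folder has to "own" it,
# and sorting by publication date is just another ORDER BY.
rows = con.execute("""
    SELECT a.name, al.title FROM artists a
    JOIN album_artists aa ON aa.artist_id = a.id
    JOIN albums al ON al.id = aa.album_id
    ORDER BY a.name
""").fetchall()
print(rows)
```

The files themselves never move; only `path` references them, which is the "good player" behaviour described above.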


Mimalloc, my beloved. jemalloc is a fiendishly complex allocator with a gazillion algorithms and approaches (and a huge binary), yet mimalloc (a simple allocator with one bitmap-tracked pool per allocation size, and one pool collection per thread) keeps up with it. One of the bigger wins in software simplicity in recent memory.
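A toy illustration of that per-size-class bitmap idea (purely illustrative; real mimalloc is far more refined, with sharded free lists and fast bit-scan instructions):

```python
# Toy model: one slab per size class, free slots tracked by a single
# integer bitmap. Not mimalloc's actual code.
class Pool:
    def __init__(self, block_size, nblocks=64):
        self.block_size = block_size
        self.nblocks = nblocks
        self.bitmap = 0                     # bit i set => slot i in use

    def alloc(self):
        for i in range(self.nblocks):
            if not (self.bitmap >> i) & 1:
                self.bitmap |= 1 << i
                return i * self.block_size  # offset into the slab
        return None                         # pool exhausted

    def free(self, offset):
        self.bitmap &= ~(1 << (offset // self.block_size))

# One pool per size class; a real allocator keeps one such collection
# per thread so the fast path needs no locks.
pools = {size: Pool(size) for size in (16, 32, 64, 128)}
off = pools[32].alloc()                     # first free 32-byte slot
print(off)
```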

The biggest evidence against collaborative editing working and being useful is that programmers don't use it. We go through the pain of having git branches and manual merges.

We're the nerdiest bunch in the world, absolutely willing to learn and adopt the most arcane stuff if it gives us a real or perceived advantage, yet the fact that Google Docs style CRDTs have completely eluded the profession speaks volumes about their actual usefulness.


> The biggest evidence against collaborative editing working and being useful is that programmers don't use it. We go through the pain of having git branches and manual merges.

Hmm -- this seems a bit apples and oranges to me: collaborative editing is sync; git branches, PRs, etc. are all async. This is by design! You want someone's eyes on a merge, that's the whole rationale behind PRs. Collab editing tries to make merges invisible.

Totally different use case, no?


Collaborative coding is a niche but possibly interesting use case. I’m thinking of notebook cells with reactive inputs and outputs. Actually not dissimilar to a spreadsheet in many ways.

The biggest evidence for collaborative editing is the immense popularity of Google Docs, Notion and Figma.

Just because programming code isn't a good use case for automated conflict resolution doesn't mean everything else isn't.

Just imagine non-technical people using git to collaborate on a report, essay, or blog post. It's never going to happen.


> The biggest evidence against collaborative editing working and being useful is that programmers don't use it. We go through the pain of having git branches and manual merges.

But git branches are collaborative editing! They're just asynchronous collaborative editing.

It would be possible to build a git clone on top of CRDTs which had the same merge conflict behaviour. The advantage of a system like that would be that you could use the same system for both kinds of collaborative editing: realtime collab editing and offline/async collab editing. It's just that nobody has built that yet.

> the fact that Google Docs style CRDTs have completely eluded the profession speaks volumes about their actual usefulness.

Software engineers still rely on POSIX files for local cross-app interoperability. Eg, I save a file in my text editor, then my compiler reads it back and emits another file, and I run that. IMO the real problem is that this form of IPC is kind of crappy. There's no good way to get a high fidelity change feed from a POSIX file. To really use CRDTs we'd need a different filesystem API. And we'd need to rewrite all our software to use it.

That isn't happening. So we're stuck with hacks like git, which have to detect and reconstruct all your editing changes using diffs every time you run it. This is why we don't have nice things.
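The "reconstruct your editing changes using diffs" step can be sketched with Python's difflib: given only two snapshots of a file, all a tool can recover is an inferred edit script, never the actual sequence of edits (the file contents here are just for illustration):

```python
import difflib

# Two snapshots of a file -- all a POSIX filesystem gives you.
before = ["fn main() {", '    println!("hello");', "}"]
after  = ["fn main() {", '    println!("hello, world");', "}"]

# A snapshot-based tool like git infers an edit script from the two
# states; the real keystroke-level history is gone.
for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, before, after).get_opcodes():
    if op != "equal":
        print(op, before[i1:i2], "->", after[j1:j2])
```

A CRDT-aware storage layer would instead record each edit as it happened, making this reconstruction unnecessary.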


> There's no good way to get a high fidelity change feed from a POSIX file.

Personally, my main point of frustration with git is the lack of a well-supported AST-based diff. Most text in programming is actually just a representation of a graph. I'm sure there is a good reason why it hasn't caught on, but I find that line-based diffs diverging from what could be a useful semantic diff is the main reason merge conflicts happen, and the main reason I need to stare hard at a pull request to figure out what actually changed.
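A toy demonstration of the gap between textual and semantic diffs, using Python's ast module (chosen purely for illustration): two sources that differ textually can parse to the identical graph, so a semantic diff would report no change where a line diff reports one.

```python
import ast

a = ast.parse("x = 1 + 2")
b = ast.parse("x = (1 + 2)")   # textually different, semantically identical

# A line-based diff flags this line as changed; the ASTs are equal
# (redundant parentheses don't survive parsing).
print(ast.dump(a) == ast.dump(b))
```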


Normal documents don't have broken builds when lines are incomplete. It's a completely different situation and makes sense why manually controlling it in chunks is better.

>Google Docs style CRDTs

Google Docs is OT though.


Also note that our use case is much simpler. The programming language tells you whether your merge created a valid document.

I've never seen that information actually being used in any merge tool, with the notable exception of Visual Studio/C# (where you get symbol resolution for the merged doc, but even there the autogenerated result is a bit hit and miss)

I think the reason is that the algorithms want to be content-agnostic.

But it's of course weird — as a user — to see a conflict resolution tool confidently return something that's not even syntactically valid.


It's interesting that you contrast Sweden and Russia. While I haven't lived or worked in Russia, I've worked with Swedes quite a bit, and my experience is that they don't really emphasize red tape that much. In the context of development, they don't really mind if you bend the rules for a good cause; there's a general attitude of pursuing sensible outcomes over blindly following processes.

They're also not big on oversight. I got what looked to me like a surprising amount of autonomy and responsibility in a very short amount of time; I felt out of my depth for a while, but got accustomed to it. A very laissez-faire way of working.

I felt much of the system was informal, and based on the expectation of not abusing trust. That was very refreshing, as most companies in my experience exist in a state of bureaucratic gridlock: you need to push the change to repo X, but Y needs to sign off on it, and it depends on changes from person Z, who's held up by similar issues, etc.

It's a very emotionally draining and unproductive way of working, and is usually overseen by bosses who create these processes because they don't trust their employees, or want a feeling of power and control, or simply don't understand how and what their subordinates do, so they try to force things into these standard flows.

Which also doesn't work, but it fails accountably. Even if a day's worth of changes takes a week and still ends up lacking, you can point out that task A is blocked by deliverable B, which is at a low priority for team Foo, so let's have a meeting with that team's manager to make sure it's prioritized in the next sprint, etc.

This is how most places turn into that meme picture where there's one guy digging a hole and five people overseeing him.


I didn't mention Russia, and I've never had the misfortune of living there - though I speak the language and am well familiar with the culture.

The Swedish term for how you describe work is "frihet under ansvar" - translated, "freedom under responsibility". That's a common approach at workplaces where you're doing qualified work, like engineering, and the meaning is that you're given a lot of flexibility and freedom in how you do your work as long as you reach the expected result and you take responsibility if things don't work out. That's good, and yes companies here are very informal. We don't even culturally like things like managers instructing employees on what to do, it's all phrased very casually.

In the context of government work or the public sector, I'd say we take rules and procedures seriously, which is one of my favorite things about the country. To me, that makes interactions much more predictable than in countries with a "people before systems" culture.


One interesting effect of LLMs getting so good at generating code is that all of the process-related things you mention take up a greater and greater percentage of the overall time to develop and deploy a feature, making them even more salient.

They always have. I would guess the majority of the people employed and salaries paid on a given project basically go to waste. Just today I had an hour-long meeting about the impact of a bug which had a clear-as-day simple fix, but fixing it would've involved so much red tape (for no good reason) that the couple-minute fix-deploy-test-merge cycle would've taken at least a week of effort spread across people.

You can calibrate any sensor, it's just a manufacturing step, and while cheap ones may be inaccurate and drift over time, I'm pretty sure the good-enough ones (which cost tens of dollars, not fractions of a dollar) are accurate enough to work for the seconds-to-minutes flight time of a rocket like this.

Seconds, yes. Minutes, not so much. Then you will need another layer.

Lasers imo don't really have IRL advantages over machine guns and rockets, and their line of sight nature is a huge limitation.

Lasers:

- are cheap to shoot
- do not fall on someone's head if they miss (unlike bullets and rockets fired at a drone, which come down again)
- hit the target immediately if aimed right

Problems with lasers are cooling, power consumption (limiting mobile use), and indeed targeting, fog, and clouds.


You need about 2 MJ to boil away 1 L of water (a bit more if you count heating it to 100°C first). The DragonFire ship-class laser puts out 50 kW, so it would take 40 seconds to do that, assuming it can fire without pause, all of the energy makes it into the target, and none of it gets reflected.
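A quick sanity check of those numbers with standard water constants (ignoring losses and reflection, as the comment does, but including the extra energy to first heat the water from room temperature):

```python
# Energy to vaporize 1 L (1 kg) of water, starting from ~20 C
mass = 1.0                  # kg
c_water = 4186              # J/(kg*K), specific heat of water
latent = 2.26e6             # J/kg, heat of vaporization
energy = mass * (c_water * (100 - 20) + latent)   # heat-up + boil-off

power = 50e3                # W, DragonFire-class output
print(round(energy / 1e6, 2), "MJ,", round(energy / power), "seconds at 50 kW")
```

The latent heat alone is ~2.26 MJ (hence "about 2 MJ" and ~40 s above); with heat-up included it's closer to 2.6 MJ and ~52 s of continuous fire.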

This is a container-sized system that needs to be mounted on a ship.

Meanwhile in Ukraine you have auto turrets made from anything ranging from heavy machine guns to old AA guns, with some added optics and/or radar, which are super cheap, and you can carry them around in vans.


And rain.

From what I can tell, Ukrainians are having some success with converting guns into automatic turrets that can track and shoot down drones via sensors, and the rifle-equivalent of birdshot.

I think MEMS gyroscopes and accelerometers used in consumer drones should be just about good enough to measure orientation and acceleration, and those are cheap and easy to get.

You could integrate acceleration to get speed - the flight is short enough to make compounding errors easy to ignore.

I think thanks to drones and RC hobbyists, there's a generally nice body of knowledge on how to get good enough data from consumer hardware to keep things flying.


> You could integrate acceleration to get speed - the flight is short enough to make compounding errors easy to ignore.

‘Easy to ignore’ is not a term I would use here, especially given the motion environment of a rocket. It seems like it might be beginning to be borderline possible.


> You could integrate acceleration to get speed - the flight is short enough to make compounding errors easy to ignore.

False, given how noisy MEMS IMUs are, and the accuracy required. Even Ring Laser Gyros drift quickly.


I did a bit of googling and this was the first result:

https://www.h4-lab.com/store/p/qmu102

This sensor has a 16 G limit, which is well above what an amateur rocket could pull, and the compounding velocity error at 10 G would be something like 0.0002 (m/s)/s, which is way more than good enough, at least for short flights measured in minutes at most.
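For a back-of-envelope check (the bias figure below is an assumption for illustration, not taken from that sensor's datasheet): a constant accelerometer bias b integrates into a velocity error of b·t after t seconds, so a ~20 µg bias gives roughly the 0.0002 (m/s)/s rate quoted above:

```python
# Velocity error from integrating a constant accelerometer bias.
# The bias value is an illustrative assumption, not a datasheet figure.
g = 9.81
bias_ug = 20                      # assumed bias stability in micro-g
bias = bias_ug * 1e-6 * g         # ~0.0002 m/s^2

for t in (10, 60, 300):           # seconds of flight
    print(f"{t:4d} s -> {bias * t * 100:.2f} cm/s velocity error")
```

Even at five minutes, a bias this small contributes only a few cm/s of drift; vibration-induced noise and scale-factor errors in a real rocket would dominate long before this does.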


Sure, but the exploit presented doesn't really look practical for the everyman. And I'm not sure if it can be patched in HW/SW, and in any case this is just the first step to a fully fake secure boot.

All of this is beyond horrific.

Mucking about in the kernel basically bypasses the entire security and stability model of the OS. And this is not theoretical: people have been rooted through buggy anticheat software, where attackers sent malicious calls to the kernel driver and hijacked the anticheat to gain root access.

Even in a more benign case, people often get 'gremlins': weird failures and BSODs due to some kernel APIs being intercepted and overridden incorrectly.

The solution here is to establish a root of trust from boot, and use the OS's sandboxing features (like Job Objects on NT, and other mechanisms). Providing a secure execution environment is the OS developers' job.

Every sane approach to security relies on keeping the bad guys out, not mitigating the damage they can do once they're in.


Unfortunately (or fortunately, depending on which side of the fence you live on), boot chain security is not taken as seriously in the PC ecosystem as it is on phones. As a result, even if you rely on OS features, you cannot trust them. This is doubly the case in situations where the user owns the kernel (e.g. Linux) or hypervisor. Attestation would work, but the number of users you could successfully attest as being on a trustworthy setup is fairly small, so it's not really a realistic option. And that is why they must reach for other options. Keep in mind that even if it's not foolproof, if it reduces the number of cheaters by a statistically significant amount, it's worthwhile.

I really thought this might change over time given strong desire for useful attestation by major actors like banks and media companies, but apparently they cannot exert the same level of influence on the PC industry as they have on the mobile industry.


I think it's fortunate that I own at least one of the computing devices I paid for.

Yea, but it'd be real nice if we could trust the software we run on our own devices, no?

Secure boot with software attestation could also be used for good.


Only if I get to set the keys or no keys - under all circumstances.

There should be a physical button inside the case labeled "set up secure boot"


Under the doctrine that software "trust" is needed, YOU are the attacker. It's entirely about stripping your control (and thus ownership) of the hardware you paid for (see the SafetyNet shitshow).

There's a second use whereby I somehow bind my own OS hash to my own data encryption key, so nobody who changes the OS can read the data. The technical distinction between this and the previous: if it's designed for the device owner's protection, the device owner can reset the system.

Just like with HTTPS, you can enroll your own keys in the TPM module, or sign your binaries with a key that's already trusted by your system.

This is just establishing chain of trust, and does not prevent you from doing anything on your system.

True, this could be hypothetically extended to disallow booting third party binaries, but I would say that's just extrapolation for now and not reality.


Every sane approach to security relies on checking you are doing permitted actions on the server, not locking down the client.

Which isn't practical for multiplayer action games, so we end up here.

Doesn’t matter. There’s no world where a multiplayer action game is worth it, and anyway this is a classic example of trying to solve a social problem with technology.

The reason cheating is a problem at all is that instead of playing with friends, you use online matchmaking to play with equally alienated online strangers. This causes issues well in excess of cheating, including paranoia over cheating.


> There’s no world where a multiplayer action game is worth it

To you. I’m perfectly happy to run a kernel-level anticheat - I’m already running their code on my machine, and it can delete my files, upload them as encrypted game traffic, steal my crypto keys, and screenshot my bank details and private photos, all without running at a kernel level.

> trying to solve a social problem with technology

I disagree. I’m normally on the side of not doing that, but increasing the player pool and giving players access to more people at their own skill level is a good thing.


It's not just for multiplayer games. Considering one of my employers has been the victim of a supply chain attack, I would say it's super important that you can check and verify the authenticity of every piece of code that runs on your infra (checking that a binary/Docker image can be traced back to an artifact, which can be traced to a git commit, and making sure the server running it hasn't been tampered with in any way).

To do real time analysis and interception probably not. But for after the fact analysis, if a player is moving on knowledge he couldn’t have had because it shouldn’t have been rendered yet or something, then you can assume cheating.

I’m not a particularly skilled overwatch player, but I know the cooldowns of probably half the characters to muscle memory. I can hit an ability pretty much perfectly on cooldown 90+% of the time.

The vast, vast majority of skilled FPS players will predict their shots and shoot where they think the enemy player will be relative to the known hit detection of the game. In high level play for something like r6 siege, I’d say it’s 99% shooting before you can possibly know where they are by “feeling”


This. Also, the client knows more than it's allowed to show the user, like the positions of enemy players. You can make aimbots and wallhacks without needing to tamper with the game state.

And you can see the player is tracking players through walls far more often than chance would allow.

Are you saying that the solution here is to sell computers so locked down that no user can install anything other than verified software?

The idea is that it would require a verified hypervisor and a verified operating system for the game, but you could still, at the same time, be running an unverified operating system with unverified software. The trusted and untrusted software have to be properly sandboxed from one another. The computer does not need to be locked down so you can't run other hypervisors; it would just require that the anticheat can't prove it's running on a trusted one when it isn't.

The security of PCs is still poor. Even if you had every available security feature right now, it's not enough for the game to be safe. We still need to wait for PCs to catch up with the state of the art, and then wait 5+ years for such devices to make it into the wild and gain enough market share to make targeting them commercially viable.


But if you can get in before the OS, you can change what it does. You'd need attestation in the hardware itself so the server can know that what's running isn't signed by Microsoft's key, for example.

Attestation is how the user mode anticheat would prove that it is running on a secure system / unmodified game.

I'm still not seeing how that would solve it. These are all multiplayer games. You could intercept the network traffic before it reaches the machine and then use a separate device to give you audio or visual cues. In StarCraft, reading the network traffic with a pi and hearing "spawning 5 mutalisk" is gonna completely change the game.

You can't do anything with a locked-down computer. It can encrypt all its traffic and you can't see anything.

That’s what I want as a gamer. I want a PC that works as a console. Whether I want that for other use cases or this machine doesn’t matter. I’m happy to sandbox _everything else_, boot into a specific OS to game etc.

The thing about gaming is that it’s not acceptable to leave 5% performance on the table whereas for other uses it usually is.


Question for you - why don’t you buy a console? (I agree with you by the way, it’s why I have a ps5)

I never played using a controller and I never will. And I do want a high-end PC for other use cases.

_most_ games now do KBM on console and matchmake separately for it. It's still not perfect, but it's gotten much better.

> And I do want a high end PC for other use cases.

Right, you don't want two devices (that's fair). How can you _possibly_ trust the locked down device won't interfere with the other open software it's installed side by side with?


Those use cases don't work with completely locked down OS.

Also you can plug a mouse in a console… that's a weird excuse.


I don’t need to game in the same OS that I do other things. But having two sets of hardware seems like a waste.

Having a useless locked down machine isn't a waste?

Not if I can just leave that sandbox when I want to (boot another OS/mode/leave a sandbox etc) no?

You can't leave, it's a locked machine, that's the whole point of a locked machine.

Just know that it will still get cracked and cheats will exist. I suspect this is Microsoft's next "console" as they have been developing "anti-cheat" for quite some time.

Microsoft will shoehorn their Anti-Cheat into the next Xbox project and force it down the throats of Windows 11/Windows 12 users. This will cause game developers to gobble this technology up.

This is the only defense Microsoft has against the growing Linux Gaming crowd. Microsoft forgot how to make good, consumer friendly software.


> it’s not acceptable to leave 5% performance on the table whereas for other uses it usually is.

I think that’s an incredibly rare stance not held by the vast majority of gamers, including competitive ones.


I don’t think a sandbox like a VM would work even if it could be done with only a 5% perf hit? Wouldn't any game run in a VM be possible to introspect from the hypervisor in a way that is hard to see from inside the VM? And isn't that why these anticheats disallow virtualization?

That would mean those who are concerned about the integrity would want to sandbox everything else instead. And even if people are ok with giving up a small bit of perf when gaming, I’m sure they’re even more happy to give up perf when doing online banking.


Mid-range hardware can run the majority of games at high FPS. You can easily leave performance on the table.

No. No it can not. Unless you mean a 5070/80 is mid range.

Get a console then.

Or we just boot into some console-esque gaming OS or mode to game. I’m not sure why this would be so controversial. The alternative is the one we see here.

But that requires you not owning your computer, which I hope is controversial.

That’s not really incompatible with this? That’s just how secure boot works. You can enroll keys for a different root of trust, or disable it and accept the trade-off there.

No. I'm saying we should all drink the blood of babies to stay eternally youthful. You didn't read between the lines deeply enough.

You want to eliminate the freedom of running the software you desire for everyone to hopefully mitigate cheating?

How have you made this logical leap - that root of trust implies only being able to run vendor-approved software?

Hacks aren't vendor approved…

Then the game just won't start?

> Every sane approach to security relies on keeping the bad guys out, not mitigating the damage they can do once they're in.

That’s not true at all in the field of cybersecurity in general, and I have doubts that it’s true in the subset of the field that has to do with anticheat.


>Mucking about in the kernel basically bypasses the entire security and stability model of the OS. And this is not theoretical, people have been rooted through buggy anticheat software, where attackers sent malicious calls to the kernel and hijacked the anticheat to gain root access.

If you got RCE in the game itself, it's effectively game over for any data you have on the computer.

https://xkcd.com/1200/


>All of this is beyond horrific.

Hot take: It's also totally unnecessary. The entire arms race is stupid.

Proper anti-cheat needs to be 0% invasive to be effective; server-side analysis plus client-side with no special privileges.

The problem is laziness, lack of creativity and greed. Most publishers want to push games out the door as fast as possible, so they treat anti-cheat as a low-budget afterthought. That usually means reaching for generic solutions that are relatively easy to implement because they try to be as turn-key as possible.

This reductionist "Oh no! We have to lock down their access to video output and raw input! Therefore, no VMs or Linux for anyone!" is idiotic. Especially when it flies in the face of Valve's prevailing trend towards Linux as a proper gaming platform.

There are so many local-only, privacy-preserving anti-cheat approaches that can be done with both software and dirt-cheap hardware peripherals. Of course, if anyone ever figures that out, publishers will probably twist it towards invasive harvesting of data.

I'd love to be playing Marathon right now, but Bungie just wholesale doesn't support Linux nor VMs. Cool. That's $40 they won't get from me, multiply by about 5-10x for my friends. Add in the negative reviews that are preventing the game's Steam rating from reaching Overwhelmingly Positive and the damage to sales is significant.


I don't understand why you think that having the option of secure boot and a good, trustworthy sandbox for processes implies you can't run Linux in a VM, or Linux beside Windows, etc.

People always freak out when I mention secure boot, and the funniest responses are usually the ones that threaten to abandon Windows for macOS (which has had secure boot on by default for more than a decade).

I'm not super technically knowledgeable about secure boot, but as far as I understand, you need to have a kernel signed by a trusted CA, which sucks if you want to compile your own, but is a hurdle generally managed by your distro, if you're willing to use their kernel.

But if all else fails you can always disable secure boot.


Secure Boot cuts both ways. The techniques anti-cheat software are allowed to use on Windows machines aren't even remotely allowed on macOS machines.

>I don't understand why do you think that having the option to have secure boot and a good, trustworthy sandbox for processes implies you cant run Linux on a VM or Linux beside Windows etc.

I'm a big secure boot fan. The fact that it's becoming increasingly common for games to require it is the stupid part.

Said games tend to want that root of trust to be known OEM keys, not custom keys. At that point they're dictating the operating system and hardware you can use, and that's where that approach becomes garbage.

Hardware attestation should not be a requirement nor concern for games, period.


Afaik Vanguard (Anticheat of League of Legends, Valorant, other Riot Games titles) requires secure boot. This is pretty quickly getting to where mobile phones are today when it comes to disadvantages of opting out of trusted computing.

> But if all else fails you can always disable secure boot.

for now


yes. this is why there's one box for work, & another for play.
