
Perhaps a distinction here can be made between running as a server OS and a desktop OS.

On a server I generally want an immediate crash. But on a desktop I'd rather it limp along and give me a chance to finish writing my Hacker News post.




I'd rather push the other way on that. We should treat desktops with as much security paranoia as servers.

I'd much rather _not_ have my desktop "limp along" in a poorly understood and probably exploitable fashion while the malware gets a chance to finish encrypting all my files...

If that costs the world the "benefit" of my shared wisdom in a half-written Hacker News post, I'm good with that.


I run a lot more untrusted code on my laptop than on my cloud servers. Likewise for work, even more so as I don't myself trust the spyware/malware they jam on the laptops.


Basically, there's no solution at this level of granularity. One can also argue that the desktop is where the most important stuff lives, the stuff we least want hacked, e.g., your family photos, documents, and other things that quite likely aren't backed up, so we should treat its security as even more critical than a server's.

I call these the "already lost" situations. You've already lost, we're just arguing about how to distribute the lossage. While those discussions aren't completely pointless, it is important to keep clear in our heads that we're arguing about how to pick up the bodies at a crash site, not how to prevent the crash in the first place; it's a different mindset.

Despite some moderately justified mockery in the other messages in this thread, the answer really is "just don't crash and have secure code here", which is to say, "don't lose". It's exceedingly hard to write and it's a very high bar, but at the same time, it's very difficult to imagine how to secure a single system when you can't even stipulate that a core of trusted software exists. If you don't even have a foundation, you're not going to build a secure structure. In this case, by "secure" I don't just mean security, but also functionality and everything else.


On a server I'd usually want it to limp along too... Better to fire an alert and keep customers happy than to cause a massive outage just because of someone's overly strict check-fail...

It really depends on what your server does, and what the consequences of it doing the wrong thing are.
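
(A hedged sketch of what I mean, not from any real codebase: an assert-style macro that aborts in debug builds but only logs, i.e. "fires an alert", in production builds, so a tripped invariant pages someone instead of taking the service down. The soft_assert name is made up for illustration.)

    #include <stdio.h>
    #include <stdlib.h>

    #ifdef NDEBUG   /* production build: log the failed check and keep going */
    #define soft_assert(cond) do { \
        if (!(cond)) fprintf(stderr, "ALERT: %s failed at %s:%d\n", \
                             #cond, __FILE__, __LINE__); \
    } while (0)
    #else           /* debug build: fail fast so the bug gets noticed */
    #define soft_assert(cond) do { \
        if (!(cond)) { fprintf(stderr, "%s failed at %s:%d\n", \
                               #cond, __FILE__, __LINE__); abort(); } \
    } while (0)
    #endif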


If the consequences of one server wedging itself are a "massive outage" and "unhappy customers", then you probably don't really care about that outage or those customers. If you don't have enough redundancy, alerting, and automated disaster recovery to keep your customer-facing shit up when one server panics, you're just relying on luck to keep your customers happy.

Fire an alert, remove that server from the load balancer, and fix the problem without your customers even noticing.

Or, if you're running a hobby-project-architected platform, make sure your customer expectations and SLAs are clear up front, and let it stay down until Monday morning when you'll get around to fixing it.


Or just don't put stuff that crashes in pid1


Seriously, if you're doing allocations in pid1 you fucked up.
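
(For illustration only, a rough sketch of the "do almost nothing in pid 1" idea: fork the real system startup once, then just reap zombies forever. No heap allocations anywhere, so malloc failure can't bring it down. The /sbin/real-init path is hypothetical; this isn't runit's or anyone else's actual code.)

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        if (fork() == 0) {
            /* child: hand off to the real boot process (path is made up) */
            execl("/sbin/real-init", "real-init", (char *)0);
            _exit(127);             /* exec failed */
        }
        for (;;) {
            /* parent (pid 1): reap anything re-parented to us */
            if (wait(NULL) < 0)
                sleep(1);           /* no children to reap right now */
        }
    }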


Yeah, great idea: rather than worry about how to deal with software bugs, just never have bugs...


Exactly. This is why systemd is a terrible design. The fastest and most secure code, that never crashes and never needs patching, is code that doesn't exist.


runit.c is 330 lines of code

Keeping it small and simple to minimize bugs is perfectly viable and reasonable.



