
You’re missing the biggest reason this is relevant: enterprise IT shops with strict change management processes and, especially in government, years of austerity budgets cutting resources for both sysadmins and rigorous testing.

Either of the targets you mentioned is more a symptom than the root cause: management setting up bad incentives. If you have a change management process which takes a month to approve updates, the problem is not the sysadmin. If years of skimping mean that the operators are afraid to patch because they’ll be punished if it breaks things and they don’t have a robust testing process, the problem is not the sysadmin.




I have a friend who works at a place where all their IT is outsourced, whose accounts payable stretches every invoice out to 120 days, and whose outsourcing company downs tools on all 90+ day delinquent accounts.

Every 3 months for at least the last 18 months, they've ended up with no updates/patching for a month. My friend informs the compliance and risk manager, compliance and risk management scream at accounts payable to get the outsourcer paid in full so this month's security updates get done, and the circle of life repeats.


I feel these organizations that have a process that prevents critical fixes have a broken process... you either have to be ok with having your servers compromised (e.g. data stolen or user data leaked), or you have to accept that sometimes the engineer fixing a bug or adding a new feature might mess something up. I'm inclined to lean a bit more toward "move fast and break things" being better than moving so slowly you get pwned... but it's sort of a delicate balancing act...


I studied IT security quite a lot, and I implement Windows patches for dozens of companies. While you are technically right, Microsoft releases broken patches _constantly_. If we pushed out every single patch the moment it was released, we would constantly be down and fighting fires. Most small and mid-sized companies don't have hacking campaigns run against them most of the time. Given that, it just doesn't make sense to push out every single patch immediately. Microsoft's patches are a whole lot more stable when they're a couple of months old.
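
In case it's useful, here's roughly what "letting patches age" can look like in practice. A minimal sketch, assuming a standalone Windows 10/11 machine and the Windows Update for Business deferral policy registry values; the deferral periods are illustrative, not a recommendation, and in a managed fleet you'd set the same policies via Group Policy, WSUS, or Intune rather than writing the registry directly:

    # Sketch: let Windows "quality" (security) updates age before installing.
    # Assumes admin rights; uses the Windows Update for Business policy
    # registry values under HKLM. Numbers here are purely illustrative.
    import winreg

    WU_POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

    def defer_updates(quality_days=30, feature_days=180):
        """Write deferral periods (in days) for quality and feature updates."""
        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, WU_POLICY_KEY, 0,
                                winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "DeferQualityUpdates", 0, winreg.REG_DWORD, 1)
            winreg.SetValueEx(key, "DeferQualityUpdatesPeriodInDays", 0,
                              winreg.REG_DWORD, quality_days)
            winreg.SetValueEx(key, "DeferFeatureUpdates", 0, winreg.REG_DWORD, 1)
            winreg.SetValueEx(key, "DeferFeatureUpdatesPeriodInDays", 0,
                              winreg.REG_DWORD, feature_days)

    if __name__ == "__main__":
        defer_updates()  # quality updates wait ~30 days, feature updates ~6 months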


This has been a real problem again in the Windows 10 era. By around 2008, Microsoft seemed to have finally gotten their patch process cleaned up to the point that if you were only taking security patches, they generally installed cleanly and mostly didn't break random things. By about 2016 this had backslid, and now Windows 10 seems intent on large-scale combined updates and constant servicing stack updates with undocumented consequences.

It's been a giant pain, having spent years trying to get organizations to accept the need and learn to do this stuff reliably, only to have the primary source of misery (Microsoft) repeatedly start biting them in the ass again for following what should be best practices.

Meanwhile, in the same timeframe, most BSD and Linux releases have not only gotten their core software updates down to a science, they've also managed to build workflows that can include huge swathes of third-party open source and commercial software, something so hilariously awful on Windows that multiple companies have built businesses around doing it.


> "This has been a real problem again in the Windows 10 era. By around 2008, Microsoft seemed to have finally gotten their patch process cleaned up to the point that if you were only taking security patches, they generally installed cleanly and mostly didn't break random things. By about 2016 this has backslid and now Windows 10 seems intent on large scale combined updates and constant servicing stack updates that with undocumented consequences."

Microsoft laid off all their QA staff in 2014, so it's hardly surprising. If anything, it's a wonder that it's not much, much worse than it is now.


It’s partly broken process - my point being that the people at the top are more to blame than the sysadmin - but also that this is more expensive than people like to admit. You either need to accept lower security/reliability or spend more on staff, capacity, and licenses. Lots of places try to cut that corner and it’ll seem to work until, as Warren Buffett likes to say, the tide goes out.

This is a really tricky problem in government because the pay scales can be very hard to change. For example, the U.S. federal GS scale has hard caps - the current max is $170k - which might not sound that bad, but historically the higher-level positions were senior and relatively limited. So it’s not like you can just effortlessly bump all of your developer positions up to the highest grade without hitting budget caps, or without other people being upset that someone outside of IT needed 25 years of experience and managing a bunch of people to get to the same rank you’re proposing to offer to non-entry-level developers. That usually means you’re hiring people at lower grades, which pay more like entry-level salaries.

A few years back they actually had to adjust pay rates to have a chance of hiring good infosec people, but that requires a lot of political wrangling even if everyone agrees that it’s a good idea. (I know someone who got tired of waiting and jumped to a well-known tech company for a cool 200% raise.)

https://www.opm.gov/policy-data-oversight/pay-leave/referenc...


Many vulnerable organizations do not have "engineers who fix bugs"; they have teams of accountants and bookkeepers who run Excel and Xero, or teams of lawyers and paralegals who run Word, or medical practices or marketing firms or chemical wholesalers, or or or... The nearest thing most of them have to "an IT department" is the admin person who liaises with their outsourced IT provider and the manager who signs off on the bills every month.


Isn’t this the sort of issue where Defense in Depth comes in? You don’t want to rely on a secure LAN, but having a secure LAN _and_ a hardened server reduces your attack surface in the case of a 0-day.
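
To make that concrete, here's a minimal sketch of the layering idea in Python; the subnet, header name, and token are purely illustrative and not from any real system. The point is that even a service reachable only from the internal network still authenticates every request, so a single compromised LAN host isn't enough on its own:

    # Two independent layers: a network allowlist and per-request auth.
    # Either layer failing alone is not enough to expose the endpoint.
    import hmac
    import ipaddress
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ALLOWED_SUBNET = ipaddress.ip_network("10.0.0.0/8")  # layer 1: network ACL
    API_TOKEN = "change-me"                              # layer 2: shared secret (illustrative)

    class HardenedHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            client = ipaddress.ip_address(self.client_address[0])
            token = self.headers.get("X-Api-Token", "")
            if client not in ALLOWED_SUBNET or not hmac.compare_digest(token, API_TOKEN):
                self.send_error(403)  # reject anything outside the LAN or unauthenticated
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok\n")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), HardenedHandler).serve_forever()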


This is exactly that sort of issue.


You also need to have someone who will be able to articulate this and be heard.


My wife just started working for a city (~1M pop) government. Her work computer is running Windows 7.


There is worse out there: last year I had a customer ask about installing my software on a Windows 2000 server.


I wonder how that is still possible? When I order servers the OS is typically included as a line item. This leads me to believe there are companies out there running 20yo server platforms on 15yo hardware. The markup for replacement parts for equipment that old is insane.


Net net, that's probably more secure over the course of the past 5 years than upgrading, and certainly less buggy.


> You’re missing the biggest reason this is relevant: enterprise IT shops with strict change management processes and, especially in government, years of austerity budgets cutting resources for both sysadmins and rigorous testing.

What's worse?

a) Deploying a fix and (unlikely) having some, perhaps all, related systems fail until they are fixed internally, or

b) Not deploying a fix and having your servers owned by an adversary?


Both are low probability/high impact. I don't expect a typical company to expose its domain controller to the WAN. And if the domain controller is down because of a botched update, pretty much everything else in the organisation is down. Not clear to me which one is worse.


You’re absolutely right in every regard, I just want to throw in a little flavor from my experience as a security consultant. I’ve worked with state governments where we had to tune out alerting on failed logins on their domain controller, because the public login for their public-facing site was backed directly by their internal Active Directory server and we were seeing thousands of failed login alerts every day.

The state of infosec is still that bad, and unfortunately most consumers can’t know of these problems, let alone choose to opt out. Right now much of the cost of a breach is borne by the end users who didn’t choose the poor level of security the organization implemented, and I am increasingly of the opinion that it’s better to bring down your organization’s IT infrastructure than to suffer a catastrophic breach. Because if the pain is borne by the internal IT teams more than the end user (who again often has no knowledge or no choice), eventually the company will be forced to implement better processes.

As long as the real cost of a breach is paid for by end users, organizations have very little incentive to improve.


Thanks, I feel that’s an unpopular opinion for purists, but very real for most people’s day-to-day.


I'm not sure the second one is so low probability. While the domain controller is not exposed to an external network, it is still exposed to the workstations.


Good on you for framing this in a realistic way.


I’d second cm2187’s general “it depends” and also note that in the environment I was describing this decision isn’t happening in a vacuum. The policy was probably set 15 years ago when someone updated a printer driver and after the production systems were back up someone chewed out the IT manager and said their job was on the line if it happened again. Now you need the CIO to approve updates and have a lengthy delay before touching core infrastructure. Sure, you can request an emergency waiver but that’s a lot of work, it’s frowned upon, and sitting on this patch probably won’t cause problems since this happens fairly often…

What the CISA memo does is change that dynamic: now most of the government has instructions to act and it’s thus personally riskier not to act promptly.


You assume these organizations are filled with developers. From working for government, state higher ed, and health care for over 20 years, I can tell you, they are not. They are filled with techs who know how to call vendors (or are required to by leadership). So, when the Epic EMR system breaks and doctors and nurses can’t look up the info for an unconscious patient and the patient ends up dying, then yeah, the IT shop takes forever to patch things.



