
About 'Default Deny': 'It's not much harder to do than 'Default Permit,' but you'll sleep much better at night.'

Great that you, the IT security person, sleep much better at night. Meanwhile, the rest of the company is super annoyed because nothing ever works without three extra rounds with the IT department. And, btw, the more annoyed people are, the more likely they are to use workarounds that undermine your IT security concept (e.g., think of the typical 'password1', 'password2', 'password3' passwords when you force users to change their password every month).

So no, good IT security does not just mean unplugging the network cable. Good IT security is invisible and unobtrusive for your users, like magic :)



A friend of mine has trouble running a very important vendor application for his department. It stopped working some time ago, so he opened a ticket with IT. It was so confusing to them that it got to the point that they allowed him to run Microsoft's packet capture on his machine. He followed their instructions and captured what was going on. Despite the capture, they were unable to get it working, so out of frustration, he sent the capture to me.

Even though our laptops are really locked down, as a dev, I get admin on my machine, and I have MSDN, so I downloaded Microsoft's tool, looked over the capture, and discovered that the application was a client/server implementation ON THE LOCAL MACHINE. The front end was working over networking ports to talk to the back end, which then talked to the vendor's servers. I only recognized it because I had just undergone a lot of pain with my own development workflow: the company had started doing "default deny," and it was f*king with me in several ways. Ways that, as you say, I found workarounds for, that they probably aren't aware of.

I told him what to tell IT, and how they could whitelist this application, but he's still having problems. Why am I being vague about the details here? It's not because of confidentiality, though that would apply. No, it's because my friend had been "working with IT" for over a year to get to this point, and THIS WAS TWO YEARS AGO, and I've forgotten a lot of the details. So, to say that it will take "3 extra rounds" is a bit of an understatement when IT starts doing "default deny," at least in legacy manufacturing companies.


> Good IT security is invisible and unobtrusive for your users

I wish more IT administrators would use seat belts and airbags as models for security: they impose a tiny annoyance in everyday use of your car, but their presence is gold when an accident happens.

Instead, most of them consider it normal to prevent you from working in order to hide their ignorance and lack of professionalism.


Wise IT admins >know< they are ignorant and design for that. Before an application gets deployed, its requirements need to be learned - and the users rarely know what those requirements are, so cycles of information gathering and specification of permitted behavior ensue. You do not declare the application ready until that process converges and the business knows and accepts the risks required to operate the application. Few end users know what a CVE is, much less how to mitigate one.

I also note that seatbelts and airbags have undergone decades of engineering refinement; give that time to your admins, and your experience will be equally frictionless. Don't expect it to be done as soon as the download finishes.


I think you are missing the main point of my analogy: seatbelts and airbags work on damage mitigation, while the kind of security that bothers users so much is the one focused on prevention.

Especially in IT, where lives are not at stake, having a good enough mitigation strategy would help enormously in relaxing on the prevention side.


Depending on your sector, I would argue that in IT, lives can be at stake. Imagine the IT department of a hospital, a power company, or other vital infrastructure.

Most mitigation tends to be in the form of backup and disaster recovery plans, which, when well implemented and executed, can restore everything in less than a day.

The issue is that some threats can lurk for weeks, if not months, before triggering. In a car analogy, it would be like someone sabotaging your airbag and cutting your seatbelt without you knowing. Preventing a crash in the first place is far more effective and way less traumatic. Even if the mitigation strategy allows you to survive the crash, the car could still be totaled. The reputation loss you suffer from having your database breached can be catastrophic.


Prevention in the car analogy would be like adding a breathalyzer and not allowing the car to start if the person in the driver's seat fails it.

It's been a gimmick idea for decades, but I'm not aware of any car that actually comes with that as a feature. Kinda think there's a reason, given how much friction it would add - I just did a quick search to double-check and found there are add-ons for this, but without even searching for it, most of the results were about how to bypass them.


Damage. A pinhole is just as damaging to a corporation: it may result in leaked password files, sales projections, customer records, and confidential data, and in external hackers mass-encamping across your company's entire networked infrastructure.


Slowing down everyone is also incredibly damaging to the corporation, though. And, as others have pointed out, it might even be counterproductive, as workers look for workarounds to route around your restrictions, which may come with bigger security issues than you started out with.


So much this.

There is an inherent and unremovable tension between usability and security.

If you are "totally safe" then you are also "utterly useless". Period.

I really, really wish most security folks understood and respected the following idea:

"A ship in harbor is safe, but that is not what ships are built for".

Good security is a trade. Always. You must understand when and where you settle based on what you're trying to do.


Really well put and I always tell people this when talking about security. It's a sliding scale, and if you want your software to be "good" it can't be at either extreme.


Good IT security isn't invisible; it's there to prevent people from deploying poorly designed applications that require unfettered open outbound access to the internet. It's there to champion MFA and work with stakeholders from the start of the process to ensure security from the outset.

Mostly, it's there to identify and mitigate risks for the business. Have you considered that all your applications are considered a liability and new ones that deviate from the norm need to be dealt with on a case by case basis?


But it needs to be a balance. IT policy that costs tremendous amounts of time and resources just isn't viable. Decisions need to be made such that it's possible for people to do their work AND safety concerns are addressed; and _both_ sides need to compromise some.

As a simplified example:

- You have a client database that has confidential information

- You have some employees that _must_ be able to interact with the data in that database

- You don't want random programs installed on a computer <that has access to that database> to leak the information

You could lock down every computer in the company to not allow application installation. This would likely cause all kinds of problems getting work done.

You could lock down access to the database so nobody has access to it. This also causes all kinds of problems.

You could lock down access to the database to a very specific set of computers and lock down _those_ computers so additional applications cannot be installed on them. This provides something close to a complete lockdown, but with far less impact on the rest of the work.

Sure, it's a stupidly simple example, but it demonstrates the idea that compromises are necessary (for all participants).
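
To make that third option concrete, here's a minimal sketch of the kind of default-deny check an access gateway in front of the database might perform. The IPs and names are made up for illustration, not from any real setup:

    # Default-deny access check for the client database: only machines
    # on the explicit allowlist may connect; everything else is refused.
    # All addresses are hypothetical.
    ALLOWED_WORKSTATIONS = {
        "10.0.5.21",  # locked-down records-team desktop
        "10.0.5.22",  # locked-down records-team desktop
    }

    def may_query_client_db(source_ip: str) -> bool:
        # Anything not explicitly allowlisted is denied by default.
        return source_ip in ALLOWED_WORKSTATIONS

    assert may_query_client_db("10.0.5.21")      # locked-down machine: allowed
    assert not may_query_client_db("10.0.99.7")  # ordinary office machine: denied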


I think the idea is that if you don't work with engineering or product, people will perceive you as friction rather than protection. Agreeing on processes to deploy new applications should satisfy both parties without restrictions being perceived as an unexpected problem.


I believe a "default deny" policy for security infrastructure around workstations is a good idea. When some new tool that uses a new port or whatever comes into use, the hassle of getting IT to change the security profile is far less expensive than leaking the contents of any particular workstation.

That being said, in my opinion, application servers and other public facing infrastructure should definitely be working under a "default deny" policy. I'm having trouble thinking of situations where this wouldn't be the case.
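
For servers, "default deny" in practice means a short, explicit rule list with a catch-all deny at the end. Here's a toy first-match evaluator, just to illustrate the semantics (the rules and ports are hypothetical):

    # First-match rule evaluation with a default-deny catch-all,
    # which is how most packet filters behave.
    RULES = [
        ("tcp", 443, "ACCEPT"),  # the application itself (HTTPS)
        ("tcp", 22,  "ACCEPT"),  # admin SSH, ideally also restricted by source
    ]

    def verdict(proto: str, port: int) -> str:
        for rule_proto, rule_port, action in RULES:
            if (rule_proto, rule_port) == (proto, port):
                return action
        return "DENY"  # nothing matched: denied by default

    print(verdict("tcp", 443))   # ACCEPT
    print(verdict("tcp", 3306))  # DENY: the database port is never exposed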


> When some new tool that uses a new port or whatever comes into use, the hassle of getting IT to change the security profile is far less expensive than leaking the contents of any particular workstation.

Many years ago, we had, in our company's billing system, a "Waiting for IT" status. They weren't happy.

Some things took _days_ to get fixed.


Company IT exists to serve the company. It should not cost more than it benefits.

There’s a balancing act. On the one hand, you don’t want a one-week turnaround to open a port; on the other, you don’t want people running webservers on their company desktops with proprietary plans coincidentally sitting on them.


The problem is that security making things difficult results in employees resorting to workarounds like running rogue webservers to get their jobs done.

If IT security's KPIs are only things like "number of breaches" without any KPIs like "employee satisfaction", security will deteriorate.


The biggest problem I can see with default deny is that it makes it far harder to get uptake for new protocols once you get to "we only allow ports 80 and 443 through the firewall".


Which also makes the security benefit moot, as now all malware also knows to use ports 80 and 443.


Yes, I think blocking outgoing connections by port is not the most useful approach, especially for default deny. Blocking incoming makes more sense, and should be default deny with allow for specific ports on specific servers.
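
Sketching that direction-aware policy (the addresses are from the documentation range and purely illustrative): inbound is default deny except for specific ports on specific servers, while outbound isn't filtered by port at all, since, per the comment above, port-based egress rules mostly just teach malware to use 80 and 443:

    # Inbound: default deny; allow specific ports on specific servers only.
    INBOUND_ALLOW = {
        ("203.0.113.10", 443),  # public web server: HTTPS only
        ("203.0.113.11", 25),   # mail server: SMTP only
    }

    def inbound_allowed(dst_ip: str, dst_port: int) -> bool:
        return (dst_ip, dst_port) in INBOUND_ALLOW

    # Outbound: deliberately not filtered by destination port.
    def outbound_allowed(dst_ip: str, dst_port: int) -> bool:
        return True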


One-week turnaround to open a port would be a dream in most large companies.


That's because IT security reports to the C level, and their KPIs are concerned with security and vulnerabilities, but not the performance or effectiveness of the personnel.

So every time, if there is a choice, security will be prioritized at the cost of personnel performance / effectiveness. And this is how big corporations become less and less effective to the point where the average employee rarely has a productive day.


> Meanwhile, the rest of the company is super annoyed because nothing ever works without three extra rounds with the IT department

This is such an uninformed and ignorant opinion.

1. Permission concepts don't always involve IT. In fact, they can be designed by IT once and then never involve IT again - such is the case in our company.

2. The privacy department sleeps much better knowing that GDPR violations require an extra, careful action, rather than being the default. Management sleeps better knowing that confidential projects need to be explicitly shared, instead of someone forgetting to deny access for everybody first. Compliance sleeps better because of all of the above. And users know that data they create is private until explicitly shared.

3. Good IT security is not invisible. Entering a password is a visible step. Approving MFA requests is a visible step. Granting access to resources is a visible step. Teaching users how to identify spam and phishing is a visible step. Or teaching them about good passwords.


hm I don't think that passwords are an example of good IT security. There are much better options like physical tokens, biometric features, passkeys etc. that are less obtrusive and don't require the users to follow certain learned rules and behaviors.

If the security concept is based on educating and teaching people how to behave, it's prone to fail anyway, as there will always be that one uninformed and ignorant person like me who doesn't get the message. As soon as there is one big gaping hole in the wall, the whole fortress becomes useless (case in point: haveibeenpwned.com). Also, good luck teaching everyone in the company how to identify a personalized phishing message crafted by ChatGPT.

For the other two arguments: I don't see how "But we solved it in my company" and "Some other departments also have safety/security-related primary KPIs" justifies that IT security should be allowed to just air-gap the company if it serves these goals.


> Meanwhile, the rest of the company is super annoyed because nothing ever works

Who even cares if they're annoyed. The IT security gets to sleep at night, but the entire corporation might be operating illegally because they can't file the important compliance report because somebody fiddled with the firewall rules again.

There is so much more to enterprise security than IT security. Sometimes you don't open a port because "it's the right thing to do" as identified by some process. Sometimes you do it because the alternative RIGHT NOW is failing an audit.


> Good IT security is invisible and unobtrusive for your users, like magic

Why is this a standard for "good" IT security but not any other security domain? Would you say good airport security must be invisible and magic? Are you troubled by having to use a keycard or fingerprint to enter secure areas of a building?

Security is always a balance between usability and safety. Expecting the user to be completely unaffected through some magic is unrealistic.


> Would you say good airport security must be invisible and magic?

Very possibly. IMO a lot of the intrusive airport security is security theatre. Things like intelligence do a lot more. Other things we do not notice too, I suspect.

The thing about the intrusive security is that attackers know about it and can plan around it.

> Are you troubled by having to use a keycard or fingerprint to enter secure areas of a building?

No, but they are simple and easy to use, and have rarely stopped me from doing anything I needed to.

> Security is always a balance between usability and safety. Expecting the user to be completely unaffected through some magic is unrealistic.

Agree entirely.


I never quite understood the security theater thing. Isn't the fact that at each airport you will be scanned and possibly frisked a deterrent? You can't measure what didn't occur, so the only way to know if it works is to observe a timeline where it doesn't exist.


For one thing, the rules adopted vary, and different countries do very different things. It struck me once on a flight where at one end liquids were restricted but shoes were not checked, and at the other we had to take our shoes off but there were no restrictions on liquids.

So an attacker who wanted to use a shoe bomb would do it at one end, and one who wanted to use liquids would do it at the other.

There are also some very weird things, like rules against taking things that look vaguely like weapons. An example in the UK is aftershave bottles that were banned - does this look dangerous to you? https://www.fragrancenet.com/fragrances?f=b-spicebomb

Then there are things you can buy from shops after security that are not allowed if you bring them in before (some sharp things). Then things that are minimal threats (has anyone ever managed to hijack a plane with a small pen knife? I would laugh at someone trying to carjack with one).

> know if it works is to observe a timeline where it doesn't exist?

Absolute proof maybe, but precautions need to be common sense and evidence based.


>has anyone ever managed to hijack a plane with small pen knife?

Well, the 9/11 hijackers used box cutters. Might as well be the same thing.


> An example in the UK were aftershave bottles that are banned - does this look dangerous to you? https://www.fragrancenet.com/fragrances?f=b-spicebomb

It's shaped like a grenade, so yes.


A very small grenade, made of glass, and filled with liquid?

It looks like a grenade in the same way a doll looks like a human being.


In full color vision sure, but not to the machines used to scan the insides of bags. You pretty much just get a silhouette.


If you have two security models that provide identical actual security, and one of them is invisible to the user and the other one is outright user-hostile like the TSA, yes of course the invisible one is better.


It is the standard for all security domains - police, army, etc.

I would reword it to say that security should work for the oblivious user, and we should not depend on good user behavior (or fail to defend against malicious or negligent behavior).

I would still say the ideal is for the security interface to prevent problems - like having doors so we don't fall out of cars, or ABS to correct brake inputs.


That's what I did with my firewall: all outbound traffic is default deny; then, as the screaming began, I started opening the necessary ports to designated IPs here and there. Now the screaming is not so frequent. A minor hassle… the tricky one is DNS over HTTPS… that is a whack-a-mole if I ever saw one.
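
The reason DoH turns into whack-a-mole: it rides on port 443 like any other HTTPS traffic, so about the only blunt lever is a blocklist of known resolver endpoints, which is perpetually stale. A rough sketch of that logic (the listed IPs are the well-known public resolvers; any real list would be far longer and still incomplete):

    # Hypothetical DoH-blocking check for an egress firewall.
    KNOWN_DOH_RESOLVERS = {
        "1.1.1.1",  # Cloudflare
        "8.8.8.8",  # Google
        "9.9.9.9",  # Quad9
    }

    def should_block(dst_ip: str, dst_port: int) -> bool:
        if dst_port == 53:
            return True  # plain DNS: force clients through the internal resolver
        # DoH on 443 is indistinguishable from ordinary HTTPS unless the
        # destination happens to be on the (always-incomplete) blocklist.
        return dst_port == 443 and dst_ip in KNOWN_DOH_RESOLVERS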


"If you're able to do your job, security/it/infosec/etc isn't doing theirs." Perhaps necessary at times, but true all too often.


the article is great, but reading some of the anti-security comments is really triggering for me.


good IT security is invisible, allows me to do everything I need, protects us from every threat, costs nothing, and scales to every possible technology the business buys. /s



