Obscurity Is a Valid Security Layer (danielmiessler.com)
141 points by danielrm26 on Oct 24, 2017 | 109 comments



Yes... This is well known and not actually controversial at all. The only people who are against adding an additional layer of security are the ones who don't actually understand the concept; they've only heard "security through obscurity is bad." Those people shouldn't be securing systems.

For example, quieting chatty webservers is a good, well-established security practice (stuff like removing x-powered-by response headers) [1]. It's one of the security policies on the government systems I work on. But... it's security through obscurity. It's also far from the only practice a website uses to keep itself secure.
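
On nginx, for instance, it's a couple of directives (a sketch; where they go and what your stack adds will vary):

    # don't advertise the exact nginx version in the Server header
    server_tokens off;
    # strip the X-Powered-By header a backend app server (e.g. PHP) may add
    proxy_hide_header X-Powered-By;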

I don't know if it's true, but I also heard that the NSA doesn't publish some of their physical addresses and the highway exits are unmarked - that's security through obscurity. Again, that doesn't mean they go ahead and leave the doors unlocked.

Another recommended security practice: don't use usernames like 'root,' 'admin,' etc.

In meatspace there's the advice "don't leave valuables in your car in plain sight." That's uncontroversial, but it's also security through obscurity: covering up your iPad when you leave it in the car doesn't mean you don't lock your door.

But the prerequisite is really, actually understanding security as a concept, including understanding tradeoffs. Without a good understanding you aren't ever going to succeed in securing any systems.

[1] https://www.troyhunt.com/shhh-dont-let-your-response-headers...


This should be uncontroversial. Anything that increases the amount of work needed to carry out a successful attack increases its security. The only concern is whether you end up being overall less secure because of a misplaced trust in the obscurity layer.


But obscuring may take time away from securing, and it adds complexity to the system, while systems with less complexity are easier to secure. So you at least have to be careful.


"But obscuring may take away time from securing"

That's because you're looking at the order entirely wrong - you secure then obscure.


It is also important to consider complexity.

There is no such thing as "Security through unnecessary complexity", only the opposite.

The examples about changing port numbers are great because they are simple configuration changes. When people start wanting to add obscurity "features," they often wander down the path of complexity, inevitably adding vulnerabilities.


> you secure then obscure

pillage THEN burn


Going back to what I said: if you actually, fully understand the concept, you won't misplace your trust in the obscurity layer. If you don't understand the basic concepts, it's game over; you have already failed.

Most attackers just go for the lowest of the low-hanging fruit. Obscuring systems from them is not a bad thing.


> Anything that increases the amount of work needed to carry out a successful attack increases its security.

By this logic, any system which is more obscure is more secure. So for example, a 20-year-old Sun server that runs telnet and has never been patched is more secure than a brand-new server, because you might have to learn SPARC assembly or sniff the traffic/create a telnet parser. If you're trying to argue that added complexity equals security, that makes even less sense.


> Anything that increases the amount of work needed to carry out a successful attack increases its security.

Jesus christ this is a stupid statement.

Of course, if you use a tool in a stupid fashion, you get stupid results. In physical-security terms, security is measured in the amount of time it would take for an attacker to penetrate the defense. This also works in terms of computer security. I'd carefully choose a bit of obscurity which would force an attacker to improvise on the fly, while under time constraints or working against a chance of discovery.

A good analogy would be a moat around a castle. A moat can be nothing more than an empty ditch. Just having an empty ditch surrounding a building would make for a rotten castle. However, having such a ditch just outside the walls interferes with the deployment of siege engines and ladders in exactly the place where one has to worry most about counterattack and so is worthwhile.

So in one sense, you are correct. You don't just put anything up without thinking about cost/benefit. Costs might be in the form of increased attack surface, or increased operating costs. The cost might even be in the form of reduced overall security.

> So for example, a 20-year-old Sun server that runs telnet and has never been patched is more secure than a brand-new server

If looking up the old exploits is easier than finding the zero-days on the new server, then this is less obscure by definition. It's a badly thought out straw man.

Why don't I just put six different proxies in front of my webserver? That's six times the effort at least. Dang that must be secure.

Going back to cost/benefit: if the six proxies wreck your operating costs and latency, then it probably doesn't work out.


When I said "work" I meant it more in the computational sense (although wall-clock time also counts). If port numbers had a 32-bit address space, using a random port for your service instead of the standard one would be a strong security benefit, precisely because of the amount of "work" needed to cross that barrier.
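
Back-of-the-envelope, assuming a scanner that manages ~10k probes/sec against one host:

    echo $((65536 / 10000))          # 16-bit port space: ~6 seconds to sweep
    echo $((2**32 / 10000 / 86400))  # hypothetical 32-bit space: ~5 days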

But no, there's no good way to justify using an outdated system because of the obscurity factor. It's the asymmetry of work that obscurity offers that provides the benefit. If you're making your own life miserable in the process you're doing it wrong.


"Security through obscurity is bad" because most companies that employ it tend to think that's enough.

This is similar to "don't roll your own crypto," which doesn't actually apply to everyone, but it's meant to stop the majority from screwing up in case they decide to do just that.


Agreed. It very very rapidly becomes a crutch. Seen it a zillion times. By forcing transparency you force developers to write good code. You also get more eyes on the problem.

We have to differentiate obscurity from runtime information leaking.


Security by obscurity is like a moat around a castle. A moat makes assaulting castle walls harder. However, no one who could afford something better would propose to make a castle by putting a ditch around a house. (There were some early castles that were like this, but they became obsolete.)


> the highway exits are unmarked

I'm sure that they also have clandestine locations, but:

https://www.google.com/search?tbm=isch&q=nsa+freeway+exit


I said "some of their physical addresses," not "every one of their physical addresses" and I prefaced it with "I don't know if its true but I heard."


I didn't mean to suggest that what you heard was wrong in general -- just the part about the highway exit in particular.


You didn't even suggest that:

OP is claiming some locations are hidden, not their main office.

The main office is a well-marked, giant office complex next to the public National Cryptologic Museum.


You also made note of the highway being unmarked, which is a demonstrable lie. This is a typical NSA redirection tactic. In fact it's one of the first tactics taught (my former USMC and CIA grandfather taught me well).

I think you better come up with a far better rationalization than what you're currently proposing.


It should be uncontroversial but I don't think it is.


Probably because there is no war; probably because camouflage on an M1 tank is somehow more valuable than changing ssh from 22 to 24; probably because this so-called field of computer security is just more pop culture.


I don't think I follow.


The concept seems valid, but the obscurity term should apply to the implementer, not the attacker. i.e. if you as the system designer don't understand how it all works under the covers then it's definitely a problem, but if the attacker doesn't, then it's good practice, because it adds another hurdle to clear.


The NSA exit actually is marked. Going down it if you're not authorized to do so might get you shot.

https://mobile.nytimes.com/2015/03/31/us/nsa-maryland-gate.h...


Going down one, refusing to follow instructions, and speeding towards a police car... might get you shot.


... while driving a stolen car.


Most attacks are just scripts that constantly scan everything looking for services on well-known ports. This sort of attack isn't dangerous if you've got the basics right, so obscurity gives you nothing very useful. I guess it might result in less noise in the logs, which is nice, but it's not 'more secure'.

The far less common but much more dangerous attack is a malicious third party intent on gaining access to your servers specifically. Hiding a service on a different port isn't even going to slow that attacker down - they'll use a port scanner to find every port that's listening. The service is going to be found regardless of whether or not you've changed the port. You could certainly mitigate the problem by modifying the service not to output anything until the user is authenticated, and you can use a port knocking strategy to stop it connecting on the first try, but those aren't really 'obscurity' per se.
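
(For reference, the port-knocking version is usually just a few lines of knockd config; a sketch with an arbitrary knock sequence:)

    [openSSH]
        sequence    = 7000,8000,9000
        seq_timeout = 5
        tcpflags    = syn
        # on a correct knock, open 22 for the knocker's IP
        command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT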

That's not to say you shouldn't do it if you want to; I'm just not sure it actually makes anything more secure.


He gives stats in the article:

"for a single weekend, and received over eighteen thousand (18,000) connections to port 22, and five (5) to port 24."

You're right in that it may not help you, but the numbers seem to indicate it could help you, at least buying some time before you notice you need to patch something. And the reduced log noise makes it easier to confirm that nobody tried the latest/greatest exploit.


But here's the point: do you want people to spend their 10 minutes picking good passwords or setting up public-key auth, or should they spend them switching their server to port 24? Security BY obscurity is bad, as the article states, and unless you have infinite resources, everything is a trade-off.


That's a false choice. No competent sysadmin is going to say "well, I was going to set up a public key but I spent all my time changing the SSH port number, so screw it".

Also, when securing a box with public-key authentication you should be configuring sshd to disable password authentication and disable root login. Editing an extra line in the config file isn't going to throw off your schedule.
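
Concretely, it's two lines in /etc/ssh/sshd_config:

    PasswordAuthentication no
    PermitRootLogin no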


You're talking about 10 minutes for a simple way to filter 18,000 attempts down to 5.

Security, like everything else out there, should be prioritized according to ROI. This is a pretty good ROI...maybe not better than picking good passwords, but definitely better than many practices that IT departments advocate.


It shouldn’t even take 10 seconds to `sed -i 's/^#\?Port 22/Port 24/' /etc/ssh/sshd_config; systemctl restart sshd` to do this.


I don't think I advocated changing the port as higher priority than more important measures.


If you rely on logging as a “high priority” in your security architecture, then it follows that reducing noise is of parallel importance.

Personally, this is why I change SSH ports every time on a public service and add extra firewall rules if possible. If for some reason I want to watch port 22 “attacks”, I can do so.

I’m not even sure I place this in the security OR obscurity categories at this point ... more of a disk hygiene issue.


I don't think it gives you nothing. As an example, say a 0/1-day gets dropped and someone starts compromising systems.

You're trying to get round your estate ensuring patches are in place, but that takes time.

The bots are starting with the low-hanging fruit: systems on default ports.

Not being on a default port helps by buying you more time to react.


But getting the service off a particular port means that a scan across multiple ports is needed to discover it. Such a scan should be detected and mitigated. By moving to an unusual port you are forcing the attacker to engage in something he wouldn't previously have to do, something that does no real harm but that you can use to identify the attacker. That's a win.


Does e.g. fail2ban help with detecting port scans and autoblocking the IP?

Not that this would help against a distributed port scan...


I would imagine that, in a fashion similar to port knocking, if being scanned (regardless of IP), you could blackhole all new connections from non-whitelisted IPs for a period of time.
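
A simpler per-IP variant is easy with iptables' recent module (untested sketch; trap port arbitrary):

    # anyone touching the trap port gets marked as a scanner (and dropped)
    iptables -A INPUT -p tcp --dport 2323 -m recent --name scanners --set -j DROP
    # anything else from a marked IP is dropped for the next 10 minutes
    iptables -A INPUT -m recent --name scanners --rcheck --seconds 600 -j DROP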


> Most attacks are just scripts that constantly scan everything looking for services on well-known ports.

Why do you think port scans represent the majority of attacks? From watching my servers, I don't think that's true. Port scans take a ton of time, and they're easier to detect and block before a single auth attempt. Legitimate services usually have to wait until after a couple auth attempts before blocking.

I can verify the author's experience across multiple services, not just ssh. I'd bet he got 5 attacks on port 24 mostly because it's a 2-digit port number. I've moved my ssh port to a 5-digit number before, and it went from thousands of attempts per day to 0 over many months.


This example is good, but my problem with obscurity, especially in legacy products, is this: complacency.

A product's perceived security != a product's actual security. Obfuscation can lead to complacency, whereas transparency leads to paranoia, which is no bad thing in this domain. By adding an obfuscation layer, we give bad code a place to hide.


So to me the answer there might be to address the complacency, which is the real problem, and not to remove obscurity...

The idea of revealing all to improve paranoia rather sounds like the idea of attaching a sharp spike to your steering wheel to encourage safe driving :P


That idea might not actually be all that far-fetched. IIRC there have been a couple studies suggesting that some safety features on roads (e.g. safety rails, lights, etc.) might actually cause an increase in the number of car crashes because drivers become complacent and less paranoid about accidentally driving off a cliff.


It can also make maintenance more difficult, especially if there is a team with turnover involved in the maintenance.


The rule of avoiding "security through obscurity" is not 1) "you should let a potential attacker known everything about your system", but 2) "your system must be designed so that even if an attacker knows everything about it (except the keys/passwords/other secrets), still they cannot gain access". Ordinarily people should be aiming at point 2. Since occasionally it can happen that a system is found vulnerable, obscurity layers can, as others have noted, buy some time. This can be enough to restore point 2 before it is too late, so in this scenario obscurity plays a useful role.

In other words, you should always assume that "given enough time, a determined attacker can learn anything about your system."


I have always heard "assume the attacker knows your system better than you".

If you are relying on the attacker not knowing how some mechanism works, you are assuming that nobody is particularly familiar with it. You can use this as a heuristic to determine what parts of your system to focus on protecting, but the effectiveness of this method is entirely dependent on how well you know the system.


As I'm sure someone else on this thread has observed, this is a silly example, because the SSH example forgets the denominator, which would show that even with 18,000 attack requests, the probability of a compromise on a properly configured system is nonexistent --- and if your system isn't configured properly, SSH becomes an example of obscurity layered on "instead of" proper security.


Do you think there's any benefit in reduced log noise making a serious attacker more obvious to SOC analysts?

I.e. if I run SSH on 24956/TCP and start seeing attacks, it's a fair bet this is targeted (someone has taken the time to scan all 65K ports - not common for untargeted attackers), so it's a stronger signal for the blue team to look at that activity more closely than the noise on 22/TCP.


It’s worth noting that running sshd above port 1024 on most systems adds the risk that non-root users can bind their own process to its port if they can crash it or wait for it to crash, and if you break into the ephemeral range, you’re risking non-malicious conflicts as well.


Run sshd on port 22, use pf to redirect a high port down.
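
Something like this in pf.conf (post-4.7 syntax; high port arbitrary):

    # clients connect to 24956 externally; sshd keeps its root-only bind on 22
    pass in on egress inet proto tcp to port 24956 rdr-to 127.0.0.1 port 22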


A malicious local user is a slightly different threat model, though, with a number of other possible controls.


There are a number of controls available for pretty much every threat model, so I’m not sure what you’re claiming about my point that using a non-privileged port adds risk to the system that would need to be accepted or dealt with.


So to elaborate. Many Internet-facing systems are application servers (e.g. web servers). They typically have very few local users; administrative/Ops staff are the primary users.

At that point an attack requiring the ability to execute arbitrary code on the host as a local user is possibly less relevant, as an attacker in that position likely has a number of other options to further their goals.

The reason I made the comment about alternate controls is that the original discussion and point I was making revolved around Internet-focused attackers, rather than local attackers, so it's not too surprising that I didn't try to cover that case :) No sinister intent, honest!

Heck, if we want to, let's theorize that I can just use some form of firewall to forward the externally presented high port to 22/TCP internally, getting the best of both worlds: a less visible external service and an internal port that requires root to bind.
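
On Linux that's a one-liner (sketch; high port arbitrary):

    # externally visible 24956/TCP lands on the root-bound sshd at 22
    iptables -t nat -A PREROUTING -p tcp --dport 24956 -j REDIRECT --to-ports 22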


Gotcha. All valid points, and I’m a big fan of firewall-based port rerouting like you describe.

I agree that an attacker who gets code exec on an app server is in a pretty fun spot already, and has a lot of different paths to escalate/persist/etc that don’t involve misuse of your ssh daemon’s port.


Really good obscurity is hard to come by.

Using a Symbolics machine, a Commodore 64, or an Amiga 500 is probably safe from most automated attacks and tools. If someone breaks in, they earned it.


I hear you on this, but how do you feel about programs such as fail2ban? Even if breakable via 18K requests, if the ssh host has fail2ban installed, the attacker will never get off those 18K requests (unless, and this is a big unless, he/she controls 100s of unique IP addresses).


I think fail2ban is pointless.


Coming from you, of all people, could you elaborate a bit or point us to some literature?


I'm not him, but this isn't a surprise to me at least from the don't-roll-your-own-crypto guy.

It probably has something to do with this:

> Fail2Ban is able to reduce the rate of incorrect authentications attempts however it cannot eliminate the risk that weak authentication presents


I am also not him.

Fail2ban is ineffective against distributed brute force -- if someone has a 100k strong botnet, they can try 100k user/pass guesses, and fail2ban won't lift a finger.

But stepping back a bit, in the case where you're using keys for auth and have passwords disabled, fail2ban adds nothing.


True, but it's a simple tweak to fail2ban to treat failed login attempts (irrespective of IP address) similarly to how it treats failed logins linked to a particular IP address. Of course, the exponential backoff would have to be much slower to avoid making the server very susceptible to DDoS attacks.

All that being said, yes, ssh keys are the gold standard, but I don't find them to be very portable.


Using anything but SSH keys is engineering malpractice.


Perhaps, but have you ever tried to get non-technical or semi-technical staff to master ssh key generation and use?


Allowing non-technical or semi-technical staff ssh access to computers is engineering malpractice.


Engineering is engineering malpractice too.


Not the parent commenter, but could you explain why? At my work we're setting up a large deployment and I'm planning on configuring fail2ban on all our instances. Is there any downside to doing so that I'm missing?


There’s no downside that I know of, but if you’re using keys there’s no upside, and if you’re not exclusively using keys there’s a huge downside to that.


The major downside of fail2ban is actually that it punishes you for using keys. If you have different keys for different machines and haven't configured your SSH client to pair them up, you might attempt to log in several times with the wrong key before getting in. You won't even notice this normally, but fail2ban will trigger and ban you from the machine.

https://github.com/fail2ban/fail2ban/issues/1263
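
(The client-side fix is to pin keys per host in ~/.ssh/config; host and key names here are hypothetical:)

    Host server.example.com
        IdentityFile ~/.ssh/id_server
        IdentitiesOnly yes    # offer only this key, not every key in the agent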


See, I didn't know this, because I would never consider setting it up on any machine I run, because what would be the point?

fail2ban is rubber chicken security.


Huh, well never knew about this. Thanks!


Out of curiosity can you elaborate?


If you like, you can put the laziness of attackers in your threat model.

Most attackers are just systems that are scanning parts of the internet for the low-hanging fruit. They want easy targets, they don't want to spend time on your systems, and they like using their usual tools that work for everyone else. They aren't going to put in the effort to work out your slightly different hashing method and make a GPU-based cracker for it. They aren't going to employ a giant network to bypass fail2ban. They aren't looking for nonstandard ports. Etc., etc.

Yes, you can hypothetically have an attacker that works around all your obfuscation, but it simply requires much more effort. By employing these kinds of techniques, you beat the script kiddies and the automated systems, which in my experience is 99% of attackers.


> They aren't going to put in the effort to work out your slightly different hashing method and make a GPU-based cracker for it.

If you have a “slightly different hashing method” – especially one for which a GPU-based cracker would be useful – you’re doing it wrong. Argon2, scrypt, bcrypt: all much more valuable.


"which in my experience is 99% of attackers."

And that's how I know you're not trustworthy in security. I design scripts to look like humans, and you're none the wiser because you think it's not possible.

Good job securing ANY of your systems against me. I've been at this for over 30 years.


Yes, I understand an experienced pentester will have a different approach. YOU have been at it for 30 years; you're not bulk-scanning the internet on port 22, and you're not a script kiddie trying out hydra for the first time.

> Good job securing ANY of your systems against me.

You completely missed the point of my post. To quote another post:

> In meatspace there's the advice "don't leave valuables in your car in plain sight." That's uncontroversial, but it's also security through obscurity: covering up your iPad when you leave it in the car doesn't mean you don't lock your door.


It's a valid additional security layer. If it's not displacing other things you should be doing, it probably adds value.

His example of moving ssh to a non-default port is compelling.


Isn’t every security layer a valid “additional” security layer?


Sure. You just have to balance it against convenience, as well as consider effort vs value.

Changing the SSH port is fine, and having to remember/teach people that it's a non-standard port is pretty easy. Although it is security-by-obscurity, it's decent value because you're significantly less likely to get dictionary attacks. Fairly low value, but also very low effort.

Restricting SSH connections to specific source IPs further reduces your risk, but adds inconvenience: you have to be in a specific place or use a VPN first, or remember to add new IPs for new people that need to connect. If you have only your office white-listed, and something happens to your office, now what?

Using port knocking can provide even more security-by-obscurity, but it is much more inconvenient to connect to, harder to train new people on, etc. I've not used this myself, but I'd also be worried about the possibility of it not working.

When you consider these in the context of effort vs value, I'm not sure they're really there. They definitely add some security value, but it's a pretty tiny amount compared to something like using key-based authentication. Arguably both are a bit more secure than just changing the SSH port, but that comes at a significantly higher effort.


Right, but what GP meant to say was that obscurity is not sufficient as the ONLY layer of security. There might be some techniques that are perfectly sufficient on their own (for example, encrypting all the user data with a password and storing a hash+salt version of the password could be argued to be sufficient), but obscurity is not.


I meant "additional" more like "optional". Things you would do after doing the bare minimum. It shouldn't displace things that are of higher importance. Running a currently patched sshd, disabling password auth, etc, would be higher priority than running on a non-standard port.


it's only compelling until people do it regularly.


But is it worth the extra effort? Something I've noticed many people ignore is the cost these kinds of "security" upgrades impose on standardization. When training a junior/new/contract IT person, all these little gotchas need to be mentioned and will slow down workflows regardless. Pile on enough of these tweaks and the infrastructure starts to become less manageable.


I like to say that obscurity should not be used _for_ security but in _addition_ to security.

For example, running ssh on a non-default port. It's obscurity. But it should still have correct key strengths and all the settings as if it were running on the default port. It shouldn't be weakened somehow because it is running on that port.

So why run it on a non-default port, then? Perhaps to get less log noise. It doesn't add to security, but it makes parsing the logs easier, because there's less stuff to search through.


No, it is not, because obscurity usually assumes human limitations on information gathering and searching. A layer that would take a human a lifetime to search through is non-existent for a proper machine search. The hidden folder in a sea of a thousand folders is not hidden from a machine.

Obscurity was a valid layer while we did not have machines to eliminate it. Now it's gone, and what remains is a lingering illusion created by our own limitations.


There is still the fact that a number of automated vulnerability scanners check for common/default configurations. By not conforming to these patterns, at the very least you are less likely to be subject to bots just trawling for systems that are easy to compromise.


I think the analogies tend to confuse the difference between obscurity on one hand, and randomness in the algorithm on the other.

With cryptography, by design, there will always be hidden "obscure" secrets that can be used to break into the system: passwords, private keys, etc. The useful mathematical insight of cryptography is to isolate the "obscurity" into these secret bits and to pick them randomly with high entropy, while not necessarily assuming the rest of the algorithm is hidden.

The physical examples of decoy vehicles or randomizing one's route are examples of cryptographic protocols, not security via obscurity. You can tell because the algorithm is public but there are some randomly-chosen bits that are secret.

I'm not disagreeing with the core concept, but I don't see that the ssh example is very convincing either -- it seems to also illustrate the danger of false confidence when using security by obscurity....


Obscurity buys you time, and that's it.


Exactly, and only in the case where you are a random victim of a larger hack, not if you (or your equipment) are a marked target.


I think security through obscurity can be a massive deterrent for all but the most dedicated attackers.

For example, say that I not only move ssh to port 24, but it's also completely disabled by default. Then I have a small script scanning icmp logs looking for a ping of a particular size, and if it gets one, it enables the ssh server for 30 seconds. If no one opens an ssh connection in that window, it re-disables.

How would anyone besides an insider even figure out how to enable your ssh port let alone try to break in? Sure, if this method became widespread the script kiddies would adapt accordingly and it would no longer be as effective, but staying one step ahead of the kiddies is pretty easy.
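
A rough sketch of that trigger (untested; the magic payload size and port are arbitrary):

    #!/bin/sh
    # block until one ICMP echo arrives with total IP length 694
    # (20 IP header + 8 ICMP header + 666 bytes of magic payload)
    tcpdump -l -n -c 1 'icmp[icmptype] = icmp-echo and ip[2:2] = 694' >/dev/null
    systemctl start sshd
    sleep 30
    # re-disable unless someone established a session on port 24 in the window
    ss -Htn state established '( sport = :24 )' | grep -q . || systemctl stop sshd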



What if you used port knocking, but instead of opening a closed port on a correct knock sequence, you switch the service listening to the target port from honeypot mode to normal mode? Anyone connecting and presenting genuine authorization credentials during that window gets genuine access, while everyone else gets routed to the honeypot.

The 40th basement door from the left opens to a storage closet until someone says, "swordfish" at the 30th door from the left, then 15 seconds later, it opens to a vestibule with an imposing, riveted-iron door for the next 60 seconds. That locked door requires a genuine invitation to admit you to the speakeasy.

If you didn't know the speakeasy was there, you might not bother trying to dig through the back wall of the closet with a pickaxe. If you watched someone else go in, and copied their actions, you still don't have the invitation. Any noise you make banging around trying to fool the automated bouncer is much more noticeable when all the casual traffic and robot-driven attackers are mostly just stealing boxes of detergent out of the decoy closet.


Very valid points.

Reminds me of the oft-repeated phrase "Goto considered harmful!", regardless of its valid use-cases or of the context in which that original paper was published. I mean, jeez, even the Linux kernel uses goto on occasion for error cleanup.


"GOTO considered harmful" is talking about spaghetti code with no higher-order control structures, not about goto as a language feature. There are appropriate cases for the use of the latter.

Lots of quotes get abused like this. Another favorite of mine is "premature optimization is the root of all evil." Lots of people take this as "never think about performance" or "performance doesn't matter" when its true meaning is "don't let premature concern for performance blind you to other concerns or short-circuit your creativity."


I think people forget GOTO was used instead of functions, if/loop blocks, etc.


I would say it is not only valid but a very interesting method to deal with 0-day exploits and automatic scanners.

I have a number of services running at home, all outside the standard ports - sip is on 5099 (the remote gateway is on 5088), SSH on 5225, etc. - and the difference in the number of attempts to log into my box (and make international calls...) is huge. Actually, I have not had a single attempt to put a call through my asterisk box since I moved the ports outside the default range.
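
(Both are one-line changes - assuming chan_sip for the asterisk side:)

    # sip.conf, [general] section
    bindport=5099

    # sshd_config
    Port 5225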

Of course, it's not the only security measure, but I'd argue it can be as important and as effective as any other.


Obscurity is a "security layer" in the same way as camouflage - that is to say, it doesn't improve security, it just "hides" the thing that you were actually supposed to secure. It can easily hurt security, too, as often people depend on obscurity as if it were a real security measure, and are defeated by a tiny amount of effort on the part of an attacker. You're an idiot if you rely on obscurity.


This is one of those nuanced things that can't be generally applied to everything. Operating SSH on a port other than 22 can/may protect you from random bots/scripts, but it won't protect you from a determined attacker. In the real world, misdirection like operating services on a non-standard port does not go that far.


> So, given this highly effective armor, would the danger to the tank somehow increase if it were to be painted the same color as its surroundings?

If there were a crowd of script kiddies rapping on the armour of every tank they could see, then yes, making your tank less visible would endanger it. The internet is different from the battlefield.

> Is anyone willing to argue that someone unleashing such an attack would be equally likely to launch it against non-standard port vs. port 22? If not, then your risk goes down by not being there, it’s that simple.

Yes, I'm willing to argue that. It sounds like you were being attacked by 17,995 dumb bots and 5 somewhat less dumb bots and/or genuinely sophisticated attackers. The former aren't going to pick up the zero-day.

> at some point of diminishing return for impact reduction it is likely to become a good idea to reduce likelihood as well.

Disagree. Obscurity-based methods have such a poor cost/benefit that they're likely to never be a good choice.


> If there were a crowd of script kiddies rapping on the armour of every tank they could see, then yes, making your tank less visible would endanger it.

I don't follow. If your tank is less visible, it gets seen (and thus interacted with) less on average, regardless of how many people are looking for tanks.


It gets interacted with less by the less sophisticated attackers. But you want those attackers to be targeting you, because they'll find holes and use them for relatively harmless things. Whereas if your only attackers are the sophisticated ones, the holes in your security will be used only for serious attacks.


eh?

Running a service on an alternate port is generally extremely easy to do and has several benefits:

1) It makes it easy to pick out the serious attackers. If you run SSH on 34985/TCP, for example, and start getting password brute force, you've got an idea it's a targeted attack, whereas on 22/TCP you get hammered by dumb bots all the time.

2) If someone is slamming around as fast as possible popping boxes with a 0-day, they'll likely only bother with default ports (e.g. SMB worms: they compromised a lot of systems, but only on default ports).


>If there were a crowd of script kiddies rapping on the armour of every tank they could see, then yes, making your tank less visible would endanger it.

How do you figure? You're not ignoring it by changing the colour and saying that's well enough; you're making it so you can focus more clearly on the ones that do knock on it despite the colour change.


Well, duh. No matter how good your lock is, hiding the keyhole itself will improve security.


However, the effort of doing that is pointless if there's a nearby window that is left open and/or can be broken.


There are so many problems with this piece, I hardly know where to begin.

Kerckhoffs's principle states that a system is secure if and only if the security architecture (as in, not the keys) is publicly available and non-key-holding attackers are literally unable to successfully attack the system in spite of their knowledge of the security architecture.

Battlefield examples are horrible counter-examples. To take an extreme example, if I drop a nuke on an enemy soldier, he's going to die. If I drop a nuke on a tank, it's going to vaporize. There is literally no amount of armor in the world that can create an unattackable battlefield-security architecture, which is the whole reason why militaries rely on camouflage. The use of camouflage is a tacit admission that "yes, in the real world, something could successfully attack us, so we need to rely on other measures."

Modern security engineers don't mindlessly spend time and money to "improve their security posture" without an appreciation for the consequences thereof. They understand that the A in CIA stands for Availability, and that not using the default ports hurts legitimate users expecting the default and confused by the lack thereof far more than it foils attackers. They understand that security engineering is about raising the cost of mounting an attack to be more expensive than the value of the target, and worry about the cost of new security measures versus the benefit of those new security measures (because it now costs $X > $old to successfully attack the target) versus the expected resources of an attacker (by running detailed risk and threat analyses to identify potential adversaries and estimating their capabilities). If it costs $X to attack a target which is worth $Y < $X and your adversaries only have $Z < $Y < $X to attack then spending $any to further "improve your security posture" is not just irrational and indefensible but ultimately destructive to the target itself which you are supposed to be protecting, because those resources could be spent more productively elsewhere to the benefit of the target.

Which brings me to the presidential convoy example. Which vehicle the president is in is not a secret key in the president's security architecture, because knowing which car the president is in does not easily and magically give you access to the president. The point of having the additional obfuscation of additional vehicles is about raising the cost of a successful attack. Let's say the attacker's "nuke" is a shoulder-mounted anti-tank missile which will successfully destroy the target. If there's only one vehicle, then the attacker only needs one missile. But if the convoy has three vehicles, then a successful attack will cost more than three times as much - not just the cost of the additional missiles, but also the cost of finding additional trustworthy people to carry the additional missiles and carry out the attack, plus the cost of training and coordinating the attackers to work in concert and successfully carry out the attack, plus the additional risk of the plans accidentally leaking due to additional people being involved in the planning and execution of the attack.

Changing from port 22 to port 24 does absolutely nothing to raise the cost for anyone but the opportunistic script-kiddie hacker who is paying virtually $0 to add your public IPs to a list of targets. Dedicated internal threats will be aware of the port change, and dedicated external threats will become aware of the change when they swipe an unencrypted employee laptop or phish a common password, and you will not be able to change the ports on all your servers from 24 to something else without inflicting massive pain on every legitimate user whose machines are configured to expect 24 but suddenly won't successfully connect anymore.


Right, you shouldn't use it as your only security, but it's fine to use in conjunction with other things.


Now them's fighting words.


So is client-side validation. Anything qualifies as a "valid security layer" as long as it prevents your grandma from attacking your system. The layer that protects against the most motivated attacker is the one usually known as "security".


I have always been amused that folks who say "security through obscurity is stupid" are never willing to give me their passwords.

It's all about threat prioritization and defense in depth.


That’s not what “security through obscurity” means. It specifically refers to security based on the secrecy of the system’s design details. Like many terms of art, the meaning is not exactly the literal meaning of the words.


That's not what it means. No one disputes you should keep passwords, private keys, certificates, etc. safe.

It's about obscuring the architecture of a system in order to protect it. And I agree this shouldn't be a tactic. It can be a byproduct of your disclosure strategy (i.e. AWS don't disclose how their products are built) but not a security mechanism (i.e. AWS don't meet all the certification standards BECAUSE they're not disclosing how their products are built). Just my 2 cents.



