
Obscurity Is a Valid Security Layer - danielrm26
https://danielmiessler.com/study/security-by-obscurity/
======
astura
Yes... This is well known and not actually controversial at all. The only
people who are against adding an _additional_ layer of security are the ones
who don't actually understand the concept; they've only heard "security through
obscurity is bad." Those people shouldn't be securing systems.

For example, shutting up chatty webservers is a good and well-established
security practice (stuff like removing x-powered-by response headers)[1]. This
is one of the security policies of the government systems I work on. It's
security through obscurity, but it's _far from_ the only practice a website
uses to keep itself secure.
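
For instance, quieting those headers in nginx might look like the following (a
configuration sketch; the upstream address is a placeholder, and the directives
shown are stock nginx):

```nginx
http {
    server_tokens off;    # hide the nginx version in the Server header and error pages

    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:8080;   # hypothetical app server
            proxy_hide_header X-Powered-By;     # strip chatty upstream headers
            proxy_hide_header X-AspNet-Version;
        }
    }
}
```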

I don't know if it's true, but I also heard that the NSA doesn't publish some
of their physical addresses and the highway exits are unmarked - that's
security through obscurity. Again, that doesn't mean they go ahead and leave
the doors unlocked.

Another recommended security practice: don't use usernames like 'root,'
'admin,' etc.

In meatspace there's the advice of "don't leave valuables in your car in plain
sight." That's uncontroversial, but it's also security through obscurity;
covering up your iPad when you leave it in the car doesn't mean you don't lock
your door.

But, the prerequisite is _really, actually understanding security, as a
concept,_ including understanding tradeoffs. Without a good understanding you
aren't _ever_ going to succeed in securing any systems.

[1] [https://www.troyhunt.com/shhh-dont-let-your-response-headers/](https://www.troyhunt.com/shhh-dont-let-your-response-headers/)

~~~
hackinthebochs
This should be uncontroversial. Anything that increases the amount of work
needed to carry out a successful attack increases its security. The only
concern is whether you end up being overall less secure because of a misplaced
trust in the obscurity layer.

~~~
peterwwillis
> Anything that increases the amount of work needed to carry out a successful
> attack increases its security.

By this logic, any system which is more obscure is more secure. So for
example, a 20 year old Sun server running telnet and has never been patched is
more secure than a brand new server, because you might have to learn SPARC
assembly or sniff the traffic/create a telnet parser. If you're trying to
argue that added complexity equals security, that makes even less sense.

~~~
stcredzero
_> Anything that increases the amount of work needed to carry out a successful
attack increases its security._

 _Jesus christ this is a stupid statement._

Of course, if you use a tool in a stupid fashion, you get stupid results. In
physical security terms, security is measured in the amount of time it would
take for an attacker to penetrate the defense. This also works in terms of
computer security. I'd carefully choose a bit of obscurity which would force
an attacker to improvise on the fly, while under time constraint or working
against a chance of discovery.

A good analogy would be a moat around a castle. A moat can be nothing more
than an empty ditch. Just having an empty ditch surrounding a building would
make for a rotten castle. However, having such a ditch just outside the walls
interferes with the deployment of siege engines and ladders in exactly the
place where one has to worry most about counterattack and so is worthwhile.

So in one sense, you are correct. You don't just put anything up without
thinking about cost/benefit. Costs might be in the form of increased attack
surface, or increased operating costs. The cost might even be in the form of
reduced overall security.

 _So for example, a 20 year old Sun server running telnet and has never been
patched is more secure than a brand new server_

If looking up the old exploits is easier than finding the zero-days on the new
server, then this is less obscure by definition. It's a badly thought out
straw man.

 _Why don't I just put six different proxies in front of my webserver? That's
six times the effort at least. Dang that must be secure._

Going back to cost/benefit, if the 6 proxies spoil your operating costs and
latency, then it probably doesn't work out.

------
onion2k
Most attacks are just scripts that constantly scan _everything_ looking for
services on well known ports. This sort of attack isn't dangerous if you've
got the basics right, so obscurity gives you nothing very useful. I guess it
might result in less noise in the logs which is nice but it's not 'more
secure'.

The far less common but much more dangerous attack is a malicious third party
intent on gaining access to your servers specifically. Hiding a service on a
different port isn't even going to slow that attacker down - they'll use a
port scanner to find every port that's listening. The service is going to be
found regardless of whether or not you've changed the port. You could
certainly mitigate the problem by modifying the service not to output anything
until the user is authenticated, and you can use a port knocking strategy to
stop it connecting on the first try, but those aren't really 'obscurity' per
se.

That's not to say you shouldn't do it if you want to; I'm just not sure it
actually makes anything more secure.
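
A sketch of the port-knocking idea mentioned above, using iptables' `recent`
module (a firewall-configuration sketch to be run as root; the knock port and
the 30-second window are arbitrary choices):

```sh
# A SYN to "knock" port 1234 registers the source IP in a list named KNOCK
# (and is otherwise dropped). Only IPs that knocked within the last 30
# seconds may then reach sshd on 22/TCP; everyone else is dropped.
iptables -A INPUT -p tcp --dport 1234 -m recent --name KNOCK --set -j DROP
iptables -A INPUT -p tcp --dport 22 -m recent --name KNOCK --rcheck --seconds 30 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```

As the comment notes, this is closer to a weak shared secret than pure
obscurity: the knock acts like a password sent in the clear.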

~~~
tyingq
He gives stats in the article:

 _" for a single weekend, and received over eighteen thousand (18,000)
connections to port 22, and five (5) to port 24."_

You're right that it may not help in every case, but the numbers seem to
indicate it could, at least buying some time before you notice you need to
patch something. And the reduced log noise makes it easier to confirm that
nobody tried the latest-and-greatest exploit.

~~~
HurrdurrHodor
But here's the point: do you want people to spend their 10 minutes picking
good passwords or setting up public key auth, or should they spend them
switching their server to port 24? Security BY obscurity is bad, as the article
states, and unless you have infinite resources everything is a trade-off.

~~~
saosebastiao
You're talking about 10 minutes for a simple way to filter 18,000 attempts
down to 5.

Security, like everything else out there, should be prioritized according to
ROI. This is a pretty good ROI...maybe not better than picking good passwords,
but definitely better than _many_ practices that IT departments advocate.

~~~
GcVmvNhBsU
It shouldn’t even take 10 seconds to run `sed -i 's/^#\?Port 22$/Port 24/'
/etc/ssh/sshd_config; systemctl restart sshd` to do this.

------
sambaynham
This example is good, but my problem with obscurity, especially in legacy
products, is this: complacency.

A product's perceived security != a product's _actual_ security. Obfuscation
can lead to complacency, whereas transparency leads to paranoia, which is no
bad thing in this domain. By adding an obfuscation layer, we give bad code a
place to hide.

~~~
raesene6
So to me the answer there might be to address the complacency, which is the
real problem, rather than remove the obscurity...

The idea of revealing all to improve paranoia rather sounds like the idea of
attaching a sharp spike to your steering wheel to encourage safe driving :P

~~~
yellowapple
That idea might not actually be all that far-fetched. IIRC there have been a
couple studies suggesting that some safety features on roads (e.g. safety
rails, lights, etc.) might actually cause an _increase_ in the number of car
crashes because drivers become complacent and less paranoid about accidentally
driving off a cliff.

------
giomasce
The rule of avoiding "security through obscurity" is not 1) "you should let a
potential attacker known everything about your system", but 2) "your system
must be designed so that even if an attacker knows everything about it (except
the keys/passwords/other secrets), still they cannot gain access". Ordinarily
people should be aiming at point 2. Since occasionally it can happen that a
system is found vulnerable, obscurity layers can, as others have noted, buy
some time. This can be enough to restore point 2 before it is too late, so in
this scenario obscurity plays a useful role.

In other words, you should always assume that "given enough time, a determined
attacker can learn anything about your system".

~~~
colemannugent
I have always heard "assume the attacker knows your system better than you".

If you are relying on the attacker not knowing how some mechanism works, you
are assuming that nobody is particularly familiar with it. You can use this as
a heuristic to determine what parts of your system to focus on protecting, but
the effectiveness of this method is entirely dependent on how well _you_ know
the system.

------
tptacek
As I'm sure someone else on this thread has observed, this is a silly example,
because the SSH example forgets the denominator, which would show that even
with 18,000 attack requests, the probability of a compromise on a properly
configured system is nonexistent --- and if your system isn't configured
properly, SSH becomes an example of obscurity layered on "instead of" proper
security.

~~~
raesene6
Do you think there's any benefit in reduced log noise making a serious
attacker more obvious to SOC analysts?

I.e. if I run SSH on 24956/TCP and start seeing attacks, it's a fair bet this
is targeted (someone has taken the time to do 65K port scans, not common for
untargeted attackers), so it's a stronger signal for the blue team to look at
that activity more closely than the noise on 22/TCP.

~~~
akerl_
It’s worth noting that running sshd above port 1024 on most systems adds the
risk that non-root users can bind their own process to its port if they can
crash it or wait for it to crash, and if you break into the ephemeral range,
you’re risking non-malicious conflicts as well.

~~~
raesene6
malicious local user is a slightly different threat model though with a number
of other possible controls.

~~~
akerl_
There are a number of controls available for pretty much every threat model,
so I’m not sure what you’re claiming about my point that using a
non-privileged port adds risk to the system that would need to be accepted or
dealt with.

~~~
raesene6
So to elaborate. Many Internet facing systems are application servers (e.g.
web servers). They typically have very few local users, administrative/Ops
staff are the primary users.

At that point an attack requiring the ability to execute arbitrary code on the
host as a local user is possibly less relevant as, if an attacker is in that
position, they likely have a number of other options to further their goals.

The reason I made the comment about alternate controls, is that the original
discussion and point I was making revolved around Internet focused attackers,
rather than local attackers, so it's not too surprising that I didn't try to
cover that case :) No sinister intent, honest!

Heck, if we want to theorize further: I can just use some form of firewall to
port-forward the high port that's presented externally to 22/TCP internally,
and get the best of both worlds - both a less visible external service and an
internal port that requires root to bind.
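
The firewall trick described above can be a single nat-table rule (a sketch to
be run as root; the interface name and external port are assumptions):

```sh
# Externally, ssh appears on the obscure port 24956; internally sshd still
# binds the privileged port 22, so no unprivileged local user can hijack it.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 24956 -j REDIRECT --to-ports 22
```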

~~~
akerl_
Gotcha. All valid points, and I’m a big fan of firewall-based port rerouting
like you describe.

I agree that an attacker who gets code exec on an app server is in a pretty
fun spot already, and has a lot of different paths to escalate/persist/etc
that don’t involve misuse of your ssh daemon's port.

------
flipp3r
If you like, you can put the laziness of attackers in your threat model.

Most attackers are just systems that are scanning parts of the internet for
the low-hanging fruit. They want easy targets, they don't want to spend time
on your systems, and they like using their usual tools that work for everyone
else. They aren't going to put in the effort to work out your slightly
different hashing method and make a GPU-based cracker for it. They aren't
going to employ a giant network to bypass fail2ban. They aren't looking for
nonstandard ports. Etc, etc.

Yes, you can hypothetically have an attacker that works around all your
obfuscation, but it simply requires much more effort. By employing these kinds
of techniques, you beat the script kiddies and the automated systems, which in
my experience is 99% of attackers.

~~~
lightedman
"which in my experience is 99% of attackers."

And that's how I know you're not trustworthy in security. I design scripts to
look like humans, and you're none the wiser because you think it's not
possible.

Good job securing ANY of your systems against me. I've been at this for over
30 years.

~~~
flipp3r
Yes, I understand an experienced pentester will have a different approach. YOU
have been at it for 30 years; you're not bulk-scanning the internet on port 22,
and you're not a script kiddie trying out hydra for the first time.

> Good job securing ANY of your systems against me.

You completely missed the point of my post. To quote another post;

> In meatspace there's the advice of "don't leave valuables in your car in
> plain sight." That's uncontroversial, but it's also security through
> obscurity; covering up your iPad when you leave it in the car doesn't mean
> you don't lock your door.

------
tyingq
It's a valid _additional_ security layer. If it's not displacing other things
you should be doing, it probably adds value.

His example of moving ssh to a non-default port is compelling.

~~~
pgwhalen
Isn’t every security layer a valid “additional” security layer?

~~~
gregmac
Sure. You just have to balance it against convenience, as well as consider
effort vs value.

Changing the SSH port is fine, and having to remember/teach people that it's a
non-standard port is pretty easy. Although it is security-by-obscurity, it's
decent value because you're significantly less likely to get dictionary
attacks. Fairly low value in absolute terms, but also very low effort.

Restricting SSH connections to specific source IPs further reduces your risk,
but adds inconvenience: you have to be in a specific place or use a VPN first,
or remember to add new IPs for new people that need to connect. If you have
only your office white-listed, and something happens to your office, now what?

Using port knocking can provide even more security-by-obscurity, but is much
more inconvenient to connect to, harder to train new people on, etc. I've not
used this myself, but I'd also be worried about the possibility of it not
working.

When you consider these in the context of effort vs value, I'm not sure
they're really there. They definitely add _some_ security value, but it's a
pretty tiny amount compared to something like using key-based authentication.
Arguably both are a bit more secure than just changing the SSH port, but that
comes at a significantly higher effort.

------
rdtsc
I like to say that obscurity should not be used _for_ security but in
_addition_ to security.

For example, running ssh on a non-default port. It's obscurity, but it should
still have correct key strengths and all the settings as if it were running on
the default port. It shouldn't be weakened somehow because it is running on
that port.

So why run it on a non-default port, then? Perhaps to get less log noise. So
it doesn't add to security but it makes parsing the logs easier, because it's
less stuff to search through.
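
Concretely, the "obscurity in addition to security" idea is one line of
sshd_config on top of the usual hardening (a sketch; the port number is
arbitrary):

```
# /etc/ssh/sshd_config (excerpt)
Port 2222                    # the obscurity layer: a non-default port
PasswordAuthentication no    # the actual security: key-based auth only
PermitRootLogin no
MaxAuthTries 3
```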

------
Pica_soO
No, it is not, because obscurity usually assumes human limitations on
information gathering and searching. A layer that would take a human a
lifetime to search through is non-existent for a proper machine search. The
hidden folder in a sea of a thousand folders is not hidden from a machine.

Obscurity was a valid layer while we did not have machines to eliminate it.
Now it's gone, and what remains is a lingering illusion, created by our own
limitations.

~~~
michaelmior
There is still the fact that a number of automated vulnerability scanners
check for common/default configurations. By not conforming to these patterns,
at the very least you are less likely to be subject to bots just trawling for
systems that are easy to compromise.

------
bo1024
I think the analogies tend to confuse the difference between obscurity on one
hand, and randomness in the algorithm on the other.

With cryptography, by design, there will always be hidden "obscure" secrets
that can be used to break into the system: passwords, private keys, etc. The
useful mathematical insight of cryptography is to isolate the "obscurity" into
these secret bits and to pick them randomly with high entropy, while not
necessarily assuming the rest of the algorithm is hidden.

The physical examples of decoy vehicles or randomizing one's route are
examples of cryptographic protocols, not security via obscurity. You can tell
because the algorithm is public but there are some randomly-chosen bits that
are secret.
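
The distinction can be seen in miniature with an HMAC: the algorithm (SHA-256
inside the standard HMAC construction) is entirely public, and all of the
secrecy is isolated into a high-entropy key. A sketch using the stock
`openssl` CLI (the message and key here are placeholders):

```shell
# Public algorithm, secret key: anyone can run this, but without the key
# nobody can reproduce the tag for a given message.
msg="attack at dawn"
key="hunter2"   # stand-in; a real key would be long and random

tag=$(printf '%s' "$msg" | openssl dgst -sha256 -hmac "$key" -r | cut -d' ' -f1)
echo "$tag"     # 64 hex characters

# A different key yields an unrelated tag for the very same message.
tag2=$(printf '%s' "$msg" | openssl dgst -sha256 -hmac "not-the-key" -r | cut -d' ' -f1)
[ "$tag" != "$tag2" ] && echo "different keys, different tags"
```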

I'm not disagreeing with the core concept, but I don't see that the ssh
example is very convincing either -- it seems to also illustrate the danger of
false confidence when using security by obscurity....

------
ProAm
Obscurity buys you time, and that's it.

~~~
iRobbery
Exactly, and only if you are a random victim of a larger sweep, not if you (or
your equipment) are a marked target.

------
ythn
I think security through obscurity can be a massive deterrent for all but the
most dedicated attackers.

For example, say that I not only move ssh to port 24, but it's also completely
disabled by default. Then I have a small script scanning icmp logs looking for
a ping of a particular size on another obscure port, and if it gets one, it
enables the ssh server for 30 seconds. If no one opens an ssh connection in
that window, it re-disables.

How would anyone besides an insider even figure out how to _enable_ your ssh
port let alone try to break in? Sure, if this method became widespread the
script kiddies would adapt accordingly and it would no longer be as effective,
but staying one step ahead of the kiddies is pretty easy.
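
A rough sketch of the scheme described above (assumptions throughout: root
privileges, tcpdump, systemd, interface eth0, and a made-up "magic" packet
length):

```sh
MAGIC_LEN=666   # hypothetical magic ICMP payload length

# Watch incoming echo requests; on a magic-sized ping, expose sshd briefly.
tcpdump -l -n -i eth0 'icmp[icmptype] == icmp-echo' 2>/dev/null |
while read -r line; do
    case "$line" in
        *"length $MAGIC_LEN"*)
            systemctl start sshd
            sleep 30
            systemctl stop sshd
            ;;
    esac
done
```

Note this is essentially home-grown port knocking, with the same caveat: the
magic value travels in the clear, so it only deters attackers who aren't
watching your traffic.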

~~~
calc_exe
[https://en.wikipedia.org/wiki/Port_knocking](https://en.wikipedia.org/wiki/Port_knocking)

~~~
logfromblammo
What if you used port knocking, but instead of opening a closed port on a
correct knock sequence, you switch the service listening to the target port
from honeypot mode to normal mode? Anyone connecting and presenting genuine
authorization credentials during that window gets genuine access, while
everyone else gets routed to the honeypot.

The 40th basement door from the left opens to a storage closet until someone
says, "swordfish" at the 30th door from the left, then 15 seconds later, it
opens to a vestibule with an imposing, riveted-iron door for the next 60
seconds. That locked door requires a genuine invitation to admit you to the
speakeasy.

If you didn't know the speakeasy was there, you might not bother trying to dig
through the back wall of the closet with a pickaxe. If you watched someone
else go in, and copied their actions, you still don't have the invitation. Any
noise you make banging around trying to fool the automated bouncer is much
more noticeable when all the casual traffic and robot-driven attackers are
mostly just stealing boxes of detergent out of the decoy closet.

------
AdmiralAsshat
Very valid points.

Reminds me of the oft-repeated phrase "Goto considered harmful!", regardless
of its valid use-cases or the context in which that original paper was
published. I mean, jeeze, even the Linux kernel uses goto on occasion for
error cleanup.

~~~
api
"GOTO considered harmful" is talking about spaghetti code with no higher-order
control structures, not about GOTO as a language feature. There are
appropriate cases for the use of the latter.

Lots of quotes get abused like this. Another favorite of mine is "premature
optimization is the root of all evil." Lots of people take this as "never
think about performance" or "performance doesn't matter" when its true meaning
is "don't let premature concern for performance blind you to other concerns or
short-circuit your creativity."

------
leovonl
I would say it is not only valid but a very interesting method to deal with
0-day exploits and automatic scanners.

I have a number of services running at home, all outside the standard ports -
sip is on 5099 (the remote gateway is on 5088), SSH on 5225, etc - and the
difference in the number of attempts to log into my box (and make
international calls...) is huge. Actually, I have not had a single attempt to
put a call through my asterisk box since I moved the ports outside the
default range.

Of course, it's not the only security measure, but I'd argue it can be as
important and as effective as any other.

------
peterwwillis
Obscurity is a "security layer" in the same way as camouflage - that is to
say, it doesn't improve security, it just "hides" the thing that you were
actually supposed to secure. It can easily hurt security, too, as often people
depend on obscurity as if it were a real security measure, and are defeated by
a tiny amount of effort on the part of an attacker. You're an idiot if you
rely on obscurity.

------
Bhilai
This is one of those nuanced things that can't be generally applied to
everything. Operating SSH on a port other than 22 can/may protect you from
random bots/scripts but won't protect you from a determined attacker. In the
real world, misdirection like operating services on non-standard ports does
not go that far.

------
lmm
> So, given this highly effective armor, would the danger to the tank somehow
> increase if it were to be painted the same color as its surroundings?

If there were a crowd of script kiddies rapping on the armour of every tank
they could see, then yes, making your tank less visible would endanger it. The
internet is different from the battlefield.

> Is anyone willing to argue that someone unleashing such an attack would be
> equally likely to launch it against non-standard port vs. port 22? If not,
> then your risk goes down by not being there, it’s that simple.

Yes, I'm willing to argue that. It sounds like you were being attacked by
17,995 dumb bots and 5 somewhat less dumb bots and/or genuinely sophisticated
attackers. The former aren't going to pick up the zero-day.

> at some point of diminishing return for impact reduction it is likely to
> become a good idea to reduce likelihood as well.

Disagree. Obscurity-based methods have such a poor cost/benefit that they're
likely to never be a good choice.

~~~
tdoggette
> If there were a crowd of script kiddies rapping on the armour of every tank
> they could see, then yes, making your tank less visible would endanger it.

I don't follow. If your tank is less visible, it gets seen (and thus
interacted with) less on average, regardless of how many people are looking
for tanks.

~~~
lmm
It gets interacted with less by the less sophisticated attackers. But you want
those attackers to be targeting you, because they'll find holes and use them
for relatively harmless things. Whereas if your only attackers are the
sophisticated ones, the holes in your security will be used only for serious
attacks.

------
xbmcuser
Well, duh. No matter how good your lock is, hiding the keyhole itself will
improve security.

~~~
gregmac
However, the effort of doing that is pointless if there's a nearby window that
is left open and/or can be broken.

------
solatic
There are so many problems with this piece, I hardly know where to begin.

Kerckhoffs's Principle states that a system is secure only if the security
architecture (as in, not the keys) can be made publicly available and non-key-
holding attackers are literally unable to successfully attack the system in
spite of their knowledge of the security architecture.

Battlefield examples are _horrible_ counter-examples. To take an extreme
example, if I drop a nuke on an enemy soldier, he's going to die. If I drop a
nuke on a tank, it's going to vaporize. There is literally no amount of armor
in the world that can create an unattackable battlefield-security
architecture, which is the whole reason why militaries rely on camouflage. The
use of camouflage is a tacit admission that "yes, in the real world, something
could successfully attack us, so we need to rely on other measures."

Modern security engineers don't mindlessly spend time and money to "improve
their security posture" without an appreciation for the consequences thereof.
They understand that the A in CIA stands for Availability, and that not using
the default ports hurts legitimate users expecting the default and confused by
the lack thereof far more than it foils attackers. They understand that
security engineering is about raising the cost of mounting an attack to be
more expensive than the value of the target, and worry about the cost of new
security measures versus the benefit of those new security measures (because
it now costs $X > $old to successfully attack the target) versus the expected
resources of an attacker (by running detailed risk and threat analyses to
identify potential adversaries and estimating their capabilities). If it costs
$X to attack a target which is worth $Y < $X and your adversaries only have $Z
< $Y < $X to attack then spending $any to further "improve your security
posture" is not just irrational and indefensible but ultimately destructive to
the target itself which you are supposed to be protecting, because those
resources could be spent more productively elsewhere to the benefit of the
target.

Which brings me to the presidential convoy example. Which vehicle the
president is in is _not_ a secret key in the president's security
architecture, because knowing which car the president is in does not easily
and magically give you access to the president. The point of having the
additional obfuscation of additional vehicles is about raising the cost of a
successful attack. Let's say the attacker's "nuke" is a shoulder-mounted anti-
tank missile which will successfully destroy the target. If there's only one
vehicle, then the attacker only needs one missile. But if the convoy has three
vehicles, then a successful attack will cost more than three times as much -
not just the cost of the additional missiles, but also the cost of finding
additional trustworthy people to carry the additional missiles and carry out
the attack, plus the cost of training and coordinating the attackers to work
in concert and successfully carry out the attack, plus the additional risk of
the plans accidentally leaking due to additional people being involved in the
planning and execution of the attack.

Changing from port 22 to port 24 does absolutely nothing to raise the cost for
anyone but the opportunistic script kiddie who is paying virtually $0 to add
your public IPs to a list of targets. Dedicated internal threats will be aware
of the port change, and dedicated external threats will become aware of the
change when they swipe an unencrypted employee laptop or phish a common
password. And you will not be able to change the ports on all your servers
from 24 to something else without inflicting massive pain on every legitimate
user whose machine is configured to expect 24 but suddenly won't successfully
connect anymore.

------
whipoodle
Right, you shouldn't use it as your only security, but it's fine to use in
conjunction with other things.

------
liberte82
Now them's fighting words.

------
Daycrawler
So is client-side validation. Anything qualifies as a "valid security layer"
as long as it prevents your grandma from attacking your system. The layer that
protects against the most motivated attacker is the one usually known as
"security".

------
gumby
I have always been amused that folks who say "security through obscurity is
stupid" are never willing to give me their passwords.

It's all about threat prioritization and defense in depth.

~~~
mikeash
That’s not what “security through obscurity” means. It specifically refers to
security based on the secrecy of the system’s _design details_. Like many
terms of art, the meaning is not exactly the literal meaning of the words.

