
Obscurity Is a Valid Security Layer - danielrm26
https://danielmiessler.com/study/security-by-obscurity/?fb_ref=wfGNVWZlpn-Hackernews
======
Confiks
It's good someone finally took the effort to write this up. When discussing
some security by obscurity measure, other people almost invariably come up
with the "security by obscurity is bad" slogan without having spent much
thought on /why/ it is bad in that instance. Now this piece can be
conveniently linked.

Somewhat differently, sometimes security by obscurity is confused with using a
proper key. For example, with most webservers you can pretty safely restrict
access to a file by giving it a filename containing some key that cannot
realistically be guessed. Barring other ways in which an attacker could find
out about the presence of the file – which is quite easy to fuck up in some
environments, and is why you shouldn't do this if you really want to be secure
– someone without the key cannot access the file, and thus it shouldn't be
called security by obscurity; the method is public, the key is not.
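
A minimal sketch of the filename-as-key idea (the path, token length, and URL are illustrative, not from the comment; directory listings must be off or the "key" leaks trivially):

```shell
# Generate an unguessable 128-bit token and bake it into the filename.
token=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
printf 'the secret report\n' > "report-${token}.html"
# Only someone who already knows the full URL can fetch the file:
echo "https://example.com/private/report-${token}.html"
```

The method (random token in the filename) is public; only the token itself is secret, which is what makes this a key rather than obscurity.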

~~~
mikeash
What should be said is that security _only_ through obscurity is bad. Security
through obscurity is fine as long as you're not relying on it. At the worst,
it doesn't hurt, just don't count on it to save your bacon.

~~~
ademarre
> _At the worst, it doesn't hurt_

I do agree with the article, but at its worst, obscurity _can_ hurt. Obscurity
can add complexity, and it's said that "complexity is the enemy of security".

For example, sysadmins and maintainers could lose track of which layers are
security and which are obscurity. In turn, this leads to decisions that
compromise security because the maintainers overvalued the obscurity layers.

~~~
zanny
Not just added complexity, but an added sense of _false_ security. The moment
you consider obscurity even a cursory part of your infrastructure, you are
relying on it, by the very nature of counting it as a fraction of your total
security.

It is only reasonable to consider when it's both insignificant in the total
security scheme and you have the willpower of a monk to resist ever
considering it valuable.

------
omginternets
I've often felt there was an interesting discussion to have about obscurity.
It's interesting because although the infosec community has such a strong
reaction to the phrase "security by obscurity", it has a very different
reaction to the phrase "obfuscation".

I'd like to propose a distinction between the two:

- "security by obscurity" would refer to the non-declaration of configuration
and parameters

- "obfuscation" would refer to the deliberate fuzzing of _statistical_ means
by which such configuration and parameters can be discovered.

The distinction is admittedly fuzzy, but the idea is that "security by
obscurity" is trivially defeated by treating the target system as a
statistical problem. If said system can be repeatedly probed, then useful
information can be gleaned. With obfuscation, one is deliberately injecting
noise into the signal that the attacker is trying to analyze.

What do you all think? Is this a useful distinction? To be perfectly clear, I
don't mean to imply that obfuscation is a sufficient security mechanism, but
rather:

1. that obfuscation is more useful than plain-old obscurity

2. we should understand that infosec attacks are often statistical in nature

Regarding point 2, I think this ends up shedding more light on what, exactly,
is wrong with "security by obscurity" as it's often debated.

~~~
blakesterz
1. & 2. Totally agree.

On point 2 I've recently come to the conclusion that I can't worry about every
damn thing. I follow all the ITSec news pretty closely and there's just SO
MUCH WRONG with everything (or at least it feels like it). Security
researchers seem to love to freak out about everything (it's their job after
all) but I just don't have the time. If something is going to be a likely
attack, easily done, remotely, today or next month, then it's something I need
to worry about. If this is a theoretical attack based on something happening
in the room next door I need to shrug it off, for now. There's many many
things that fall in between those two extremes, and that's where I'm afraid
I'm not sure what to worry about sometimes.

~~~
omginternets
I'd like to follow up on your post because you raise a lot of good points. In
particular, we have a problem in infosec whereby "security" is implicitly
equated to "security with cryptographic guarantees".

I understand that "cryptographically secure" is the gold standard for infosec,
but this misses a certain reality on the ground. Often times, your system
doesn't need to be Fort Knox: it just needs to be a more difficult target than
the other guy's system.

I think this is where obfuscation comes in. With regards to well-known
vulnerabilities, clearly cryptographic-grade solutions are required, but the
best defense against zero-day attacks is to make your system hard to probe for
odd behaviors. Why waste time doing statistical analyses on a system in the
hopes of extracting a faint signal when another system literally responds to a
port scan?

I think the camouflage analogy in the article is a good one. Hardening your
asset is necessary, but hiding it is even better.

In my former military days, we were always told that "concealment is better
than cover". It's hard to pull a flanking maneuver on a target you can't
locate, but trivial to do so against a target you simply can't hit. I wouldn't
go as far as to claim that the same is true in infosec, but it stands to
reason that there's a place for concealment.

~~~
bigiain
I think camouflage and decoys are useful analogies.

I run ssh on non-standard ports. Not because I think it makes me any more
secure, but because it allows me to either ignore all the automated
scans/connections to port 22 altogether, or to proactively blackhole any IP
address that attempts to connect to a decoy port 22...

It doesn't mean I don't continue to need to do all the things I need to keep
ssh secure as well - keep my sshd updated, disable password auth, shut down
root connections, etc...
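
A sketch of the decoy-port setup as firewall configuration (the real port 2345 and the one-hour timeout are arbitrary choices, not from the comment; requires root and the `ipset` tool):

```
# sshd actually listens on 2345; port 22 is a pure tripwire.
ipset create blackhole hash:ip timeout 3600
# Drop everything from already-blackholed addresses.
iptables -A INPUT -m set --match-set blackhole src -j DROP
# Anyone touching the decoy port 22 gets blackholed for an hour.
iptables -A INPUT -p tcp --dport 22 -j SET --add-set blackhole src
# The real ssh port.
iptables -A INPUT -p tcp --dport 2345 -j ACCEPT
```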

~~~
omginternets
>I think camouflage and decoys are useful analogies.

Indeed I've found that most military analogies work quite well for infosec!

>Not because I think it makes me any more secure, but because it allows me to
either ignore all the automated scans/connections to port 22 altogether, or to
proactively blackhole any ip address who attempts to connect to a decoy port
22...

This is another excellent point. Not only are we fuzzing the signal with
respect to automated attacks, we're also improving the signal-to-noise ratio
in our _own_ security analyses.

------
danpalmer
It seems here that the author has a different definition of security through
obscurity to many who criticise it.

The important aspect is whether the knowledge required to break the system is
the system itself, or the key. AES is not security through obscurity because
everyone can look up how it works, and it's only the key that we have to keep
secret. Port knocking is the same (just with a weaker key). Moving your SSH to
port 24 however is security through obscurity because if anyone knows the
mechanics of _how_ you use it, they _can_ use it.

This differentiation is subtle, and can sometimes be a grey area, but I find
it good at differentiating between the two most of the time.

~~~
kevincox
Couldn't you argue that the port SSH listens on is a 16-bit key? It's small
and easy to brute force (among numerous other ways it is easy to discover),
but I don't really see the distinction you are trying to make.
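
A back-of-the-envelope sketch of why a 16-bit "key" is trivial (the scan rate is an assumption, not from the comment):

```shell
# A 16-bit key falls to a linear sweep: at a modest 1000 probes/sec,
# covering every possible "key" (port) takes about a minute.
ports=65536
rate=1000                      # probes per second, a conservative guess
seconds=$(( ports / rate ))
echo "full sweep: ~${seconds}s"
```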

~~~
danpalmer
Yes, this is what I mean by it being a subtle and not always clear difference,
but I think it's usually pretty easy to apply common sense to this.

------
tptacek
I agree that obscurity has value in security designs --- security is in large
part about imposing asymmetric costs on attackers.

I do not believe that port knocking and single-packet authentication are good
examples of this; both are sort of textbook examples of cosmetic security
mechanisms that are made irrelevant by SSH public key authentication.

~~~
rsync
"I do not believe that port knocking and single-packet authentication are good
examples of this; both are sort of textbook examples of cosmetic security
mechanisms that are made irrelevant by SSH public key authentication."

If there's a remote-root buffer overflow for sshd that does not involve login
at all, doesn't hiding the sshd add quite a bit of value ?

That is, there are attack vectors that we know exist, and that we have seen in
the past many times for server daemons, that occur before, and override, any
authentication mechanism.

By hiding the service, you take yourself out of that population of low hanging
fruit ...

You don't find that valuable ? I personally find it extremely valuable and
consider port knocking to be one of the _very few_ security optimizations that
has such low cost/complexity for such a pronounced and well defined gain. I
would not think of deploying an Internet facing system without port knocking
on the daemons that don't need public access.[1]

~~~
tptacek
If you believe that sshd might harbor a _preauth_ RCE --- something that
hasn't happened in a very long time --- then deploy _real_ security in front
of it: stick it on the other end of a simple encrypted tunnel.

~~~
rsync
I agree that port knocking is _not the christ child_. It will not _solve world
hunger_ nor will it allow the wolves to lay down with the lambs.

But it's _something_.

~~~
CiPHPerCoder
It's the ROT13 of defense mechanisms.

~~~
drudru11
What would you recommend?

~~~
CiPHPerCoder
SSH key with public key auth over VPN.

------
diziet
The real problem with obscurity as a security layer is that it complicates day
to day work that you'd be doing with that system. Each new person needs to be
constantly trained and reminded about your specific system quirks and
configuration outside the standard norm before they can do work using the
system. That has a real cost, and it is a cost analysis question.

~~~
jlg23
That really depends on the use case. We too have sshd listening on a non-
standard port and the overhead to tell people what to put into their
.ssh/config is negligible.
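
For reference, the per-user overhead really is just a stanza like this (the host name and port here are hypothetical):

```
Host prod-box
    HostName prod.example.com
    Port 2222
    User deploy
```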

We use SPA only on a very few select servers that fewer people need to access,
and those people just get the shell script they need to run instead of typing
"ssh $host".

After about 2 years of running things this way: The time saved in checking
system logs in a single week was much more than the time spent on "training"
people in those 2 years.

------
RawInfoSec
There's a number of things I disagree with in the article, but it does have a
few good points.

Here's what I disagree with and why:

- Portknocking. I've found from experience that it's far better to allow SSH
access (for example) from only known IP addresses. Portknocking is far too
easy to beat and really doesn't impede much.

- Non-standard ports. Sure, if you're only interested in blocking bulk network
scanners that limit themselves to known ports. Any manual scan or a solid in-
depth scan is going to map every one of the lower 1024 ports, and possibly the
rest depending on how interesting the target is.

- The tank camouflage example. It all sounds fine and dandy until a
maintenance crew roams the desert for 10 days looking for a tank they can no
longer see. Same with security and IT... obscurity leads to lots of wasted
time when newer techs try to diagnose things that aren't as they seem, and are
undocumented. Not only that, but since the enemy knows that the new armour
requires a special ammunition to beat, they will just throw new ammo at
everything that moves in case it is a tank. i.e. you're going to scan for
hidden SSIDs, you're going to nmap every port, etc. etc. It takes more time,
but you still get in.

- If there's a 0-day SSH vector, it's getting owned no matter which port it's
on unless your security team are on top of patching. What if the new-hire
that's told to go patch all the SSH servers accidentally misses the
undocumented one that's running on port 24? It also doesn't matter if there's
10x more hits on port 22 than 24. All it takes is 1. It's that simple.

I just don't think obscurity belongs in an environment where clarity matters
so much.

~~~
lazyant
> Portknocking is far too easy to beat and really doesn't impede much.

If you have to guess a random 3 port sequence in a 65k port space, how long
will it take you to break? at 1 try of 3 ports per second I get almost 9
million years for exhaustive search.
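
The arithmetic checks out; a quick sketch (1 attempt per second is the rate assumed above):

```shell
# Ordered 3-port knock sequences over the full 16-bit port space.
seqs=$(( 65536 * 65536 * 65536 ))   # 65536^3 = 2^48 sequences
years=$(( seqs / 31536000 ))        # 31,536,000 seconds per year
echo "$years years for exhaustive search"   # ~8.9 million years
```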

~~~
RawInfoSec
Why guess when you can just sniff the network for the sequence?

Port knocking _requires_ that the network you're knocking from is as secure
and trusted as the one you're knocking on. So there's really no point, as you
could just as easily limit SSH access to that network and save yourself all
the bother and risk.

------
marcosdumay
Well, yes, technically any amount of obscurity you add to your service
increases your overall security.

So let's go. You make a 256 bit level key. That's 256 bits of security there.
For each binary thing that must be guessed together with the key (let's call
those multiplicative), you gain an extra bit there. So, if you could somehow
(what you can't) not disclose your encryption algorithm, you'd gain some easy
3 extra bits of security there.

Now, if you have a secret that can be verified independently from your other
secrets (let's call those additive), you'll gain one extra bit of security for
every secret that has your overall security level. That is, if you add a 256
bit secure port knocking step (16 knocks on completely random ports) to your
256 bit secure key, you'll get 257 bits of security overall. If you add a 16
bit non-standard port, you'll get some fraction of a bit, with some dozens of
zeros after the decimal point.

Thus, since security is all about trade-off, think very hard about the costs
of any additive measure you want to create.
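
The additive-vs-multiplicative distinction in miniature, using small key sizes so the numbers fit in shell arithmetic (the 20-bit sizes are arbitrary; real keys are 256-bit):

```shell
# An independently verifiable ("additive") 20-bit secret next to a 20-bit
# key only doubles the attacker's work: 2^20 + 2^20 = 2^21, i.e. one extra bit.
additive=$(( (1 << 20) + (1 << 20) ))
# A secret that can only be verified jointly with the key ("multiplicative")
# squares the work instead: 2^20 * 2^20 = 2^40 candidate pairs.
multiplicative=$(( (1 << 20) * (1 << 20) ))
echo "$additive vs $multiplicative"
```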

~~~
stestagg
I think this reasoning is missing something.

The maths assumes that the security of your system is entirely defined by the
complexity of your key.

Obscurity gives you the option of hiding the /lock/, which is different.

~~~
marcosdumay
I think you should read my comment again.

Anyway, hiding your lock is additive (spreading fake locks would be
multiplicative).

How many bits of entropy does your lock obscurity have again? What does it
cost?

------
goodside
This argument seems disingenuous. It's no great insight that port knocking
reduces unauthorized login attempts. What's disputed is whether it's better to
have a system that's rarely challenged vs. a system that's regularly
challenged and still holds. The argument against is that adding layers of bad
security makes it easier for problems in the "real" security to go undetected
or ignored since the obscurity layer works too well.

~~~
dogma1138
I agree people confuse obscurity with "operational security".

Reducing your footprint, and keeping information disclosure to a minimum isn't
obscurity.

On the more specific examples: while port knocking is an acceptable security
mechanism (so is geoblocking, and even whitelisting specific IPs only), it's
again not obscurity. Putting connections on random ports is, and it can also
be a very bad practice, because obscurity often works both ways.

In a large organization, even one with great documentation and knowledge
management, things will fall between the cracks. If we take their example to
its conclusion, then while the attacker might be slightly less likely to
identify a non-standard SSH port (because for some reason they assume that
port scanning is expensive), it can also mean that your own security teams and
tools miss it just as easily.

When Heartbleed came out, for example, people ran checks against the common
SSL ports internally; pretty much all of it was patched rather quickly, but
from time to time you still find an instance of it, and usually it's because
some developer somewhere decided port 23216 was a good port for SSL.

The attackers are considerably more likely to perform a lot of information
gathering on their target, more so than internal teams. Internal security
tools often limit their tests to standard and specified ports, because when
the security team has a 4-hour window each month to run their vuln scan, a
full TCP port scan is probably out of the question on a network with a few
thousand IPs or more.

By adding unnecessary obscurity to your system you are effectively only
increasing the likelihood of you missing something while the attacker is just
as likely to find it as anything else.

------
seanwilson
Heavily agree with this. All the time I read people stating "security by
obscurity is bad" without really thinking about it.

For example, I think changing the SSH server port and WordPress login pages
are a good idea because 1) if a hacker cannot find the login location in the
first place, your chance of getting hacked must decrease and 2) the number of
intrusion attempts being logged will be significantly less so you can more
usefully survey these for targeted attacks.

Of course, relying only on obscurity is a terrible idea but you should have
several layers of obstacles so if one fails the security violation will get
caught at the next layer. Avoiding security through obscurity completely is
more something you should do when designing a secure protocol or algorithm.

~~~
TickleSteve
There is the concept of "Defense in depth", used by the military for many
years...

This acknowledges that each layer is bound to have holes, so the best you can
hope for is to delay the inevitable breach. Any additional layer you add to
the security increases the time an attack takes, and hence gives you a better
chance at rebutting it.

------
dlitz
It's still pretty expensive for very little added security benefit, which the
added complexity might negate. How many bits of security does this add? 5 or
6? Maybe 32?

People deploy ad hoc obscurity hacks like port-knocking because we have doubts
about our standard access-control infrastructure (e.g. libssl, SSH). It would be
less wasteful to take the resources being spent on these hacks, and pool them
together into projects that would boost our confidence in the infrastructure,
instead.

I can think of lots of security infrastructure projects that could probably
use more resources:

- The various libssl projects could probably use more resources.

- There is at least one team working on formal proofs of implementation
correctness for libssl.

- Sandstorm.io is developing ways to move web access control (including CSRF
protection) to an intermediate layer, rather than relying on individual web
applications to get it right.

- U2F is trying to bring cheap hardware tokens to web users, but it needs
better support across the web.

- Let's Encrypt/ACME is working on making it feasible to deprecate plaintext
HTTP. There's still a lot of distro integration work to be done.

- There's an IETF working group working on moving transport security into TCP
itself (tcpinc). If it's done well (i.e. gets enough eyeballs), this could
replace "libssl version hell" with straightforward socket file descriptors.

- Most developers have no idea how to develop formal proofs of implementation
correctness alongside their code. Educational materials would be great here!

- There are many, many legacy systems still running known-insecure
software/protocols that need to be upgraded or worked around.

------
jfindley
Moving the SSH port is a bad idea. The article falls into a number of common
pitfalls here.

A connection to your SSH server does not usefully equate to someone "trying to
hack" your server. If I had $1 for every time I've heard this complaint, I'd
be very rich, but it's total rubbish. Stop worrying about it.

The mass scanners that fill your logs with brute force attempts on port 22 are
looking for trivially obvious username/password combinations. If there is any
chance they could actually get in with this approach, you've screwed up so
badly that moving the port will not save you. If you're using key-based
authentication, no amount of scanning is ever going to compromise your server,
unless the attacker learns your key.

The reason moving your port is pointless is that against an attacker
sophisticated enough to have any chance of compromising your SSH server, no
amount of hiding the port is going to do more than delay them a few minutes.
Compromising SSHv2 is _hard_. You need either a key compromise (difficult to
achieve, likely some sort of APT malware against your laptop), or a zero-day
sshd vulnerability. Against someone motivated and able to do that, your
"clever trick" of moving sshd to port 24 is totally useless.

It's a bad idea to propose, too, as it will require users to either update
SELinux rules to allow sshd to bind to a different port, or disable SELinux
entirely. And as many of the ports <= 1024 are used for other things, users
will tend to do things like bind to ports > 1024, such as 2222. As any system
user can bind to ports > 1024, you're actually reducing security, and opening
up a potential priv escalation vector (user with unpriv shell access causes
sshd to crash via XYZ, starts their own malicious daemon on 2222, steals your
credentials, etc).

There are cases where obscurity helps. Reducing the amount of information an
attacker has about the environment they are attacking is often a very valid
defence, and is something any security audit should consider. Moving the sshd
port is not one of these cases.

In short, make sure you're using key-based authentication (ed25519 is
preferred IMO, but happy for others to debate this). Make sure password
authentication is disabled. Make sure direct root ssh is disabled, and
passwordless sudo is disabled. Leave the port sshd listens on well alone -
it's just fine where it is.
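
The recommendations above map directly onto a few sshd_config directives (a fragment, not a complete hardened config):

```
# /etc/ssh/sshd_config
PubkeyAuthentication yes
PasswordAuthentication no
PermitRootLogin no
# Port deliberately left at the default, per the advice above.
Port 22
```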

~~~
yummyfajitas
Did you even read the article?

 _Let’s say that there’s a new zero day out for OpenSSH that’s owning boxes
with impunity. Is anyone willing to argue that someone unleashing such an
attack would be equally likely to launch it against non-standard port vs. port
22? If not, then your risk goes down by not being there, it’s that simple._

~~~
jfindley
Yes. Please don't insinuate that others haven't read the article. It might be
worth reading the HN guidelines here:
[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

Also, from the article, please note paragraphs like:

"I configured my SSH daemon to listen on port 24 in addition to its regular
port of 22 so I could see the difference in attempts to connect to each (the
connections are usually password guessing attempts). My expected result is far
fewer attempts to access SSH on port 24 than port 22, which I equate to less
risk to my, or any, SSH daemon."

I was pointing out that this is silly. Those "attempts to connect" are totally
meaningless, as I explained.

~~~
yummyfajitas
He provides a very specific use case (exploiting a zero day) where they are
not meaningless.

------
throw7
Obscurity is "bad" security because when you're told "how it works" it's
usually very weak and easily defeated.

Pairing up obscurity with "good" security should be more of a "do the benefits
outweigh the costs?" type of question and only each site can answer this
question.

I sincerely believe no one, fingers crossed, argues that security by obscurity
alone is good security. -.-

------
lmm
> Let’s say that there’s a new zero day out for OpenSSH that’s owning boxes
> with impunity. Is anyone willing to argue that someone unleashing such an
> attack would be equally likely to launch it against non-standard port vs.
> port 22? If not, then your risk goes down by not being there, it’s that
> simple.

You need to compare equal-cost approaches to security. For the amount of
effort it takes to use a non-standard port, you could use a real security
measure like single-use passwords instead - which would increase your security
more?

And you need to measure what your outcomes look like. If there's a new zero-
day, do you really think those 5 attacks on the non-standard port won't use
it? Do you think a targeted attack wouldn't use it? The result of getting
caught in an ssh-scan of the whole internet is relatively benign - your server
gets used to send some spam, you wipe it and rebuild. That's not the
existential threat that security measures are about preventing.

------
Daviey
Security BY Obscurity is always bad. This article doesn't dispute that;
rather, it treats obscurity as an additional tool.

ie, I wouldn't use telnet to control all of my servers and then think I am
being secure because the IP addresses are not put in a server-list.txt on each
server.

However, using a secure protocol and not providing a network map on each
server does provide a level of obscurity.

------
zeveb
> Remember, the NSA most likely has great algorithms, but they still don’t
> _publish_ them.

Not quite:
[https://en.wikipedia.org/wiki/NSA_Suite_B_Cryptography](https://en.wikipedia.org/wiki/NSA_Suite_B_Cryptography)

They don't publish _all_ of them, I'm sure, but they did publish Suite B.

~~~
nickpsecurity
The ones relevant to the quote are here:

[https://en.wikipedia.org/wiki/NSA_Suite_A_Cryptography](https://en.wikipedia.org/wiki/NSA_Suite_A_Cryptography)

They, along with the Type 1 development and certification process, are used
for the stuff they trust the most. Suite B algorithms can be used with a Type
1 implementation for very critical stuff as well. The Type 1 process is the key for
assurance more than the algorithms themselves. It ensures the protocols and
algorithms are rigorously implemented. Includes considerations on RNG's,
common coding flaws, covert channel analysis, and TEMPEST shielding.

Only the smartcard sector comes anywhere near the assurance activities that go
into Suite A or Type 1 products in terms of crypto.

------
eslaught
There is a certain danger with obscurity in that it can obscure your own
vulnerabilities. I believe this is best known with crypto, where essentially
any proprietary crypto algorithm should be assumed to be broken (because
crypto is just that hard), but it applies to all forms of security.

To put it another way: public (and popular) security measures have the benefit
of having been validated by many eyes. When you choose proprietary security
instead (which obscurity requires by definition; otherwise the technique would
be known), you're betting that your own security team can do better. Is that a
bet you can sometimes win? Perhaps, but I personally wouldn't want to bet too
hard.

------
cdevs
Agreed, considering I know someone who web-scraped an entire dataset because
they were selling 2500 sources for $15,000 a year and the API requests were
sequential ids 0-2500... so just curl 2500 requests and call it a day, because
you didn't hash your ids.
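
The scrape described above is a one-line loop; sketched here as a dry run that only builds the request list (the API host and path are made up):

```shell
# Sequential IDs mean the whole "protected" dataset is one loop away.
# A real scrape would curl each line; here we just count the requests.
seq 0 2500 | sed 's|^|https://api.example.com/sources/|' > urls.txt
count=$(wc -l < urls.txt | tr -d ' ')
echo "$count requests covers the whole dataset"
```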

~~~
majewsky
By the way, hashing IDs would be a perfect example of security by obscurity if
the hash were not salted. You just have to notice that you were given
sha256("1234") instead of "1234".
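
A sketch of how quickly unsalted hashes fall to enumeration when the input space is tiny (the 0-2500 range mirrors the parent comment's dataset):

```shell
# The "protected" token is just sha256 of a small sequential ID.
target=$(printf '%s' "1234" | sha256sum | cut -d' ' -f1)
# Hash every candidate ID until one matches: a complete lookup table.
found=""
for id in $(seq 0 2500); do
  if [ "$(printf '%s' "$id" | sha256sum | cut -d' ' -f1)" = "$target" ]; then
    found=$id
    break
  fi
done
echo "recovered id: $found"
```

With a salt the attacker can no longer precompute or enumerate the inputs, which is the point of the parent comment.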

------
Illniyar
The question isn't whether Obscurity provides security at all, but rather if
the tradeoff you make with ease of use and maintenance is worth it.

From my experience obscurity is usually trivial for a determined attacker to
overcome, and if it's not a determined attacker your normal security layer
will probably be insurmountable to an attacker of opportunity.

For the small amount of time (let's say an hour) that it'll add to a
determined attacker's effort, you force your users to use an extra key just to
find the SSH port you're using (the example he used in the article).

~~~
danielrm26
The point is to apply all your real security and then add obscurity afterwards
to lower your risk even further.

Wear the armor, then engage your cloaking device.

Don't substitute the cloaking device for the armor, but also don't dismiss the
value of not being targeted.

------
fideloper
Can port knocking be used in the "real world"? In other words, would
~/.ssh/config or some other setting be able to automate the sequence?

I'm picturing some current workflows I use if port knocking was enabled.

In particular, unless there's a way to automate the knocking sequence, SSH'ing
in via Ansible would be an issue and a SaaS we use to help with deployments
would no longer work.

(Altho I'd imagine making firewall exception rules (e.g. "allow this ip
address in") for these services would be a way around that).

~~~
justinsaccount
I used to use something like this in ~/.ssh/config:

    
    
      Host Box
      ProxyCommand ~/bin/do_knock %h %p
    

Where do_knock was simply

    
    
      #!/bin/sh
      # Fire the knock sequence, then hand the TCP stream to ssh via nc.
      host=$1
      port=$2
    
      knock $host 1 2 3 4 5
      nc $host $port && exit
      # If the first attempt raced the firewall rule, knock again and retry.
      knock $host 1 2 3 4 5
      sleep 1.5
      nc $host $port
    

Simple, but it worked well enough for a few years.

On Android I used a port knocking app that integrated with ConnectBot, so I
had one-tap access to the host.

------
seanwilson
For people saying security through obscurity should always be avoided, if you
have a web service that is only required internally on the private network,
would you make it public?

~~~
Kliment
There's a difference between public, as in publicly accessible, and public as
in externally visible. In your situation, you should not make it externally
accessible, so it doesn't make a difference if it's externally visible or not
(though there is no reason to make it externally visible).

~~~
seanwilson
Is making it externally invisible not a form of obscurity though? If so, why
is this not frowned upon as well?

------
gravypod
I think this is something very unsafe to preach to people.

I'm not sure if this is a joke, or if this person and other comment authors
are serious. The idea of Security-by-Obscurity is flawed inherently.

Let me first start by defining security: "the state of being free from danger
or threat." This is definitely not the best definition; it's just the one that
came up when I googled the word, so it will work for now.

The only way for security by obscurity to work, is for you to be able to
design a system that is impossible to figure out or comprehend.

Let's assume that one was able to design a system that is incomprehensible to
anyone. Let's initially ignore the fact that if the system is not
understandable to the user, it couldn't have been invented in the first place.

I'll pose these questions:

- If the system is so obscure to foreign users, how will it be maintained?

- If someone who knows the secrets of how this system works is fired, what
happens if they sell off their knowledge?

- What would happen if there turns out to be a bug in this massive amorphous
blob of crap that no one understands? How do you start debugging it without
invalidating its "security"?

I'll never use security-by-obscurity as a model. This is mainly due to one of
my core beliefs: there are much smarter people out in the world than you. If
you think "this is un-guessable" or "this is unbreakable" when slapping a
bitshift on a stream of data and calling it "encryption" you need to
understand that there are people smart enough in this world that can smell
that from a mile away.

I've worked with some of these people, and before then I may have said "yeah,
security by obscurity is fine"; but having worked alongside people who are FAR
more intelligent than I am, I know that anything I can think of to circumvent
their actions can be trivially figured out by someone out there who is smarter
than I am.

------
burnstek
Great sentiments. I don't see security being discussed nearly enough in terms
of risk and ROI. I usually see it discussed only in absolute terms, i.e.
unless a solution fits the "CIA" model to a T, then it's unacceptable.

I think that we should layer the CIA triad on top of the Time-Cost-Quality
triad when implementing application security.

------
justinsaccount
Nice work. There are a lot of claims out there about this sort of thing, and
not a lot of hard data.

If you had used a port other than 24, you would have seen 0 attempts. Port 24
is still scanned fairly frequently; something like 22X where X != 2 is almost
never scanned. If you block port 22 and use iptables to redirect a random high
port to port 22, you'll never see any connection attempts - unless someone is
really targeting you.
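For anyone unfamiliar with the trick, the redirect looks something like this (port 50022 and interface eth0 are arbitrary examples, not part of the original comment):

```shell
# Expose sshd, still bound to port 22, only via an arbitrary high port.
# Packets arriving on 50022 are rewritten to 22 before the filter rules see them.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 50022 -j REDIRECT --to-ports 22

# Drop external packets addressed to 22 directly. The raw table is traversed
# before NAT, so redirected traffic (which arrives as 50022) is unaffected.
iptables -t raw -A PREROUTING -i eth0 -p tcp --dport 22 -j DROP
```

Because sshd itself still binds port 22, no non-root process can grab the externally visible port.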

I did a bunch of reporting/math against our data a few months ago, the last
time this came up on reddit. I found a few interesting things and tried to
counter some bogus claims.

First bogus claim: Shodan scans everything, so you can't hide.

I checked an entire month (February) of data for a large address space; Shodan
scanned these ports:

0 3 7 11 13 15 17 19 21 22 23 25 26 37 49 53 67 69 70 79 80 81 82 83 84 88 102
110 111 119 123 129 137 143 161 175 179 195 311 389 443 444 445 465 500 502
503 504 515 520 523 554 587 623 626 631 666 771 789 873 902 992 993 995 1010
1023 1025 1099 1177 1200 1234 1311 1434 1471 1604 1723 1777 1883 1900 1911
1962 1991 2000 2067 2082 2083 2086 2087 2123 2152 2181 2222 2323 2332 2375
2376 2404 2455 2480 2628 3000 3128 3306 3386 3388 3389 3460 3541 3542 3689
3749 3780 3784 3790 4000 4022 4040 4063 4064 4369 4443 4444 4500 4567 4848
4911 4949 5000 5001 5006 5007 5008 5009 5060 5094 5222 5269 5353 5357 5432
5555 5560 5577 5632 5672 5800 5900 5901 5984 5985 5986 6000 6379 6664 6666
6667 6881 6969 7071 7218 7474 7547 7548 7657 7777 7779 8000 8010 8060 8069
8080 8081 8086 8087 8089 8090 8098 8099 8112 8139 8140 8181 8333 8334 8443
8554 8649 8834 8880 8888 8889 9000 9001 9002 9051 9080 9100 9151 9160 9191
9200 9443 9595 9600 9943 9944 9981 9999 10000 10001 10243 11211 12345 13579
14147 16010 17000 18245 20000 20547 21025 21379 23023 23424 25105 25565 27015
27017 28017 30718 32400 32764 37777 44818 47808 49152 49153 50070 50100 51106
55553 55554 62078 64738

Second bogus claim: "A single host on a decent connection will be able to scan
all ports on a /16 in less then an hour. I see scans like this all the time."

It is the same amount of work to scan every IPv4 address on port 22 as it is
to scan every port on a /16.

Every port on a /16 is 2^32 (65536 ports on 65536 hosts) or 4294967296 ports.

Saying it can be done on a "decent" connection in less than an hour:
4294967296 ports in 3600 seconds works out to 1,193,046 packets/second.

Line-rate GigE maxes out at 1,488,095 pps. You would need to saturate a full
gigabit for an hour to fully scan a /16 (and a full gigabit at whatever site
you are scanning), with 0% packet loss.
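Those figures are easy to sanity-check; a few lines of Python reproduce the arithmetic (84 bytes is the standard minimum Ethernet frame on the wire, including preamble and inter-frame gap):

```python
# Scanning every port on a /16 is the same number of probes as scanning
# one port across all of IPv4: 65536 hosts * 65536 ports = 2**32.
probes = 65536 * 65536
assert probes == 2**32 == 4294967296

# Doing that in one hour requires this many packets per second:
print(round(probes / 3600))          # 1193046 packets/second

# Theoretical maximum for gigabit Ethernet with minimum-size frames
# (84 bytes on the wire = 64-byte frame + 8-byte preamble + 12-byte gap):
print(round(1_000_000_000 / (84 * 8)))   # 1488095 packets/second
```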

Third bogus claim: (re port knocking) Faster internet speeds mean more
scanning power, which means it gets easier to find your hidden service.

If I am using a non-cryptographic port knocking daemon with a 4-port knocking
sequence, that gives 65536^4 combinations. One should probably figure out the
length of the shortest De Bruijn sequence for guessing 4-port sequences, which
I believe works out to "only" around 65536^4 packets instead of 4 times that.
Assuming the correct sequence would be found halfway through, that means
9223372036854775808 packets need to be sent. At line-rate GigE, that would
take 6198107000463 seconds, or 196,540 years. Only 19,654 years at line-rate
10gig, though!
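A quick check of those numbers in Python, using the GigE line rate from above:

```python
# Brute-forcing a 4-port knock sequence at gigabit line rate.
SEQUENCES = 65536 ** 4               # all possible 4-port knock sequences
assert SEQUENCES == 2 ** 64

# Assume the right sequence is found halfway through on average.
packets = SEQUENCES // 2
assert packets == 9223372036854775808

GIGE_PPS = 1_488_095                 # minimum-size-frame line rate for 1 GbE
seconds = packets // GIGE_PPS
print(seconds)                       # 6198107000463

YEAR = 365 * 24 * 3600               # 31,536,000 seconds
print(seconds // YEAR)               # 196540 years at 1 GbE
print(seconds // YEAR // 10)         # 19654 years at 10 GbE
```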

~~~
ryanlol
>If you block port 22 and use iptables to redirect a random high port to port
22, you'll never see any connection attempts

This is a really bad idea; stick to low ports. Despite SSH doing host
authentication, you really don't want non-root users being able to hijack the
sshd port.

~~~
detaro
If they are using iptables to do the redirect instead of changing sshd's port,
wouldn't an attacker need a way to change/disable iptables, which also
requires root? There shouldn't be a way for an application to put itself in
front of iptables.

~~~
ryanlol
Ah! You're right, I didn't think that through. iptables is indeed a safe way
to do this; however, changing your port in the configs to a high port isn't.

------
grymoire1
The phrase I use is "Security through obscurity provides temporary security
which degrades over time"

------
Havoc
It's valid in a way, but the second you count it as one of your security
layers you risk relying on it... and that's where it becomes dangerous.

It should really be more of a bonus layer, so to speak.

------
domador
As long as the goal of obscurity isn't to hide a weak or poor underlying
security system, there might be some benefit to obscurity as an ADDITIONAL
layer, as claimed.

------
jrochkind1
I mean, I guess a password can be considered "security by obscurity", in the
sense that it relies on keeping your password secret, the lack of knowledge
about your password.

So, sure, obscurity is a valid security layer.

The problem is when people start thinking that port knocking is something
different than a needlessly (emphasis on needlessly) complex method of
implementing a very weak system-wide password.

------
mrdrozdov
Passive scans that hit your port 22 can hardly be called a security issue, and
changing your port number definitely does not add any sort of security. This
is a confusing concept, since changing your port number might (temporarily)
decrease the probability that your server gets taken advantage of. But it's a
trivial fix for attackers everywhere to scan multiple ports instead of one, at
marginal cost.

------
apaprocki
I know some people who try to extend this thinking to software libraries. Ask
yourself if the world would be better served if OpenSSL was closed-source and
therefore could not be analyzed for bugs so easily by attackers. It's a
slippery slope... but I see regular arguments why a particular piece of
software is "technically" more secure by some level of obscurity (restricting
access).

------
BWStearns
The other thing to consider is that obscurity makes threat detection easier.
If your sshd_config is set up for some random port and you get 18K attempts,
then it's pretty clear that someone is interested in your server. If you're
running it on port 22, then a slew of login attempts could be difficult to
disambiguate from regular portscanning weather.

------
EGreg
It is only useful if you don't publish open source software. A new,
underfunded open source project is not the best place to find top-notch
security. All the code is published, so there is no obscurity either. It takes
a long time for security to be worked out, and in the meantime, systems can be
compromised by dedicated hackers.

------
Grollicus
Why is moving the SSH port security by obscurity? It's log cleaning by
avoiding annoying portscans, nothing else. There is NO security gained by
moving the port. If you think random portscans are a security risk to your SSH
server, you should seriously reconsider your SSH configuration.

~~~
zippergz
So you don't think you're less likely to be compromised by a 0day OpenSSH worm
if your sshd is running on a non-standard port? Why?

~~~
xyience
You might not get compromised in the first wave of IP scanners that try the
0day on port 22, port 2222, and maybe a few other common alternatives. This
first wave will pass in about 5 minutes.

Subsequent waves will just try every port. This isn't costly, they have an
army of compromised machines from the first wave.

~~~
zippergz
Can you give me an example of a real-world worm that has scanned every port on
every system on the internet? I haven't seen one, but if it exists, I'd be
curious to see it. I'm speaking based on the actual exploits I've seen in the
wild over the past 15-20 years, not what is hypothetically possible.

~~~
xyience
Not every port, but maybe you remember this:
http://arstechnica.com/security/2013/03/guerilla-researcher-created-epic-botnet-to-scan-billions-of-ip-addresses/
The article also
details an earlier cataloging where a researcher probed 18 ports 3-4 times a
day over the ipv4 address space. Tools like ZMap or MASSCAN make it easy for
anyone to scan as many ports as they can, but I haven't heard of any worm that
systematically tried all 65535 ports of all addresses. Though I would bet a
lot of money that an OpenSSH 0day that bypassed all authentication would
result in several such worms from multiple actors who already control hundreds
of thousands of devices.

------
nickpsecurity
Obfuscation has been one of my strongest measures for security for a long
time. Cold War espionage writing taught me it's absolutely critical to
defeating nation-state opponents given they'll always outsmart your specific,
known techniques. What obfuscation does, if used effectively, is require the
attacker to already have succeeded in some attack to even launch an attack.
Defeating that paradox forces them to attack you in many ways, increasing work
and exposure risk. The more obfuscation you have built in, the more that goes
up. Very important moves to keep them effective: ensure the obfuscation is
invisible from the users' or network perspective, make sure the obfuscation
itself doesn't negate key properties of security controls, make darned sure
there are security controls rather than only obfuscation, let only a few
individual people know the obfuscations, and have air-gapped (or guarded)
machines control them.

Here are some obfuscations I've used in practice with success, including
against strong attackers, per monitoring results, third party tests, and
occasional feedback from sysadmins that apply them or independently invented
them:

1. Use a non-x86, non-ARM processor combined with a strong Linux or BSD
configuration that also _advertises as an x86 box_. Leave no visible evidence
you're buying non-x86 boxes. This can work for servers; some did it with PPC
Macs after they got discontinued. This one trick has stopped so many code
execution attempts for so long it's crazy. I really thought a clever shortcut
would appear by now outside browser Javascript, memory leaks, or something. An
expansion on it with FPGAs is randomized instruction sets with logging and
fail-safes for significant, repeated failures.

2. Non-standard ports, names, whatever for about everything. Works best if
you're not relying on commercial boxes that might assume specific ports and
such. So, be careful there. This one, though, just keeps out riff raff.
Combine it with strong HIDS and NIDS in case smarter attackers slip up. Don't
rely on it for them, though.

3. Covert port-knocking schemes. An example of a design I think I modified
and deployed was SILENTKNOCK. It gives no evidence that a port-knocking scheme
is in use unless they have a clear picture of network activity. Even then,
they can't be sure _how_ your traffic was authorized by looking at the packets.
Modifications to that scheme that don't negate security properties and/or use
of safety-enhanced languages/compilers can improve its effectiveness. My
deployment strategy for this and guards was a box in front of the server that
did it transparently. Lets you protect Windows services prone to 0-days. Think
it stopped an SSH attack or something on Linux once. Can't recall. Very
flexible. Can be improved if combined with IP-level tunneling protocol
machine-to-machine in intranet. Which can also be obfuscated.
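As a rough illustration of plain (non-cryptographic) knocking, not SILENTKNOCK itself, here is a toy sequence-tracking sketch; the port numbers are invented for the example:

```python
# Toy (non-cryptographic) port-knock state machine: each source IP must hit
# the secret ports in order before the real port opens. Illustrative only --
# real daemons (knockd, SILENTKNOCK) observe actual packets on the wire.
KNOCK_SEQUENCE = (7201, 51333, 1064, 33060)   # made-up secret sequence

def knock(state: dict, src_ip: str, port: int) -> bool:
    """Record one knock; return True when src_ip completes the sequence."""
    progress = state.get(src_ip, 0)
    if port == KNOCK_SEQUENCE[progress]:
        progress += 1
    else:
        # Wrong port: a single miss resets the sequence (but a hit on the
        # first knock port still counts as a fresh start).
        progress = 1 if port == KNOCK_SEQUENCE[0] else 0
    state[src_ip] = progress
    if progress == len(KNOCK_SEQUENCE):
        state[src_ip] = 0
        return True
    return False

state = {}
hits = [knock(state, "203.0.113.5", p) for p in (7201, 51333, 1064, 33060)]
print(hits)   # [False, False, False, True]
```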

4. Use of unpopular, but well-coded, software for key servers or apps. I
especially did this for mail, DNS, web servers, and so on. Black hat economics
means they usually focus on what brings them the most hacks for least time
investment. This obfuscation counters their economic incentive by making them
invest in attacking a niche market with almost no uptake. Works on desktops,
too, where I recommended alternative Office suites, PDF readers, browsers, and
so on that had at least same quality but not likely same 0-days as what was
getting hit.

5. Security via Diversity. This builds on 4 where you combine economics and
technology to force black hats to turn a general, one-size-fits-all hack into
a targeted attack specifically for _you_. You might choose among safe
libraries, languages, protocols, whatever without advertising their use in the
critical app or service. Additionally, there's work in CompSci on compilers
that automatically transform your code into equivalent, but slightly
different, code with different probabilities of exploits due to different
internal structure. That's not mature, yet, imho. You could say all the
randomization schemes in things like OpenBSD and grsecurity fit into this too.
Those are more mature & field-tested. If Googling, the key words for CompSci
research here are "moving target," "security," "diversity," and "obfuscation"
in various combinations.

6. My old, polymorphic crypto uses obfuscation. The strongest version combined
three AES candidates in counter mode in layers. The candidates, their order,
the counters, and of course the keys/nonces were randomized, with the
exception that the same one couldn't be used twice. That came from the only
criticism I got with evidence behind it: the DES-style meet-in-the-middle
attack. FPGAs got good at accelerating specific algorithms. So, I modified it
to allow weaker ciphers like IDEA or Blowfish in the middle layer but _no less
than one_ AES candidate in _evaluated configuration and implementation_,
preferably on the outer layer. Preferably two AES candidates + 1 non-AES for
computational complexity. All kinds of crypto people griped about this but
never posted a single attack against such a scheme.
Whereas, I provably stop one-size-fits-all attacks on crypto by layering
several ciphers randomly with at least one strong one. Later, I saw TripleSec
do a tiny subset of it with some praise. I also convinced Markus Ottella of
Tinfoil Chat to create a non-OTP variant using a polycipher. He incorporated
that plus our covert-channel mitigations to prevent traffic analysis.
Fixed-size, fixed-interval transmission is an obfuscation that does that,
which I learned from high-security military stuff.

7. Last one, inspired by recent research, is to use any SW or HW improvements
from academia that have been robustly coded and evaluated. These usually make
your system immune to common attacks [2], mitigate unauthorized information
flows [3], create minimal TCB's [4] [5], use crypto to protect key ops [6], or
obfuscate the crap out of everything [7]. I mainly recommend 1-6, though. ;)
Then, don't advertise which ones you use. Also, I encourage FOSS developers to
build on any that have been open-sourced to get them into better shape and
quality than academics leave them. Academics tend to jump from project to
project. They deserve the effort of making something production-quality if
they designed a practical approach and kindly FOSS'd the demo for us.

[1] http://www-users.cs.umn.edu/~hopper/silentknock_esorics.pdf

[2] https://www.cis.upenn.edu/acg/softbound/

[3] https://www.cs.cornell.edu/projects/fabric/

Note: See related project in bottom-right for other good tech this builds on
or was inspired by.

[4] http://genode.org/

[5] https://robigalia.org/

[6] https://theses.lib.vt.edu/theses/available/etd-10112006-204811/unrestricted/edmison_joshua_dissertation.pdf

[7] http://www.ics.forth.gr/_publications/papadog-asist-ccs.pdf

~~~
AstralStorm
The recommendation to use rarely employed software is a nice example of a trap
of obscurity. Enough eyes does indeed make bugs shallow.

How do you decide that a piece of software is "well-written"? Even very strict
review processes have missed critical issues... and no such review teams exist
for rare software.

Instead, the proper advice is to reduce attack surface, not rely on some
allegedly obscure piece of software.

Using results from academia is often supremely impractical. The actual
software and designs are often unavailable or impossible to obtain, and very
definitely unverified.

Your homegrown CTR crypto might expose you to a related-key attack. How do you
know, since nobody targets it until it is broken?

Guessing the target architecture given an exploitable bug is trivial. You need
only the simplest of data leaks. There aren't many options available, either:
you can use x86, MIPS, or ARM, the latter two in big-endian or little-endian.
Other, much more unlikely targets can be POWER or ia64. Custom micros can be
PIC or MCP51, maybe Atmel. That gives only a handful of options to try out.

You already have to tangle with much stronger security measures such as ASLR
or NX. Or various operating systems.

~~~
drudru11
I would label ASLR as an obscurity measure.

~~~
nickpsecurity
It is an obfuscation. It tries to obscure where something will be rather than
directly prevent or detect the attack as strong controls do. Useful, as other
obfuscations are, to add speedbumps in for the attacker while preserving some
level of compatibility with existing code or performance that strong security
might sacrifice. That an obfuscation was recommended in a counterpoint against
obfuscations was most interesting. :)

------
jijojv
Agreed that changing SSH ports on public-facing servers is a great use case
for obscurity, but I hate when "security minded" people do the same inside
private data centers, which just makes scp/rsync etc. very annoying without
adding the same security value.

------
floatboth
I'm not even sure if I agree or not. I get the point, but I kinda _want_
random attack bots to try logging into my sshd with common passwords. It's
like a free security scan!

~~~
AstralStorm
Use a separate or virtualized clone as a honeypot, though. It is not good to
risk your main servers and their security for this.

~~~
ryanlol
This shouldn't qualify as a security risk, if it does there's something _very_
wrong with your setup.

~~~
AstralStorm
Of course. But in case something is seriously wrong, you probably want the
compromise to happen on an isolated system and not main production one or one
containing critical data.

Separation of concerns and compartmentalization are both very good things from
security point of view.

------
jdblair
I just checked my home ssh endpoint, which runs on a non-standard port on my
firewall. I was surprised to discover there have been zero scan attempts.

I am using comcast xfinity for my home internet access.

------
TheGuyWhoCodes
Moving the SSH port is kinda useless if you are already using port knocking
with SPA. Nevertheless, I agree with the premise of the article; I really like
the M1 analogy.

------
geggam
I think port knocking is like antivirus... it makes you feel safe, so you run
obsolete software and get p0wned

------
Tiquor
A valid security layer, but obscurity is not security.

------
hammock
This argument is a strawman. Does OP have a real example of someone arguing
that obscurity as a layer on top of good security is a bad thing?

~~~
pjungwir
I hear people call moving the ssh port "security by obscurity" all the time.
For instance:

http://serverfault.com/questions/189282/why-change-default-ssh-port

http://serverfault.com/questions/316516/does-changing-default-port-number-actually-increase-security

