
Security by obscurity is underrated - pcr910303
https://utkusen.com/blog/security-by-obscurity-is-underrated.html
======
cataflam
Agree with the article.

People have been misinterpreting "security by obscurity is bad" to mean that
any obscurity or obfuscation is bad. It originally meant "if your only
security is obscurity, it's bad".

Many serious real-world scenarios do use obscurity as an additional layer.
Sometimes you know that a dedicated attacker will eventually be able to
breach; what you are looking for then is to delay them as much as possible,
and to make a successful attack take long enough that it's no longer relevant
by the time it succeeds.

~~~
Verdex
In nature, prey animals will sometimes jump when they spot a predator[1]. One
of the explanations is that this is the animal communicating to the predator
that it is a healthy prey animal that would be hard to catch and therefore the
predator should choose to chase someone else.

I think we can kind of view obscurity in the same way. It's a way to signal to
a predator that we're a hard target and that they should give up.

Of course in the age of automation, relying on obscurity alone is foolish
because once someone has automated an attack that defeats the obscurity, then
it is little or no effort for an attacker to bypass it.

Of course, sprinkling a little bit of obscurity on top of a good security
solution might provide an incentive for attackers to go someplace else. And I
can't help but think of the guy who was trying to think of ways to perform
psychological attacks against reverse engineers [2].

[1] -
[https://en.wikipedia.org/wiki/Stotting](https://en.wikipedia.org/wiki/Stotting)

[2] -
[https://www.youtube.com/watch?v=HlUe0TUHOIc](https://www.youtube.com/watch?v=HlUe0TUHOIc)

~~~
acoard
>I think we can kind of view obscurity in the same way. It's a way to signal
to a predator that we're a hard target and that they should give up.

This has it completely backwards. Security through obscurity's goal is not to
signal predators, it's the opposite. The goal is to obscure, to hide. The
"signal" is there is nothing here (or nothing here worth your time). One of
the best examples (it's in the article!) is changing the default SSH port.
Just by obscuring your port you can usually filter out the majority of break-
in attempts.
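(For reference, the move itself is a one-line config change; 2222 below is just an example port, not a recommendation:)

```shell
# /etc/ssh/sshd_config -- listen on a non-default port (2222 is arbitrary)
Port 2222
# then reload the daemon, e.g.: systemctl reload sshd
```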

The only way security through obscurity signals to "predators" is if they've
seen past your defence, and thus defeated the obscurity. Obscurity (once
revealed) is not a deterrent. Likewise an authentication method (once
exploited) is not a deterrent.

>Of course in the age of automation, relying on obscurity alone is foolish
because once someone has automated an attack that defeats the obscurity, then
it is little or no effort for an attacker to bypass it.

This is true of any exploit basically. Look no further than metasploit.
Another example: a worm is a self-automating exploit.

~~~
bleepblorp
Using a non-standard SSH port is a bad example because nmap can see through
that deception in a few seconds. Any attacker who is looking for more than
just the lowest of low-hanging fruit will not be even slightly deterred.

A better example would be a port-knocking arrangement that hides sshd except
from systems that probe a sequence of ports in a specific way. This is very
much security by obscurity, because it's trivial for anyone who knows the port
sequence to defeat, but it's also very effective as anyone who doesn't know
the port sequence has no indication of how to start probing for a solution.
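For the curious, a minimal knockd setup looks roughly like this; the 7000,8000,9000 sequence and the iptables command are purely illustrative:

```shell
# /etc/knockd.conf -- open port 22 to whoever hits the secret sequence
[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```

Anyone who doesn't know the sequence sees no open port at all; anyone who does can open it with three connection attempts.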

~~~
Sebb767
> Using a non-standard SSH port is a bad example because nmap can see through
> that deception in a few seconds.

Compared to milliseconds. Do yourself the favor and open one sshd on port 22
vs. one on a port >10000, then compare the logs after a month. The one on 22
will have thousands of attempts; the other one hardly tens, if any at all.

The 99% level we're defending against here is root:123456 or pi:raspberry on
port 22, which is dead easy to scan the whole IPv4 space for. 65K ports per
host, though? That takes time and, given the obvious success rate of the
former, is not worth it.

Therefore I'd say it's the perfect example: It's hardly any effort, for
neither attacker nor defender, and yet works perfectly fine for nearly all
cases you'll ever encounter.

EDIT: Note that it comes with other trade-offs, though, as pointed out here:
[https://news.ycombinator.com/item?id=24445678](https://news.ycombinator.com/item?id=24445678)

~~~
cthalupa
I know we've spoken in another thread, but I think it's important for people
to understand that this sshd thing is a perfect example of why it isn't this
easy: you reduce log spam by moving to a non-privileged port, but you also
reduce overall security. A non-privileged user can bind to a port above 10k,
but can't bind to 22. If sshd restarts for an upgrade, or your iptables rules
remapping a high port to 22 get flushed, a non-privileged user who got access
via an RCE on your web application can set up their own fake sshd there. If
it binds to the port first and you ignore the host key mismatch error on the
client side, they can listen in on whatever you are sending.

Or you can implement real security, like not allowing SSH access via the
public internet at all and not have to make this trade off.

~~~
acoard
Here's a counter-example (as I said elsewhere in this thread):

Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all
over the world will be trying to take over everything running on port 22.

I'll also point out that we're generally talking about different threat
vectors here, so it's good to lay them out. I don't think obscurity helps
against a persistent threat probing your network, it helps against swarms.

> a non-privileged user can bind to a port above 10k, but can't bind to 22.
> sshd restarts for an upgrade, or your iptables rules remapping a high port
> to 22 get flushed, that non-privileged user that got access via a RCE on
> your web application can now set up their own fake sshd and listen in to
> whatever you are sending if it manages to bind to that port first and you
> ignore the host key mismatch error on the client side.

This is getting closer to APT territory, but I'll bite. If someone has RCE on
your server it honestly doesn't matter what port you're running on; they
already have the box. You're completely right it would work if you have
separate linux users for SSH and web server. Unfortunately that's all too rare
in most web-servers I see (<10%), as most just add SSH and secure it and call
it a day (even worse when CI/CD scripts just copy files without chowning
them). But let's assume it here. In reality, even if you did have this setup
this is a skilled persistent threat we're talking about (not quite an APT, but
definitely a PT). They already own your website. Your compromised web/SSH
server is being monitored by a skilled hacker, it's inevitable they'll
escalate privileges. If they're smart enough to put in fake SSH daemons,
they're smart enough to figure something else out. Is your server perfectly
patched? Has anyone in your organization re-used passwords on your website and
gmail?

You're right that these events could happen. But you have to ask yourself
which of your actions will have a bigger impact:

* Changing to a non-standard SSH port, blocking out ~50% of all automated hacking attempts. Or port-knocking to get >90% (just a guess!).

* Using the standard port, while you still have an APT who owns your web server and will find other exploits.

~~~
cthalupa
>Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all
over the world will be trying to take over everything running on port 22.

Yep! And I should be clear: I am not saying don't change the SSH port. I'm
saying that if you care about security, at a minimum disallow public access
to SSH and set up a VPN.

>Unfortunately that's all too rare in most web-servers I see (<10%), as most
just add SSH and secure it and call it a day (even worse when CI/CD scripts
just copy files without chowning them).

I'm a bit confused here. In every major distro I've worked on (RHEL/Cent,
Ubuntu, Debian, SUSE) the default httpd and nginx packages are all configured
to use their own user for the running service. I haven't seen a system where
httpd or nginx are running as root in over a decade.

I think the bare minimum for anyone that is running a business or keeping
customer/end user data should be the following:

1) Only allow public access to the public facing services. All other ports
should be firewalled off or not listening at all on the public interface

2) Public facing services should not be running as root (I'm terrified that
you've not seen this to be the case in the majority of places!)

3) Access to the secure side should only be available via VPN.

4) SSH is only available via key access and not password.

5) 2FA is required
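Item 4, for reference, is a couple of sshd_config lines (a sketch; option names per stock OpenSSH):

```shell
# /etc/ssh/sshd_config -- key-only SSH access
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
```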

I think the following are also good practices to follow and are not inherently
high complexity with the tooling we have available today:

1) SSH access from the VPN is only allowed to jumpboxes

2) These jumpboxes are recycled on a frequent basis from a known good image

3) There is auditing in place for all SSH access to these jumpboxes

4) SSH between production hosts (e.g. webserver 1 to appserver 1 or webserver
2) is disabled and will result in an alarm

With the first set, you take care of the overwhelming majority of both swarms
and persistent threats. The second set will take care of basically everyone
except an APT. The first set you can roll out in an afternoon.

~~~
bleepblorp
Protecting sshd behind a VPN just moves your 0day risk from sshd to the VPN
server.

Choosing between exposing sshd or a VPN server is just a bet on which of these
services is most at risk of a 0day.

If you need to defend against 0days then you need to do things like leveraging
AppArmor/Selinux, complex port knocking, and/or restricting VPN/SSH access
only to whitelisted IP blocks.
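The whitelisting part is cheap to sketch with iptables; 203.0.113.0/24 is a documentation range standing in for your real trusted block:

```shell
# allow SSH only from a trusted network, drop everything else
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```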

~~~
cthalupa
Except you don't assume you're secure just because someone is on the VPN.

If the VPN server has a 0day, they now have... only as much access as they had
before when things were public facing. You still need there to be a
simultaneous sshd 0day.

I'll take my chances on there being a 0day for wireguard at the same time
there's a 0day for sshd.

(I do also use selinux and think that you should for reasons far beyond just
ssh security)

~~~
bleepblorp
A remote code execution 0day in your VPN server doesn't give an attacker an
unauthorized VPN connection, it gives them remote code execution inside the
VPN server process, which gives the attacker whatever access rights the VPN
server has on the host. At this point, connecting to sshd is irrelevant.

Worse, since Wireguard runs in kernel space, if there's an RCE 0day in
Wireguard, an attacker would be able to execute hostile code within the
kernel.

One remote code exploit in a public-facing service is all it takes for an
attacker to get a foothold.

~~~
cthalupa
I do not run my VPNs on the same systems I am running other services on, so an
RCE at most compromises the VPN concentrator and does not inherently give them
access to other systems. Access to SSH on production systems is only available
through a jumphost which has auditing of all logins sent to another system,
and requires 2FA. There are some other services accessible via VPN, but those
also require auth and 2FA.

If you are running them all on the same system, then yes, that is a risk.

------
tptacek
There's something to the idea of rehabilitating "obscurity", or at least
recognizing that "cost" is part of threat models, and you can raise costs for
particular attack vectors by degrees instead of "to infinity".

But SSH is a terrible example, because the cost to the defender of simply not
having SSH vulnerabilities is the same, or even less, than the cost of
obfuscating it with nonstandard ports, "port knocking", or fail2ban, which are
all silly ideas.

Just use SSH keys, and disable passwords.

I think maybe it comes down to this: dialing attacker costs up incrementally
can make sense if it's the most cost-effective way for a _fully-informed_
defender to improve security. But incremental cost-increasing countermeasures
aren't a substitute for sound engineering; you don't get to count "having to
learn stuff" as a valid defender cost.

~~~
rsync
"But SSH is a terrible example, because the cost to the defender of simply not
having SSH vulnerabilities is the same, or even less, than the cost of
obfuscating it with nonstandard ports, "port knocking", or fail2ban, which are
all silly ideas."

I know who I am arguing with here but port knocking is not silly. It's
_fantastic_.

When I say fantastic, I don't mean it solves all of our problems and obviates
any other protections ... what I mean is, for almost _zero cost_ [1] it adds a
non-zero level of actual protection.

As a lifelong UNIX sysadmin, it is one of the few totally unalloyed security
improvements that I have been able to add to my systems. I believe there are
sshd vulns extant that you and I don't know about and port knocking allows me
to worry _less_ about them.

I also recommend SMS alerts on successful knocks - alerts that should never
come as a surprise. This is trivial, by the way, since you can put semicolons
in the knock command:

    /sbin/ipfw add 01021 allow tcp from %IP% to 10.0.0.10 22 setup ; /usr/local/sbin/timestamped_sms 4155551212 "knock from %IP% - "

[1] knockd on FreeBSD, 10+ years, not one hang or crash.

~~~
tptacek
It solves none of your problems and adds complexity and cost to your defense
without corresponding increases to attacker costs.

If you believe there are unknown OpenSSH attacks, you can't coherently believe
that port knocking is a real defense, since port knocking doesn't do anything
to protect the SSH channel that attacks will be carried out in.

Instead, if you're actually worried about OpenSSH vulnerabilities, you
shouldn't be exposing SSH to the public Internet at all. I'm not super worried
about OpenSSH server vulnerabilities, but I would never recommend that teams
leave SSH exposed; they should just hide that stuff behind WireGuard.

~~~
ThA0x2
>It solves none of your problems

Wrong, it solves tons of them.

>adds complexity and cost

Almost zero complexity and cost. Maybe if you're bad at sysadmin work it adds
cost and complexity.

>defense without corresponding increases to attacker costs.

It adds a _huge_, almost incalculable cost increase to attackers.

>If you believe there are unknown OpenSSH attacks, you can't coherently
believe that port knocking is a real defense, since port knocking doesn't do
anything to protect the SSH channel that attacks will be carried out in.

Looks like you don't understand the concept of 0-days. Several CVEs were
listed elsewhere. I suggest researching 0-day exploits so you understand how
port knocking mitigates them.

Port knocking mitigates 0-days.

>Instead, if you're actually worried about OpenSSH vulnerabilities, you
shouldn't be exposing SSH to the public Internet at all.

I don't disagree here; a VPN is a great solution. Nonetheless, for some shops
simple port-knocking on a bastion host solves a lot of these issues and
avoids the complexity that VPNs add.

>I'm not super worried about OpenSSH server vulnerabilities, but I would never
recommend that teams leave SSH exposed; they should just hide that stuff
behind WireGuard.

No one is super worried about things like Shellshock, Heartbleed, etc. until
they happen.

Port knocking solves a lot of problems, protects you from zero-days, and
makes SSH noise a non-issue (huge signal-to-noise gains).

Used in production for years. It's fantastic.

~~~
tptacek
Port knocking adds a huge, almost incalculable cost increase to attackers. I'm
going to remember that one, thanks!

------
n0on3
It seems to me that the article is missing a few points on what "security by
obscurity" means.

From Wikipedia: "reliance [...] on design or implementation secrecy as _the
main method_ of providing security [...]"

So, to use the model mentioned in the article, a single slice of cheese. It's
not "an additional layer of defense", it's the main one (so you have other...
weaker layers? ¯\\_(ツ)_/¯)

Second, "reliance on secrecy of design and implementation" is different from
"reliance on secrecy of _whatever-else_", because design and implementation
are most often either easily discoverable (sure, occasional skids might not
scan port 64323, but what about someone who can observe your traffic?) or
pretty much guaranteed to be discovered by adversaries with (not even as much
as one might think) time and motivation.

Third, some of the examples mentioned (e.g., the decoy cars) are not even
security by obscurity, that's called deception.

So, sure, you can do non-standard stuff to make it harder for _some_ to
discover your vulnerabilities (a non-standard SSH port is actually a good
thing given the massive amount of bots around), but that should _never_ be
your only (or your main) layer of defense.

Security by obscurity is not underrated, by definition it's just bad.

~~~
cthalupa
>ssh non-standard port is actually a good thing given the massive amounts of
bots around

Except that if you use a port above 1024 (like the author does) you no longer
have assurances that a privileged user launched the process. Any
non-privileged user on a Linux system can bind to a port higher than 1024. So
all it takes is sshd restarting after an update (if it's directly listening
on a high port), or iptables rules getting reloaded (if they're being used to
forward traffic from a high port to port 22), and an attacker can have their
own credential-collecting service running where you think sshd is. Then all
it takes is someone ignoring the host key mismatch error to give up your good
creds, and now the attacker has more access into your infrastructure.
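The bind-a-high-port half of this is easy to verify as a regular user (a sketch; assumes python3 is installed):

```shell
# as a non-root user: binding an unprivileged (>1024) port just works
python3 - <<'PY'
import socket
s = socket.socket()
s.bind(("127.0.0.1", 0))   # 0 = let the kernel pick a free ephemeral port
port = s.getsockname()[1]
assert port > 1024         # ephemeral ports are all in the unprivileged range
print("bound port", port, "without root")
PY
```

Binding to 22 in the same way fails with a permission error unless you're root (or hold CAP_NET_BIND_SERVICE).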

~~~
unethical_ban
Never thought about this before, but is this a tunable thing in the kernel
config? Some way to signal to the OS "only use port ranges above 16382 for
unpriv" and move the boundary up?

~~~
pixl97
Most distributions come with a 'portreserve' daemon for just this purpose.
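Rough shape of that (port 2222 is hypothetical, standing in for whatever high port sshd uses):

```shell
# have portreserve hold the port at boot so nothing else can grab it
echo 2222 > /etc/portreserve/sshd
# then the sshd init script/unit releases it right before sshd binds:
portrelease sshd
```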

~~~
cthalupa
Mentioned in my other comment (
[https://news.ycombinator.com/item?id=24447846](https://news.ycombinator.com/item?id=24447846)
) but at most portreserve makes it a race. It cannot guarantee that an
unprivileged user cannot bind to that port.

------
JackC
The way I think about this, there are some aphorisms that work as actual
design principles, and some that are just used to defend a decision you have
already made to someone who doesn't need to understand it.

"There's no such thing as security through obscurity" is an example of the
second; you can use it to mean "shut up and stop asking questions about the
secure system I designed," but you can't use it to design a secure system or
explain why your choices are correct.

The useful design principles behind "no security through obscurity" are just a
little more complicated -- they're more like "every secure system must have
defined entropy sources (such as keys) that provide a lower bound on the
security budget against a hypothetical attacker with knowledge of everything
except for the entropy sources." And, "because obscurity does not measurably
improve the lower bound on the security budget, it is only a good idea if it
also does not raise the chance of implementation errors, does not make it
harder to obtain third party reviews of the system, and does not make the
security of the system harder to prove." An argument about whether something
is security-through-obscurity-in-a-bad-way probably actually wants to be an
argument about an underlying design principle along those lines.

I don't exactly begrudge people using "shut up and trust me" phrases in
situations where that's needed, but I think they're almost always unhelpful in
forums like this.

~~~
reagent_finder
I was going to make approximately this point. However, I think it's also
important to have some of those "shut up and trust me" phrases codified and
have them available for the layman via Google. Because sometimes those people
demand "proof" or they'll go searching for it themselves and if it's right
there to be found and most major sources agree... well, the discussion can
then be "Is this just obscurity where security is needed?" AS IT SHOULD BE.

If you get right down to it, passwords are just obscurity. Usernames are just
obscurity. In this very thread people are dismissing port knocking while it's
functionally equivalent to a password.

I will personally stand by "security through obscurity is not security"
forever because that way we can get to the actually interesting question --
what level is needed for this service?

Let's take a simple example from the public Internet -- you want to share
something. So you put it on a server with Apache. You add TLS and PFS. You
hide it in a folder structure somewhere. You add a single-use token or just
htaccess.

Any of those individually would be obscurity, but put together they are most
likely more than enough for... well, anyone. So is it still obscurity or
actual security? That's a debate for the ages, but I think most people would
agree all of those put together are fine-ish, but pick just one method and
it's just obscurity.

This whole thread is basically just a philosophical debate where half the
people haven't read the article, the other half disagrees with minutiae in the
article, the third half disagrees with major points of the article, the 4th
half is sharing anecdotes and the 5th half just wants to participate.

------
snowwrestler
All software security comes down to obscurity: it depends on the selection of
specific numbers that are known to the authorized parties, but are extremely
difficult to guess (i.e. very obscure) for the unauthorized parties.

The extreme of this strategy is to make a successful guess cost more than
anyone can possibly pay, for example by using numbers so obscure that all
known algorithms for guessing them will take longer than the heat death of the
universe to succeed, or something like that.
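As a back-of-envelope illustration of the extreme end: even at a trillion guesses per second, sweeping a 128-bit keyspace takes on the order of 10^19 years, vastly longer than the ~1.4e10-year age of the universe:

```shell
# 2^128 keys / 1e12 guesses per second / ~3.156e7 seconds per year
python3 -c 'print("%.1e years" % (2**128 / 1e12 / 3.156e7))'
```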

But while there seems to be solid math to calculate such numbers,
implementations are rarely perfect and can easily leave open the possibility
of other guesses that cost a lot less than breaking the theoretical limit.
(AKA side channels, vulnerabilities, bugs, etc)

Where "security by obscurity" comes in for criticism, it's usually because the
implementers have misjudged the amount of obscurity they are actually
creating, or misjudged the amount they need (their threat model), or both.

It's easy to make these mistakes because it is hard to create perfect
implementations, and hard to know exactly the current and future capabilities
of your attackers.

~~~
iso8859-1
Security can also be achieved by physical separation or identity based on
physical human traits. Now we're getting philosophical, but if the security
crew of the data center knows my face, and does not allow other people to
enter, would you reduce my face to a value that is "extremely difficult to
guess"?

~~~
bosswipe
Right. There are three types of authentication factors: something you know,
something you have, something you are.

Obscurity and passwords fall under "something you know". Biometrics like what
you describe would fall under "something you are". A physical key such as a
yubikey or the sim card matching an SMS challenge would be "something you
have". Multi-factor authentication is more secure but it doesn't negate the
discussion here about hardening the "something you know" factor.

~~~
chrononaut
What's interesting is that (mostly impractical at the moment) attacks on
biometrics authentication mechanisms end up summarizing that category to also
"something you have", rather than "something you are" \-- not that it negate
its particular utility.

------
jmount
Like the article. Security also needs to be sensitive to usability trade-offs.
Make things hard for adversaries, easy for intended users.

For some things, like VPNs, the adversaries are going to be more familiar with
the details than the intended users. I often joke that an effective way to
crack a VPN would be offer to configure it properly for a user in exchange for
ten minutes of unfettered access to the target company; enough users are
sufficiently frustrated they would take this bad deal fully knowing what it
meant.

~~~
shiftpgdn
This is the whole "shadow IT" thing that actually results in a lot of
security breaches. Look at the recent Twitter hack for a great example: staff
were storing login credentials in a pinned Slack message because using the
right tools was a headache.

~~~
alibarber
One thing I despise is internal systems with self signed certs because setting
it up properly is a faff or no one can agree on the latest and greatest way to
do it.

Oh cool that’s fine I’ll just click away all the big scary warnings in my
browser to access this page. I’m an engineer and know what I’m doing! It’s a
super strong key anyway. Oh wait I’ll just send this link to Bob in accounting
and tell him to _do the very thing we’ve been telling users not to do under
pain of ridicule for ages_ and then he’s now doing that 10 times a day and now
all of https is pointless because he knows that ‘it’s probably fine to ignore
it because I have to do that at work’...

~~~
outworlder
I have tried (unsuccessfully) to argue this point at a previous employer.

Email server certificate expired and IT sent messages _teaching_ people to
ignore cert validation errors.

------
kazinator
Article misuses/misunderstands the term "security by obscurity", and is
attacking a strawman position based on its own definition of what it means.

Security by obscurity refers to a situation where security is dependent on the
secrecy of an algorithm (the algorithm not being widely known or peer
reviewed) rather than (or in addition to) a secret _datum_ used with that
algorithm.

The opposite practice is to use a well-known algorithm and depend only on the
secrecy of the inputs to that algorithm.

The layers of security presented in the article do not meet the definition of
"security by obscurity".

Even a port number like 64235 is a secret datum, not a secret algorithm. It's
not a hard-to-discover secret datum; it is poorly guarded. But that's not what
"security through obscurity" means. Using a funny port number is a widely-
known system, with an objective benefit: it requires an attacker to take
certain steps that are not required with a known port number. The assumption
is that the attacker knows that alternative port numbers are being used.

------
fooblat
To me this seems like a bit of a strawman argument.

The claim was never that using obscurity is bad and should be avoided. As I
first heard it, "Security through obscurity is not security" is saying that if
you are relying on obscurity to keep your stuff secure then you aren't doing
enough.

I think this is still true and the conclusion of the article agrees

    Security by obscurity is not enough by itself. You should always enforce the best practices.

~~~
formerly_proven
> The claim was never that using obscurity is bad and should be avoided.

Yet. All of these are from HN.

> 3. Since when is obscurity a valid security measure?

> Security through obscurity, not a valid security plan.

> The problem with these "obscurity as a valid security layer" arguments is
> that there's already obscurity built into these protocols.

> Especially since most people believe "Obscurity" to still be a valid
> security technique.

> You're just reciting the same tired old rhetoric that security through
> obscurity is a valid defense mechanism. It's just not.

> I thought the general consensus here is that security by obscurity is bad.

> Obscurity is bad because it makes you _think_ it adds security.

> To maybe give some perspective _why_ security people say that security by
> obscurity is bad - and especially serving ssh via port 64323: [...]

> I dismissed it as security through (bad) obscurity but is there a valid
> security reason to do this?

> Compression is not encryption and security by obscurity is bad practice.

> it's understood that security by obscurity is bad.

> Security by obscurity is bad, of course, but in that model it's such a minor
> factor.

And countless many more. Some of these reference "security by obscurity",
which, if you're kind, you can interpret as "security only through obscurity"
(though reading in context this mostly doesn't seem to be what is meant),
while others dismiss obscurity entirely. You will also regularly find
commenters lament this point of view as the "mainstream idiocy".

~~~
Sebb767
> Obscurity is bad because it makes you _think_ it adds security.

I agree with the OP, but I also fully agree with this point. I've seen people
download the fishiest stuff or open anything because "I have an antivirus
installed". Now, I don't claim that no AV would be better in all cases, but it
is very much a factor.

~~~
magicalhippo
But it's not like that, is it? Try to guess the model number written on my
monitor.

I'll wait...

------
wglb
Raising the cost of attacks is a good thing, particularly if the cost of doing
so is not too great.

However, beware that obscurity is in the eye of the beholder, or more
relevantly, in the eye of the attacker. For example, script kiddie attackers
may be the ones who, as in the Twitter example, only scan the default ports.
That is an important element to defend against.

But a seriously skilled attacker isn't going to use script kiddie methods.
They will use more complete, likely stealthy attack patterns.

Bear in mind that what you think of as obscure may be breakfast for a skilled
attacker. If you are serious about defense, then you will be compelled to
follow the ninja threat model, which, in part, says _The attacker is going to
sit on the same network segment as the application. There’s no firewall or
filters. There’s a special place in hell reserved for products that require
firewalls or filtering to protect themselves against attack._

Focus too much on obscurity and you will fall victim to the fallacy of
"defense by presumed motive."

~~~
BlueTemplar
Is this why ~half of the people say that firewalls are not worth it for IPv6?

------
vsareto
>However, if you can reduce the risk with zero cost, you should do that.

Zero cost is rarely, rarely true with regards to operations. If you use non-
standard ports, you'll have to document that somewhere, or else it becomes
tribal knowledge.

If you don't document it, and someone leaves, how do you know how to access
your servers? At the very moment you don't know how to SSH in, you've just
paid the price. It's no longer zero cost.

If you do document it, you must now take the time to manage the permissions to
that document, figure out who needs to know, and then change access as people
come and go. All of that requires time, which also has a cost.

Plus all of this also has training costs when you onboard new people.

Zero cost is a real thing with computer science but not operations.

~~~
derefr
Zero net cost is certainly almost never a thing; but zero _incremental_ cost
is often a thing.

To further your alternate-port example: let's say you have some instances
running on Google Cloud. GCP already has a big CLI codebase that they get
everybody to use, which has a command `gcloud compute ssh` for connecting to
instances, which already has tons of magic built into it. It would therefore
be pretty easy for GCP to add _additional_ magic — e.g. randomizing the SSH
ports of newly-deployed instances, and then publishing those ports as project
secrets in a way that the gcloud CLI tool can discover and use in the `gcloud
compute ssh` subcommand.

The incremental cost of an approach like this is effectively zero: the DevOps
folks didn’t have to build anything new to get this advantage, because they
_already_ built all the infrastructure required (i.e. spent the labor-cost
you’d be spending) in the process of getting some _other_ , earlier
advantages.

In a sense, setting up a platform or infrastructure that's more
complex/flexible than what you require at the time, is the opposite of
"technical debt." Rather than saving labor now but needing to be paid down
with later labor, it requires more labor now, but _potentially_ saves labor
later. It’s a bit like paying a retainer fee: you get less than you pay for
(or nothing) up front; but in return, you get things "for free" later on.
"Tech equity" might be a good term for this — it's what you get when you
_invest_ labor into your tech stack.

~~~
vsareto
I don't disagree, but this kind of goalpost-moving is how "zero cost" will
turn into a buzzword down the line, because it'll really mean zero incremental
cost when referring to operations.

Plus, zero incremental costs may only apply if you match the situation being
presented. If not, you may have real costs associated with implementing
obscurity. This evaluation of whether you will get zero incremental costs or
not is a cost in itself.

It's just a misappropriation of the term from zero-cost abstractions, and it
bugs me, especially since it's being ported from compiler theory/engineering
to operations, two things which rarely have anything to do with one another.

They'd be way better off coming up with "cost-effective obscurity" ideas,
instead of calling this zero cost.

------
JshWright
There is a reason the military doesn't paint their tanks bright pink... Armor
is important, but if you don't get shot at in the first place, even better.

~~~
4ad
Security by obscurity is not painting tanks in camo. Security by obscurity is
assuming your enemy won't find your tanks because you didn't broadcast on
public radio where your tanks are.

~~~
JshWright
That is another appropriate analogy (and it's why the military invests in
SIGINT).

To the point though, no one should "assume the tanks won't be found", but it's
still worthwhile to do things to make it less likely they will be found.

------
aschatten
Didn't Telegram challenge this rule as well?

> Never roll your own crypto

Afaik, discovered practical vulnerabilities like [1] and [2] were patched, and
the rest are theoretical, like [3].

    > Using Symmetric Encryption in the Database: When you write data to the database, use a function like encryption_algorithm(data,key). Likewise, when you read data, use a function like decryption_algorithm(data,key). If the attacker can read your backend code, obviously he/she can decrypt your database.

I think the author misclassified this method. Actual encryption is not
obscurity. It would be, sort of, if the _key is stored in code_. But when
proper key management is in place, it's a solid approach.

[1]
[https://news.ycombinator.com/item?id=6948742](https://news.ycombinator.com/item?id=6948742)

[2]
[https://web.archive.org/web/20181118154823/https://www.alexr...](https://web.archive.org/web/20181118154823/https://www.alexrad.me/discourse/a-264-attack-
on-telegram-and-why-a-super-villain-doesnt-need-it-to-read-your-telegram-
chats.html)

[3]
[https://eprint.iacr.org/2015/1177.pdf](https://eprint.iacr.org/2015/1177.pdf)

------
peterwwillis
Risk is not just a formula. Risk is also "formulaic": when you get people used
to an idea, they become blind to things outside of that idea, and therein lies
the danger.

If your corporate IT group regularly asks users to send in their passwords via
e-mail in order to perform some remote maintenance, then the users will be
habituated to sending their password to a familiar e-mail address. If someone
from outside their company asked them for their password, they would
immediately say no. But an e-mail with the right "From: " address, they would
quickly fall for. So it becomes easy to trick the users into sending their
password to an attacker in some circumstances, because of the assumptions they
make.

Security by obscurity is just another form of this: a practice which isn't
really secure, but people may _think_ is secure, because it seems to avoid the
simplest, most stupid attacks. But literally _any action you take_ could
prevent the simplest, most stupid attacks. That doesn't mean that any action
you take makes you "more secure".

Hiding a key under a door mat or in a sun visor isn't "more secure" than
leaving it in plain view. Anyone who's not a total moron will find it, and if
that's your whole security posture, you're screwed.

------
miles
Changing SSH port is far more efficacious at reducing nonsense than the
Twitter poll in the article suggests:

> "I ran an experiment with a virtual machine exposed to the internet which
> had sshd listening on port 22. The server stayed online for one week and
> then I changed the ssh port to 222. The number of attacks dropped by 98%.
> Even though this is solely empirical evidence, it’s clear that moving off
> the standard ssh port reduces your server’s profile."[0]

> "In the time that I gathered 7,025 connection attempts to my SSH daemon on
> port 22 I received 3 on port 24."[1]

Also, great top comment by 16s[2] in this HN thread, "Why putting SSH on
another port than 22 is bad idea".[3]

[0] [https://major.io/2013/05/14/changing-your-ssh-servers-
port-f...](https://major.io/2013/05/14/changing-your-ssh-servers-port-from-
the-default-is-it-worth-it/)

[1] [https://danielmiessler.com/blog/security-and-obscurity-
does-...](https://danielmiessler.com/blog/security-and-obscurity-does-
changing-your-ssh-port-lower-your-risk/)

[2]
[https://news.ycombinator.com/item?id=6615994](https://news.ycombinator.com/item?id=6615994)

[3]
[https://news.ycombinator.com/item?id=6615734](https://news.ycombinator.com/item?id=6615734)

~~~
rjkennedy98
It's also way less effective than those numbers suggest, because the server
didn't get hacked either way. There was a 0% difference in effectiveness
between port 24 and port 22, since ssh was properly configured.

Security by obscurity only matters if you aren't secure in the first place. It
can be a good extra layer of protection, but the worst security mishaps I've
seen happened because people found the security unnecessarily burdensome and
bypassed it entirely. So in that case obscurity has a real cost.

~~~
10000truths
Even unsuccessful SSH attempts can have operational costs, though. The machine
still accepts the TCP connection and does the SSH handshake. If I’m running a
server and I’m billed by data usage or by vCPU minutes, I don’t want to waste
my allotted resources by making it easy for every half-baked crawler around
the world to make connections to my machine. Using a non-default port cuts
down those numbers significantly. Sure, a targeted attack won’t be thwarted,
but at least the server is not being DDoS’ed anymore.

------
drewg123
Almost 2 decades ago, I maintained our company's self-hosted web server on
FreeBSD/alpha. It ran a simple (thttpd) web server. I remember looking through
the logs, and seeing script-kiddie attack after attack fail due to running
thttpd instead of apache, FreeBSD rather than Linux, and alpha rather than
x86.

I obviously kept the machine patched and up-to-date, but I think I probably
could have left it unpatched, and it still would have been fine.

------
sytse
One downside of security by obscurity is that it makes it harder for whitehat
people to spot problems in your code. It is like charging everyone $1000 to
look at your source code. That is relatively more likely to deter whitehats,
since their upside is lower.

~~~
sedatk
That only works for relatively popular projects though.

------
catears
While the article does have a point that obscurity can improve defences, I
think security is not about defence. Security is about managing risk in a
consistent and rational manner. This means that any defence mechanism needs an
_appropriate_ level of defence against the threat.

Having a network share on the home network that only your household can
access, and using 2FA for it? Maybe a bit too much. Do you know your
organization will be individually targeted by smart and tenacious actors?
Changing the SSH port isn't gonna stop them.

I agree with the article that more discussion about what makes something
secure is valuable security work. Disregarding defenses at first sight because
they "sound obscure" isn't a good argument. But it also doesn't mean that
"small things that might stop someone" is a good layer of defense.

And then there is also the cost of adding security layers...

------
archi42

        Yes, I scan them all: 53.2%
        No, I use default scan: 46.8%
        186 votes
    

Well, I doubt that. My private VMs have had ssh listening on some random port
for the last decade, after I was annoyed by how the auth.log became an
unreadable spam-fest. Now it's been well over a year since the last probe.

Maybe the author's pentesting pals do that on a few public /24s or some
intranet (I would, if I were a pentester). But your average bad guy scanning
/8 blocks looking for an easy catch? Maybe with a botnet...

(TBH, I focused on security/crypto during university, but ended up in another
field - so my practical knowledge is limited.)

------
adontz
I think the article misses a more important attack vector by focusing on brute
force instead of human weaknesses.

Obscurity is naturally fragile, vulnerable to social engineering. Social
engineering is the real problem. Brute force is easy to filter out: it can be
fail2ban, or even the simplest iptables rules, like

    
    
        # Reject new connections from sources with 15+ recent hits in 10 minutes...
        iptables -A INPUT -j REJECT -p tcp --dport 22 -m state --state NEW -m recent --name TCP_SSH --update --rttl --seconds 600 --hitcount 15 --reject-with icmp-port-unreachable
        # ...or 5+ recent hits in the last minute
        iptables -A INPUT -j REJECT -p tcp --dport 22 -m state --state NEW -m recent --name TCP_SSH --update --rttl --seconds 60 --hitcount 5 --reject-with icmp-port-unreachable
        # Otherwise accept, and record the source address in the TCP_SSH list
        iptables -A INPUT -j ACCEPT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -m recent --name TCP_SSH --set
    
    

If you move from passwords to SSH keys, it increases security not really
because the number of possible keys is larger than the number of possible
passwords. More important is that you eliminate bad practice. One cannot share
an SSH key over a phone conversation, or write it down on a piece of paper and
stick it to a monitor. The change is nothing like upgrading from 1024-bit SSH
keys to 3072-bit SSH keys. If you store the SSH key on an HSM, like a YubiKey,
even better: no one can copy the key, only steal it.

You cannot really hide an IP address or port number. You'll send this
information to your colleagues and partners over SMS, Facebook Messenger,
Whatsapp, Viber, Telegram, e-mail, Skype, Zoom, many times, over multiple
channels. Or you will write it down on a wiki, like Confluence, which is
visible to the entire organization, and that knowledge is not a secret
anymore.

My greatest fear is not a script kiddie with a botnet, but an employee with an
addiction and debts.

------
helsinkiandrew
Unless you are Goldman Sachs, the NSA, or someone else who is being
specifically targeted, this is always true.

Otherwise, switching ports or making your systems a bit different to the
others on the internet means the majority of hackers - 'bots', 'scanners' and
'script kiddies' will move on to easier targets.

On the internet you don't have to outrun the bear, you just need to run faster
than some other guy.

------
blakesterz
I was thinking maybe this goes back to people in security looking at things in
a different way than others? Like ITSec folks spend all day every day reading
about every possible way a bad actor can make bad things happen. They look at
something like changing the port SSH listens on and think about all the ways
the best & brightest bad actors will get around that in no time at all.
Everything ends up looking pretty useless at some point, because you end up
seeing that it's possible to get around nearly everything.

Another example might be folks in the security community saying that SMS 2fa
is no good because all it takes to get around it is someone taking over your
phone account. Sure, that happens, but not all that often, and usually to
people with something that's worth the time & focus of talented bad actors.

"Security by obscurity is not enough by itself. You should always enforce the
best practices. However, if you can reduce the risk with zero cost, you should
do that. Obscurity is a good layer of security."

I rather like the conclusion.

------
austincheney

        risk = probability * severity
    

Security by obscurity is bad regardless of other controls because it does
little to reduce probability of attack and nothing for severity. It is only
barely helpful at reducing the probability of attack because it is ineffective
against various forms of automated footprinting. That is just the attacker.

Security controls impact everybody, though. Not only does it make the problem
obscure to an attacker, it also makes the problem obscure to non-attackers.
This dramatically increases risks because it impacts the application and
distribution of other security controls.

Since it’s barely helpful where intended and harmful where unintended,
security by obscurity only increases risk.

The analogy to software is the belief that hiding source code makes it safer.
Hidden source code is not any safer but the vulnerabilities are a bit harder
to find. The benefit of open source is that the vulnerabilities are exposed to
anybody who reads the code which allows more vulnerabilities to be exposed and
patched.

~~~
arduinomancer
Ehh the majority of companies practice security by obscurity as an extra
layer.

There's the idea in security that an attacker knowing your algorithms and
practices shouldn't mean anything, yet you rarely see companies detail the
security measures they take on internal systems, because we know keeping them
secret has no downside.

~~~
austincheney
Popularity does not validate stupidity.

Security policies at most companies are often generic and not secret.
Reporting chains for emergency remediation and asset identification are secret
because those identities are potential attack vectors. Information sensitivity
of that nature means it must be protected from disclosure and not that it
should otherwise be hidden. The key phrase for sensitivity management is: need
to know.

------
_tk_
To maybe give some perspective on _why_ security people say that security by
obscurity is bad - and especially serving ssh via port 64323:

Typically you want to know who is connecting to what server via what service
and log these connections. If something is off, an alert can be generated. If
ssh isn't served on a standardized port, logging and alerting becomes more
complicated - albeit not impossible.

There is more housekeeping to do. In case of a handoff, things like this need
to be documented. If all services run on their default ports, there is no need
to document them.

In the case of compromise, it becomes very hard to identify how a machine got
compromised.

Yes, a lot of people do not do a full port scan. But those are not the people
exploiting risky vulnerabilities. Security by obscurity reduces your risk, but
only to a certain extent. Having a proper patch management or firewall
management in place reduces your risk a lot more.

A lot of owls do get killed by humans, despite their camouflage.

~~~
mercer
> Typically you want to know who is connecting to what server via what service
> and log these connections. If something is off, an alert can be generated.
> If ssh isn't served on a standardized port, logging and alerting becomes
> more complicated - albeit not impossible.

Could you elaborate on that? I serve ssh on a non-standard port precisely in
part because it drastically cut down on the noise of failed log-ins, to the
point where when I check the logs I'm almost the only one who actually bothers
to try to log in via that port. That seems like a win to me.

~~~
thinkharderdev
Not the OP but I assume what they mean is that if you have network-wide
monitoring across a network with lots of servers then it won't be able to
easily make sense of what is happening if servers are all using non-standard
ports for things.

~~~
_tk_
Correct. This is an argument made from a corporate network perspective.

------
lizknope
I bought a new virtual machine and waited about a day before logging in. The
SSH logs showed over 100 failed login attempts - and I hadn't even logged in
yet!

I changed the default SSH port to a random high number.

I had zero failed login attempts in 2 months.

Of course use strong security methods but I suggest changing the default port
numbers just to clean up the log files.
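
For what it's worth, the change being described is a one-line edit in sshd_config (the port number here is arbitrary, not a recommendation):

```
# /etc/ssh/sshd_config
# Any unused high port; open it in the firewall before restarting sshd
Port 49731
```

Keep an existing session open while you test the new port, or you can lock yourself out.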

~~~
rudolph9
What ports did the firewall have open?

------
corytheboyd
Please do not use random variable names in source code. Uglify/minify instead.
It's a bit unclear, because right above that "tip" is one about obfuscating
code - did I miss something?

~~~
Shared404
For JS, does it not make sense to use random variable names in production?
Obviously you don't want to while developing, but it seems like an efficient
method to help obfuscate.

~~~
umvi
Yeah, GP is saying you as the developer should not use random variable names.
Let an uglifier select random names for you.

~~~
Shared404
Ok, that makes sense. Thanks for clarifying for my stupid self.

------
shmerl
Obscurity cuts both ways. While the attacker is hindered by it, so is anyone
who could audit the defense and find its deficiencies. I always thought that
was the main argument for avoiding security by obscurity: the benefit of
better audits and improved defense overall outweighs the benefit of obscuring
it.

------
daenz
No mention of port knocking for SSH. I used to be scanned constantly for SSH
logins. So I changed the port. The login attempts stopped for awhile, but
eventually they found the port. Now with port knocking, I haven't seen a
single attempt.

Security by obscurity _alone_ is bad, but as another layer, it can be great.
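
For the curious, one common way to implement this is knockd; a minimal sketch (the knock sequence and ports are illustrative, not what I actually use):

```
# /etc/knockd.conf - port-knocking sketch
[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```

Port 22 stays firewalled by default; only a source that hits the three ports in order within 5 seconds gets an ACCEPT rule inserted for its IP.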

~~~
Majromax
> Security by obscurity alone is bad, but as another layer, it can be great.

I beg to differ in your case.

Had you left SSH on its default port, what would your expected time-to-
compromise be? Presumably you weren't using a root:password credential, or
else your system would not have remained up long enough for you to implement
any obscurity.

But if an attacker, with full ability to try logins, could not reasonably
guess your login credential in the lifetime of the universe (i.e. public key
SSH or a strong password), then you've not improved security by moving to a
port-knocking model.

You have reduced _nuisance_ , but nuisance isn't part of the standard threat
model for SSH security.

To put it another way: you've not seen another unauthorized login attempt, but
would you be comfortable relying on that and use root:password as your access
credential?

~~~
daenz
I disagree. Suppose the latest SSH has a 0-day, now I am vulnerable, even
though I only use PK-auth. Obscurity is just another layer, and the purpose of
layers is to help make rare vulnerabilities (like 0-days) not compromise the
system. By hiding the door, they cannot even touch the 0-day without another
rare vulnerability.

------
xfz
Good, thought-provoking article, but use with caution.

I've seen security through obscurity misused too often as the only line of
defence, or as a "temporary" stop-gap that outlives its usefulness. It can
lead to a false sense of security.

Such measures also do not tend to keep up with changes as attacks become more
sophisticated or cheaper to carry out.

You also need to make sure that there are no unintended consequences - does
your non-standard configuration make it harder to apply upgrades? Does your
own penetration testing also scan all ports, or is it only going to discover
weak servers running on port 22 on your network?

That said, I would do the type of things mentioned in the article as an "added
bonus", but try to exclude them from my overall security evaluation (either
rough mental model or formal threat model).

~~~
2ion
> I've seen security through obscurity misused too often as the only line of
> defence, or as a "temporary" stop-gap that outlives its usefulness. It can
> lead to a false sense of security.

Exactly.

> we need for some reason make this development-stage service publicly
> available on the internet, and it's connected to a lot of our services
> internally, but we can implement AAA only later and we need it now. No issue
> publishing it on an obscure route/IP/domain name?

And the setup stays online for 5 years.

------
afrcnc
So the "developer" who created an "educational" ransomware project that was
abused for half a decade by criminal groups now has a controversial and low-
level view of security practices and is broadcasting it to the world. I'm
shocked, I tell you. Shocked!

------
ricardo81
I've used some very tight-arsed VPS providers at the low range
(128MB/1IPV4/$12 a year) and some of them mention a high load, and it's mainly
due to brute force on port 22.

It makes sense to change the port purely to avoid the low-barrier noise, but
of course it isn't much better security. Port knocking is along the same
lines.

I'm by no means a security expert but these measures would surely help: less
opportunists = less opportunities.

That said, I use public key auth only and disable any public-facing service
I'm not using.

The "security through obscurity" thing seems like a warning to avoid shortcuts
rather than some implementations that help reduce noise. As long as you
understand the fundamental problem of security, the obscurity thing is just a
sidebar.

~~~
mywittyname
> less opportunists = less opportunities.

"You miss 100% of the shots you don't take" applies to hackers as well.

------
UI_at_80x24
I like to think of this in military terms.

"Don't be where the enemy expects to find you."

As TFA and several others have pointed out:

(1) Don't use this as your only method of defense. The further from your LOCK
the better the key needs to be.

(2) Use your security in the layers that offer the most benefit.

(3) Be proactive in your defense.

- Changing your SSH port will stop the largest number of attempts on your
service.

- A non-default port PLUS port-knocking PLUS key-only PLUS whitelisted IPs
PLUS whitelisted login names is better than only one of those.

- Apply liberal firewall rules to block traffic from IPs that shouldn't have
access to your service. Country-code level blocklists are a thing.

Being obscure is about NOT being where your opponent expects to find you.
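
Several of those layers are cheap to stack in sshd_config alone (the values below are illustrative):

```
# /etc/ssh/sshd_config
# Non-default port
Port 48222
# Key-only authentication
PasswordAuthentication no
# Whitelisted login names
AllowUsers deploy admin
```

Port knocking and IP whitelisting then go in the firewall in front of it.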

------
transpute
Obscurity has been used for detection of bots engaged in profitable ad fraud,
by having web clients execute a JS payload whose behavior can be profiled.
Temporary payload obscurity enables a silent alarm, which can be used to stop
financial payouts.

Unlike many approaches to cybercrime defense, obscurity-enhanced bot detection
has led to both prosecution and extradition of accused attackers,
[https://www.cyberscoop.com/tag/methbot/](https://www.cyberscoop.com/tag/methbot/)

White Ops technical talk at PSEC18,
[https://youtube.com/watch?v=Aqdn09myGlM](https://youtube.com/watch?v=Aqdn09myGlM)

------
zczc
Security by obscurity was the perfect way to send the largest diamond across
the world:

"Due to its immense value, detectives were assigned to a steamboat that was
rumoured to be carrying the stone, and a parcel was ceremoniously locked in
the captain's safe and guarded on the entire journey. It was a diversionary
tactic – the stone on that ship was fake, meant to attract those who would be
interested in stealing it. Cullinan was sent to the United Kingdom in a plain
box via registered post." \-
[https://en.wikipedia.org/wiki/Cullinan_Diamond](https://en.wikipedia.org/wiki/Cullinan_Diamond)

~~~
rudolph9
Technically a strong password is a diversionary tactic where incorrect
passwords could be considered diversions. The difference between what is
generally regarded as "obscurity" and "security" is orders of magnitude in the
number of options one must explore in order to break in.

------
choeger
I think the article confuses two concepts. "Security by obscurity is bad"
usually applies to things like "our own proprietary hash function", or "our
own proprietary remote control protocol", or sometimes even just "well, no one
has the code, right?". I call this obscurity proper. This is often little more
than an omission.

All the positive examples in the article mean actively obscuring the view of
an attacker. This is an _addition_ of things like camouflage or distraction.

Proper obscurity is obviously bad, because there simply is no security concept
behind it. Additive obscurity is obviously smart because it adds to an
existing concept.

------
shadowgovt
It has its place; the key thing to remember about it is it's not sustainable.

Security by, say, mathematically-hard problems stays secure even when the
problem's design is understood. Security by obscurity breaks any time the
secret gets out.

(There is an overlap point where a math problem is too simple to solve and,
meanwhile, an obscure secret is "The sixteen digit number the President
memorized to launch the nukes" where the security-by-obscurity can even beat
out mathematically-secure, but the middle points of those two sets are
separate and the reliability heavily tilted in favor of the mathematical
cryptography).

------
waihtis
I run a cyber software company with a product basically structured on this
principle (deception).

The swiss cheese picture is excellent, since that is how stuff happens in
network internals, with lateral movement and other internal activities.

Say an adversary has to jump 5-10 hops from the initial point to the target
system, and you can, with very lightweight obfuscation and "obscurification",
increase the attacker's mistake rate dramatically - it makes a ton of sense
from a risk and economic perspective.

Consider the alternative (successful internal hardening and monitoring), which
is way out of scope resource-wise for most.

------
filleokus
My beard is not grey enough to know how the Security by Obscurity meme/mantra
has evolved in the tech community.

But one thought I have is that this more nuanced picture is much more
complicated to explain to beginners. Beginners / not-as-security-conscious
developers often wrongly assume that obfuscation is much more powerful than it
is.

The safest digest of "apply ≈Kerckhoffs's principle, but some obscurity on top
of that is not a bad idea if it's cheap to implement" is probably "security by
obscurity is bad!1!".

Certificate pinning or sandboxing in mobile apps won't stop people from
reverse engineering your APIs. But if your personal belief is that it takes
almost a state-actor-level attack to see your API routes or modify requests,
it will undoubtedly influence how you implement them.

I've seen some serious problems where companies do really bad things (like
sending the ID of the currently logged-in user and not checking it server-
side, allowing execution as an arbitrary user), which I guess at least partly
arose from thinking along the lines of "it's only our signed code that will
ever make these requests". Even bad developers wouldn't make the same mistakes
in ≈2020 on the web, where the understanding that the client is untrustworthy
has fully saturated the common understanding.

------
bob1029
Do we feel that honeypots might actually be one form of this? I.e. something
that isn't deterministically going to prevent an attack like an OTP
cryptographic scheme, but that may trick a large majority of attackers into
thinking they are actually in a secure production system for a long time.

I know they are used primarily for detection, but why not go the extra step
and make a honeypot that is a truly believable facsimile of a real corporate
environment so the attacker wants to stay around even longer. There are lots
of clever ways you can switch network traffic to make it look like you are
talking to one host when in reality you are talking to a VM jail under a
security administrator's desk. Load these environments up with fake, but
believable data. How would an attacker know if they are in production actual,
or fake prod? Once you "acquire" an attacker, you could even monitor their
approach and string them along with hopes of getting into SVRSQLPROD (which is
obviously going to be loaded with fake bullshit, but they won't know until
they find the symmetric encryption key, which you will probably never give
them).

Again, I think we are all clear that the above is not deterministic security
and that certain experienced attackers (or insiders) may be able to smell such
a honeypot from a mile away.

~~~
waihtis
Preface: May be biased, as I run a honeypot company.

To your question: absolutely. They are also an economically effective and
(done right) easy-to-implement solution, with very low-to-no risk of
jeopardizing legitimate traffic.

To your "why not do this" - we are working on it right now. There's a lot of
interest in what you described among above-average-maturity security teams; we
have a few customers in this niche helping us design the "attacker playpen."
You are right that it is a challenge to make it believable enough _without
introducing risk into the environment._

------
catears
I want to tackle a misunderstanding I have seen from some posters in this
thread about passwords/secrets/keys. Using a password should not be considered
a form of "obscure defense".

If you are using a password, there is a mathematical definition of how hard it
is to crack: the number of bits of entropy contained in the password. If you
use a password manager like KeePass, it will tell you the number of bits in
your password.

If it takes me 2^100 guesses for a 50% chance to discover your password then
that is not obscurity, that is a valid defense mechanism. That the password
itself is obscure is not a reason to call the strategy obscure.
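
That arithmetic is easy to sketch; the 17-character length and 62-symbol alphabet below are just for illustration:

```python
import math
import secrets
import string

# A password drawn uniformly from an alphabet of A symbols with length L
# carries L * log2(A) bits of entropy.
def entropy_bits(length: int, alphabet_size: int) -> float:
    return length * math.log2(alphabet_size)

# 17 random characters over letters + digits (62 symbols) is about 101 bits,
# clearing the 2^100 bar above.
alphabet = string.ascii_letters + string.digits
password = "".join(secrets.choice(alphabet) for _ in range(17))
bits = entropy_bits(len(password), len(alphabet))
```

Note the entropy comes from the uniform random draw, not from how weird the characters look.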

Passwords and keys are used to create an artifact that will unlock access to a
whole bunch of information. Instead of protecting each piece of information
individually, we can now focus our efforts on protecting the password instead.

With a password we have managed to make the process of protecting information
simpler, less obscure.

Sorry to discuss something a bit off-topic from the article, but I figured I
had seen the "passwords are obscure" argument so many times here and that this
could be a valuable opportunity to teach something about security.

------
kempbellt
The best security is telling people:

 _I have a super secret password to my bank account! It's super hard to guess,
and there's 12-factor authentication. You have to get my cat's paw print to
sign in._

When the truth is: There is no bank account, password, or cat. And you are
actually a homeless, broke, dog lover.

If you want to keep something secure, don't brag about how secure it is. Don't
talk about it at all.

------
noncoml
In Applied Cryptography, Schneier says obscurity is "take a letter, lock it in
a safe, hide the safe somewhere in New York".

Somehow in my mind, cryptography, eg. RSA, is also obscurity then. But instead
of obscuring the physical coordinates in the set of coordinates of New York,
we obscure the location of the private key in the set of prime numbers.

~~~
jariel
Lock it in the safe, hide it ... but then forget about the safe and make sure
everyone else has as well.

Then it doesn't exist.

------
mindfulhack
The ironic thing is that it would be logical _not_ to share or publish
examples of security by obscurity, in order for them to be more effective!

Doesn't that reveal part of the problem of 'security by obscurity'?

Indeed, how many people publicly disavowing 'security by obscurity' do so to
secretly benefit from the methodology?

------
nickcw
I'd argue that all security is security by obscurity, it is just a question of
how many attacker-seconds it takes to break.

Obscurity means keeping something private that if the attacker knew they could
access your service. Traditionally security by obscurity is something like
putting your ssh login port on port 61329 rather than port 22.

I'd argue that the above is 16 bits of obscurity, whereas your ssh key you log
in with is 1024 bits of obscurity. The attacker needs that 16 bits of port
number obscurity and the 1024 bits of ssh key obscurity to log in.

However, the attacker-seconds to break the 16-bit port number are a rounding
error compared to the attacker-seconds to break the 1024-bit ssh key.

Which is where, I guess, the idea that "security through obscurity" is bad
came from.

I'd argue that the attacker-seconds are still higher with your ssh on port
61329 though, so why not use that too?
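To make the rounding-error point concrete, a toy calculation; the guess rate is a made-up assumption, and treating the key as a pure brute-force search understates its real strength:

```python
GUESSES_PER_SECOND = 10_000  # hypothetical attacker throughput

def attacker_seconds(bits: int) -> float:
    """Expected time to search half of a `bits`-wide keyspace."""
    return 2 ** (bits - 1) / GUESSES_PER_SECOND

print(f"16-bit port number: {attacker_seconds(16):.1f} s")   # ~3.3 s
print(f"128-bit key:        {attacker_seconds(128):.2e} s")  # ~1.7e34 s
```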

~~~
athrowaway3z
I like this take on it. But the math isn't complete. You need to account for
each attack path taken.

For example: a generic SSH vulnerability means somebody is going to make a
botnet to check every port 22, leaving your 1024 bits useless and the 16 bits
worth more.

------
cj
Article is down for me. Cache:

[https://webcache.googleusercontent.com/search?q=cache:Bgl-
ex...](https://webcache.googleusercontent.com/search?q=cache:Bgl-
exCaSzkJ:https://utkusen.com/blog/security-by-obscurity-is-
underrated.html+&cd=1&hl=en&ct=clnk&gl=us)

------
RedComet
This has always been obviously true. The only sense in which the (original)
phrase has any meaning is with respect to cryptographic primitives. And even
then, all a cipher is really doing is "obscuring" data. I've never really
heard anyone other than Steve Gibson subscribe to the silly phrase.

------
DiffEq
I think most initially looked down on the security-through-obscurity layer as
flawed only because many people, a few years ago, thought it was the only
layer they needed. Somehow over time many people began to think that this
layer is not useful at all. A little thought about the issue would dispel the
urge to exclude that layer entirely.

The fact that it is so easy and cheap to implement is another reason it gets
discarded: in many organizations today people think that security can only be
had by spending boatloads of money on people and software. Surely a simple,
fast, and cheap solution can't really reduce risk, now can it?

On top of that, people neither think about nor fully understand risk, hence
the whole Covid-19 scare and the many other things in the world people are
afraid of.

------
electricant
Just as a side note, moving your ssh server port from 22 to whatever may make
your server unreachable under strict firewalls. If you are allowed to connect
to some whitelisted ports then it's highly unlikely that port 64323 will be
allowed.

------
Jumziey
This article is so odd in its conclusion. The problem lies in obscurity
generating a mess that's hard for designers to reason about, hiding obvious
weak points and making it hard to find the most valuable area to work on.
Changing from a default port should not really count as this, since it's
easily configurable and impacts neither the design nor usability (unless it
does). Port-scanning avoidance also serves an actual function in terms of
load.

Rather, what seems to be the issue is what security by obscurity actually
means and what it can be misinterpreted as.

Layering on lots of obscurity, and spending time on that when it could be
spent increasing security at the actual weak points, and/or hiding those weak
points from developers/maintainers, is problematic to say the least.

------
zemnmez
Security by obscurity is harder to reason about. Heavily obfuscated code is
always more insecure once the obfuscation is reversed. The other big
difference is that an attacker can break it incrementally, and you are not
going to be privy to whether someone is selling your deobfuscated code
online. Lastly, I'd likely not call the example of randomising the
presidential car security by obscurity. Security by obscurity does not mean
that there are secrets inherent to the protocol (or private keys would be
obscurity!). If it were the same car every time, and they relied on nobody
telling anybody, that would be security by obscurity to me; otherwise it's
just a random value with a low keyspace.

------
Aaronstotle
I like the article overall and agree with the author. The only thing that
stuck out to me was the Twitter poll: it shows that a majority of people do
scan the entire port range, which undercuts the claim that most people stick
to default scans.

------
riquito
It's like a painting in front of a safe:

\- doesn't reduce the security of the safe

\- ensures you don't advertise "THERE'S A SAFE HERE" to whoever visits your
house, possibly reducing burglary attempts

Now, if you have just the painting over a hole in the wall, you got something
wrong...

~~~
cthalupa
Now, assume that every painting in your house can be checked for a safe behind
it in milliseconds (total, for all safes) by the people in your house.

Did you actually accomplish anything?

~~~
choward
Yes. Let's say you have random people over whom you don't know (to fix
things, for example). One of them is a criminal who wants to case your house
to see if it's worth breaking into when you're out of town. If you're with
them the whole time they are working, they won't be able to just look behind
the painting, and your house doesn't become a target. That's assuming, of
course, that the painting isn't too valuable.

~~~
cthalupa
But that scenario now takes us firmly outside the realms of what we’re
discussing here. You’re actively monitoring the actions of people that you
know are going to be there.

For an Internet facing server, it would be more like an art gallery where you
have hidden the safe behind one painting, and the gallery is open 24/7 with no
security to stop people from looking behind paintings, where you know a good
portion of visitors are going to do so. You can see an identifier for each
visitor that looks behind paintings, but many visitors are doing so and the
person that comes in to crack the safe might not have been the person who
found it.

------
codingdave
The higher level of abstraction behind an article like this is that security
is a mitigation activity within a broader risk management plan. Most of the
time, the best practices in the security field are best practices for good
reasons: their costs are a reasonable price to mitigate the business risk of
not having security, so we do them.

But there are times when you just need to discourage people, not truly secure
a site. Not many, but they do exist. Pseudo-security in those cases is cheap
and meets the business needs. Likewise, there are times when best practices
aren't good enough, and you need to go beyond the norm.

Either extreme is driven by thinking through the acceptable risks, evaluating
costs, and making a decision.

------
greyhair
Obscurity takes many forms, and some are useful, particularly against common
hackers, while others are completely useless. There are a number of code-walking
tools and disassemblers that will rename all variables and function linkages
and provide annotated source. Someone still has to crawl through that source,
but all the obfuscation is removed. I know professionals that use these tools,
not to hack, but to perform security audits on products that they use within
highly secure settings.

I attended a security conference a few years ago (I cannot think of the
speaker's name right now), and he said you should assume that professional
hackers have better tools and larger budgets than you will ever have access
to.

------
ocdtrekkie
I can't tell you how many times I've seen someone say doing something for
obscurity shouldn't even be done because of the adage. Some of these adages
are really harmful.

InfoSec has very much gotten a bunch of these statements that can't be argued
with, and people won't even take you seriously if you point out the flaws with
password managers or PKI.

I've definitely used methods in my own code which "aren't trustworthy" from a
security doctrine standpoint, but proven near 100% effective, on their own.
Just because a state actor won't be fazed by it doesn't mean it isn't a
strategy that'll prevent 99% of automated attacks.

------
ccktlmazeltov
The problem of "security by obscurity" is that it assumes that the whole
security is obtained via obscurity, whereas "more security obtained by
obscurity" is good as it assumes that obscurity is used as defense-in-depth.

------
VonGuard
I had a Mac SE with an ethernet card in 2001, around when Red Worm was loose.
We put an HTTPd server on it, and watched as each Red Worm attempt came in.
This ancient, super-slow machine stayed completely safe, despite being on
Mac OS 7.5 or something around there, while Windows machines of current
vintage were being taken apart around the world. Added bonus: the slow speed
of the SE meant it took about 10 times longer for each Red Worm attempt to
give up, so we at least monopolized some tiny portion of those infected
systems for a bit, keeping them from infesting others for a few more
seconds...

~~~
jlgaddis
In case anyone else is confused at first, I'm assuming you meant

    
    
      s/Red Worm/Code Red/g

------
simonjgreen
I am a strong believer in the Swiss cheese model[1] of risk mitigation. I
first learnt about this through pilot training and apply it throughout my
professional life now.

A lot of comments here are saying "don't do x, do y." or "x and y are useless,
you should just do z".

The Swiss cheese model helps you visualise that many layers of defense each
carry value and should be recognised as such.

1\.
[https://en.m.wikipedia.org/wiki/Swiss_cheese_model](https://en.m.wikipedia.org/wiki/Swiss_cheese_model)

------
beams
Sure, it adds a little bit of “security”, like the tank-camouflage example
or the prey jumping. Those examples are poor, however, because, unlike
changing default ports, adding knocking, or obfuscating code, they are
extremely low-maintenance/cheap and come with essentially no downside.

What’s missing here is the discussion of tradeoffs. I fail to see how e.g.,
requiring port knocking adds enough security to justify the annoyance.
Changing the default port, maybe, but given how easily it’s detected anyway,
the cons still outweigh the pros IMO.

------
GhostVII
One thing I didn't see discussed in the article was the balance between the
benefits of security by obscurity, and the benefits of having your code open
source (or at least making your security methods known) so more people can
audit it. Personally I don't actually think there is that much security
benefit to having open source code since most people don't audit random
codebases for fun, but that is one of the arguments I've heard against
obscurity. Of course some methods of obscurity can still be done with open
source code as well.

~~~
topkai22
Open sourcing has to be done with the audience in mind. It generally doesn’t
make sense to (publicly) open source a system that is idiosyncratic to a
single organization. The only likely interested audience is hostile attackers.
A useful general purpose dev tool though? Sure, and the people using it might
be able to help.

------
jchook
Ultimately if governments (such as Australia, US) continue to prevent citizens
from using encryption, we will have no choice but to employ security-by-
obscurity atop secure-by-design principles to have privacy.

For example, making ciphertext look like cleartext[1], or hiding text in
images[2].

1\. [https://steganography.live/](https://steganography.live/)

2\.
[https://github.com/DimitarPetrov/stegify](https://github.com/DimitarPetrov/stegify)

------
3pt14159
In DC there is this concept of "the blob" which is basically shorthand for
"The Washington consensus that isn't verifiable, but that most people parrot
since to hold an opposing view doesn't really get you anything because even if
you're right, nobody will remember. All that they'll remember is that you're
that weird guy that looks at stuff with a strange perspective and that you may
be too dense to social signal that you're in the blob."

I've noticed the same thing with software developers.

\- "Client side encryption in JavaScript is useless!" Until CloudBleed came
out and the only company that was safe was a password manager that used it. To
thwart client side encryption you need to actually modify the contents of the
JS payload, which is detectable. But no matter how much evidence I give that
this tactic works and is actually used in production and that it actually
stops attacks, programmers just don't care.

\- "Don't do security by obscurity!" But then we all implement passwords
(which is just security by obscurity) and the best people in intelligence
don't have LinkedIn accounts. Anyone can join an OSINT forum and see the
actual tools that get used. Security by obscurity works for many, many actors.

There are many, many little bits of stuff like this. Think to yourself: how
many times has code that you've written led to an RCE vuln that was
exploited? Personally, I can only give a lower bound, and that lower bound is
zero because I'm extremely careful, but I don't pretend that it's never
happened. Anyone that is familiar with data science or economics or political
economy understands that when a signal is dampened the response is dampened.

~~~
josephcsible
> But then we all implement passwords (which is just security by obscurity)

No it isn't. Security by obscurity is explicitly keeping things _other than
passwords and keys_ secret.

~~~
Spivak
I feel like it's a pretty weak point to say that passwords and keys would be
security by obscurity if we didn't carve out a special exception for them. Why
do they get a special exception? Because they're really really hard to guess,
not because they're fundamentally different.

Let me give a real-life example of a good non-password, non-key piece of
secret information that's used for authentication. If you need to recover a
WoW account that you've lost access to the customer service reps will ask you
to tell them the names of the characters on the account. Your account name
isn't secret, and your character names aren't secret. But the relationship
_is_ because they aren't ever publicly connected. The odds of someone other
than the account owner having this information are low, and the odds of
guessing it by chance are essentially nil.

~~~
floe
They're _verifiably_ hard to guess. That is fundamentally different.

(At least when passwords are generated with enough entropy.)

~~~
Spivak
But does that make them different or are they just things that are easy to
verify? If you could calculate the entropy of another authentication scheme
would it be included?

The danger of security by obscurity is that your system might not have as much
entropy as you initially estimate and can be easily defeated. Sounds a lot
like the vulnerabilities in normal crypto applications, right?

------
axaxs
I don't buy the poll, at all. When I move SSH off port 22, I don't get 50
percent traffic. I get 0 percent. It's the first thing I do to harden any
server.

------
calibas
I think there's two different things we're talking about here, one is hiding
things and the other is using obscure ways of doing things.

Changing your SSH port is the former.

Using NETRJS instead of SSH is the latter.

Hiding things is just a good security practice in general (just understand
hackers have access to port scanners) so that kind of "security through
obscurity" isn't a bad idea. Using obscure protocols is something quite
different, and really shouldn't be in the same category.

------
tyingq
Moving sshd to a non-standard port is certainly a good example, if only
because it turns down the log noise so that an "unauthorized connection"
stands out.

------
ldng
The bad rep is probably due to too many people having used obscurity as an
excuse to hide the fact that actual security was, at best, an afterthought
for their product.

------
hnruss
The problem with security through obscurity is the false sense of security
that it provides. This in itself can lead to vulnerabilities.

Regarding data obfuscation, for example:

\- Can an average person differentiate between encrypted data and well-
obfuscated data just by looking at it?

\- Would it be reasonable for the average person to assume that obfuscated
data is equally as "secure" as encrypted data?

\- Might someone store and transmit "secure" data differently than normal
data?

------
shuringai
"If you can reduce risk probability with zero cost..." Since when is
obscurity zero cost? Have you seen how much Denuvo and VMProtect cost? Can
you name any free code obfuscator besides ProGuard that actually works? Can
you name one that supports Golang or Rust? Security through obscurity is
considered bad because it's not zero-cost, and the investment you put into it
might better go into actual security.

~~~
sedatk
Changing the port number is quite cheap.

~~~
austincheney
That is also not a security control.

~~~
sedatk
Theoretically, yes. But if it makes you get off the radar of some malicious
attacker who is capable of exploiting you, then the mission is accomplished.

~~~
austincheney
Not at all. The attacker will find it within a minute after running a port
scan.

~~~
sedatk
Not applicable to mass scanners. They simply can’t afford scanning all ports
for all hosts.

------
abhishekjha
This came up on the Stack Overflow podcast where the Reddit founders were
the guests. They mentioned that they stored plaintext passwords initially,
which is fundamentally a bad design, but at the same time it helped to block
spam. If a user starts to create a lot of accounts programmatically, they
generally use the same password, which makes them much easier to filter.
Security via obscurity, if you can do it, can be very, very effective.

~~~
tgb
You could do that without a plain-text password, though with a salt it would
be harder (you could still do it proactively by checking the password when
the account is made).

~~~
tzs
To check the new account's password at creation time against existing salted
hashes, you'd have to hash the new password with each existing password's
salt. If you are using something like bcrypt or scrypt which is designed to be
slow at this, that might take a while if you have a lot of existing accounts.

Maybe a Bloom filter approach? Besides the salted slow hash you store of each
password, also put the password in a Bloom filter. Check new passwords against
the Bloom filter. You'll get some false positives that way, but maybe that is
acceptable.
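A minimal sketch of that Bloom-filter approach; the filter size, hash count, and SHA-256-derived hashing are arbitrary choices for illustration:

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: m bits, k hash positions derived from SHA-256."""

    def __init__(self, m: int = 1 << 20, k: int = 7):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item: str):
        # Derive k independent positions by prefixing a counter.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        # True may be a false positive; False is always correct.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

signup_passwords = BloomFilter()
signup_passwords.add("hunter2")
print("hunter2" in signup_passwords)      # True: flag as likely reused
print("tr0ub4dor&3" in signup_passwords)  # False: definitely unseen
```

Since membership tests never give false negatives, a "not seen before" answer is exact; only the "seen before" answers need the follow-up slow-hash check.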

I'm seriously tempted, if I ever have to implement a password system again, to
allow up to 256 characters, just store an unsalted SHA256 hash, and tell
people on signup that they should be using a password manager with a long
random password if they care about security of their account.

------
arendtio
In my opinion, the article misses the point: security by obscurity might be
beneficial, but it is by no means as strong as real security. So the problem
is that people who take the obscurity road might not care so much about the
rest.

Security by obscurity is simply a completely different class, comparable with
dollars and cents. So if you care about security, would you rather focus on
the dollars or on the cents?

------
User23
I practice security through obscurity every day. For example I don't flash
large amounts of money when out in public. The notion that security through
obscurity isn't security is and always has been monumentally stupid. In some
sense sure cover is better than concealment, but in the real world 100%
concealment is better than 100% cover since in the former case one won't be
taking fire at all.

~~~
kag0
The problem with that (and security through obscurity in general) is that if
your opponent attacks at random, they still have a chance of getting you.
Having lived in a bad neighborhood, I can tell you that you don't have to
flash money, or even look like you have any, to get into trouble (someone
chooses you at random for a gang initiation, a crackhead wants your shoes, or
someone is just straight up crazy). It's always better to have as much cover
as is reasonable, and beyond that concealment doesn't hurt.

------
CodeArtisan
A field where obfuscation is very common is commercial video games, which
are now up to the point of using a virtual machine that generates an
instruction set randomly at compile time to obfuscate parts of the code.
These games are still cracked almost on release day.

[https://en.wikipedia.org/wiki/Denuvo](https://en.wikipedia.org/wiki/Denuvo)

~~~
somerando7
It makes the barrier to entry MUCH, MUCH higher. First you have to unpack a
binary, THEN you have to fixup any custom VM call that it makes. Basically
only incredibly specialized people/groups will be able to do this.

Compared to games of the 2000s, the barrier has been raised significantly
for hackers.

------
goalieca
Security by obscurity has a bad reputation because it should never be used
in place of a proper, secure solution when one is possible.

Most security experts will argue for adding layers of defence where the
proper solution is not possible.

There are other considerations for obfuscation as well. A risk assessment
might consider the skill of the attacker and the resources required (e.g.
computational power) to break in.

------
blackflame7000
Security through obscurity is not foolproof, but neither is cryptography in
general. It's all about making the problem as difficult as reasonably
possible. Someone could guess a private key by incredible luck according to
Murphy's law and while the probability of that happening is so extremely
improbable, it is not zero, and therefore not foolproof.

------
nitwit005
These days you have to assume hackers have read the engineering new hire guide
that you wrote up. The SSH port will probably be in there.

A big tech company will have tens of thousands of current and former
employees. Those employees may try to break in, and all the easily accessible
internal wikis or other common resources are going to end up on some hacking
forum somewhere eventually.

------
throw149102
One point I haven't seen brought up yet is that anywhere from 25% to 35% of
data breaches are related to an internal actor. Your obscurity will do nothing
in those cases, because the internal actor will actually _know_ about the
obscurity. That being said there is a place for obscurity in security, it just
has to be traded off with the usability issues.

~~~
sedatk
He didn’t claim it works for all cases. But it works for some cases, such as
throwaway accounts.

------
fortran77
I agree. If I run my SSH and my VPN on non-standard ports, the number of
probes I get a day goes from hundreds to one or two.

If I change the admin page of a Wordpress corporate site URL to something
other than the standard wp-login.php the number of scripts that try to crack
it each day goes from a thousand to zero.

It's _very_ effective, along with other precautions for locking these things
down.

------
johnisgood
In many cases it is much better to fake (or spoof) information rather than
try to hide it. Browsers come to mind. Instead of trying to hide information
about yourself, which would make you unique, just give them false information
that is common, and that way you blend in with the rest of the people,
everyone is content, and nothing true has been revealed of you.

------
Merrill
"Roll your own crypto" may not be entirely bad either.

Suppose that you encrypt your message using "my own crypto". The result is
ciphertext that looks like a random bit string. Then encrypt the ciphertext
using a standard algorithm such as AES.

An attacker will have difficulty since a successful decryption of AES is hard
to recognize as such.

~~~
thomasahle
Or you could spend that extra compute on just using a longer AES key. (Or if
you distrust AES itself, make your second layer some other well thought
through encryption scheme.)

------
Naac
Agree with most people here that security by obscurity is bad "by itself".

For example, changing your public server's ssh port from 22 to, say, 2942 is
a great way to limit the number of automated bot attempts at logging into
your server. Having a password-less ssh open on port 2942 is clearly bad on
its own, but not when combined with all the standard good-practice ssh
security.
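For reference, on OpenSSH the change itself is a one-liner; the port number and paths below are examples, not recommendations:

```
# In /etc/ssh/sshd_config (example port):
#     Port 2942
# Then reload the daemon and connect with the new port:
#     sudo systemctl reload sshd
#     ssh -p 2942 user@example.com
# Or record it once in ~/.ssh/config:
#     Host myserver
#         HostName example.com
#         Port 2942
```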

~~~
lynndotpy
My understanding is that it's important to keep SSH (and other services) on
a privileged port (below 1024). Otherwise, unprivileged malware could crash
the SSH server and take over the now-free (unprivileged) port. No idea if
this actually happens in practice, though.

------
random3
Security by obscurity naturally transforms security into a probability. When
you use given hard, opaque rules (e.g. TLS + X, Y, Z) you stop thinking in
depth.

Instead, when you think about layers of obscurity, you go much deeper,
affecting the probability at each layer (host, port, etc.).

In reality, at a different conceptual level, things like TLS are also bundles
of obscurities.

------
phs318u
One problem with “obscurity” is that, by design, as few people are aware of
the detail as possible (otherwise it's not obscure). That's both its strength
and its weakness. With far fewer eyeballs on it, it's easy to think you've
gotten it right, when in all likelihood you've probably gotten it wrong
(somewhere).

------
stjohnswarts
Security by obscurity is dumb to count as part of your main toolbox of
tactics. However, it can be icing on the cake, like moving ports around or
turning off ssh root login entirely. Lowering your footprint never hurt.
Sometimes you just have to run a little bit faster than the other guy if
you're being chased by a bear.

------
darkerside
Shhh not if you keep telling people about it!

------
kjgkjhfkjf
Yes, making security breaches harder for attackers at zero cost is obviously
good. But obscurity does not have zero cost if it makes the system less
efficient to operate. Having multiple cars in a presidential convoy is
inefficient; using non-standard ports adds complexity; obfuscating data makes
debugging harder; etc.

------
calvinmorrison
Security by Obscurity is great. This is a takeaway from the port knocking
conversation here last week. If there's a zero day exploit in sshd, I'd rather
it be behind some layer you would have to "know" to get in, rather than
sitting open to the world. Why make your target bigger than it needs to be?

------
gumby
By definition cryptography is security through obscurity, as you map
something from a small space into a large space (e.g. multiplying two numbers
is easy, while finding the two prime factors is hard).

The aphorism is unfortunately too short to be enlightening except to someone
who already understands it.

------
dec0dedab0de
I'm surprised port knocking never really caught on. Does anyone know of it
being used in production anywhere?

~~~
jonfw
Port knocking is just another way of doing a password, right? It's pretty
much just a PIN code with a slightly obscure method of inputting the digits.

I would imagine that this is open to a man-in-the-middle attack: if the
traffic were intercepted, you'd be able to see the port numbers, right?

~~~
dec0dedab0de
_if this traffic were intercepted- you 'd be able to see port numbers, right?_

Sure but I hope the service you're opening up with the knock is actually
secure like ssh.

The idea is just that you can't port-scan to find something to attack. It's
basically the same reasoning as using non-standard ports, but taken a bit
further.

~~~
jonfw
The non-standard port is of trivial value, but it's practically zero cost;
that's why it's used. Port knocking doesn't have that benefit: you're
establishing another secret that has to be maintained and accessed, but unlike
key-based auth or passwords, that secret is insecure in transit and unwieldy
to use.

If you want to add another layer and manage another secret- why not just add
another layer of the lower-friction and more secure methods we already use to
establish secret-based auth?

~~~
TillE
Which comes all the way around to: a VPN sounds like a better option in every
respect. More secure, universally supported.

I think the main reason you'd use port knocking is because it's fun and cool.

------
anonu
I seem to recall a story about how a remote server would open a port only
after a few unsuccessful calls in sequence to a pre-determined set of ports
right before.

I've always wanted to set up something like this, but it seems like a pain
to remember every time I want to connect...

------
lowwave
Totally! Security by obscurity is awesome!!!! Had a piece of software
running for over 10 years for many users. Never a hack, because the backend
is some very obscure framework on the JVM. Just upgrade the JDK and
everything still runs fine after over 10 years.

~~~
lowwave
However, comments like "SSH runs in port 22 and my credentials are
utku:123456. What is the likelihood of being compromised?"

Do you have any idea what kind of RAM and computing power an average user
has access to nowadays? Think of better obscurity abstractions than that.

Try bcrypt/scrypt-hashing your shadow password on Linux.

------
perlgeek
Security is not just about preventing attacks, but also about detection and
response.

In detection, honeypots are very useful: for example, a machine named
gitlab.yourcompany.com on your internal network which just alerts the SOC
about login attempts. That's pretty much obscurity.
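A toy version of such a honeypot listener; the port and the print-instead-of-alert behaviour are stand-ins for a real deployment that would page the SOC:

```python
import datetime
import socket

def honeypot(host: str = "0.0.0.0", port: int = 2222, max_conns: int = 1) -> None:
    """Accept connections and log each attempt; a real honeypot would
    raise an alert (e.g. to the SOC) rather than just print."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    for _ in range(max_conns):
        conn, addr = srv.accept()
        print(f"{datetime.datetime.now().isoformat()} login probe from {addr[0]}")
        conn.close()
    srv.close()

# honeypot()  # blocks waiting for probes; run on a spare box
```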

------
Havoc
The people arguing against "security through obscurity" seem to have an
implicit assumption that you're not also doing proper security.

e.g. I move my SSH to a high port, but I also use ed25519 keys. I think the
port helps, though. Plus it keeps the logs clean.

------
burtonator
I mean crypto is based on security by obscurity if you think about it. It's
just REALLY obscure.

You can technically compute the private key for someone's Bitcoin wallet,
for example. It's just that you'd hit the heat death of the universe by then.

------
MrXOR
Security by obscurity is dangerous. Any obscurity can be eliminated with
reverse engineering, spying, etc. Obscurity makes your system prone to
catastrophic collapse and loss of security. Security by open design is
necessary, but even that is insufficient on its own.

------
BorisMelnik
Yes, agree with this wholeheartedly. I have been using security by obscurity
on my servers, home, and office for 30+ years, probably since I learned about
it in the Linux Bible or something like that.

I don't think anyone assumes that when people say "security by obscurity"
they mean using only obscurity; it is a great layer to add on top of others.

In my home for instance, the entire back of my house is hidden with 10 foot
trees so when people drive by the road, they don't see my house. Now I've got
a deadbolt, alarm, cameras, a dog, and a gun to add to my layers but having
those trees there is a nice feature.

------
KingOfCoders
I agree. I change the SSH port on all machines and see far fewer attacks
from drive-by hackers.

Security-by-obscurity doesn't replace real security of course, but it removes
a lot of noise.

------
meowface
Just a tip, since I notice the author included it in their poll: "nmap
-p0-65535" can be (almost) abbreviated to "nmap -p-". That excludes port 0 but
is otherwise identical.

------
blamestross
Avoiding "Security through Obscurity" is not about how useful it is. It is
about keeping security experts and cryptographers alive and out of prison.

------
OminousWeapons
The article is correct, but it misses the true value-add of security through
obscurity: signaling lower ROI to attackers. Security through obscurity
generally forces attackers to perform more actions and do more recon. Every
additional action taken increases the risk of detection by defenders, costs
the attackers valuable time (meaning lower ROI), and makes the target less
appealing relative to other targets. Security through obscurity tactics are
absolutely useful tools in a defender's toolbox (in conjunction with other
security countermeasures).

~~~
thinkharderdev
I think it depends on what system you are talking about and where it is. So if
you have an internet-facing server running sshd on port 22 then you are going
to get hammered with low-effort, automated scans and changing to non-standard
port can cut down on noise and at least "hides" you from low-effort attackers.
But if your server is in a hardened, private subnet then any attacker that is
even in a position to connect to port 22 has already bypassed multiple layers
of security and is already invested, so they likely won't be in the least bit
deterred by a non-standard port.

------
wgjordan
Indeed, overly broad denunciations of 'security by obscurity' come up as a
point of confusion often on HN, and this post provides a good, coherent
summary of a proper response.

A defense mechanism that only partially mitigates an attack vector could be
considered 'security through obscurity' if deployed on its own, but _that same
mechanism_ could be considered 'defense in depth' if deployed alongside other
defense layers as part of a more comprehensive security model.

------
nightsd01
Rolling your own crypto: most definitely a bad idea.

Using existing best security practices AND adding in a few low risk plot
twists: genius.

------
dathinab
Also, if it makes the security setup much more complex, it's not worth it.

If it's cheap (initial dev + maintenance + usability), then why not do it?

------
terlisimo
Arguably, most security is security through obscurity.

No password -> simple password -> complex password

Plaintext -> Caesar cypher -> Vernam cypher -> modern cyphers

40-bit crypto -> 56-bit crypto -> 128-bit crypto -> 256-bit crypto

0.0.0.0 network allow-list -> /24 network allow-list -> /32 (per host) network
allow-list

allow by default -> deny by default

standard port -> non-standard port

We just add layers of obscurity until they add up to "enough" and don't grow
into "beyond tedious".

~~~
saagarjha
Passwords and keys are not "obscurity", they are mathematically difficult to
break if done correctly. There is no mathematical guarantee for security by
obscurity.
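To put rough numbers on that distinction, here is a back-of-the-envelope
comparison (the guesses-per-second figure is an illustrative assumption, not a
benchmark):

```python
import math

# Search space an attacker must cover, in bits of entropy.
# A hidden port is one choice out of 65,536 -- about 16 bits -- and a
# full port scan enumerates every one of them in minutes.
port_bits = math.log2(65536)   # 16.0

# A random 256-bit key (e.g. ed25519) cannot be enumerated at all.
key_bits = 256

print(f"hidden port:  {port_bits:.0f} bits")
print(f"ed25519 key:  {key_bits} bits")

# Even at an assumed trillion guesses per second, exhausting the key
# space takes 2**256 / 1e12 seconds -- astronomically longer than the
# age of the universe (~4.3e17 seconds).
seconds = 2**key_bits / 1e12
print(f"brute-force time at 1e12 guesses/s: {seconds:.1e} s")
```

The point is that the port "secret" vanishes under exhaustive search while the
key does not; that is the mathematical guarantee being referred to.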

------
petjuh
I've often wondered about using an older OS such as OpenVMS for which hacking
tools simply don't exist.

~~~
wglb
Good hackers make their own tools over a weekend.

And if it is open . . .

See [https://nostarch.com/bughunter](https://nostarch.com/bughunter)

------
zaptidizap
I like passing around a bunch of completely meaningless "keys" and passwords
that initially always fail.

------
aazaa
This article could benefit from a definition of "security by obscurity."

Every crypto system is based on obscurity of one kind or another. That private
key, password, or token is just an obscure form of information that may yield,
eventually, to a brute force attack. Or not. It's really hard to know for
sure.

~~~
iso8859-1
Does this mean that
[https://en.wikipedia.org/wiki/Provable_security](https://en.wikipedia.org/wiki/Provable_security)
does not exist?

------
server_bot
Fun article! I'd add that malware packers are a good example of an obscurity
layer that's typically effective in practice.

But food for thought: in the general case you can't reliably predict the
efficacy of an obscurity mechanism, so you never know if it's an actual layer
of defense or a placebo.

------
rgj
It was never “security by obscurity is bad”

It was “do not _rely on_ security by obscurity”

------
m3kw9
Ppl that say obscurity is bad must be saying: look here, try hacking my site,
it’s unhackable and secure. Everyone knows it’s good, it just makes you look
weak to say obscurity is useful. Ppl are scared to admit how useful it is in
security.

------
_emacsomancer_
If security-by-obscurity means binaries that you don't have the source code to
running on your systems, then not only is such a 'security method' not
underrated, it's downright dangerous.

------
blackflame7000
Another name for security through obscurity is stealth

------
worker767424
Aren't passwords just security through obscurity?

------
LordOfLamers
Be different from everyone else, stay calm forever.

------
cosmotic
Intentional obscurity needs to be identified as such so maintainers know what
follows isn't actual security.

Obscurity also clouds maintainability, often making things difficult to
diagnose and debug.

------
paradox242
Yes, this is one of those rules that is designed for novices. It's actually a
reasonable part of any security strategy for the reasons you mentioned.

------
ac42
Passwords / keys are obscure by definition. With this in mind, "Security by
Obscurity" technically adds one or two bits to your key.

------
hatch_q
The blog seems to ignore the fact that a password is just security by
obscurity. And having security with absolutely no obscurity is very hard.

------
Majromax
> In this post, I will raise my objection against the idea of “Security by
> obscurity is bad”.

I think this article's fundamental flaw is that it conflates the concepts of
_obscurity_ and _a secret_.

To start with, a definition: a system is secure if an attacker has no
reasonable chance of unauthorized access over a relevant period unless they
are in possession of necessary secrets.

SSH with public-key authentication is secure by this definition, since the
(remote) attacker has no realistic chance of guessing the proper secret key
within a human lifetime and there is no better-than-chance way to obtain the
secret key. Likewise, a strong, high-entropy password is impractical to guess.

Running on nonstandard ports, however? It doesn't add practical security
because guessing is so trivial. The author's Twitter poll had a 50/50 split on
whether respondents scanned all ports for pen-testing, which implies that
using a nonstandard port increases the time-to-compromise from either
(lifetime of the universe) or (about an hour) to twice that, depending on
whether the second (real) layer of security is vulnerable. In neither case
does the obscure port provide meaningful protection.

Some activities like port-knocking _can_ add security, but only if the
practitioner thinks of the knocking as a secret from the start. That requires:

* Limiting who has knowledge of the secret (i.e.: a port knocking routine known only to you is secret; one distributed in a public client for access to a production service is not),

* Having plans in place to change the secret if it is ever compromised (DeCSS) or found to be flawed, and

* Ideally ensuring that the secret cannot be guessed / confirmed independently of other secrets.
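For concreteness, a port-knocking client can be sketched in a few lines; the
knock sequence, host address, and delay below are all hypothetical, and the
server side (a firewall watching for this sequence before opening the real
port) is omitted:

```python
import socket
import time

# Hypothetical secret knock: hit these closed ports in order, and the
# firewall opens the real SSH port for our source IP. The sequence is
# the "secret" -- rotate it if it ever leaks.
KNOCK_SEQUENCE = [7000, 8000, 9000]
HOST = "198.51.100.7"   # documentation address, not a real server

def knock(host, ports, delay=0.2):
    """Send one TCP connection attempt (SYN) to each port in sequence."""
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            s.connect((host, port))   # expected to fail: port is closed
        except OSError:
            pass                       # the SYN itself is the signal
        finally:
            s.close()
        time.sleep(delay)

# knock(HOST, KNOCK_SEQUENCE)  # then connect to the real SSH port
```

Note how this satisfies the criteria above only if the sequence is narrowly
distributed and easy to rotate.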

Other suggestions in the article ignore this difference:

* Database encryption requires an attacker to possess two secrets for extraction (internal access to the database plus the key) rather than just one. It's not obscurity.

* Randomizing variable names or obfuscating code is _not_ a secret because an interested attacker can reverse the obfuscation with ordinary human levels of effort. The confidence here is strictly false, since it "secures" against low-effort attackers and not high-effort ones. The "secret" is distributed publicly, so it is no secret at all.

* The convoy example is again a secret; the point is that a would-be attacker does not know which car contains the target _and has no reasonable ability to guess_ with better-than-chance success.

Obscurity goes from marginally effective (or outright ineffective) to
counterproductive when its implementation makes it harder for the designer to
reason about their own system. Someone who rolls their totally unique
cryptosystem is relying in part on algorithmic obscurity for their security,
but in doing so they give up on established (and battle-tested) best practices
in favour of their own limited analysis.

Ultimately, the "Swiss cheese" model of security is a poor analogy because a
big number for a human is a small number for a computer. To take the convoy
analogy again, a would-be attacker is only going to get one shot, but a
computer can try billions.

------
jkire
To me, all these slogans around security exist to ensure people really, truly,
actually think about things before they go against the grain. Is using
obscurity as part of your defence always wrong? No, but equally it often adds
a false sense of security. Popularising these easy-to-remember slogans helps
change people's defaults. Nowadays, if someone sees an attempt at security by
obscurity it (hopefully) rings alarm bells and causes them to interrogate it,
to ensure that there are also other security measures in place, or that it is
otherwise OK. It's the same with "never roll your own crypto".

I find it somewhat interesting that the article uses an example which falls
right into another pitfall that "security vs obscurity" is trying to prevent.

> SSH runs in port 64323 and my credentials are utku:123456. What is the
> likelihood of being compromised?
>
> Now we changed the default port number. Does it help? Firstly, we’ve
> eliminated the global brute forcers again since they scan only the common
> ports. ... So, if you switch your port from 22 to 64323, you will eliminate
> some of them. You will reduce the likelihood and risk.

This is technically correct. However, the author has identified a security
concern that he wants to mitigate: brute force attacks. Now, you could try to
reduce that risk by using a different port, which might reduce it by 50%*, or
you could fix the issue by deploying fail2ban (or using SSH keys, or VPNs and
bastion boxes, etc.), thus negating that attack vector entirely. There isn't
even a usability argument here: making people remember the right port for SSH
is _less_ usable than setting up fail2ban. Of course there are tonnes of other
attack vectors to consider, but in general, where possible, it's better to
"properly" (fsvo.) mitigate those concerns and only rely on obscurity where
that isn't possible. If a concern is mitigated, then adding obscurity does
almost nothing, while likely proving more annoying to the end user (as with
specifying a port in the above example).
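The mechanism fail2ban automates is simple to illustrate. This is a toy
sketch of the idea (threshold-based IP banning), not fail2ban itself, and the
thresholds and IPs are made up:

```python
import time
from collections import defaultdict

MAX_FAILURES = 5   # failed logins allowed before banning
WINDOW = 600       # seconds to remember failures
BAN_TIME = 3600    # seconds a banned IP stays blocked

failures = defaultdict(list)   # ip -> timestamps of failed logins
banned = {}                    # ip -> time the ban expires

def record_failure(ip, now=None):
    """Log a failed login; ban the IP once it exceeds the threshold."""
    now = now if now is not None else time.time()
    # Forget failures older than the window, then add this one.
    failures[ip] = [t for t in failures[ip] if now - t < WINDOW]
    failures[ip].append(now)
    if len(failures[ip]) >= MAX_FAILURES:
        banned[ip] = now + BAN_TIME

def is_banned(ip, now=None):
    """True if the IP's ban has not yet expired."""
    now = now if now is not None else time.time()
    return banned.get(ip, 0) > now

# Five quick failures from one IP and it is shut out entirely.
for _ in range(5):
    record_failure("203.0.113.9", now=1000.0)
print(is_banned("203.0.113.9", now=1000.0))   # True
```

Unlike an obscure port, this actually closes the brute-force vector: the
attacker gets a handful of guesses, not millions.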

Now of course that's not to say that you should never use obscurity, but if
you do then I think it's entirely reasonable to expect a good justification
for why it's appropriate. For example, sharing via secret URLs is easy to
justify in some settings, but it may not be OK for really sensitive documents,
as it's relatively easy for links to be shared in error with the wrong people.

RE some comments about using obscurity to signal that your deployments would
be harder to get into, so that attackers don't bother: I'd genuinely love to
know if that is true or not. I wouldn't be surprised if attackers assumed
obfuscation means that the more advanced security measures haven't been
deployed (otherwise why bother with obfuscation?).

* Based on the Twitter poll in TFA, though for a targeted attack it seems sensible to assume that if port 22 doesn't work they'd try again with other methods.

------
valgeirg
I am not going to read the article because I am too lazy, but I agree.

------
kseifried
Making security by obscurity actually work

The reality is security by obscurity CAN work, but only if three critical
elements are met:

The first is to know when the obscurity has failed.

The second is to be able to quickly change the obscured component (e.g. a
password).

The third element is the hardest: security by obscurity only really works if
you can survive exposure of the obscured data/system.

Which leads to the fourth element: security through obscurity IS NOT security
through secrecy (so either I'm a liar or bad at counting, leave your vote in
the comments below).

Let's start with the 4th element, obscurity vs. secrecy. Passwords. Passwords
generally only work if they are secret. Some password systems like Kerberos
take great pains to ensure passwords remain secret, for example by NOT sending
the password itself to a remote system, but by sending proof that the user has
the password (grossly simplified, but generally correct; now you understand
Kerberos!). Secrecy involves hiding things whose exposure creates a problem
that can't be solved, like your password, the formula for Coca-Cola and so on.
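A toy version of that proof-of-possession idea is a challenge-response over a
shared secret. To be clear, this is an illustration of the general pattern,
not the actual Kerberos protocol:

```python
import hashlib
import hmac
import os

# Shared secret known to both sides; it never crosses the wire.
password = b"correct horse battery staple"

# The server sends a fresh random challenge...
challenge = os.urandom(16)

# ...and the client proves it holds the password by returning an HMAC
# of the challenge, rather than the password itself.
proof = hmac.new(password, challenge, hashlib.sha256).digest()

# The server recomputes the same HMAC and compares in constant time.
expected = hmac.new(password, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(proof, expected)
print("authenticated without transmitting the password")
```

An eavesdropper sees only the challenge and the HMAC, neither of which reveals
the password, and the random challenge prevents replaying an old proof.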

Obscurity won't work for things that need to be secret. Obscurity won't work
for things that once exposed result in the game ending.

Even when an item can be obscured, it is still important to know when it is no
longer obscured; otherwise you now have an element of your security system
that has effectively been breached. For example, if you are using randomized
port numbers to prevent SSH scanners from constantly trying default
username/password combinations, and someone (like shodan.io) port scans you
and publishes your SSH server ports, you either need to change them or stop
relying on that obscurity for your security (e.g. I used to change my SSH port
#'s just to reduce logging activity and make it easier to filter/read logs for
actually malicious activity).

The second element is that once your obscurity becomes known, you need to be
able to change it. If you can't change your SSH port # (because you don't have
a way to tell clients where it is), then you have a security control that
cannot be recovered, and you lose it. Security elements should always strive
for long-term survivability, because the simple fact is attackers get to try
more than once.

The third and final element (because we started counting at 4!) is that your
system cannot simply fail because the obscure element was discovered. Using a
non-standard port for SSH works if you also use strong passwords or (ideally)
key-based login. Obscuring SSH ports while leaving a default admin:1234 login
is brittle and, as evidenced by scanners like shodan.io, easily exploited.

I think, honestly, the best use case for "security by obscurity" is to cut
down on the noise of logs and casual scanning/scripted hacking, which can be
valuable: having less chaff to sort through for actual attacks can both save
time and money, and give you a better chance of finding the real attacks.

[https://app.voice.com/post/@kurtseifried/making-security-by-...](https://app.voice.com/post/@kurtseifried/making-security-by-obscurity-actually-work-1599853811-1)

------
cthalupa
This article misses the point and makes a bunch of arguments that fall apart
on anything more than the surface level... much like security through
obscurity.

We'll look at the SSH port example.

What does changing the port get you? You no longer get hit by the automated
sweeping that hits basically all internet accessible IPs. Cool! So you had
root/password or root/apple or whatever, you were going to get owned by the
automated scans, good, you're now more secure. But you shouldn't have been
using a weak password to begin with, and now there is a very real risk that
you think you are more secure than you were previously.

He compares this to animals that have natural camouflage, or the President
switching cars in his convoy. But there's no guarantee that you will see an
owl in a tree, or be able to determine which car the President is in. But you
can scan every port on an IP and find sshd listening with fingerprinting. The
cost there is basically zero for someone that wants to attack you. If all I
had to do was wait a few seconds longer to make sure I saw an owl or know what
car the President is in, then they would not be effective either. You cannot
compare situations where there are specific limitations and use them as proof
positive for a situation where those limitations don't exist.

And this is important: his recommendation to run sshd on 64323 is also
actively making you _LESS_ secure. Ports under 1024 are privileged on Linux -
you must have superuser privileges or otherwise be granted access to bind to
these ports. No such protection exists for 64323. Now, let's say an attacker
has a user-level compromise on that server: they start a process that monitors
whether sshd ever restarts/crashes/otherwise stops listening on its port, and
as soon as it does, they start their own malicious sshd replacement. Now all
it takes is someone ignoring the host key mismatch to give away their
credentials, which the attacker can then likely use to penetrate further into
your environment.
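The privileged-port behavior is easy to demonstrate. The sketch below assumes
a Linux/Unix host and a non-root process (as root, or with CAP_NET_BIND_SERVICE,
the low-port bind would succeed too):

```python
import socket

def try_bind(port):
    """Attempt to bind a TCP socket to the given port on localhost."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return "ok"
    except PermissionError:
        return "denied"        # privileged port, no root
    except OSError:
        return "unavailable"   # e.g. something else already bound it
    finally:
        s.close()

# As an unprivileged user, ports below 1024 are off limits...
print("port 443:  ", try_bind(443))     # typically "denied" without root
# ...but any user can grab a high port like 64323 the moment it's free.
print("port 64323:", try_bind(64323))
```

This is exactly the asymmetry the comment describes: on port 22 only root can
impersonate sshd, while on 64323 any compromised user account can.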

The author doesn't fully understand the knock-on effects of his suggestion
here, and as a result one of his security-by-obscurity tips makes you less
secure against a focused attacker. Meanwhile, if you REALLY wanted to secure
SSH, you would not make it listen on internet-accessible IPs, and would only
have it available when you VPN in and access it via a jumphost, using
key-based auth with 2FA on top.

When you start obfuscating your code, using random variable names, and
generally making it harder to read, how much more likely are you to introduce
bugs than you would with a clean code base? Are human introduced bugs more
likely to be a security risk than variable names being random? Than code being
obfuscated?

I don't disagree with encrypting the database, but I also don't consider
encryption or password/key protecting something security by obscurity.

Basically: obscuring things helps protect you from low-effort attackers that
should not be scary to you to begin with. It does little to nothing to protect
you from dedicated attackers, and potentially introduces new risks that allow
dedicated attackers easier access. The sort of security measures you should be
implementing to stop dedicated attackers will already eliminate the risk from
the low-effort attackers.

NONE of the arguments in this article are new, and they have all been argued
against quite extensively in the past.

------
yalogin
The author doesn’t understand the phrase “security by obscurity” and doesn’t
know why we use it. He took the normally used phrase literally and ran with
it.

The phrase is used to suggest developers shouldn’t think that obscuring
something provides security. We don’t say not to obscure stuff. In fact, all
the examples in the article are things already used in products. So the
security community already uses the best available methods to secure the given
task/stack.

------
resfirestar
In my opinion the SSH example with a non default port, random username and
easy password is a perfect example of a bad kind of security through
obscurity: instead of a user friendly and foolproof approach (disabling
password authentication and using keys), we introduce multiple layers of
obscurity that make life harder for the sysadmin and users, which collapse as
soon as someone creates an account on the box without a sufficiently obscure
name. When it inevitably fails (either because of the aforementioned reason or
because a global scanner has the clever idea of trying some more obscure
usernames) everyone looking back on it will wonder why you built this Rube
Goldberg machine instead of just using SSH keys.

Changing the RDP port is a slightly better example of actually using security
through obscurity as a defensive layer because Microsoft doesn’t give you any
good ways to lock down RDP (best practice is of course keeping it behind a VPN
or using a Remote Desktop solution with a more modern authentication system),
but from a practical point of view I know several companies that were hit with
ransomware this year via RDP on a non-standard port. I think they would rate
the risk reduction from that approach pretty low.

Finally, symmetric database encryption is not an obscurity measure; as the
author himself points out, it specifically protects data against an attacker
who can query the database but cannot find the key. Whether the attacker can
get the key is a matter of capability, not determination or luck.

