Security by obscurity is underrated (utkusen.com)
935 points by pcr910303 8 days ago | hide | past | favorite | 508 comments





Agree with the article.

People have been misinterpreting "security by obscurity is bad" to mean any obscurity and obfuscation is bad. Instead it was originally meant as "if your only security is obscurity, it's bad".

Many serious real-world systems do use obscurity as an additional layer. Sometimes you know a dedicated attacker will eventually get through; what you're looking for is to delay them as much as possible, so that a successful attack takes long enough that it's no longer relevant by the time it lands.


In nature, prey animals will sometimes jump when they spot a predator[1]. One of the explanations is that this is the animal communicating to the predator that it is a healthy prey animal that would be hard to catch and therefore the predator should choose to chase someone else.

I think we can kind of view obscurity in the same way. It's a way to signal to a predator that we're a hard target and that they should give up.

Of course in the age of automation, relying on obscurity alone is foolish because once someone has automated an attack that defeats the obscurity, then it is little or no effort for an attacker to bypass it.

Of course, sprinkling a little bit of obscurity on top of a good security solution might provide an incentive for attackers to go someplace else. And I can't help but think of the guy who was exploring ways to perform psychological attacks against reverse engineers [2].

[1] - https://en.wikipedia.org/wiki/Stotting

[2] - https://www.youtube.com/watch?v=HlUe0TUHOIc


>I think we can kind of view obscurity in the same way. It's a way to signal to a predator that we're a hard target and that they should give up.

This has it completely backwards. Security through obscurity's goal is not to signal predators, it's the opposite. The goal is to obscure, to hide. The "signal" is there is nothing here (or nothing here worth your time). One of the best examples (it's in the article!) is changing the default SSH port. Just by obscuring your port you can usually filter out the majority of break-in attempts.

The only way security through obscurity signals to "predators" is if they've seen past your defence, and thus defeated the obscurity. Obscurity (once revealed) is not a deterrent. Likewise an authentication method (once exploited) is not a deterrent.

>Of course in the age of automation, relying on obscurity alone is foolish because once someone has automated an attack that defeats the obscurity, then it is little or no effort for an attacker to bypass it.

This is true of basically any exploit. Look no further than metasploit. Another example: a worm is a self-automating exploit.


> This has it completely backwards. Security through obscurity's goal is not to signal predators, it's the opposite. The goal is to obscure, to hide. The "signal" is there is nothing here (or nothing here worth your time).

Most of the usages of "security through obscurity" that I've seen dissected and decried haven't been in the sense that something was being hidden, but rather that something was being confused. For example, using base 64 encoding instead of encrypting something. Or running a code obfuscator on source code instead of making the code actually secure.

Either way, the economic costs that I'm talking about are valid. If an attacker sees that your SSH port isn't where it's supposed to be, OR if an attacker sees that your SSH port ignores all packets sent to it (unless you first send a packet that's 25 0xFF bytes), then either way they're being signaled that you are more trouble than the computer that has an open telnet port.

These are slightly different usages of the same term, but the effect looks the same to me. More investigation or automation can make the obscurity go away, but it does make things a bit harder.


Fair point! Obscurity as confusion is not what I had in mind, but your points on confusion are totally valid. Your analogy with predators works better here.

Using base64 encoding, or encrypting your database, are both examples in the article. While I agree base64 is super trivial, the point about either of these is defence in depth. In the language of the article, it's reducing likelihood of being compromised.

>If an attacker sees that your SSH port isn't where it's supposed to be OR if an attacker sees that your SSH port ignores all packets sent to it (unless you first send a packet thats 25 0xFF bytes), then either way they're being signaled that you are more trouble than the computer that has an open telnet port.

This is semantics. Personally I'd say if an attacker cannot sense anything to connect to, there is no "signal" you're sending. You're rather not sending a signal that you're a threat, as you're not sending a signal at all due to being functionally invisible. Otherwise, we could say literal nothingness is sending the same signal that your server is. We agree on the substance here, i.e. the obscurity increases the economic cost of hacking and works as a disincentive, so we may just agree to disagree on the semantics.


There is supposed to be a response when a port is closed, telling you the machine is online but not listening on that port. https://en.wikipedia.org/wiki/Port_scanner

Most people have firewalls configured to simply drop traffic not destined for open ports, in which case there is no response, as the traffic never makes it beyond the firewall.
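The difference is easy to see from the client side. Here's a small illustrative sketch (my own, stdlib only; the `probe` helper is hypothetical): a closed port answers with a RST, while a DROP rule answers with silence.

```python
import socket

def probe(host: str, port: int, timeout: float = 2.0) -> str:
    """Classify a TCP port the way a scanner would: open, closed, or filtered."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "closed (RST received -- host is up)"
    except socket.timeout:
        return "filtered (no reply -- likely a DROP rule)"
    finally:
        s.close()
```

Against a firewall that DROPs, the attacker can't even tell a host exists; against a plain closed port, the RST confirms there's a machine worth probing further.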

If you'd like to be very visible in a different way, you could always waste resources:

1. Endlessh: https://news.ycombinator.com/item?id=19465967

2. Tarbit: https://github.com/nhh/tarbit


“Security through obscurity” means something like e.g. “uses a bespoke unpublished crypto algorithm, in the hopes that nobody has put in the effort to exploit it yet.”

Usually this is a poor choice vs. going with the published industry standard, because crypto is hard to get right, and people rolling their own implementations usually screw it up, making life much easier for dedicated attackers than trying to attack something that people have been trying and failing to breach for years or decades.

Software makers for example typically don’t publish the technical details of their anti-piracy code. But this usually doesn’t prevent software that people care about from being “cracked” quickly after release.


Banking software uses all sorts of security through obscurity. In fact, Unisys used to make custom 48-bit CPUs for their ClearPath OS to make targeting the hardware very difficult without inside knowledge of the chip architecture.

You are making the same argument this article is trying to explain to you. Security by obscurity is not bad because _on its own_ it's not enough; it's good because, coupled with other layers, it adds security. I have been told to remove security-by-obscurity layers from systems by people who don't grok this. Security was, in a few cases, reduced to nothing: systems with only one industry-standard approach lay totally open on the Internet after a single misconfiguration or a single published CVE. Any other layer would have helped, however "insecure", but they were removed due to the misconception that the layers themselves were "insecure".

I would go so far as to say the first layer should always be security by obscurity for any unique system. If you fire up a web server with the first security requirement that each HTTP request must carry the header X-Wibble: wobble, I promise you this layer of security will be working hard all day long. Cheap, almost impossible to get wrong; it's not sufficient, but it works.
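A minimal sketch of that header gate, using Python's stdlib http.server (my own illustration; the header name and value are of course whatever secret you pick):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class WibbleGate(BaseHTTPRequestHandler):
    """Reject any request lacking the magic header, before real logic runs."""

    def do_GET(self):
        if self.headers.get("X-Wibble") != "wobble":
            self.send_error(404)  # look like a dead endpoint to scanners
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello, authorized client\n")

    def log_message(self, *args):
        pass  # keep scanner noise out of the logs

# To serve for real:
#   HTTPServer(("0.0.0.0", 8080), WibbleGate).serve_forever()
```

In practice you'd put the same check in your reverse proxy rather than the application, but the effect is the same: every mass scanner sees a 404 and moves on.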

Using a non-standard SSH port is a bad example because nmap can see through that deception in a few seconds. Any attacker who is looking for more than just the lowest of low-hanging fruit will not be even slightly deterred.

A better example would be a port-knocking arrangement that hides sshd except from systems that probe a sequence of ports in a specific way. This is very much security by obscurity, because it's trivial for anyone who knows the port sequence to defeat, but it's also very effective as anyone who doesn't know the port sequence has no indication of how to start probing for a solution.
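As an illustration only (the sequence, class, and method names here are made up), the server side of such an arrangement boils down to a small per-source state machine; a real daemon would watch firewall logs or raw packets and insert an "allow" rule when the sequence completes:

```python
KNOCK_SEQUENCE = (7000, 8000, 9000)  # secret knock sequence (assumption)

class KnockTracker:
    """Track how far each source IP has progressed through the knock sequence."""

    def __init__(self, sequence=KNOCK_SEQUENCE):
        self.sequence = sequence
        self.progress = {}  # source IP -> number of knocks matched so far

    def knock(self, src_ip: str, port: int) -> bool:
        """Record one SYN from src_ip; return True when sshd should be opened."""
        i = self.progress.get(src_ip, 0)
        if port == self.sequence[i]:
            i += 1
        else:
            # Any wrong port resets the sequence (it may restart it).
            i = 1 if port == self.sequence[0] else 0
        self.progress[src_ip] = i
        if i == len(self.sequence):
            self.progress[src_ip] = 0
            return True  # caller would now whitelist src_ip for sshd
        return False
```

A port scanner probing ports one by one will reset the sequence constantly and never see anything listening.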


> Using a non-standard SSH port is a bad example because nmap can see through that deception in a few seconds.

Compared to milliseconds. Do yourself the favor and run one sshd on port 22 vs one on a port >10000, then compare logs after a month. The port-22 one will have thousands of attempts; the other hardly tens, if any.

The 99% level we're defending against here is root:123456 or pi:raspberry on port 22, which is dead easy to scan the whole IPv4 space for. 65K ports per host, though? That takes time and, given the obvious success rate of the former, is not worth it.

Therefore I'd say it's the perfect example: It's hardly any effort, for neither attacker nor defender, and yet works perfectly fine for nearly all cases you'll ever encounter.

EDIT: Note that it comes with other trade-offs, though, as pointed out here: https://news.ycombinator.com/item?id=24445678
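The economics can be sketched with a back-of-envelope calculation (the probe rate is an arbitrary assumption, roughly what a fast internet-wide scanner manages):

```python
# Cost of scanning one port across IPv4 vs all ports across IPv4.
IPV4_HOSTS = 2**32
PORTS = 65535
RATE = 1_000_000  # probes per second (assumption)

one_port_hours = IPV4_HOSTS / RATE / 3600
all_ports_years = IPV4_HOSTS * PORTS / RATE / (3600 * 24 * 365)

print(f"single port, whole IPv4 space: {one_port_hours:.1f} hours")
print(f"all {PORTS} ports, whole space: {all_ports_years:.1f} years")
```

At that rate, sweeping one port across all of IPv4 takes about an hour, while sweeping every port takes years, which is why the mass scanners stick to port 22.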


I know we've spoken in another thread, but I think it's important for people to understand that this sshd thing is a perfect example of why it isn't this easy. You reduce log spam by moving to a non-privileged port, but you also reduce overall security: a non-privileged user can bind to a port above 10k, but can't bind to 22. If sshd restarts for an upgrade, or your iptables rules remapping a high port to 22 get flushed, a non-privileged user who got access via an RCE on your web application can now set up their own fake sshd and listen in on whatever you're sending, provided it binds to that port first and you ignore the host-key-mismatch error on the client side.

Or you can implement real security, like not allowing SSH access via the public internet at all and not have to make this trade off.


Here's a counter-example (as I said elsewhere in this thread):

Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.

I'll also point out that we're generally talking about different threat vectors here, so it's good to lay them out. I don't think obscurity helps against a persistent threat probing your network, it helps against swarms.

> a non-privileged user can bind to a port above 10k, but can't bind to 22. sshd restarts for an upgrade, or your iptables rules remapping a high port to 22 get flushed, that non-privileged user that got access via a RCE on your web application can now set up their own fake sshd and listen in to whatever you are sending if it manages to bind to that port first and you ignore the host key mismatch error on the client side.

This is getting closer to APT territory, but I'll bite. If someone has RCE on your SSH server, it honestly doesn't matter what port you're running on; they already have the server. You're completely right that it would work if you have separate Linux users for SSH and the web server. Unfortunately that's all too rare in most web servers I see (<10%), as most just add SSH, secure it, and call it a day (even worse when CI/CD scripts just copy files without chowning them). But let's assume it here. In reality, even if you did have this setup, this is a skilled persistent threat we're talking about (not quite an APT, but definitely a PT). They already own your website. Your compromised web/SSH server is being monitored by a skilled hacker; it's inevitable they'll escalate privileges. If they're smart enough to put in fake SSH daemons, they're smart enough to figure something else out. Is your server perfectly patched? Has anyone in your organization re-used passwords on your website and Gmail?

You're right that these events could happen. But you have to ask yourself which of your actions will have a bigger impact:

* Changing to a non-standard SSH port, blocking out ~50% of all automated hacking attempts (or port-knocking to get >90% - just a guess!).

* Using the standard port, while you still have an APT who owns your web server and will find other exploits.


>Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.

Yep! And I should be clear: I am not saying just don't change the SSH port. I'm saying if you care about security, at a minimum disallow public access to SSH and set up a VPN.

>Unfortunately that's all too rare in most web-servers I see (<10%), as most just add SSH and secure it and call it a day (even worse when CI/CD scripts just copy files without chowning them).

I'm a bit confused here. In every major distro I've worked on (RHEL/Cent, Ubuntu, Debian, SUSE) the default httpd and nginx packages are all configured to use their own user for the running service. I haven't seen a system where httpd or nginx are running as root in over a decade.

I think the bare minimum for anyone that is running a business or keeping customer/end user data should be the following:

1) Only allow public access to the public facing services. All other ports should be firewalled off or not listening at all on the public interface

2) Public facing services should not be running as root (I'm terrified that you've not seen this to be the case in the majority of places!)

3) Access to the secure side should only be available via VPN.

4) SSH is only available via key access and not password.

5) 2FA is required

I think the following are also good practices to follow and are not inherently high complexity with the tooling we have available today:

1) SSH access from the VPN is only allowed to jumpboxes

2) These jumpboxes are recycled on a frequent basis from a known good image

3) There is auditing in place for all SSH access to these jumpboxes

4) SSH between production hosts (e.g. webserver 1 to appserver 1 or webserver 2) is disabled and will result in an alarm

With the first set, you take care of the overwhelming majority of both swarms and persistent threats. The second set will take care of basically everyone except an APT. The first set you can roll out in an afternoon.
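Item 4 of the first list can be locked in with a few sshd settings. A sketch of the relevant /etc/ssh/sshd_config directives (these are standard OpenSSH options; the exact set depends on your version and policy):

```
# Key-only SSH (item 4 above) -- standard OpenSSH directives.
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password
PubkeyAuthentication yes
```

Keep an existing session open while testing, and validate with `sshd -t` before reloading, so a typo doesn't lock you out.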



Protecting sshd behind a VPN just moves your 0day risk from sshd to the VPN server.

Choosing between exposing sshd or a VPN server is just a bet on which of these services is most at risk of a 0day.

If you need to defend against 0days then you need to do things like leveraging AppArmor/Selinux, complex port knocking, and/or restricting VPN/SSH access only to whitelisted IP blocks.


Except you don't assume that just because someone is on the VPN you're secure.

If the VPN server has a 0day, they now have... only as much access as they had before when things were public facing. You still need there to be a simultaneous sshd 0day.

I'll take my chances on there being a 0day for wireguard at the same time there's a 0day for sshd.

(I do also use selinux and think that you should for reasons far beyond just ssh security)


A remote code execution 0day in your VPN server doesn't give an attacker an unauthorized VPN connection, it gives them remote code execution inside the VPN server process, which gives the attacker whatever access rights the VPN server has on the host. At this point, connecting to sshd is irrelevant.

Worse, since Wireguard runs in kernel space, if there's an RCE 0day in Wireguard, an attacker would be able to execute hostile code within the kernel.

One remote code exploit in a public-facing service is all it takes for an attacker to get a foothold.


I do not run my VPNs on the same systems I am running other services on, so an RCE at most compromises the VPN concentrator and does not inherently give them access to other systems. Access to SSH on production systems is only available through a jumphost which has auditing of all logins sent to another system, and requires 2FA. There are some other services accessible via VPN, but those also require auth and 2FA.

If you are running them all on the same system, then yes, that is a risk.


For a non-expert individual who would like to replace commercial cloud storage with a self hosted server such as a NAS, do all these steps apply equally?

I am limiting the services to simple storage.

Looks like maintaining a secure self-hosted cloud requires knowledge, effort, and continuous monitoring and vigilance.


Most of those are good practices for a substantial cloud of servers that are already expected to have sophisticated configuration management. They're easy to set up in that situation, and a good idea too because large clouds of servers are an attractive target - they may be expected to have lots of private data that an attacker might want to steal and lots of resources to be exploited.

A single server run by an individual and serving minimal traffic would have different requirements. It's a much less attractive target, and much harder to do most of those things. For example, it's always easy and a good idea to run SSH with root login and password authentication disabled, run services on non-root accounts with minimum required permissions, and not allow things to listen on public interfaces that shouldn't be. Setting up VPNs, jumpboxes, 2FA, etc is kind of pointless on that kind of setup.


>Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.

But how much of a threat is this? Who's going to drop an ssh 0day with a PoC for script kiddies to use? If it's a bad guy, he's going to sell it on the black market for $$$. If it's a good guy, he's going to responsibly disclose it.

>You're right that these events could happen. But you have to ask yourself what's actions of yours will have a bigger impact:

>* Changing to non-standard SSH port, blocking out ~50% of all automated hacking attempts. Or port-knocking to get >90% (just a guess!).

But blocking 50% of the hacking attempts doesn't make you 50% more secure, or even 1% more secure. You're blocking the bottom 50% of the barrel in terms of effort, so having a reasonably secure password (i.e. not on a wordlist) or using public key authentication would already stop them.


It makes the logs less noisy, and with much less noisy logs it is easier to notice when something undesirable is happening. Also, from my experience, that 50% is more like 99%.

> Unfortunately that's all too rare in most web-servers I see (<10%), as most just add SSH and secure it and call it a day (even worse when CI/CD scripts just copy files without chowning them).

If you made a list of things like this which annoy you, I would enjoy reading it.


> Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.

And with all those compromised servers they could easily scan for sshd on all ports.


Yeah, I'm seeing a lot of nonsense in here. Why is SSH publicly accessible in the first place???

Security through obscurity is just some feel good bullshit.


Why shouldn't SSH be public? It is useful and simple to secure.

Well, there's basically two stances you can reasonably take:

1) SSH is secure enough just by using key based auth to not worry about it.

2) SSH isn't secure enough just by using key based auth so we need to do more stuff.

If you believe #1, then you don't need to do anything else. If you believe #2, then you should be doing the things that provide the most effective security.

Personally, I believe #1 is probably correct, but when it comes to any system that contains data for users other than myself, or for anything related to a company, I should not make that bet and should instead follow #2 and implement proper security for that eventuality.

I'm willing to risk my own shit when it comes to #1, but not other people's.


Fair enough, I've edited the comment to reflect this :)

> The 22 one will have thousands of attempts

The range in the figures is surprising. I leave everything on port 22, except at home where, due to NAT, one system is on port 21.

On these systems, since 1 September:

  lastb | grep Sep\  | wc -l

  160,000 requests  (academic IP range 1),
  120,000 requests  (academic IP range 2),
    1,500 requests¹ (academic IP range 3),
    1,700 requests² (academic IP range 3),
  180,000 requests³ (academic IP range 3, just the next IP),
   80,000 requests  (home broadband),
   14,000 requests  (home broadband, port 21),
    5,000 requests  (different home broadband, IPv4 port)
        0 requests  (           ,,     ,,      IPv6 port)

¹²³ is odd: all three run webservers, ² also runs a mailserver, yet they have sequential IP addresses.

I don't bother with port knocking or non-standard ports to ensure I have access from everywhere, to avoid additional configuration, and because I don't really see the point when an SSH key is required (password access is disabled).


Good example, but it doesn't help his point, which was:

> This has it completely backwards. Security through obscurity's goal is not to signal predators, it's the opposite. The goal is to obscure, to hide. The "signal" is there is nothing here

An attacker scanning the whole IPv4 space won't think "ah, there's no ssh on port 22, there's no ssh to attack". They will think "yep, they did at least the bare minimum to secure their server, let's move on to easier targets".

He proved the point he was trying to disprove.


So I get 10s of attempts a day for my sshd on port 7xxx. If I had an account with say ubuntu:ubuntu, it'd totally have been found by now.

I have 0 in the last 14 days on port 2xxx. Probably depends a lot on your IP range (I'd assume AWS etc is scanned more thoroughly) and whether you've happened to hit a port used by another service. But even in commercial ranges, I've seen hardly any hits on >10k.

But I have only anecdotal evidence as well, so my guess is as good as yours.


So scale the other side up too. Just imagine what it's like on 22.

The article addresses this. He did a poll, and just under 50% of people use the default ports. So just by changing your default port, you eliminate half the break-in attempts.

Now you're absolutely right that this only deters less-skilled/inept hackers, a more competent hacker easily gets past this. But it's worth dwelling on the fact that we still stopped a substantial number of requests. Port knocking is definitely an improvement (i.e. more obscure). I'd guess with port-knocking more than 90% (even 99%) of attempts would completely miss it. The goal here isn't to rely completely on obscurity. It's security in depth. Your SSH server should still be secure and locked down.

The other question with this is what's your threat vector. Most people decry security through obscurity because an APT can easily bypass it. They can, but most people trying to hack you are script kiddies. Imagine an SSH exploit was leaked in the wild – all the script kiddies would be hammering everything on port 22 immediately.


The poll is my biggest issue with an otherwise agreeable article, the sample size and representation on Twitter doesn't make for anything close to reliable percentages.

I understand its use as a demonstrative aid but especially in the context of security, hinging your policies on the outcome of a Twitter poll seems like... well, security through obscurity.


Maybe a bit nitpicky but I think port-knocking is in kind of a grey area. You can think of it as a kind of password where you have to know the correct series of ports. Since the number of ports is quite large, there is also a correspondingly large number of possible port sequences so you can't, in principle, brute force it without a lot of effort.

> Maybe a bit nitpicky but I think port-knocking is in kind of a grey area. You can think of it as a kind of password where you have to know the correct series of ports.

Yes.

But you also have to know that port knocking is enabled at all. That's the obscurity part.


I think this implementation avoids that problem.

https://github.com/moxie0/knockknock


Gotcha. Fair point.

Less secure than a password because anyone sitting in the middle can observe the sequence.

> [port-knocking] is very much security by obscurity, because it's trivial for anyone who knows the port sequence to defeat

in what way is this different than a passphrase you don't know? i can trivially defeat any password which i already know, too :D

while discovering a non-standard ssh port is easy, discovering a port-knock sequence out of a possible ~65k ports per knock is impractically difficult (assuming the server has any kind of minimal rate limiting). a sequence of eight knocks would need up to 65536^8 ≈ 2^128 attempts - and that's assuming you already know which port will be opened, which of course you won't.

you can even rely on a sequence of just three knocks and already get ~48 bits of entropy (2^48 combinations), which is about the same strength as a random 8 char alpha-numeric latin-charset password.

(someone plz check my math)
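Checking the math is a few lines: each knock selects one of 65536 ports, i.e. 16 bits per knock, so k knocks carry 16*k bits.

```python
import math

PORTS = 65536  # one knock = one of 65536 ports = 16 bits

def knock_bits(knocks: int) -> float:
    """Entropy in bits of a secret knock sequence of the given length."""
    return knocks * math.log2(PORTS)

def password_bits(length: int, alphabet: int = 62) -> float:
    """Entropy of a random password; 62 = upper + lower + digits."""
    return length * math.log2(alphabet)

print(knock_bits(8))     # eight knocks: 65536^8 = 2^128 sequences
print(knock_bits(3))     # three knocks already give 2^48
print(password_bits(8))  # ~47.6 bits, comparable to three knocks
```

So eight knocks are far stronger than any typical password; three knocks are already roughly on par with a random 8-character alphanumeric one.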


I agree with you that the example is not the best, but obscurity has a lot of benefits. Some years ago we did an experiment with a few students on obscuring a WordPress installation to catch people scanning for certain plugins. That gave us the ability to use the regular paths as honeypots, which also gives you a way to detect 0-day attacks.

Changing the ssh port would still fall under security through obscurity, whether it's effective or not.

This doesn't consider that you can then monitor connections to port 22 and treat any you do see as suspicious.

Disagree that port knocking is obscurity. That's a secret.

Security through obscurity would be using a nonstandard SSHD service.


I just turned off password authentication on SSH and moved to keys, then moved to IPv6. The automated scans haven't made it to v6 yet. The only better thing I could do is have an external v4 SSH honeypot that moves as slowly as possible to tie up a (tiny) resource.

IPv6 seems to be a good example of security by obscurity: with up to 64 bits of random IP addresses per machine, scanning becomes impossible in practice.
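A quick feasibility check on that claim (the probe rate is an arbitrary assumption):

```python
# Time to sweep a single IPv6 /64 subnet at a fixed probe rate.
ADDRESSES = 2**64
RATE = 1_000_000  # probes per second (assumption)

years = ADDRESSES / RATE / (3600 * 24 * 365)
print(f"~{years:,.0f} years to sweep one /64")
```

At a million probes per second, a single /64 takes on the order of half a million years, so attackers find v6 hosts through DNS, logs, and leaks rather than by scanning.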

Endlessh and tarbit do this

You can’t compromise what you can’t see.

Calling an administrator account "9834753" obscures its purpose and may reduce the likelihood of a compromise attempt, as opposed to "databasesuperadmin". But that doesn't mean you don't need a good security token.


I find the SSH example slightly odd, when all you need to do is disable password authentication and root access. Moving away from port 22 just seems a little excessive?

In other words, changing the default SSH port number is similar to using camouflage. It just helps hide that something is there, but it does nothing to improve the defense once spotted. However, if the majority of predators don't see you, then the rest of your defenses are needed at that time.

It's also an indication that there are no default passwords in use. So even if you know what port SSH lives on, there's a lower ROI to attacking it than a default port.

I've heard a better analogy - security by obscurity is like camouflage on a tank. A tank has massive armor and a terrifying gun to defend itself with. But even a half-assed camouflage can delay enemy reaction by a few seconds. Sometimes it's all it takes, because it lets you shoot first. In addition, the cost of camouflage paint or a net is laughably low and can be replaced in the field. It's simply an extra layer of protection and a very inexpensive one.

https://www.reddit.com/r/netsec/comments/ioxux2/security_by_...

(Security by obscurity) is camouflage, not armor.


I think it is the opposite -- systems with rigorous security tend to be more open, because the designers are confident they understand their system. In contrast, systems that practice security through obscurity are often owned/managed by people afraid of what will go wrong.

We should distinguish obscurity from intentionally hiding the configuration, which makes attackers undertake discovery, and hence can lead to detection. But your internal red team / security review should have all the details available. If loss of obscurity leads directly to compromise then you don't have security. Cf insider threat.


Your example is advertising which is the opposite of security through obscurity.

Obscurity is another layer of hiding or indirection: the owl has camouflage, and it also has a hole in a tree.

Advertising your fitness (your stotting metaphor) is effective when: you are part of a herd, and the attacker will only attack the weakest in that herd and then be satisfied. Like double locking your bike next to a similar bike that has a weaker lock.

Computer security is different because usually either:

a) everyone in the herd is being attacked at once (scattergun/IP address range scanning), or

b) you are being spear targeted individually (stotting won’t work against a human hunter with a gun, and advertising yourself won’t help against a directed attack).

An example of advertising your security might be Google project zero, or bug bounties.


>An example of advertising your security might be Google project zero, or bug bounties.

That's more akin to a gecko sacrificing its tail, IMO. You're taking a predator that's capable of a successful attack and rewarding them for not doing it, at some cost to yourself. It provides an easy and less risky way of getting paid.


Using obfuscation is often a signal that you are a weak target, because there are a lot of places that use obfuscation but nothing else. A better indicator that you are a hard target is to enable common mitigations like NX, stack cookies, or ASLR.

Yeah, that's what I was trying to get at with my "in the age of automation" comment. If you go to a period in history without automation, then obscurity is going to be a lot more effective. And that's why I think people still want to go back to it. Obscurity is much easier to wrap your mind around than RSA, et al.

However, the psychological warfare video does make me think that there's still a place for obscurity after you've already used actual security measures. If you can find any technique that makes your attacker work harder vs some other target, then it feels like there's an economic value to doing it as long as the cost to you is relatively low.


There is one giant hole in your argument: both stack cookies and ASLR are mitigations that are nothing more than automated security through obscurity in the first place.

I assume you're equating picking a random SSH port with scanning for an ASLR slide or guessing a stack cookie, but they are different situations: processes that die are generally treated quite seriously, and they leave "loud" core dumps and stack traces as to what is going on–usually this gets noticed and quickly blocked. With SSH you can generally port scan people quickly and efficiently in a fairly small space (and to be fair, 16-bit bruteforces are sometimes done for applications as well, when possible)–and the "solution" here where you ban people that seem to hit you too often is literally what you are supposed to be running in the first place.

And in general, the sentiment was "if you are using those things, you are likely to have invested time into other tools as well, such as static analysis or sanitizer use", which are not "security through obscurity" in any sense, whereas the "obscurity" that gets security people riled up is the kind where people say things like "nobody can ever hack us because we changed variable names or used something nonstandard", because it is usually followed by "…and because we had security we didn't hash any passwords".


How so? Stack cookies and ASLR are a form of encryption, where an attacker has to guess a random number to succeed in an attack.

Obscurity really just boils down to a secret that doesn't have mathematical guarantees. It's doing something that you think the attacker won't guess, just like an encryption key, but without the mathematically certified threat model, so you just hope that the attacker is using a favorable probability distribution for their guesses.


The attacker who had already compromised the integrity of the system in question has to guess or probe for a random number with relatively low entropy in order to do something useful and straightforward with that already compromised system.

The only downside I see immediately is that there's a counterweighted risk to obscurity in your security layer: you can confuse your own users (or yourself).

Many security tools I've used are downright user hostile in how little information they provide the end-user (or the admin!) regarding why an auth process failed. It incentivizes people to simplify or bypass the system entirely when they can't understand the system.


Semi-related: any time I have written a protocol with a checksum, I implement a 'magic checksum' that always passes, plus a debug mode that enables it along with diagnostics. The reason is that if something's wrong with a packet of data, the best thing to do is usually to ignore it completely. But that makes development insane. So having two modes gives you the best of both worlds.
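As a sketch of what that looks like (CRC32 stands in for whatever checksum the real protocol would use; the magic value and flag name are made up):

```python
import zlib

MAGIC_CHECKSUM = 0xDEADBEEF  # hypothetical sentinel that always validates in debug mode

def packet_ok(payload: bytes, checksum: int, debug: bool = False) -> bool:
    """Validate a packet. In debug mode the magic checksum always passes,
    so hand-crafted test packets don't need a real CRC computed."""
    if debug and checksum == MAGIC_CHECKSUM:
        return True
    return zlib.crc32(payload) == checksum

data = b"hello"
assert packet_ok(data, zlib.crc32(data))            # valid packet passes
assert not packet_ok(data, 0x12345678)              # corrupt packet silently dropped
assert packet_ok(data, MAGIC_CHECKSUM, debug=True)  # debug mode: magic value passes
assert not packet_ok(data, MAGIC_CHECKSUM)          # production: magic is just a bad CRC
```

In production the magic value is indistinguishable from any other failed checksum, so enabling the bypass only in a debug build costs nothing.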

When walking through some scary streets in my travels, I would shout to my companion in the local language, trying to signal to potential thugs that they should chase someone else. I did it once in Russia while in a dodgy neighborhood to buy vodka, and back in the 1990s when Westerners were under some threat in the Middle East.

The security example of stotting would be the exact opposite:

Remove all obscurity and expose all your techniques and algorithms and setting up bounties for people to break your defences.

See eg https://cloud.google.com/beyondcorp and https://cloud.google.com/security/beyondprod where Google gives up on VPNs.


> In nature, prey animals will sometimes jump when they spot a predator[1]. One of the explanations is that this is the animal communicating to the predator that it is a healthy prey animal that would be hard to catch and therefore the predator should choose to chase someone else.

I think this analogy perfectly explains my hostility to security by obscurity. When I see a system that uses standard ports and demonstrates best practices, I think "oh well, they probably know what they are doing." When I see a system using strange ports and / or has extra extraneous crypto, I think "well, maybe this guy is an idiot" and take a deeper look.


I think the predator/prey real world analog to "security through obscurity" would be camouflage.

An argument against obscurity is that it adds additional pains for your "regular" users (as in developers/3rd party developers/app developers) while being a small deterrent against unauthorised users (as they will be able to circumvent the "obscurity layer" and replicate their method to other bad actors).

edit: In the first sentence "against" is not what I wanted to say: what I wanted to say is that it "downgrades its effectiveness". I agree that obscurity can and sometimes should be a layer of security.


> An argument against obscurity is that it adds additional pains for your "regular" users (as in developers/3rd party developers/app developers)

No one should be applying obscurity to public-facing APIs or anything for which documentation is widely distributed outside the company.

A better example would be Snapchat's intense and always evolving obfuscation strategies: https://hot3eed.github.io/snap_part1_obfuscations.html

Even though someone took the challenge to de-obfuscate most (but not all) of the protections, just look at how much effort is required for anyone else to even follow that work. More importantly, consider how much effort is required relative to other platforms. It's enough of a pain that spammers and abusers are likely to choose other platforms to attack.


When security is totally impossible because there is no way to distinguish a trusted party from an adversary, obscurity is the only hope.

If you cannot distinguish a trusted party from a malicious party everything is then potentially malicious. This is why we have certificates, certificate revocation, and trust authorities.

And that works great until a trust authority gets compromised. It's for this reason that the US DoD has its own root certificate authorities, and thus many military websites actually look like they have invalid https certs. Browsers don't ship with DoD root certs installed as trusted.

Yeah, I am on a DODIN as I write this. In the civilian world a CA falls back on a decentralized scheme called Web of Trust, which allows CAs to reciprocate certs from other CAs and invalidate other CAs as necessary.

The DOD chose to create their own CA scheme originally for financial reasons in that over a long enough time line new infrastructure pays for itself with expanded capabilities while minimizing operation costs dependent upon an outside service provider. This was before CACs were in use.

https://en.wikipedia.org/wiki/Web_of_trust


Thanks for the additional info, I didn't know (but probably should have assumed) that finance was the primary motivator. I just had to implement CAC authentication for a webapp, and they still use their own CAs for client-side certs (aka CACs), so it seems like it was a pretty savvy investment at the time that's not going away anytime soon.

Agreed. The maxim warning against "security from obscurity" is often reduced to an irrational comprehensive avoidance of obscurity. It's similar to the irrational avoidance of all performance optimization because Knuth warned of premature optimization.

Both reductions lose practical utility by omitting nuance.

* Avoid wasting your time doing performance optimization until tuning is necessary. But definitely take obvious and easy measures to ensure your software is fast, such as choosing a high-performance language or framework with which you can be productive.

* Don't exclusively rely on obscurity. But definitely take obvious and easy measures that leverage obscurity to add another layer of defense, such as changing default ports, port-knocking, or whatever.

To use the same art of reduction to counter the common interpretation: A complex password is, in a manner of thinking, security from obscurity. Your highly complex password is very obscure, hence it's better than a common (low obscurity) password from a dictionary.
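To put rough numbers on that (a back-of-the-envelope sketch; the 100,000-word dictionary and 94-character printable alphabet are assumptions):

```python
import math

# A password drawn from a dictionary of ~100,000 common words:
dictionary_bits = math.log2(100_000)   # ~16.6 bits of "obscurity"

# 12 characters drawn uniformly from the 94 printable ASCII symbols:
random_bits = 12 * math.log2(94)       # ~78.7 bits

# Each extra bit doubles the expected brute-force work, so the random
# password is roughly 2^62 times more expensive to guess.
assert random_bits - dictionary_bits > 60
```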


> But definitely take obvious and easy measures that leverage obscurity to add another layer of defense, such as changing default ports, port-knocking, or whatever.

Except that can lead to operational problems down the road. For example "oh yes, we're nice and secure, not only do you need a 512bit private key to get into this device, you also need to connect from a secure network"

Then along comes covid, and you can't get into the building.

"Oh dear, you're not on the secure network, you can't come in"

So you spend 2 hours (while your network isn't working right and you're losing customers) finding and getting in through a back door.


I would call that system secure. It does not just rely on an obscure password but is actually restricted by a list of whitelisted networks.

The failure in that case is only that the admin didn't consider that normal work might be done from home at some point or that the middle or upper manager thinks that he should be able to freely administrate his critical infrastructure from anywhere...


IP whitelists break so often for "unanticipated reasons" that I've lost all sympathy for not anticipating it. Doubly so for using a whitelist to lock yourself out of the whitelist admin.

It's so common the security community should make it a meme to spread awareness: Don't get pwned by DHCP while running from SSH 0-day RCEs.


Of course security can lead to operational problems. Security is a trade-off against convenience.

> Of course security can lead to operational problems.

So can lack of security.


> Except that can lead to operational problems down the road.

In the example you mention the 'security' is working by design, but the operational parameters changed, which in turn made that security model unsuitable - so it is the parameter change, rather than the 'security', that led to the problems.

The original system could have been just as 'obscure' but also included an appropriately secured mechanism that allowed for this kind of remote access / disaster scenario.


That isn't obscurity. And in your scenario, there was a security hole if the requirement was that you had to be on the intranet, but someone was able to gain access from the outside.

The number of times I've seen people shitting all over port knocking is truly confusing. Since we added it several years ago, we've not had a single case of hackers trying to break into sshd. Before port knocking, 100's a day, even though it was on a very unusual port.

I try to tell people this, when they poo poo port knocking, but they just don't get it.

EDIT: s/the/they/


But serious question -- what exactly is the benefit? Before, it's not like they were getting in anyways if you were using keys.

So I confess I still don't "get it". Unless you just want cleaner logs or something. I assume you're still getting the same number of initial connection attempts per day, but just not recording them?

Is it something to do with network or CPU consumption related to failed subsequent attempts by the same actor? (Which, the same as port knocking, should be rate limited anyways?)


There have been bugs found in SSH server implementations that allowed limited remote code execution or even authentication bypasses. Missing an update or two isn't bad when nobody can figure out how to connect to your server.

Of course you have to update at some point. However, if someone drops a zero day on your SSH server while you're asleep you're probably glad that you've got a secret sauce to protect your server, letting the vulnerability bots focus on other servers.


If port knocking existed in a vacuum, sure. It'd be great.

The issue is there are other options that are better - like VPN-only access to SSH - that you can use instead of (or in addition to) it.

If everyone advocating for port knocking were also saying "set up VPN-only access", sure. It's an additional authorization factor where ports are used as a proxy for a PIN. But I haven't seen a single person in here saying they use it in addition to a VPN - people are saying it's their primary form of protection.

You can set up a wireguard VPN in as much time as it takes to set up port knocking. Now you have all of the benefits port knocking provides, and more. And you could even still set up port knocking in addition to the VPN if you really wanted to, but I would argue there's not much point.


Curious, how does this work? I am not very familiar with VPNs. Is the VPN connection set up for the SSH session only? What if someone needs to have multiple SSH sessions, going to different networks altogether?

I'm thinking it could be pretty impractical to go onto a whole other network to open an SSH session.


>Curious, how does this work?

It depends on the implementation. For a client <-> server VPN, it creates an interface on your local machine that corresponds to the network address range for the VPN, and tunnels traffic to the remote end.

For a site to site VPN, two appliances create a tunnel between them, and traffic is routed over that tunnel via the same sort of routing rules you normally use.

> Is the VPN connection setup for the SSH session only?

It can be. It can also be configured for all traffic, or some other combination.

> What if someone needs to have multiple SSH session, going to different networks altogether?

You can have multiple VPN connections to multiple networks. It can get complicated if the VPNs are using overlapping IP space.

> I'm thinking it could be pretty impractical to go onto a whole other network to open an SSH session.

I'm not entirely sure why. Millions of people use VPNs every day for a variety of reasons, including SSH. I currently have 8 saved VPN configurations in my wireguard client, and connecting to one is as simple as clicking on the client and picking the one I need in the dropdown. Then I SSH as normal, except it's to the server's private IP and not its public one.
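For a concrete picture, a minimal client-side WireGuard config looks roughly like this (the keys, addresses, and endpoint are placeholders):

```ini
# /etc/wireguard/wg0.conf -- hypothetical client configuration
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route only the private subnet through the tunnel; other traffic stays direct.
AllowedIPs = 10.0.0.0/24
```

`wg-quick up wg0` brings the tunnel up, after which `ssh 10.0.0.1` reaches the server over its private address; multiple configs like this can coexist for different networks.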


Why aren't you concerned that bugs will be found in your port knocking implementation?

I think the main concern with port knocking is that it's observable. You're effectively sending your password in clear, so if someone can intercept or overhear your traffic then your secret is lost. Cryptographic authentication schemes like SSH itself or VPNs do not have this problem.


Port knocking is a way to decrease the amount of random 0day/brute force scripts finding their way into your server. It will only stop automated scripts and attackers that don't know who you are. It's obviously no protection against incentivized attackers.

A VPN has upsides and downsides. It obviously protects your server a lot better against directed attacks, but when you lose your laptop or when your computer gets ransomware'd, you can't get access to the server anymore.

Furthermore, code execution vulnerabilities have been found against VPN servers because of their immense complexity and OpenVPN can consume quite a lot of resources for a daemon doing nothing. WireGuard has changed the VPN landscape with its simplicity, but if you fear your server may not be updated all too often (because it's partially managed by a customer, because your colleagues might not care to do so after you leave), leaving a simple solution behind can have its upsides.

I'm not advocating that everyone should enable port knocking on their servers to make them secure or anything, but the "port knocking is always bad" crowd is often very loud despite the fact that there are small little ways port knocking can improve security with very little effort or increased attack surface.


But what does getting past port knocking help with? Now they have to find a bug in ssh.

If OP has a false sense of security due to port knocking the ssh may not have been updated as recently.

We update sshd daily, as we are on CentOS and use the official updates. Nice guess, though.

From what people have told me, the point is to remove automated attempts from your logs, so that when someone actually works out how to connect it becomes a strong signal that you have a real attack, and you can check the logs to see whether they are using real usernames or other info suggesting they know more than random spam attempts would. Normally dedicated attackers blend in with the random noise of the internet.

It is as simple as reducing the attack surface. If attackers can't talk to sshd, they can't try to hack it. In a world where zero days are real, why chance it?

Why is that so hard to grasp. Still boggles my mind.


The same with the "GnuPG is bad" mantra on Hacker News. There is nothing better than GPG currently for all its functionality, and the only answer you get when asking for a substitute is "don't use this function" or "use some obscure application". Yeah, right.

I agree that there is nothing better than GPG for the narrow scope of encrypting email. But I think there are very few cases where encrypted email is the most secure way to communicate, in lieu of other forms of encryption.

Encrypted email is almost a marginal usage scenario for GPG compared to its other uses. It does everything. It is everywhere. Yes, it is big; nobody has to use all of it. Just like C++... oh wait, that's unpopular in the Hacker News bubble too, despite being a juggernaut of a language. It will still be relevant long after Hacker News is no more.

I basically use GPG for one thing, at this point: signing git commits.

As far as I know, there isn't another GitHub/GitLab compatible way to do this. So I'll keep using GPG until there is.


Age is demonstrably better: https://github.com/FiloSottile/age

Also, an informed analysis of PGP: https://latacora.micro.blog/2019/07/16/the-pgp-problem.html


"Informed analysis" like complaining about the lack of forward secrecy in something made for non-ephemeral communication - storing and sending files, digital signatures, etc. Or about backwards compatibility, which lets you access and verify your backups, archives, etc. from 10 or more years ago.

Show me an ephemeral encryption scheme for something that needs to stay readable in the future like that.

This analysis is highly uninformed I would say.


It’s not an informed analysis, or even an analysis at all.

That’s not how analyzing algorithms or programs work. Even a basic threat model is missing.


Just respond "don't knock it 'til you try it... or rather, don't try it 'til you knock it!"

> we've not had a single case of hackers trying to break into sshd

That you are aware of.

Yes, you have lovely clean logs to audit

Do you use single-port knocking or a sequence of port-knocks?

OP would respond but then that would break the obscurity! :-)

But the examples given won't help and are just bad advice in general.

- Assume for every code obfuscator there's a deobfuscator or at least someone as clever as you out there.

- Randomizing variable names is just a nuisance; it won't stop any competent pen tester or attacker.

- Encrypting the database is an odd one. Your program will also have to decrypt the data to use it. Where do you store the encryption keys? In your code? Don't assume obfuscating your code and/or randomizing variables will protect your encryption keys.


You seem to be thinking in terms of security mechanisms either perfectly blocking attacks or being useless. That's the wrong model. It's about costs. Obfuscating otherwise-open code doesn't mean that nobody can ever figure out what it does, but it raises their costs. Randomizing variables raises costs. Encrypting the DB raises costs on an amortized basis (some cracks may get the key and then it may not raise the cost much, but other cracks may only get the data in which case cost is raised a lot). Things are "secure" not when it's impossible for any actor in the world you don't want to get access to get access, but when the costs to those actors exceed the loss you may experience. (Preferably by a solid margin, for various reasons.)

As to whether this is good or bad advice, that depends on how expensive these things are (e.g., encrypting database fields may be very expensive if you write raw SQL calls as your primary DB interface but may be dirt cheap if you're using an ORM that has it as a built-in feature) and your local threat model (e.g., "dedicated, personalized attackers reading your source" is very different from "does it defeat automated scanners?"). You can't know whether these are good or bad ideas without that additional context.


> You seem to be thinking in terms of security mechanisms either perfectly blocking attacks or being useless. That's the wrong model. It's about costs.

This is something that bothered me quite a bit in Bruce Schneier's various comments on airline security. He repeatedly wrote that profiling young Arab men as likely terrorists was pointless, because if it became harder for young Arab men to get through security, terrorist organizations would simply start sending Japanese grandmothers.

But of course where it's relatively easy to find young men willing to die for a cause, it's much more difficult to find grandmothers who will do the same. And where it's relatively easy for an Islamic group based in the Middle East to connect to Arabic social networks, it's much harder for that group to connect to Japanese networks.


But it’s easier to find Arab women. Or an Irish girlfriend of an Arab man. Or two old Korean people.

(All real examples)


I'm pretty sure the potential recruitment pool of young Arab men is still many times greater than the pool of Irish girlfriends of Arab men.

It's about improving the odds/reducing the exposure, not achieving some theoretical absolute perfection.


It's bad math. It doesn't matter if you have 1 arab man and 1 korean woman, or 1000 arab men and 1 korean woman. You only need 1.

If you calculate the probabilities correctly, you get very different results.


No, obviously it's harder to find two old Korean people than one old Japanese person. Everything you've listed is hundreds, thousands, or millions of times more difficult than the young-Arab-man case.

Suppose you take down a plane with a young Arab man, and then you want to take down a second plane. There is a neverending stream of similar men willing to do the job. If your strategy requires you to use elderly Korean couples, you're done after the first plane -- you'll never find a second one.


This is also absent in the analysis of "security theater." I've often felt the "theater" does in fact have a material impact on target selection. One doesn't need to actually have a methodology that results in better capture of terrorists to deter them to other targets: one just needs a methodology that has plausibility of increasing the risk of failure. The unfalsifiability of "security theater" is actually a feature not a bug: it means there's always a non-zero weight on it's potential risk impact to terrorist considering air travel as a target.

All other things being equal, the opportunity cost will shift towards targets that have less elements akin to "security theater", since it's basically 'money on the table' to de-risk the attack.

So, the real question to ask about "security theater" is not if it has a material impact on human safety with flying, but if its deterrent effect pushes risk to places we'd rather it not go or if the costs of performing it do not outweigh this deterrence benefit. Given the potentially paralyzing effect it would have on the global economy if air travel were covered in a blanket of fear of flying, it's hard to argue that "decentralizing" this risk to other targets is a bad idea.


The problem with heavily focusing on Arabs while paying less attention to other threats is that Arab Islamist terrorists aren't the only problem aviation security needs to deal with.

Focusing most of the security effort on Arabs is a good way to fight the war of 19 years ago, but it leaves the air travel system vulnerable to upstart terrorist movements that see the lack of universal security as an exploitable vulnerability.

For example, there's nothing to say that America's right wing terrorist groups won't decide to switch from shootings and vehicle ramming attacks to attacks on air travel. The TSA ought to be prepared for this, or any other, emerging threat.


> You seem to be thinking in terms of security mechanisms either perfectly blocking attacks or being useless.

Ours is an industry with a lot of people "on the spectrum".

https://thesilentwaveblog.wordpress.com/2017/03/08/aspergers...


You’re only considering one side of the costs. Obfuscation mechanisms also impose a cost on your legitimate users. There’s lots of reasons why you want your users to actually buy in to using your security controls, and annoying controls with highly questionable effectiveness is the best way to kill that buy in. Users will only tolerate so much burden from user facing controls, so you want to make sure all of the controls you impose upon them are actually useful.

The other thing that’s harmful is relying on something to provide security, when it actually can’t. That’s actually going to have a negative impact on your threat model. People will say (they’re even saying it in this thread) that their port knocking or non-standard port usage has cut out the port scanning noise in their logs. But who cares? A properly secured ssh port isn’t going to be cracked by an automated scanning tool. But a poorly secured hidden one will be easily found and cracked by any motivated attacker. You have to implement the proper control anyway, and the obfuscation one ends up providing no benefit while simply annoying your users.

Security by obscurity is dumb, it doesn’t provide any benefit. Security in depth doesn’t mean multiple layers of controls that don’t work add up to one that does. Obscurity is just a way of spending your scarce resources on controls that don’t work, and wastes your scarce command of your users attention on controls that don’t work. So in reality, they’re also always coming at the opportunity cost of controls that actually do.


> but it raises their costs.

I would ask by how much.

Having to perform source audits on code with obfuscated variable names added almost no time to the task.

Again, these methods work against not-so-determined attackers. If you as a defender have limited resources, where would you choose to spend it--on defending against unskilled attackers, or attackers that are more likely to cause you damage?

>but when the costs to those actors exceed the loss you may experience.

There are several problems with this logic. First, it kind of presumes that there is a symmetry in the costs for the attacker and defender. Wise defenders will use methods that have high leverage. Also, the attacker doesn't care at all about your costs. They care about what they can get from you--whether it is access to something that you aren't thinking of, or your crown jewels.

Encrypting databases is sometimes required by compliance, but is no defense against a good attack.


It's still bad security.

Sure, it increases costs for a certain subset of attackers. Instead of sending easily found and trained young Arab men, they have to put more effort into recruitment. However, in return for that, they get far reduced scrutiny.

Therein lies the problem. It is the real-world equivalent of dropping all packets from a country instead of properly analyzing the packets. You'll stop the low-cost automated garbage attacks, but you won't stop a dedicated attacker, even if the attacker is in that country.


> Assume for every code obfuscator there's a deobfuscator or at least someone as clever as you out there.

There is still some information lost in the process:

- "let eigenvector_coefficient = 23" => "let x = 23"

A de-obfuscator isn't going to be able to recover the valuable information contained in the original name. Will it stop a determined attacker? Maybe not, but it would surely slow them down as they now need to spend an order of magnitude longer trying to understand what the code is doing.

> Randomizing variable names is just a nuisance, it won't stop any competent pen tester or attacker.

Believe it or not, nuisances are enough to stop some people. A lot of would be attackers are just cruising for low hanging fruit.

Remember, the goal is to "reduce risk" and not "stop any highly skilled targeted/tailored attack". Because let's face it, even if you are the greatest crypto wizard in the world, you will fall victim to a highly sophisticated attack tailored specifically to you.


>Believe it or not, nuisances are enough to stop some people. A lot of would be attackers are just cruising for low hanging fruit.

It is not "some people" that I worry about. I worry about attackers with a level of skill.

As I noted elsewhere in the thread, I have audited obfuscated code and the obfuscation is only a speed bump. I can only presume that attackers are smarter than I am, and obfuscation is effectively not an issue. And it is not an order of magnitude. This is another example of developer thinking that this form of obscurity is of any real value. Reviewing code will tell you if eigenvector_coefficient is really what it claims to be or something that morphed into something that the developer didn't originally intend.

Also keep in mind that code reviews approach code from a totally different angle than a developer would either developing or during a code walkthrough.


It might make sense in some contexts, but code obfuscation is a great example of where software engineers think it provides security where it provides none.

Developers often have some idealized notion that an attacker is going to need to piece their program logic back together and try to decode the purpose of each obfuscated variable in order to find a hardcoded password/value.

In reality an attacker is just going to dump strings and try them all or simply set a breakpoint just before the important syscall and let your program do the work. Code obfuscation provides little to no value for these common methods, yet we cannot resist the urge to list it as a bullet point in security meetings, leading to a false sense of security.


Exactly. If you're running crypto and think getting rid of variable names is going to stop people; it's not. Any off-the-shelf algorithm is usually easy to recognize to an accomplished reverse engineer with a basic background of what kind of things they're looking for.

Honestly, the modern JavaScript toolchain is better at giving reverse engineers a headache than 80% of binary obfuscators.

As someone who is not very good at JavaScript reverse engineering, I would tend to agree that minifiers are pretty annoying.

So the first thing that I do with one of those is to parse it and convert it to s-expressions. Problem solved.

>simply set a breakpoint

I knew nothing about this topic in general, but elsewhere in this thread there was a link to a blog post about obfuscation methods used in a piece of commercial software. One item was a function that detects a breakpoint, obfuscates its boolean return value so you can't tell if it did, and makes the program hang when it does. Pretty neat.

I think your (and my) ignorance of such methods is evidence that they probably are reasonably effective, even though when explained, they're not quantum physics.


Let me give you an example. At a previous job as a devops, one of my predecessors frequently used these "techniques", minus the encrypted database, but I'm sure he would have done that too if he had known how. There was some buggy internal app they needed new features added to, and the person who wrote it thought he was clever and obfuscated the code. It took me a whopping 30 minutes to churn through his 'clever' obfuscation scheme, and the randomized variables were just a nuisance. Honestly, his best obfuscation technique was his horrible code that made no sense.

Even OP's advice about running services on non-standard ports isn't sound. Who doesn't run a service scan? Even sites like Shodan do service discovery for you. I'm going to find whatever port you're running ssh on if you're running it.


> I'm going to find whatever port you're running ssh on if you're running it.

I still think it's a good idea. With SSH on port 22, ten thousand bots plus an attacker try to hammer it (so says fail2ban). With SSH on port 9278, zero bots plus an attacker try to hammer it. By throwing away the 99.99% of the chaff, you can see the remaining wheat you care about.

Changing SSH ports isn't about saying "yep, we fixed it!" and calling it a day. It's about decreasing the amount of stuff you have to deal with, which is quite useful. It's something you can do in addition to everything else that gives a decent bang for its buck. No, it doesn't keep you out, but it does keep out those thousands of bots crawling around looking for an open 22 to pester.
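For anyone curious, the change itself is one line of server config (9278 just echoes the example above; pick your own):

```
# /etc/ssh/sshd_config -- move sshd off the default port
Port 9278
```

Restart sshd afterwards and connect with `ssh -p 9278 user@host`; open the new port in your firewall before closing 22, or you risk locking yourself out.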


But new people in the industry shouldn't think that the things recommended in the article should be used as a primary defense and are accepted industry practices. Moving SSH to a new port to reduce false security alerts is one thing, having people read that article and walk away thinking this is how we do things is another. We don't.

I didn't take that away from the article at all. It said:

> So let’s talk about security by obscurity. It’s a bad idea to use it as a single layer of defense. If the attacker passes it, there is nothing else to protect you. But it’s actually would be good to use it as an “additional” layer of defense. Because it has a low implementation cost and it usually works well.

I think it's good to do those things in addition to the other stuff. Obscurity isn't sufficient by itself, but is another layer of defense.


In addition to the stuff you should really be doing? That stuff is hard enough for beginners without confusing them with speculation like this that goes against best practices and common sense, especially without clearly explaining the pitfalls and real dangers of each of these hypothetical scenarios. Besides, if you're already using industry-accepted solutions to security problems and someone manages to gain unauthorized access anyway, don't expect any of this amateur crap to offer any real protection at that point.

I don’t feel as though you came away with the real intent of the article, which didn’t make the arguments you’re shouting down.

> I'm going to find whatever port you're running ssh on if you're running it.

Not if you have to port knock before the ssh port is open to new connections.


Why not? I'll run my automated port knocker

Huh? How would that work? You have no idea what my port knocking scheme is.

For all you know you have to knock ports 22, 46, 1776, and 8998 to the timing of "shave and a haircut" switching between udp and icmp along the way... Good luck, the entropy you have to overcome is astronomical.
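For concreteness, here's roughly what a tcp/udp version of that looks like in stock knockd — note knockd itself doesn't speak icmp, so that step would need a custom listener; the ports and timeout here are invented for illustration:

```
# /etc/knockd.conf (sketch)
[openSSH]
    sequence    = 22:tcp,46:udp,1776:tcp,8998:udp
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```

The timing dimension ("shave and a haircut") falls out of seq_timeout: get the whole sequence in under the window or start over.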


I would think that blocking the IPs of incessant knockers would be easy to implement.

They use proxies! So many proxies.

>Randomizing variable names is just a nuisance, it won't stop any competent pen tester or attacker.

Sure it will. Imagine that your old, unpatched Wordpress admin is at /random-gobbledygook instead of /wp-admin. An attacker would have to hit random alphanumeric directories of your webserver over and over again, hoping to stumble across a specific thing they can attack. This is completely impractical, unless they're somehow clued in that the URL exists.

It's really about making life difficult for an attacker, so much so that they will simply give up, or find an easier target. That can be achieved by throwing up a series of difficult/obscure barriers, each which makes it less likely you'll be trivially penetrated.
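As a concrete (and entirely hypothetical) way to wire that up at the webserver, without touching Wordpress itself — the path and cookie name here are invented for illustration:

```
# nginx sketch: visiting the secret URL sets a cookie and bounces you
# to the real admin; without the cookie, /wp-admin 404s as if absent.
location = /random-gobbledygook {
    add_header Set-Cookie "adm_ok=1; Path=/; HttpOnly";
    return 302 /wp-admin/;
}
location /wp-admin {
    if ($cookie_adm_ok != "1") { return 404; }
    # ... normal WordPress/PHP handling continues here ...
}
```

Not a substitute for patching Wordpress, obviously; it just hides the barn door from drive-by scanners.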


I ran a world-writable off-the-shelf wiki for years. Trivially tweaked the edit url, visible on every page. But that was enough to break automated spam tooling defaults, so the spamming human might get to see a note, pointing out that robots.txt was blocking indexing, so there was really no reason to waste both our times. The dominant threat wasn't the spammer, but their dumb automation.

One of the first widespread security vulnerabilities I had to deal with was this one:

https://www.giac.org/paper/gcih/115/iis-unicode-exploit/1011...

You could basically encode DOS commands in the URL bar for a site running IIS and it would run remotely.

The automated attack basically replaced the index.html pages. But if you didn't use the default pages, it didn't have any effect.


> Assume for every code obfuscator there's a deobfuscator or at least someone as clever as you out there.

Then it filters out people who are not using a deobfuscator or are less clever than I.

> Randomizing variable names is just a nuisance, it won't stop any competent pen tester or attacker.

Then it will stop incompetent pen testers.

I don't see how your comment refutes the point made. The point is not that it makes your likelihood of attack zero, it just reduces the likelihood via adding more roadblocks.


> competent pen tester

So it does eliminate incompetent ones? That's kind of the point of the article.


If you have any bit of real security, incompetent people wouldn't pose a risk.

So, what's the gain?


> - Randomizing variable names is just a nuisance, it won't stop any competent pen tester or attacker.

But it will stop incompetent attackers - of which there are many. In fact, they are the vast majority.

None of those 'obscurity' techniques will stop a targeted attack. That's not their function. But each of them raises the bar. The more hoops, the better.


Isn't there some model where the keys you use to decrypt the database act something like one-time codes, and you have to use particular credentials to access the key server? So the attacker would need to stay in the network to actually access the data -- they couldn't just download the entire database and crack it offline. I don't know how that is actually implemented, but I'm curious about it.

I also wonder how many people put obvious attacker trip-mines in their various systems -- like having some fake button that says "copy image of database to disk" that the actual internal employees are told to never click. Maybe they even have a confluence page that talks about "how to download the database" but is actually a fake entry meant to trip up an attacker, in case they get access to your confluence pages as well as your database... so they click that button and the admins get alerted...

Overloading names is a good code obfuscation strategy, but tricky and best done programmatically, for obvious reasons (unless you like your regular code to present the challenges of BrainF). For instance, depending on your language, you may be able to have a variable, a function, an object, a pointer, a data structure, an index variable, etc. all called just "a".

Making sense out of code obfuscated this way is *really* hard for humans, but it will compile or interpret just fine so long as your obfuscator obeys the rules of your language. (We started on this at one of my early startups nearly 20 years ago, but didn't get funded soon enough for protecting the IP in our unique JS to matter. It was unique enough that we actually applied for a patent on part of it - drawing a 16-trace live strip chart of data from network sources at better than 4-10 Hz per channel was really hard with the browsers and computers of 2002!)


The database key should be generated per encrypted database and then stored using something like the OSX keychain. The OS enforces that only a given application can retrieve that key (via application code signing).

Internally, we phrase it as "Make the system objectively hard, then don't tell all the details". Wasting an attacker's time is a fine goal.

It's a lot like bike locks.

Yes, most people can grab some bolt cutters, snip, and bike off. Yet, so many bikes remain unstolen with extremely weak locks.

The vast majority of attacks are crimes of opportunity. Hackers aren't generally trying to target a single company or computer for a bot net, they are looking to get as many as possible. Almost any amount of effort above and beyond the typical will cause them to jump past you as a target.

Back to the bike lock analogy. Again, most locks can be bypassed, but getting one that requires an angle grinder will almost certainly ensure that your bike won't be stolen (why steal that bike when there are 20 with simple wire locks?). Add 2 locks and you've got a bike that will almost never be nicked.

https://www.youtube.com/watch?v=oPDHPpnXPv8

This video can teach you a LOT about software security.


> Wasting an attacker's time is a fine goal.

This. Putting a tarpit on port 22 isn't going to stop an attacker, but it will slow the ssh scans down for everyone.

https://github.com/skeeto/endlessh
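The trick is small enough to sketch: RFC 4253 lets an SSH server send arbitrary lines of text before its "SSH-2.0-..." identification string, so a tarpit just drips junk lines forever and the client never reaches key exchange. A toy version (endlessh itself is single-threaded and far more resource-frugal than this thread-per-client sketch):

```python
# Toy endlessh-style tarpit: send endless pre-banner lines so SSH
# scanners hang waiting for an identification string that never comes.
import random
import socket
import string
import threading
import time

def junk_line() -> bytes:
    # Anything goes, as long as it never starts with "SSH-",
    # which would end the pre-banner phase.
    body = "".join(random.choices(string.ascii_letters + string.digits, k=24))
    return (body + "\r\n").encode()

def tarpit_client(conn: socket.socket, delay: float = 10.0) -> None:
    try:
        while True:
            conn.sendall(junk_line())
            time.sleep(delay)  # the slow drip is what wastes scanner time
    except OSError:
        pass  # client gave up
    finally:
        conn.close()

def serve(port: int = 2222) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=tarpit_client, args=(conn,), daemon=True).start()
```

You'd redirect port 22 to this at the firewall and run the real sshd somewhere else.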


Honeypots are fun, but be VERY careful how you deploy them. Ideally they are on a completely separate network on the WAN side of a second firewall. The last thing you want is for someone to find an exploit in your honeypot and use that to gain access to your network.

Security by obscurity is bad. Obscurity alone does not provide much security, especially in a cryptographic setting. It cannot be relied on as your sole protection.

Security and obscurity: if you make something secure and then obscure information about that system from an attacker, that can increase the security. However, obscurity is often organizationally expensive and very fragile. A key can be rotated easily; changing how something functions in order to re-obscure it is very hard.


Maybe the test should be: "is my system considered to be secure even without any obscurity?" If the answer is yes, then add obscurity.

For instance, the port 22 example. Suppose you have a bastion host. SSHD running on port 22, root password disabled, passwords disabled (only SSH keys), no other services running, all other ports filtered/closed. It should be fairly secure, even if exposed to the internet, right?

Now you can change the port. Change the SSH banner and hide the version. Add some port knocking. And so on. None of these measures would work by itself, but they will discourage non-targeted attackers.
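To make the baseline concrete, a hedged sshd_config sketch for that bastion (stock OpenSSH options; the port and usernames are placeholders):

```
# /etc/ssh/sshd_config — the "secure even without obscurity" baseline
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
AllowUsers alice bob             # placeholder accounts

# ...then the obscurity layer on top:
Port 9278
DebianBanner no                  # hide version details, where supported
```

Everything above the comment has to stand on its own; everything below it just cuts noise.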


They are also non-trivial amounts of work, both to build and to maintain.

Someday, someone will have problems connecting and waste half a day debugging it before they realize what is up.


For a very specific example, look at the classified ciphers used by the US Gov't TLAs. Why are they classified? Because if they are harder to get info about -- literally obscured -- then it's an additional layer of defense.

Or troop movements during war... Sure, the locations can be figured out, but by not broadcasting locations that's more work for the enemy and thus a bit more secure.

Obscurity is absolutely a key piece of security, because it adds the complexity of discovery.


This is true, but I think it's not really a binary classification; there is a spectrum from useless and trivial obscurity (base64-encoding some "secret") to actually useful obscurity. After all, you can call password authentication "security through obscurity", since you only need to know the correct sequence of characters and your security relies on that sequence remaining obscure.

Many serious real-world scenarios do use obscurity as an additional layer

It works for the military, for spy agencies, and governments.

If obscurity didn't have any benefit, then the military's latest weapons wouldn't be tested in the Nevada desert, or some remote island; they'd be tested in Illinois, or off the coast of Long Island.


Most programming and IT sayings are grossly misinterpreted. My personal favorite is "premature optimization is the root of all evil," which originally came with a ton of context but today is often misinterpreted as "never worry about performance" resulting in a lot of slow bloated software.

Changing a port adds a few bits of entropy (up to 16, since there are 65,535 ports). Not being forced to use "admin" as a username adds a whole bunch, but at least one bit. Not being forced to use https://url/admin also adds another bunch, but at least 1 bit.

Of course, if any of these things are known the entropy drops to zero... Just like a private ssh key that gets pwnd.

All too often I see tickets on open source projects asking for changes to allow better obfuscation, which are then denied using the mantra "obscurity is not security".

They all add bits of entropy to a security and/or threat model that maintainers ignore.


All encryption is "security through obscurity". The parameter space is very large. The key is somewhere in it. You have access to the whole space, but no clue as to where the key is. Good luck finding the key.

> Instead it was originally meant as "if your only security is obscurity, it's bad".

Since all security is essentially "through obscurity" somehow, I would simply reframe that into the onion model. Good security is like an onion, it has many layers. When you only have one layer, that's bad security.


I agree with the principle, but I disagree with the article's example of changing the SSH port as an example of obscurity. Lots of people set up SSH servers on multiple ports, especially in the case of relay servers that provide access to multiple machines through one IPv4 address.

A better example of security by obscurity would be to, for example:

* Flip all the SSH bits or XOR it with some long key.

* Encapsulate SSH inside another protocol, such as websockets over HTTP port 80, or embedded inside what look to an outsider as cat pictures being sent over HTTP.

* SSH over TCP over Skype video.

Incidentally, any of these methods work well for confusing China's firewall and keeping the SSH connection alive, and would probably confuse hackers as well for a little while. They could all be implemented in a router box that doesn't affect your actual deployment.
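The websocket flavor is the easiest of those to actually try. A sketch with websocat, assuming it's available on both ends (hostnames and paths are placeholders):

```
# server: bridge websocket connections on port 80 to the local sshd
websocat --binary ws-listen:0.0.0.0:80 tcp:127.0.0.1:22

# client: tunnel ssh through it with a ProxyCommand
ssh -o ProxyCommand='websocat --binary - ws://server.example:80' user@server.example
```

To a casual port scan, the server looks like it's running an unremarkable HTTP service on 80.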


Hell yes, couldn't agree more.

This last year, I found out about knockd and if that isn't some awesome shit, I dunno what is. Yet, there are plenty of articles saying, incorrectly, how it's awful. It is simply another layer of security on top of everything else you have. Like you said, security by obscurity is more about making it fucking slow, irritating, tedious, and without any sense of reward. "Aha! After only a week, I've figured out you're port knocking! Oh shit... wait, you still totally have the server properly locked down. FML." Because after each "obscure" layer there is a "real" layer of security, and hopefully all those real layers buy you the time to detect and prevent the threat.


Also don't forget that relative effort matters too. Consider "The Club" protection for cars - in a lot, the one with The Club is chosen last to break into just due to its relative difficulty. (Weighted against the potential upside, obviously.)

The port knocking itself may actually be the strongest link in the chain, despite being one of obscurity. If the population of targets in your "value pool" is large enough, you will always sit below a sufficient number of others without knocking enabled, since attackers will bounce to those once they realize your ports won't open without the knock.


> Instead it was originally meant as "if your only security is obscurity, it's bad".

no, not really. What it means is: every important system has attackers trying to exploit it. Finding an exploit is a series of hunches while probing the system as a black box, and the attacker needs just one; meanwhile a defender has to be methodical enough to find them all.

Given that asymmetry, obscurity removes the defender's ability to systematically analyze the system, while for an attacker it remains as much of a black box as it was before.


It is obviously a misinterpretation of the original idea behind "security by obscurity is bad". Same goes for "goto considered harmful", which is not always true.

That said, Kerckhoffs's principle is a good description of how a secure cryptosystem should behave. This is what people should have in mind.

Obscuring will just add some delay, as you state, and that delay might be irrelevant in many situations.


A simple example would be separating usernames and passwords, having an outer and inner password (think Truecrypt/Veracrypt) or even personal quirks. Again, it depends how much the attacker knows, but even today you can still do the classic "hash my master key with site name" for a password that you wouldn't store anywhere.
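A minimal sketch of that last trick; the particulars (HMAC-SHA256, base64, 16-char truncation) are my own choices rather than any standard, and a real derivation should use a slow KDF instead of a bare hash:

```python
# Derive a per-site password from a master key and the site name,
# so nothing needs to be stored anywhere.
import base64
import hashlib
import hmac

def site_password(master: str, site: str, length: int = 16) -> str:
    # HMAC keyed by the master secret; lowercasing the site name keeps
    # "Example.com" and "example.com" from producing different passwords.
    digest = hmac.new(master.encode(), site.lower().encode(),
                      hashlib.sha256).digest()
    # urlsafe base64 keeps the result typeable; truncate to taste
    return base64.urlsafe_b64encode(digest).decode()[:length]
```

Same master key plus same site name always reproduces the same password; an attacker who captures one site's password learns nothing about the others.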

That's not the only problem with obscurity. It not only obscures flaws from attackers, it also obscures them from you and makes a system hard to maintain. In any complex system, ultimately there will develop chinks in your armor that owe their existence to obscurity hacks that were thought clever at the time.

I know someone who would rather store passwords/API keys in the database encoded in a way that is not clear text but is not encrypted or hashed, arguing that it's overkill to encrypt.

Obscurity instead of Security is bad too.


Sure, like giving a login page an unexpected URL to foil bots (eg hiding WordPress admin).

If that was the only security it’d be terrible.

But not having 1000s of bots pounding on the door saves a lot of headaches.


A lock that keeps an experienced lock-picker out for a few minutes will keep the layperson out indefinitely... Until they grab the bolt cutters. Everything is relative to context.

I agree.

The only thing I would add is that it also needs to be maintainable - the obscurity should not impede the maintainer's understanding of the implementation.


Sure, security by obscurity slows down bad actors, but in reality not by a significant amount. Often the obscurity you add isn't even where they're looking. You have to go through a certain level of effort to add the obscurity, and that effort is not enough to warrant the insignificant slowdown of the bad actor. You're better off using that effort to improve your real security in other areas. In addition, you're adding complexity that you have to maintain.

It's fine as an additional layer only when the primary layers do not rely on obscurity.

I've seen too many instances where obscurity is used to justify weak primary layers (i.e., "it's fine we're using this single-word shared password since we have all these other layers"). It can often provide a false sense of security, since it looks like a security layer when in reality it often turns out to be a minor inconvenience to an experienced attacker.


Also, related: the kind of traffic needed to probe in a reasonable amount of time can easily be spotted.

Well put. Using it as an additional layer isn't bad.

Agree, it's a good/cheap first step!

There's something to the idea of rehabilitating "obscurity", or at least recognizing that "cost" is part of threat models, and you can raise costs for particular attack vectors by degrees instead of "to infinity".

But SSH is a terrible example, because the cost to the defender of simply not having SSH vulnerabilities is the same, or even less, than the cost of obfuscating it with nonstandard ports, "port knocking", or fail2ban, which are all silly ideas.

Just use SSH keys, and disable passwords.

I think maybe it comes down to this: dialing attacker costs up incrementally can make sense if it's the most cost-effective way for a fully-informed defender to improve security. But incremental cost-increasing countermeasures aren't a substitute for sound engineering; you don't get to count "having to learn stuff" as a valid defender cost.


"But SSH is a terrible example, because the cost to the defender of simply not having SSH vulnerabilities is the same, or even less, than the cost of obfuscating it with nonstandard ports, "port knocking", or fail2ban, which are all silly ideas."

I know who I am arguing with here but port knocking is not silly. It's fantastic.

When I say fantastic, I don't mean it solves all of our problems and obviates any other protections ... what I mean is, for almost zero cost[1] it adds a non-zero level of actual protection.

As a lifelong UNIX sysadmin, it is one of the few totally unalloyed security improvements that I have been able to add to my systems. I believe there are sshd vulns extant that you and I don't know about and port knocking allows me to worry less about them.

I also recommend SMS alerts on successful knocks - SMS alerts that you should never see in surprise. This is trivial, by the way, as you can put semicolons in the knock command:

  /sbin/ipfw add 01021 allow tcp from %IP% to 10.0.0.10 22 setup ; /usr/local/sbin/timestamped_sms 4155551212 "knock from %IP% - "
[1] knockd on FreeBSD, 10+ years, not one hang or crash.

It solves none of your problems and adds complexity and cost to your defense without corresponding increases to attacker costs.

If you believe there are unknown OpenSSH attacks, you can't coherently believe that port knocking is a real defense, since port knocking doesn't do anything to protect the SSH channel that attacks will be carried out in.

Instead, if you're actually worried about OpenSSH vulnerabilities, you shouldn't be exposing SSH to the public Internet at all. I'm not super worried about OpenSSH server vulnerabilities, but I would never recommend that teams leave SSH exposed; they should just hide that stuff behind WireGuard.
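For scale, the WireGuard side of that is a handful of lines; a minimal wg-quick sketch (all keys and addresses are placeholders):

```
# /etc/wireguard/wg0.conf on the server
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]   # one block per admin machine
PublicKey  = <admin-laptop-public-key>
AllowedIPs = 10.8.0.2/32
```

Then bind sshd to 10.8.0.1 (ListenAddress) and drop inbound 22 at the public firewall. To a port scan, the box shows only a UDP port that never answers unauthenticated packets.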


>It solves none of your problems

Wrong, it solves tons of them.

>adds complexity and cost

Almost zero complexity and cost. Maybe if you're bad at sysadmin work it adds cost and complexity.

>defense without corresponding increases to attacker costs.

It adds a _huge_, almost incalculable cost increase to attackers.

>If you believe there are unknown OpenSSH attacks, you can't coherently believe that port knocking is a real defense, since port knocking doesn't do anything to protect the SSH channel that attacks will be carried out in.

Looks like you don't understand the concept of 0-days. Several CVEs were listed elsewhere. I suggest researching 0-day exploits so you understand how port knocking mitigates them.

Port knocking mitigates 0-days.

>Instead, if you're actually worried about OpenSSH vulnerabilities, you shouldn't be exposing SSH to the public Internet at all.

I don't disagree here, a VPN is a great solution. Nonetheless, for some shops simple port-knocking on a bastion host solves a lot of these issues and removes the complexity that VPNs add.

>I'm not super worried about OpenSSH server vulnerabilities, but I would never recommend that teams leave SSH exposed; they should just hide that stuff behind WireGuard.

No one is super worried about things like Shellshock, Heartbleed, etc. until they happen.

Port knocking solves a lot of problems, protects you from zero-days, and makes SSH noise a non-issue (huge signal-to-noise gains).

Used in production for years. It's fantastic.


Port knocking adds a huge, almost incalculable cost increase to attackers. I'm going to remember that one, thanks!

Why not just block SSH access from the public internet and use a VPN? Trivially easy to setup and more secure than knocking.

All it takes is me somehow being able to listen in on your traffic - not even decrypt it - and now I know the knock sequence. I know that you have SSH listening on that server. I know you are actively doing something on it.

vs. a VPN where... all I know is you are communicating over a VPN. With DPI I might be able to determine what type of traffic you're sending, but not where it is ultimately going.


Again, this is where all port knocking debates devolve to ...

Port knocking is not the christ child that will wash away all of our sins ... and therefore is not worth implementing.

You're right!

It doesn't add that much. But it's non-zero and has almost zero cost. It's very elegant, in my mind, and it makes me very happy.


It doesn't add enough to compensate for its costs, which are commensurate with those of VPNs, which provide drastically more return on the investment. But VPNs don't have a cheering section, because they're so obviously useful that nobody has any incentive to make that banal observation. "Port knocking" is idiosyncratic and widely looked down on by security engineering teams, so there's a contrarian impulse that makes them seem worth discussing.

I'm struggling to walk away with a crystallized view of why port-knocking is bad, though.

I do agree, nobody should be going to sleep at night, relying solely on obscurity as their source of protection. But these commenters are offering it as an additional layer of indirection. They're not touting it as _the_ solution, full stop.

At the most basic level, would you refute the claim that port knocking or alternate ports are adding additional friction for an attacker, or no?

Myself, I would prefer to run a simple, (hopefully) set-and-forget daemon on my server if it really did add an extra layer of obscurity to my secured SSH service.

I guess I just fail to see why it's one against the other.


There's several components to this.

Foremost, there is an opportunity cost to setting it up. The time you spend setting up port knocking could be spent setting up another form of security. I believe it is a sound argument to say that a VPN provides more security at a similar level of effort. No public SSH means an attacker cannot know SSH is running on the server from a port scan, because it simply isn't listening. It allows you to reduce the attack surface - you can add more and more servers that you need to SSH into, but you are only allowing public access via your VPN - so you have fewer potential ingress points, and can ratchet up your security and auditing commensurately. And if your VPN concentrator is owned, you should have been setting things up so that they did not implicitly trust someone just because they were on the VPN, so you still have all of your usual measures of security in place.

In that case, there's just not much point. You could also enable port knocking, but I don't think it provides much benefit.

That brings us to the next part. Port knocking is a "weird" thing. It's idiosyncratic and not standard practice. Documenting it and understanding it is additional overhead, and it's something you have to manage and worry about on every server that's using it. Additionally, both standard and SPA implementations are vulnerable to man-in-the-middle attacks, though most SPA-based implementations will require an active MITM that blocks the initial packet rather than just replaying a knock sequence. So: extra complexity, less secure, and an oddity on the network that you have to have documented and explain to new team members, etc.

If you're a single person managing a single server, well, honestly you're probably fine just turning off password auth. And you can feel free to do port knocking and whatever else. It probably doesn't matter.


Thanks, I appreciate the thoughtful response.

It sounds like port knocking and VPNs, while starkly different in approach, have some overlap in the threats they mitigate.

Wireguard et al are much better equipped to handle the needs of an organization, while port knocking's value trends to smaller teams, or even individuals.

I wouldn't want to manage knock rotations for 600 employees, for example.


A tangent: are VPNs other than WireGuard less likely to have vulns compared to SSH? Seems the same to me (or worse for OpenVPN a few years ago)

You're asking for my opinion, and that's all I can relate, but here's my ranked ordering of things likely to have RCE vulnerabilities, from least to most secure:

* A Java, Python, or Ruby app server

* OpenVPN

* Stock nginx

    ----- starts to get really unlikely right here ----
* OpenSSH

* The Linux IP stack

* WireGuard


One crude first-order comparison is to look at the relative size of the code. More code is more likely to have more vulnerabilities, as a first-order back-of-the-envelope metric.

""Port knocking" is idiosyncratic and widely looked down on by security engineering teams"

I can't comment on that.

I'm not a UNIX sysadmin because it's a rewarding career path with excellent opportunities for advancement.

I'm a UNIX sysadmin because I truly love doing it and always have. I would do it for free.


That's why I do software security!

I guess my point is largely: I can set up a VPN in a roughly similar timeframe to setting up port knocking, and it has roughly similar overhead for end user, but the VPN gives me significantly more security while also solving the same issue port knocking does. In that case, why not just set up a VPN instead of port knocking?

I will again agree with you that the VPN is a more robust and more complete protection. You are correct.

I think the reason I continue to prefer (and evangelize) port knocking is that the intersection of (modest) security gain and simplicity/robustness hits a sweet spot for me.

Again, 10+ years in production on many hosts, worldwide, and never so much as a blip. If knockd were to fail, it would fail in a very boring way. VPNs, on the other hand, are far more complex and fail in fascinating ways.

I am a sysop turned sysadmin - this is my life's work. I prefer simple, unixy tools that fail in boring ways :)


Not to be That Guy, but what I'm reading is that you have the wonderful opportunity before you to learn about VPNs until they fail in boring ways!

That's not how that works.

Simple systems tend to fail in boring ways. Complex systems tend to fail in interesting ways. Learning more about a complex system, while rewarding in many ways, will not change that identity.


It's been my experience that a system's complexity (basic or complex) is less an intrinsic trait and more a matter of subjective perception, familiarity, and experience. Your experience is clearly different.

My daily bread and butter is VPNs, but I must admit that I think there may be a truth here.

While I fully agree that portknocking doesn't provide the same layer of protection or flexibility a VPN does - but with the original article in mind: if your reason for deploying a VPN is that you fear exposing unknown bugs in sshd to the Internet, the same could be said about every VPN solution.

Therefore portknocking is / (would be*) indeed more elegant because

- it makes no promises to be secure (as in, as secure as a VPN); one could argue that if you use it, you know portknocking is just an additional security layer, and maybe you don't get lazy as you might with a VPN

- a misconfiguration, a bug, or an attacker might expose sshd on your hosts; a misconfigured VPN, at least in a somewhat sizeable deployment, can lead to countless attack surfaces

Having said that, that will only work if the rest of the sshd security is in check and your password isn't hunter2


I think I get where you're coming from here, but I don't fully agree.

> if your reason for deploying a VPN is because you fear to expose unknown bugs in sshd to the Internet the same could be said about every vpn solution.

Yeah. You might have a VPN zero day - but then you still have to get into the other SSH servers. Two zero days simultaneously active for openssh and your VPN solution? Pretty unlikely, especially public ones. Someone burning two private zero days on you means you're an incredibly high value target and neither of these would suffice as your sole defense to begin with.

The rest of your argument, if I'm understanding it correctly, is that you think people will get more lax with securing SSH on a box only reachable via VPN than if it was reachable by port knocking? It's possible, but I don't know that the evidence really shows that - lots of comments on this article are along the lines of "i set up port knocking and I've never even seen a malicious ssh connection attempt since then!" - no details of the rest of the security measures they've got in place.

And yeah, going from 'I set up a VPN to connect to my web servers via ssh' to 'I have VPN access to a whole network with all sorts of things running on it' is a big step up, but I don't think it's really in the boundaries of this discussion. Port knocking was never going to be a replacement for a larger VPN deployment, and when you're opening up network access to a wider range of things then how you approach things definitely needs to change.


First of all I agree with you that we should compare solutions in a comparable manner, and I went overboard.

So yes, if we want to be fair we have to compare an in-host defense system like portknocking (which has one job: secure sshd) to an in-host VPN setup, more like the often-mentioned wireguard.

And in this "configuration" I completely agree. I still think it may be more likely for a VPN to expose security critical bugs than a bug in knockd - but as you said this should only allow access to your next layer of defense (namely sshd) and maybe (if you're a really valuable target) a three-letter-agency might throw all their resources at you and are willing to throw every weaponized exploit they have at you - yeah than you're even more correct because than they would have a far easier time just intercepting your port knock sequence and throwing all their quantum computation power against your sshd keys.

> The rest of your argument, if I'm understanding it correctly, is that you think people will get more lax with securing SSH on a box only reachable via VPN

The argument I was trying to make is that while a VPN is in every way a really good idea (the way we described it here - as an in-host security layer), I have yet to see it being rolled out in that way.

I come from a more traditional sysadmin setting, and most sysadmins I worked with would find implementing this "correctly" too tedious and would either

a) terminate the VPN connection at the rack or co-location "border" and shove a bunch of servers down a single VPN connection, or

b) terminate every server's VPN connection at a single VPN concentration point.

Regardless of which, in virtually all cases that I know of, no thought was ever given to intra-VPN firewall rules or to allowing only certain ports on the VPN. Most of the time you take the servers that are somewhat related, shove them in a subnet, expose that subnet via VPN, and you're golden.

And so from my practical experience, I would think that a compromised VPN in my reality would be worse than an exploited knockd, but only because it isn't scoped to the same level.

On a sidenote: I'll guess that modern orchestration tools make it pretty easy to roll out knockd and / or wireguard pretty easily in the discussed fashion - it's just I don't get to play with those.

That was a lot of text, just to say I agree with you - but hey, I guess agreeing on something on the internet is somewhat nice so have a great day.


Which vpn though?

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=openvpn

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=pulse+secur...

VPN doesn't magically fix all problems.

wireguard is good so far, but the kernel implementation is in C, so who knows.


No, a VPN isn't magic in and of itself. And yes, I would suggest wireguard for the simplicity and performance these days.

I do agree with the author of the original article that security should come in layers

Once something is secured with SSH and a VPN, you've got that many more actual layers - you now need a CVE that allows access or credential leak for both the VPN and SSH. (And many of those CVEs don't necessarily allow a random attacker to arbitrarily gain access)

https://news.ycombinator.com/item?id=24446919 has my list of what the bare minimum SSH protections should be for anything where you are storing customer/user data in my opinion, as well as additional best practices that I have employed.


Like you said.. "worry less"

Can a theoretical attacker intercept a port knocking sequence? maybe. Would a script kiddie running a new ssh 0day against the entire internet be able to do this? no.


> almost zero cost

If it's your private pet server - sure. In larger networks you have to document the access, manage the allowed ports on the network, configure security groups or equivalent on instances, provide alternative steps for people with unusual clients (for example database UI app proxying over SSH), etc. The cost suddenly becomes very non-trivial.


> I believe there are sshd vulns extant that you and I don't know about and port knocking allows me to worry less about them.

That's interesting, that's the first time I've heard a justification for port knocking that actually makes sense to me.

I'm curious for others' thoughts here -- are non-public vulnerabilities something you consciously try to mitigate? So that, for example, using 2 different 8-character passwords that are implemented with different technologies, is therefore fundamentally more secure than a single 16-character password? Precisely so that a vulnerability in one is still protected by the other?

To me this feels like it's really only applicable if you need to protect your data from hostile governments targeting you specifically, who might actually have zero-days they have weaponized.

However, if you're just trying to protect yourself from everyday hackers or even targeted corporate espionage, is unknown vulnerabilities really something that's realistically worth protecting oneself from? (Assuming you're always installing all security patches.)


I agree. I think this comes down to the Mickens Security Threat Model. Your adversaries come in basically two forms: Mossad and Not-Mossad. If your adversary is Mossad, you've already lost; if a governmental actor wants your data badly enough, they'll get it. If your adversary is not-Mossad, they almost certainly don't have access to any secret zero-day exploits; stay up to date on patches and use good passwords and you'll be fine. Port knocking will almost certainly protect you from not-Mossad, assuming your adversary doesn't know that you're using it.

Sure, a small percentage of adversaries are in neither category, and a random hacker dedicated to hitting your specific server may suspect port knocking and could try to circumvent it, but most companies don't have an adversary like that, and even if they do, you've made it harder for them for a small cost.


I love that article, but this comment beautifully illustrates the problem with it, because unless you believe "19 year old with better-than-normal tooling" counts as "Mossad", it has totally screwed up your perception of the threat model.

I understand how port knocking can throw off nmap and reduce brute force traffic.

Does it solve anything else?

> I believe there are sshd vulns extant that you and I don't know about and port knocking allows me to worry less about them.

Wouldn't you need to worry about vulnerabilities in knockd?


Agreed: adding port-knocking and fail2ban in addition to passwordless should not be and are not silly ideas.

> Just use SSH keys, and disable passwords.

CVE-2001-0144 - SSH1 CRC-32 compensation attack detector allows remote attackers to execute arbitrary commands on an SSH server or client via an integer overflow

CVE-2008-0166 - OpenSSL 0.9.8c-1 up to versions before 0.9.8g-9 on Debian-based operating systems uses a random number generator that generates predictable numbers, which makes it easier for remote attackers to conduct brute force guessing attacks against cryptographic keys.

I had a machine almost get compromised via the 1st vulnerability (noexec on /tmp broke their script). When the 2nd came out I was using non-standard ports and/or port knocking. Despite having vulnerable keys I was safe until I could upgrade.

If a SSH RCE 0day was released:

* every "Just use SSH keys, and disable passwords" box sitting on the internet with ssh on port 22 will get compromised within hours.

* The boxes using fail2ban will get compromised within hours.

* The majority of boxes on nonstandard ports would likely be ok, at least for some time.

* The boxes using port knocking would be safe.


I think the fact that you had to list the 20-year-old SSH CRC compensator vulnerability to establish the untrustworthiness of SSH is telling; very few pieces of software have OpenSSH's current track record. I would cite the same 2 vulnerabilities to suggest that SSH is as trustworthy as almost any other piece of software you can run.

Having said that: I don't like exposing SSH services either! Which is why I try to keep them behind WireGuard, at least on prod networks that I care about.
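As a sketch of what "SSH behind WireGuard" can look like (all keys, addresses, and the port are placeholder examples, not a prescription):

```ini
# /etc/wireguard/wg0.conf on the server - minimal sketch
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one block per admin allowed to reach sshd over the tunnel
PublicKey = <admin-public-key>
AllowedIPs = 10.0.0.2/32
```

Pairing this with `ListenAddress 10.0.0.1` in sshd_config means sshd is only reachable over the tunnel, not from the public interface at all.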

In contrast to an actual VPN, port-knocking and (heh) nonstandard SSH ports shield you only from casual attackers; both give a middlebox attacker all the access they need to launch the attack.


Only the 4th point is really true: if you run SSH on a non-standard port but it's otherwise accessible, you'll still see scans on a regular basis.

Port knocking isn't a terrible idea but I generally prefer locking down the networks (or, these days, using AWS SSM / GCP IAP to avoid listening publicly at all) since having something on the internet means you're just one mistake away from problems and need to staff monitoring accordingly.

The other thing to remember here is that we're talking about one general CVE in two decades. Almost any other running service has been much worse so while SSH is important to protect I don't know that I'd make the argument that further pushing that one service is really the best bang for your buck.


> Only the 4th point is really true: if you run SSH on a non-standard port but it's otherwise accessible, you'll still see scans on a regular basis.

Possibly.. It does depend on the port. 222 and 2222 often are scanned with 22. 2200-2299 is probably common now. I was using 2221 for a bit but after a few years that started seeing some auth attempts.

I mostly watched entire /16s, not single hosts.. the scan patterns for a large netblock are very interesting. It takes as much effort to scan the entire internet on port 22 as it does to scan all ports on a /16.. attackers simply do not do that.

The benefit of some of the port knocking systems is that the attack surface is almost nothing and they are easy to audit. I used it a few jobs ago on my management system/bastion host. I couldn't rely on the VPN since I was the one that managed the VPN, so I needed a way to securely login remotely that did not go through the VPN, and did not end up having sshd exposed to the world.
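For reference, a classic knockd setup is only a few lines - this is a sketch along the lines of the knockd man page example, with made-up sequences and timing:

```ini
# /etc/knockd.conf - sketch; knock sequences here are arbitrary examples
[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

[closeSSH]
    sequence    = 9000,8000,7000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```

The small, declarative surface is the point being made above: there is very little here to audit.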

These days I run sshd at home behind https://www.tarsnap.com/spiped.html

Ubiquitous wireguard may change things.. we'll see.


What about entire /56s ? (Home user on IPv6.)

> you'll still see scans on a regular basis.

Not in my experience, I would even say that full range port scanning is extremely rare. Botnets (again, in my experience) seem to only be interested in vanilla installations and will test standard ports exclusively. But of course, if you are in charge of some very tempting target (eg a cryptocurrency exchange) your experience will be totally different than mine.


> * The boxes using port knocking would be safe.

No, safer. It is entirely possible to brute-force port knocking, or to eavesdrop on the knock sequence, since that information is not encrypted. Is it harder? Of course, a lot harder - but if you think scanning 65k ports on each host on the internet is reasonable, then defeating a port knock is very much reasonable too.


> It is very well possible to brute-force port knocking

It's incredibly unlikely - there is probably more chance of the sun imploding tomorrow. And if you're the type to install port knocking, you've almost certainly also installed something like LFD, which will temporarily block IPs for port scanning.

Also, without inside information, how would you even know that a server was using port knocking?



eavesdropping, maybe. There are tools like https://github.com/mrash/fwknop that are not vulnerable to that or brute-forcing.

But brute forcing in general? not a chance. There are 18446744073709551616 4 port sequences.
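The arithmetic behind these counts is easy to check (pure arithmetic, nothing specific to any tool):

```python
# Back-of-the-envelope numbers from this thread.
PORTS = 65536                    # possible TCP ports per knock (0-65535)

knock_space = PORTS ** 4         # distinct 4-port knock sequences
ipv4_ports = (2 ** 32) * PORTS   # every port on every IPv4 address

print(knock_space)               # 18446744073709551616 (2^64)
print(ipv4_ports)                # 281474976710656 (2^48)
```

Note the gap: the 4-port knock space is 65536 times larger than the space of all ports on all of IPv4.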


And there are roughly 1267650600228229401496703205376 port 22's in the IPv6 space - I've subtracted a few for reserved and unassigned spaces, but at this scale a few orders of magnitude hardly matter.

Here, for comparison:

  281474976710656 - total ports in IPv4 space
  18446744073709551616 - 4 port combinations
  1267650600228229401496703205376 - my estimation for 22 in IPv6
And if you don't block the knocking when receiving traffic on another port, brute forcing gets quite a bit easier. I mean, it's still unreasonable. But my point is, when we accept a 0.001% chance as possible, I don't think we can say that 0.000001% is impossible - just a lot less possible ;)

18446744073709551616 is 2^64. To simplify: you're trying to guess a number out of 2^64 possibilities. You can't guess in parallel, and reasonable constraints on the server side (e.g. limiting tries on the combination per hour before suspending ssh for a while) may have been implemented.

I’d say cracking that is… Unfeasible.

That also assumes you know of the existence of a server running ssh behind an unknown port-knocking combination of length 4.


The actual chances for guessing the 4 port combination are closer to 0.000000000000000000001%, about as likely as winning the lottery three times in a row. If you're trying to brute-force me with those odds, I'll take my chances.

There is a vastly higher chance of there being exploitable bugs in port knocking tooling than there is of there being exploitable bugs in SSH. You are adding extra exposure and gaining nothing.

I use SSH keys and have disabled passwords. However, when I was running SSH on port 22, the number of attempts slowed my machine to a crawl at times.

Moving the port to some obscure random one cut the number of requests from several thousand per hour to a few per day. Definitely an improvement by any measure: suddenly you can actually analyze the attacks if necessary.

I run fail2ban on top of it, because why not? In case someone would attempt to really target my system, any obstacle is good to take. And who knows what ssh vulnerabilities exist; any protection is good to take.
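For anyone curious what that "fail2ban on top" amounts to, a minimal jail looks something like this (the port and thresholds are arbitrary examples - match the port to wherever sshd actually listens):

```ini
# /etc/fail2ban/jail.local - minimal sketch
[sshd]
enabled  = true
port     = 42022
maxretry = 3
findtime = 600
bantime  = 3600
```

After `maxretry` failed logins within `findtime` seconds, the source IP is firewalled off for `bantime` seconds.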


I gotta wonder - how in the world can you ever get enough failed SSH login attempts to noticeably affect system performance?

I usually have several cloud servers running with a normally secured SSHD running. There's some failed login attempts yeah. I've never seen even 1% CPU usage from them. I doubt even posting my server address on every hacking forum I could find and daring them to try and hack me would result in getting enough failed SSH login attempts to blip my CPU usage. I have no idea how that could even happen, aside from somebody intentionally targeting your server with a really weird attack for whatever reason.


I love it when people say this. Analyze the attacks... and then what? I'm seriously asking. Block the specific source addresses you know about so far?

Actually fail2ban takes care of that for me. Anyway, the important part was having my home PC not crawling and its disk filled with failed connection logs because of the deluge of bot requests. In short, avoiding being DDoSed.

It's not really a direct security advantage, so this is mostly off-topic, but changing the default port does greatly reduce log noise, and theoretically could be a bit less taxing for your network connection or CPU if it's a cheap server not intended for publicly hosting services. (If it is then the traffic would be a drop in the bucket compared to regular production traffic, though. And it's admittedly probably a drop in the bucket either way.)

Reducing log clutter alone probably does confer some small indirect benefit, since it's less likely a more sophisticated attempt or successful breach would go unnoticed when inspecting logs. (Assuming there's some SIEM log forwarding or that it's not a situation where an attacker was able to or wise enough to wipe logs.)


I think a lot of the people in this comment thread are missing the point when using the `sshd` example. There is no single infallible way to secure ssh, but there are a lot of things that can be done together to make it pretty darn hard to hack, and most of those countermeasures have some degree of 'obscurity' to them.

Example:

* Use RSA keys instead of passwords -> This will eliminate most risk, except for exploits in sshd itself,

* Change the default port from 22 to something in the 40k+ range, which will keep you from being scanned, and

* Whitelist IP addresses that can connect to port xx on your server -> This will eliminate 95% of remaining risk

* Using a 'clean' bastion server to access other systems via agent-forwarding, preventing malware on admin workstations from being able to propagate over SSH.

So, no, you're never going to be 100% secure; that's just unreasonable. But like you said, the cost can be increased to the point that all but the most determined state-sponsored APT groups will give up.
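The first three measures above boil down to a handful of sshd_config lines - a sketch, with the port and addresses as placeholder examples (as noted elsewhere in the thread, remapping the port at the firewall while sshd itself stays on privileged port 22 avoids the unprivileged-bind risk):

```
# /etc/ssh/sshd_config fragment - sketch
Port 47268
PubkeyAuthentication yes
PasswordAuthentication no
PermitRootLogin no
# restrict logins to a user connecting from a whitelisted range
AllowUsers admin@203.0.113.0/24
```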


>* Change the default port from 22 to something in the 40k+ range, which will keep you from being scanned, and

I'm replying to these suggestions all over this item because I think it's important, so I apologize if you've since seen this comment elsewhere, but:

This introduces new security risks. Non-privileged users can bind on ports in the 40k+ range and cannot bind on 22. If you restart sshd for a software upgrade or some other reason, or the iptables rules you're using to remap the ports get flushed, the malicious non-privileged user can now bind to the port people were communicating with your sshd on, and if they ignore the host key mismatch, everything they send can be captured by the malicious user.

Older openssh clients have default configurations that can result in the leak of the whole private key, if you use password auth or 2FA they can outright steal those, perhaps their fake sshd will do more than just steal credentials and will actually mimic a shell and let them gain more understanding of how the system ticks, etc.

Is this level of attack something most people are going to run into? No. But neither is an attack more sophisticated than brute force password attempts. It's definitely information people should be keeping in mind when making these sorts of decisions, too.


> This introduces new security risks. Non-privileged users can bind on ports in the 40k+ range and cannot bind on 22.

My firewall does port mapping so externally it's not 22, but internally it is.
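With iptables that remap is a single NAT rule - a sketch, where the interface name and external port are made-up examples:

```shell
# Rewrite external TCP/42022 to 22 before it reaches sshd, which keeps
# listening on the privileged port internally.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 42022 \
    -j REDIRECT --to-ports 22
```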


This is significantly better than just changing the port the daemon listens on, for sure.

There's still public access to SSH, so you're still at risk from a zero day, weak credentials, etc., so I don't think it's quite to ideal levels where you are employing a VPN, disallowing all public access, etc., but at least you're not introducing new potential attack vectors :)


What about VPN makes the VPN server software more secure than the SSH server software?

The level of security is cumulative. You do not trust a connection just because it's connected to the VPN. So if your VPN concentrator is compromised via 0day, the only access they get is the same as if things were listening on the public internet.

To gain access to the server via SSH they now need both a way in to the VPN and a way in to SSH, vs. just needing a way in via SSH.

It doesn't do much if someone just gives up the keys for the VPN and SSH, but it would mean that you would need two simultaneous exploits for the VPN and SSH to gain access.


Yeah but if they compromise the VPN they potentially have access to a lot more than just the SSH server. At least in the setups I've seen deployed.

I'm not sure I necessarily understand your argument, so my apologies if I'm off here.

In scenario 1, you do not gate access via VPN. Things are accessible via the public internet.

In scenario 2, you do gate access via VPN. Things are not accessible via the public internet. Someone compromises the VPN. They now have as much access as if there was no VPN and things were accessible to the public internet.

In scenario 2, you are more secure than in scenario 1 until the VPN is compromised. You are then just as secure as you were in scenario 1.

If you are not restricting access to a VPN in the first place, how would compromising a theoretical VPN result in greater access?


In the setups I've seen, once you've connected through VPN you're essentially on the LAN. If you compromise the SSH server, then you're also essentially on the LAN. Yes with the VPN you still have to compromise the server running the SSH service if that's the machine you want access to, but inside the LAN you now have a much greater attack surface.

Of course if the setup is VPN -> firewall -> SSH to make sure only the SSH is exposed through VPN, then I agree you'd be more secure with VPN+SSH.


But without the VPN, you're already the equivalent of on the LAN because all of these services are exposed to the public internet.

In the discussion we're having, we're going from a setup where there is no equivalent to a private network because everything is public, to having a private network that only allows you access to the things that were previously public.


No because I have a firewall in front of the SSH, as mentioned. I would assume a firewall is in front of the VPN as well of course.

So either only SSH is exposed to the public, or only VPN is exposed. Without an additional firewall after the VPN, how is my LAN more protected with the VPN vs SSH?


Your goal is to protect SSH, not the VPN network. The VPN network is just a tool for protecting SSH.

With your configuration, all that needs to exist is an SSH 0 day to gain access to the server. With a VPN, they need that AND a 0 day for the VPN software to gain access to the server.

You can have a more complex setup with a VPN, but that isn't the discussion here - the discussion is securing SSH. If you want to provide VPN access to an array of other services, or as access to a corporate LAN or similar, then that's another conversation that has to involve the specifics of those services and that configuration. It's not what is being recommended here.


Fair enough, guess I was restricting my view to my bubble. For a single server sure defense in depth should work, assuming you're not running the VPN on the same box.

I'd suggest using jump hosts (-J or ProxyJump) rather than agent forwarding to a bastion host. IIRC the latter gives the bastion host access to your keys.
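A sketch of the ProxyJump setup in ~/.ssh/config (host names are placeholders):

```
Host internal-*
    ProxyJump bastion.example.com
```

With this, `ssh internal-db` tunnels through the bastion without your agent or keys ever being exposed to it; `ssh -J bastion.example.com internal-db` is the one-off equivalent.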

Each security measure has a value and a cost. Keys over passwords provide by far the best value/cost ratio. Obscure ports, port knocking, and whitelisted IPs are relatively clunky mechanisms that are more expensive and obscure your security posture as much from yourself as from adversaries.

This is absolutely true, but in some ways this is more about reducing the number of 'attempted connections' in the sshd log. Meaning, any failed connection that is recorded (and ideally shipped off to a centralized log system) is in some way actionable. Opening up port 22 (with keys) will still create tonnes of alerts from any SIEM.

The other thing to consider is that there could be exploits in OpenSSH itself. There hasn't been a truly critical vulnerability in a very long time, but low severity or non RCE vulnerabilities aren't exactly rare: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=openssh


>obscure your security posture as much to yourself as to adversaries

Almost barfed from the sheer intensity of tech corpobabble. It's for blocking 0-days, dummy!


Neither changing the SSH port nor using IP source address filters constitute serious countermeasures; they complicate systems and offer little return on the investment. Don't bother. If you're worried enough to change the SSH configuration, set up WireGuard.

Heartbleed caused data leakage in the handshake phase of the protocol. I don’t know if SSH was affected, but there’s no reason why a similar exploit won’t be found for SSH in the future, and trivially obscuring your SSH port protects you from 99.9+% of automated attacks, possibly buying you time to patch or mitigate.

I could be wrong, but if you are using public/private keys to authenticate to ssh, then even attacks that can listen in on the connection would be limited. Because the private key is never transmitted, unlike a password.

With heartbleed, a bug in the implementation of the protocol led to the server randomly leaking contents of the server’s memory, which could be anything from private keys to user or system passwords to other confidential information. No passwords or MitM was required. You can read more at heartbleed.com

And it still doesn't matter, because sshd literally never has the private key that allows access. If a server only allows access via SSH key, you could literally have a complete RAM dump of the whole system and not be able to access it.

> still doesn't matter (...) you could literally have a complete RAM dump of the whole system and not be able to access it.

I'd say that matters. Think about all the secrets (tls keys, whatever) a server has in memory.

If you can't connect to the sshd daemon, you can't attack it.


Though it would be a tragicomic shame if you got caught by a nasty 0-day while the clown up at port 34015 narrowly escaped and earned enough time to patch before pre-mapped host scans begun.

[flagged]


No, it's not, and it won't.

One advantage of putting ssh on a non-standard port is that your logs, which are otherwise filled with automated ssh break-in attempts, now become almost empty. It's much easier to look for other problems when the signal to noise is increased.

> But SSH is a terrible example

I’m not sure there is any such good example though. Every obscurity control I’ve ever seen has imposed costs upon the users, administrators, engineers... but I’ve never seen one that I would rely on to improve security posture in any meaningful way.

I’ve certainly never seen an obscurity control that was worth its opportunity cost. I can think of dozens of actually useful controls where even a marginal improvement in operational performance would be worth more than every conceivable obscurity control combined.


> There's something to the idea of rehabilitating "obscurity", or at least recognizing that "cost" is part of threat models, and you can raise costs for particular attack vectors by degrees instead of "to infinity".

Exactly! Especially when you can create a high cost asymmetry, low-cost for you to add, high cost for the attacker to bypass.

Agree that the SSH examples aren't the best. I would have picked DRM.


DRM has the problem that it is illegal to bypass, even if your intent is not malicious.

I agree that changing the SSH port may not be the best example of a low cost measure, since bypassing is also low cost.

I would like to see a list of suggestions of "low cost" ways to obscure systems that are (relatively) harder to counteract. But I guess as soon as anyone publishes such a list then hackers will start checking for them.


> Just use SSH keys, and disable passwords

Or even use a good password, if you don't have many untrusted users. It works perfectly.


> are all silly ideas.

Changing the port is not silly, it increases the SNR in logs, that's already a worthy goal.


[flagged]


No personal attacks in HN comments, please.

https://news.ycombinator.com/newsguidelines.html


>This just shows how ignorant you (and most) are on the topic of port knocking.

You, uh, do know who you're replying to, right? https://sockpuppet.org/me/ if not - I don't mention this to go "lol he must be right because of who he is", but calling a well respected security researcher with plenty of real world street cred ignorant is a bit much.

>SPA port knocking is cryptographically secure and does not suffer from replay attacks.

SPA port knocking doesn't suffer from passive replay attacks, but it does suffer from block and replay attacks. An active MITM can still get you.

His suggestion hasn't been "if you care about security just don't do port knocking", his suggestion has been "if you care about security just throw up a VPN it'll be more secure and just as much work"


[flagged]


>Wrong. SPA does not suffer from any MITM attacks.

Care to elaborate? Not even fwknop documentation claims to be secure from all mitm attacks:

>Automatic resolution of external IP address via cipherdyne.org/cgi-bin/myip (this is useful when the fwknop client is run from behind a NAT device). Because the external IP address is encrypted within each SPA packet in this mode, Man-in-the-Middle (MITM) attacks where an inline device intercepts an SPA packet and only forwards it from a different IP in an effort to gain access are thwarted.

If I'm MITM'ing you from the same Starbucks or am otherwise behind the same NAT as you, I don't care if you've got the IP encrypted in the packet when I forward it on.

>Not the same amount of work, so no, wrong. If I had a dollar for every billion dollar unicorn that that didn't have a corporate VPN, I'd have a lot of dollars.

There's not enough billion dollar unicorns out there to actually have a lot of dollars, even if 100% of them lacked corporate VPNs :D

Regardless, you don't even need a full on corporate VPN. You can throw up a tiny VM for your VPN in the same private subnet as your servers, only listen on 22 on the private IPs for the servers. You can do this in less than an hour with Wireguard. Super easy.


How does this work on IPv6 ?

>Care to elaborate? Not even fwknop documentation claims to be secure from all mitm attacks:

You made the claim. You prove it with documentation.

>If I'm MITM'ing you from the same Starbucks or am otherwise behind the same NAT as you, I don't care if you've got the IP encrypted in the packet when I forward it on.

That is by definition NOT a MITM attack.

>There's not enough billion dollar unicorns out there to actually have a lot of dollars, even if 100% of them lacked corporate VPNs :D

The example is only billion dollar ones. If I include $10m+ ones, I'd have enough dollars to buy a new laptop ;D!

>Regardless, you don't even need a full on corporate VPN. You can throw up a tiny VM for your VPN in the same private subnet as your servers, only listen on 22 on the private IPs for the servers. You can do this in less than an hour with Wireguard. Super easy.

You just described a bastion host, and port knocking makes sense on those as well LOL. Wireguard currently only supports UDP, which can be and has been a limitation in the past.


>You made the claim. You prove it with documentation.

I... er, did?

>That is by definition NOT a MITM attack.

You're intercepting the packet and blocking it by being in the path.

>You just described a bastion host, and port knocking makes sense on those as well LOL. Wireguard only currently supports UDP, which can and had been a limitation in the past.

Bastion hosts are generally SSH/RDP/VNC type affairs. SSH in to the bastion and then you have access to the other servers. This is actually how I set things up in production environments - the VPN concentrator only allows access to the jumphosts, and then there's extensive logging and auditing there.

I'm not sure why Wireguard only supporting UDP would be a problem - you can pass whatever type of traffic inside of the tunnel.


>I... er, did?

You... Ugh... Didn't? You claimed that it suffers from MITM attack. You are not able to prove that it suffers from any MITM attack (the docs specifically outline a way to mitigate a specific MITM attack, but do not outline any others). Unless you have a source that states otherwise, you're wrong.

>You're intercepting the packet and blocking it by being in the path.

Wrong, that is by definition not a MITM attack.

>Bastion hosts are generally SSH/RDP/VNC type affairs. SSH in to the bastion and then you have access to the other servers.

Correct, and you set up port knocking for these. Thanks for proving my point.

>This is actually how I set things up in production environments - the VPN concentrator only allows access to the jumphosts, and then there's extensive logging and auditing there.

There should be extensive logging and auditing on the bastion host. Port knocking reduces the noise to effectively 0.

>I'm not sure why Wireguard only supporting UDP would be a problem - you can pass whatever type of traffic inside of the tunnel.

There have been multiple instances where UDP has been blocked at sites in the past. Looks like you're ignorant of this. Look up why OpenVPN supports TCP.


The cost of not having SSH vulnerabilities is infinite because there is no way to ensure that.

Applications are open for YC Winter 2021

Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact

Search: