People have been misinterpreting "security by obscurity is bad" to mean any obscurity and obfuscation is bad. Instead it was originally meant as "if your only security is obscurity, it's bad".
Many serious real-world scenarios do use obscurity as an additional layer, if only because sometimes you know a dedicated attacker will eventually be able to breach. What you are looking for is to delay them as much as possible, and make a successful attack take long enough that it's no longer relevant by the time it lands.
I think we can kind of view obscurity in the same way. It's a way to signal to a predator that we're a hard target and that they should give up.
Of course in the age of automation, relying on obscurity alone is foolish because once someone has automated an attack that defeats the obscurity, then it is little or no effort for an attacker to bypass it.
Of course, sprinkling a little bit of obscurity on top of a good security solution might provide an incentive for attackers to go someplace else. And I can't help but think of the guy who was trying to think of ways to perform psychological attacks against reverse engineers.
 - https://en.wikipedia.org/wiki/Stotting
 - https://www.youtube.com/watch?v=HlUe0TUHOIc
This has it completely backwards. Security through obscurity's goal is not to signal predators, it's the opposite. The goal is to obscure, to hide. The "signal" is there is nothing here (or nothing here worth your time). One of the best examples (it's in the article!) is changing the default SSH port. Just by obscuring your port you can usually filter out the majority of break-in attempts.
The only way security through obscurity signals to "predators" is if they've seen past your defence, and thus defeated the obscurity. Obscurity (once revealed) is not a deterrent. Likewise an authentication method (once exploited) is not a deterrent.
>Of course in the age of automation, relying on obscurity alone is foolish because once someone has automated an attack that defeats the obscurity, then it is little or no effort for an attacker to bypass it.
This is true of any exploit basically. Look no further than metasploit. Another example: a worm is a self-automating exploit.
Most of the usages of "security through obscurity" that I've seen dissected and decried haven't been in the sense that something was being hidden, but rather that something was being confused. For example, using base 64 encoding instead of encrypting something. Or running a code obfuscator on source code instead of making the code actually secure.
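To make the base64 point concrete, here is a minimal sketch (the "secret" string is purely illustrative) showing why encoding offers no confidentiality at all: reversing it requires no key.

```python
import base64

secret = b"db_password=hunter2"  # illustrative value, not a real credential
encoded = base64.b64encode(secret)

# base64 is an encoding, not encryption: reversing it needs no key,
# so anyone who sees the encoded value has the plaintext.
decoded = base64.b64decode(encoded)
assert decoded == secret
```

The encoded bytes may look scrambled to a human, but any attacker's first tool pass will try base64 decoding automatically.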
Either way the economic costs that I'm talking about are valid. If an attacker sees that your SSH port isn't where it's supposed to be, OR if an attacker sees that your SSH port ignores all packets sent to it (unless you first send a packet that's 25 0xFF bytes), then either way they're being signaled that you are more trouble than the computer that has an open telnet port.
There are slightly different usages of the same word, but the effect looks to me to be the same. More investigation or automation can make the obscurity go away, but it does make things a bit harder.
Using base64 encoding, or encrypting your database, are both examples in the article. While I agree base64 is super trivial, the point about either of these is defence in depth. In the language of the article, it's reducing likelihood of being compromised.
>If an attacker sees that your SSH port isn't where it's supposed to be OR if an attacker sees that your SSH port ignores all packets sent to it (unless you first send a packet that's 25 0xFF bytes), then either way they're being signaled that you are more trouble than the computer that has an open telnet port.
This is semantics. Personally I'd say if an attacker cannot sense anything to connect to, there is no "signal" you're sending. You're rather not sending a signal that you're a threat, as you're not sending a signal at all due to being functionally invisible. Otherwise, we could say literal nothingness is sending the same signal that your server is. We agree on the substance here, i.e. the obscurity increases the economic cost of hacking and works as a disincentive, so we may just agree to disagree on the semantics.
1. Endlessh: https://news.ycombinator.com/item?id=19465967
2. Tarbit: https://github.com/nhh/tarbit
Usually this is a poor choice vs. going with the published industry standard, because crypto is hard to get right, and people rolling their own implementations usually screw it up, making life much easier for dedicated attackers than trying to attack something that people have been trying and failing to breach for years or decades.
Software makers for example typically don’t publish the technical details of their anti-piracy code. But this usually doesn’t prevent software that people care about from being “cracked” quickly after release.
A better example would be a port-knocking arrangement that hides sshd except from systems that probe a sequence of ports in a specific way. This is very much security by obscurity, because it's trivial for anyone who knows the port sequence to defeat, but it's also very effective as anyone who doesn't know the port sequence has no indication of how to start probing for a solution.
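A toy sketch of the server-side logic such an arrangement needs (the port numbers and three-knock sequence here are made up for illustration; real implementations like knockd watch firewall logs rather than application state):

```python
# Toy port-knock state machine: the hidden service only "opens" after a
# client hits a secret sequence of ports in order. Ports are illustrative.
KNOCK_SEQUENCE = [7000, 8000, 9000]

class KnockTracker:
    def __init__(self, sequence):
        self.sequence = sequence
        self.progress = 0

    def observe(self, port):
        """Feed in each port the client touches; return True once unlocked."""
        if port == self.sequence[self.progress]:
            self.progress += 1
        else:
            self.progress = 0  # any wrong knock resets the sequence
        return self.progress == len(self.sequence)

tracker = KnockTracker(KNOCK_SEQUENCE)
assert not tracker.observe(7000)
assert not tracker.observe(8000)
assert tracker.observe(9000)      # correct sequence completed: open sshd
```

The key property is the reset on a wrong knock: a scanner sweeping ports in order never completes the sequence, while anyone who knows it gets in trivially.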
Compared to milliseconds. Do yourself a favor and open one sshd on port 22 vs one on a port >10000, then compare logs after a month. The 22 one will have thousands of attempts; the other hardly tens, if any at all.
The 99% level we're defending against here is root:123456 or pi:raspberry on port 22. Which is dead easy to scan the whole IPv4 space for. 65K ports per host though? That's taking time and, given the obvious success rate of the former, is not worth it.
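The back-of-envelope arithmetic behind that asymmetry (probe counts only; real scan rates vary) looks like this:

```python
# Cost of scanning one well-known port across all of IPv4 versus every
# port on every host. Pure probe counts; timings are left out.
ipv4_hosts = 2**32
ports_per_host = 65_536

probes_port_22_only = ipv4_hosts                 # one probe per host
probes_full_sweep = ipv4_hosts * ports_per_host  # every port, every host

ratio = probes_full_sweep // probes_port_22_only
print(f"full sweep costs {ratio}x more probes")  # 65536x more expensive
```

That 65,536x multiplier is exactly why opportunistic scanners stick to port 22 and a short list of common alternatives.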
Therefore I'd say it's the perfect example: It's hardly any effort, for neither attacker nor defender, and yet works perfectly fine for nearly all cases you'll ever encounter.
EDIT: Note that it comes with other trade-offs, though, as pointed out here: https://news.ycombinator.com/item?id=24445678
Or you can implement real security, like not allowing SSH access via the public internet at all and not have to make this trade off.
Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.
I'll also point out that we're generally talking about different threat vectors here, so it's good to lay them out. I don't think obscurity helps against a persistent threat probing your network, it helps against swarms.
> a non-privileged user can bind to a port above 10k, but can't bind to 22. sshd restarts for an upgrade, or your iptables rules remapping a high port to 22 get flushed, that non-privileged user that got access via a RCE on your web application can now set up their own fake sshd and listen in to whatever you are sending if it manages to bind to that port first and you ignore the host key mismatch error on the client side.
This is getting closer to APT territory, but I'll bite. If someone has RCE on your SSH server it honestly doesn't matter what port you're running on. They already have the server. You're completely right it would work if you have separate linux users for SSH and web server. Unfortunately that's all too rare in most web-servers I see (<10%), as most just add SSH and secure it and call it a day (even worse when CI/CD scripts just copy files without chowning them). But let's assume it here. In reality, even if you did have this setup this is a skilled persistent threat we're talking about (not quite an APT, but definitely a PT). They already own your website. Your compromised web/SSH server is being monitored by a skilled hacker, it's inevitable they'll escalate privileges. If they're smart enough to put in fake SSH daemons, they're smart enough to figure something else out. Is your server perfectly patched? Has anyone in your organization re-used passwords on your website and gmail?
You're right that these events could happen. But you have to ask yourself which of your actions will have a bigger impact:
* Changing to non-standard SSH port, blocking out ~50% of all automated hacking attempts. Or port-knocking to get >90% (just a guess!).
* Use standard port, but you still have an APT who owns your web server and will find other exploits.
Yep! And I should be clear: I am not saying just don't change the SSH port. I'm saying if you care about security, at a minimum disallow public access to SSH and set up a VPN.
>Unfortunately that's all too rare in most web-servers I see (<10%), as most just add SSH and secure it and call it a day (even worse when CI/CD scripts just copy files without chowning them).
I'm a bit confused here. In every major distro I've worked on (RHEL/Cent, Ubuntu, Debian, SUSE) the default httpd and nginx packages are all configured to use their own user for the running service. I haven't seen a system where httpd or nginx are running as root in over a decade.
I think the bare minimum for anyone that is running a business or keeping customer/end user data should be the following:
1) Only allow public access to the public facing services. All other ports should be firewalled off or not listening at all on the public interface
2) Public facing services should not be running as root (I'm terrified that you've not seen this to be the case in the majority of places!)
3) Access to the secure side should only be available via VPN.
4) SSH is only available via key access and not password.
5) 2FA is required
I think the following are also good practices to follow and are not inherently high complexity with the tooling we have available today:
1) SSH access from the VPN is only allowed to jumpboxes
2) These jumpboxes are recycled on a frequent basis from a known good image
3) There is auditing in place for all SSH access to these jumpboxes
4) SSH between production hosts (e.g. webserver 1 to appserver 1 or webserver 2) is disabled and will result in an alarm
With the first set, you take care of the overwhelming majority of both swarms and persistent threats. The second set will take care of basically everyone except an APT. The first set you can roll out in an afternoon.
>With the first set, you take care of the overwhelming majority of situations.
Choosing between exposing sshd or a VPN server is just a bet on which of these services is most at risk of a 0day.
If you need to defend against 0days then you need to do things like leveraging AppArmor/Selinux, complex port knocking, and/or restricting VPN/SSH access only to whitelisted IP blocks.
If the VPN server has a 0day, they now have... only as much access as they had before when things were public facing. You still need there to be a simultaneous sshd 0day.
I'll take my chances on there being a 0day for wireguard at the same time there's a 0day for sshd.
(I do also use selinux and think that you should for reasons far beyond just ssh security)
Worse, since Wireguard runs in kernel space, if there's an RCE 0day in Wireguard, an attacker would be able to execute hostile code within the kernel.
One remote code exploit in a public-facing service is all it takes for an attacker to get a foothold.
If you are running them all on the same system, then yes, that is a risk.
I am limiting the services to simple storage.
Looks like maintaining a secure self-hosted cloud requires knowledge, effort, and continuous monitoring and vigilance.
A single server run by an individual and serving minimal traffic would have different requirements. It's a much less attractive target, and much harder to do most of those things. For example, it's always easy and a good idea to run SSH with root login and password authentication disabled, run services on non-root accounts with minimum required permissions, and not allow things to listen on public interfaces that shouldn't be. Setting up VPNs, jumpboxes, 2FA, etc is kind of pointless on that kind of setup.
But how much of a threat is this? Who's going to drop an SSH 0day with PoC for script kiddies to use? If it's a bad guy he's going to sell it on the black market for $$$. If it's a good guy he's going to responsibly disclose.
>You're right that these events could happen. But you have to ask yourself which of your actions will have a bigger impact:
>* Changing to non-standard SSH port, blocking out ~50% of all automated hacking attempts. Or port-knocking to get >90% (just a guess!).
But blocking 50% of the hacking attempts doesn't make you 50% more secure, or even 1% more secure. You're blocking the bottom 50% of the barrel when it comes to effort, so having a reasonably secure password (i.e. not on a wordlist) or using public key authentication would already stop them.
If you made a list of things like this which annoy you, I would enjoy reading it.
And with all those compromised servers they could easily scan for sshd on all ports.
Security through obscurity is just some feel good bullshit.
1) SSH is secure enough just by using key based auth to not worry about it.
2) SSH isn't secure enough just by using key based auth so we need to do more stuff.
If you believe #1, then you don't need to do anything else. If you believe #2, then you should be doing the things that provide the most effective security.
Personally, I believe #1 is probably correct, but when it comes to any system that contains data for users other than myself, or for anything related to a company, I should not make that bet and should instead follow #2 and implement proper security for that eventuality.
I'm willing to risk my own shit when it comes to #1, but not other people's.
The range in the figures is surprising. I leave everything on port 22, except at home where due to NAT one system is on port 21.
On these systems, since 1 September:
lastb | grep Sep\ | wc -l
160,000 requests (academic IP range 1),
120,000 requests (academic IP range 2),
1,500 requests¹ (academic IP range 3),
1,700 requests² (academic IP range 3),
180,000 requests³ (academic IP range 3, just the next IP),
80,000 requests (home broadband),
14,000 requests (home broadband — port 21),
5,000 requests (different home broadband, IPv4 port)
0 requests (different home broadband, IPv6 port)
I don't bother with port knocking or non-standard ports to ensure I have access from everywhere, to avoid additional configuration, and because I don't really see the point when an SSH key is required (password access is disabled).
> This has it completely backwards. Security through obscurity's goal is not to signal predators, it's the opposite. The goal is to obscure, to hide. The "signal" is there is nothing here
An attacker scanning the whole IPv4 space won't think "ah, there's no ssh on port 22, there's no ssh to attack". They will think "yep, they did at least the bare minimum to secure their server, let's move on to easier targets".
He proved the point he was trying to disprove.
But I have only anecdotal evidence as well, so my guess is as good as yours.
Now you're absolutely right that this only deters less-skilled/inept hackers, a more competent hacker easily gets past this. But it's worth dwelling on the fact that we still stopped a substantial number of requests. Port knocking is definitely an improvement (i.e. more obscure). I'd guess with port-knocking more than 90% (even 99%) of attempts would completely miss it. The goal here isn't to rely completely on obscurity. It's security in depth. Your SSH server should still be secure and locked down.
The other question with this is what's your threat vector. Most people decry security through obscurity because an APT can easily bypass it. They can, but most people trying to hack you are script kiddies. Imagine an SSH exploit was leaked in the wild – all the script kiddies would be hammering everything on port 22 immediately.
I understand its use as a demonstrative aid but especially in the context of security, hinging your policies on the outcome of a Twitter poll seems like... well, security through obscurity.
But you also have to know that port knocking is enabled at all. That's the obscurity part.
in what way is this different than a passphrase you don't know? i can trivially defeat any password which i already know, too :D
while discovering a non-standard ssh port is easy, discovering a port-knock sequence out of a possible ~65k per knock is impractically difficult (assuming the server has any kind of minimal rate limiting). a sequence of eight knocks will need 65k^8 attempts - and that's assuming you already know which port will be opened, which of course you won't.
you can even rely on a knock sequence of just 3 ports and already get ~48 bits of entropy (~16 bits per port), which is about the same strength as a random 8 char alpha-numeric latin-charset password; the full 8-knock sequence is closer to 128 bits.
(someone plz check my math)
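Checking that math with the numbers above (65k ports per knock, an 8-char alphanumeric password from a 62-character set):

```python
import math

PORTS = 65_536  # possible ports per knock (2**16)

# entropy per knock and for the full 8-knock sequence
bits_per_knock = math.log2(PORTS)      # exactly 16.0
bits_8_knocks = 8 * bits_per_knock     # 128.0

# random 8-char alphanumeric password: 62 symbols (a-z, A-Z, 0-9)
bits_password = 8 * math.log2(62)      # ~47.6

assert bits_per_knock == 16.0
assert bits_8_knocks == 128.0
# just 3 knocks (48 bits) already matches the password's strength
assert 3 * bits_per_knock > bits_password
```

So 65k^8 attempts corresponds to ~128 bits, and the 8-char password comparison actually matches a 3-knock sequence, not an 8-knock one.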
Disagree that port knocking is obscurity. That's a secret.
Security through obscurity would be using a nonstandard SSHD service.
Calling an administrator account “9834753” obscures its purpose and may reduce the likelihood of a compromise attempt as opposed to “databasesuperadmin”. But that doesn’t mean that you don’t need a good security token.
(Security by obscurity) is camouflage, not armor.
We should distinguish obscurity from intentionally hiding the configuration, which makes attackers undertake discovery, and hence can lead to detection. But your internal red team / security review should have all the details available. If loss of obscurity leads directly to compromise then you don't have security. Cf insider threat.
Obscurity is another layer of hiding or indirection: like the owl has camouflage and it has a hole in a tree.
Advertising your fitness (your stotting metaphor) is effective when you are part of a herd and the attacker will only attack the weakest in that herd and then be satisfied. Like double locking your bike next to a similar bike that has a weaker lock.
Computer security is different because usually either:
a) everyone in the herd is being attacked at once (scattergun/IP address range scanning), or
b) you are being spear targeted individually (stotting won’t work against a human hunter with a gun, and advertising yourself won’t help against a directed attack).
An example of advertising your security might be Google project zero, or bug bounties.
That's more akin to a gecko sacrificing its tail, IMO. You're taking a predator that's capable of a successful attack and rewarding them for not doing it, at some cost to yourself. It provides an easy and less risky way of getting paid.
And in general, the sentiment was "if you are using those things, you are likely to have invested time into other tools as well, such as static analysis or sanitizer use", which are not "security through obscurity" in any sense, whereas the "obscurity" that gets security people riled up is the kind where people say things like "nobody can ever hack us because we changed variable names or used something nonstandard", because it is usually followed with "…and because we had security we didn't hash any passwords".
Obscurity really just boils down to a secret that doesn't have mathematical guarantees. It's doing something that you think the attacker won't guess, just like an encryption key, but without the mathematically certified threat model, so you just hope that the attacker is using a favorable probability distribution for their guesses.
However, the psychological warfare video does make me think that there's still a place for obscurity after you've already used actual security measures. If you can find any technique that makes your attacker work harder vs some other target, then it feels like there's an economic value to doing it as long as the cost to you is relatively low.
Many security tools I've used are downright user hostile in how little information they provide the end-user (or the admin!) regarding why an auth process failed. It incentivizes people to simplify or bypass the system entirely when they can't understand the system.
Remove all obscurity, expose all your techniques and algorithms, and set up bounties for people to break your defences.
See eg https://cloud.google.com/beyondcorp and https://cloud.google.com/security/beyondprod where Google gives up on VPNs.
I think this analogy perfectly explains my hostility to security by obscurity. When I see a system that uses standard ports and demonstrates best practices, I think "oh well, they probably know what they are doing." When I see a system using strange ports and/or extraneous crypto, I think "well, maybe this guy is an idiot" and take a deeper look.
edit: In the first sentence "against" is not what I wanted to say: what I wanted to say is that it "downgrades its effectiveness".
I agree that obscurity can and sometimes should be a layer of security.
No one should be applying obscurity to public-facing APIs or anything for which documentation is widely distributed outside the company.
A better example would be Snapchat's intense and always evolving obfuscation strategies: https://hot3eed.github.io/snap_part1_obfuscations.html
Even though someone took the challenge to de-obfuscate most (but not all) of the protections, just look at how much effort is required for anyone else to even follow that work. More importantly, consider how much effort is required relative to other platforms. It's enough of a pain that spammers and abusers are likely to choose other platforms to attack.
The DOD originally chose to create their own CA scheme for financial reasons: over a long enough timeline, new infrastructure pays for itself through expanded capabilities while minimizing operating costs dependent on an outside service provider. This was before CACs were in use.
Both reductions lose practical utility by omitting nuance.
* Avoid wasting your time doing performance optimization until tuning is necessary. But definitely take obvious and easy measures to ensure your software is fast, such as choosing a high-performance language or framework with which you can be productive.
* Don't exclusively rely on obscurity. But definitely take obvious and easy measures that leverage obscurity to add another layer of defense, such as changing default ports, port-knocking, or whatever.
To use the same art of reduction to counter the common interpretation: A complex password is, in a manner of thinking, security from obscurity. Your highly complex password is very obscure, hence it's better than a common (low obscurity) password from a dictionary.
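Putting numbers on that comparison (the wordlist size and character-set counts below are illustrative assumptions):

```python
import math

# Entropy of a password drawn from a cracking wordlist versus a random
# complex password. Sizes here are illustrative.
wordlist_size = 100_000                       # typical cracking dictionary
dictionary_bits = math.log2(wordlist_size)    # ~16.6 bits

charset = 26 + 26 + 10 + 32                   # lower, upper, digits, symbols
random_12_char_bits = 12 * math.log2(charset) # ~78.7 bits

# the "obscure" password is vastly harder to guess than the common one
assert random_12_char_bits > 4 * dictionary_bits
```

In both cases the defence is a guess the attacker is unlikely to make; the random password is just obscurity with quantifiable odds.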
Except that can lead to operational problems down the road. For example "oh yes, we're nice and secure, not only do you need a 512bit private key to get into this device, you also need to connect from a secure network"
Then along comes covid, and you can't get into the building.
"Oh dear, you're not on the secure network, you can't come in"
So you spend 2 hours (while your network isn't working right and you're losing customers) finding and getting in through a back door.
The failure in that case is only that the admin didn't consider that normal work might at some point be done from home, or that the middle or upper manager thinks he should be able to freely administer his critical infrastructure from anywhere...
It's so common the security community should make it a meme to spread awareness: Don't get pwned by DHCP while running from SSH 0-day RCEs.
So can lack of security.
in the example you mention the 'security' is working by design, but the operational parameters changed which in turn made that security model unsuitable - so it is the parameter change, rather than the 'security' is what led to the problems.
The original system could have been just as 'obscure' but also included an appropriately secured mechanism that allowed for this kind of remote access / disaster scenario.
I try to tell people this, when they pooh-pooh port knocking, but they just don't get it.
So I confess I still don't "get it". Unless you just want cleaner logs or something. I assume you're still getting the same number of initial connection attempts per day, but just not recording them?
Is it something to do with network or CPU consumption related to failed subsequent attempts by the same actor? (Which, the same as port knocking, should be rate limited anyways?)
Of course you have to update at some point. However, if someone drops a zero day on your SSH server while you're asleep you're probably glad that you've got a secret sauce to protect your server, letting the vulnerability bots focus on other servers.
The issue is there are other options that are better - like VPN only access to SSH - that you can use instead of (or in addition to) port knocking.
If everyone advocating for port knocking was also saying set up VPN only access, sure. It's an additional authorization factor, where ports are used as a proxy for a PIN. But I haven't seen a single person in here saying they use it in addition to a VPN - people are saying it's their primary form of protection.
You can setup a wireguard VPN in as much time as it takes to set up port knocking. Now you have all of the benefits port knocking provides, and more. And you could even still set up port knocking in addition to the VPN if you really wanted to, but I would argue there's not much point.
I'm thinking it could be pretty impractical to go onto a whole other network to open an SSH session.
It depends on the implementation. For a client <-> server VPN, it creates an interface on your local machine that corresponds to the network address range for the VPN, and tunnels traffic to the remote end.
For a site to site VPN, two appliances create a tunnel between them, and traffic is routed over that tunnel via the same sort of routing rules you normally use.
> Is the VPN connection setup for the SSH session only?
It can be. It can also be configured for all traffic, or some other combination.
> What if someone needs to have multiple SSH session, going to different networks altogether?
You can have multiple VPN connections to multiple networks. It can get complicated if the VPNs are using overlapping IP space.
> I'm thinking it could be pretty impractical to go onto a whole other network to open an SSH session.
I'm not entirely sure why. Millions of people use VPNs every day for a variety of reasons, including SSH. I currently have 8 saved VPN configurations in my wireguard client, and connecting to one is as simple as clicking on the client and picking the one I need in the dropdown. Then I SSH as normal, except it's to the server's private IP and not public.
I think the main concern with port knocking is that it's observable. You're effectively sending your password in clear, so if someone can intercept or overhear your traffic then your secret is lost. Cryptographic authentication schemes like SSH itself or VPNs do not have this problem.
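A toy sketch of the difference (fwknop-style single packet authorization is the real-world version of the cryptographic side; the secret and time window here are illustrative assumptions):

```python
import hashlib
import hmac

# A plaintext knock sequence is replayable: anyone who observed it can
# reuse it verbatim. A cryptographic scheme binds each attempt to a
# shared secret and a timestamp, so an observed packet is useless later.
SECRET = b"shared-secret"  # illustrative; provisioned out of band in reality

def make_auth_packet(now: int) -> bytes:
    msg = str(now).encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest().encode()
    return msg + b"." + tag

def verify(packet: bytes, now: int, window: int = 30) -> bool:
    msg, _, tag = packet.partition(b".")
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest().encode()
    fresh = abs(now - int(msg)) <= window
    return fresh and hmac.compare_digest(tag, expected)

t = 1_000_000
pkt = make_auth_packet(t)
assert verify(pkt, t)            # accepted while fresh
assert not verify(pkt, t + 120)  # replaying the observed packet later fails
```

An eavesdropper who captures a plain knock sequence owns the secret; one who captures the HMAC packet gets only a stale, single-use token.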
A VPN has upsides and downsides. It obviously protects your server a lot better against directed attacks, but when you lose your laptop or when your computer gets ransomware'd, you can't get access to the server anymore.
Furthermore, code execution vulnerabilities have been found against VPN servers because of their immense complexity and OpenVPN can consume quite a lot of resources for a daemon doing nothing. WireGuard has changed the VPN landscape with its simplicity, but if you fear your server may not be updated all too often (because it's partially managed by a customer, because your colleagues might not care to do so after you leave), leaving a simple solution behind can have its upsides.
I'm not advocating that everyone should enable port knocking on their servers to make them secure or anything, but the "port knocking is always bad" crowd is often very loud despite the fact that there are small little ways port knocking can improve security with very little effort or increased attack surface.
Why is that so hard to grasp. Still boggles my mind.
As far as I know, there isn't another GitHub/GitLab compatible way to do this. So I'll keep using GPG until there is.
Also, an informed analysis of PGP: https://latacora.micro.blog/2019/07/16/the-pgp-problem.html
Show me an ephemeral encryption scheme for something like that, which needs to remain readable in the future.
This analysis is highly uninformed I would say.
That’s not how analyzing algorithms or programs work. Even a basic threat model is missing.
- Assume for every code obfuscator there's a deobfuscator or at least someone as clever as you out there.
 - Randomizing variable names is just a nuisance; it won't stop any competent pen tester or attacker.
- Encrypting the database is an odd one. Your program will also have to decrypt the data to use it. Where do you store the encryption keys? In your code? Don't assume obfuscating your code and/or randomizing variables will protect your encryption keys.
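The key-storage problem in that last point is easy to demonstrate with a toy (XOR stands in for a real cipher; the data and key are made up):

```python
# Toy illustration: if the decryption key is embedded in the program,
# anyone with the source or binary has the key too. XOR is a stand-in
# for a real cipher; the weakness shown is key storage, not the cipher.
EMBEDDED_KEY = 0x5A  # "hidden" in the code -- but it ships with the code

def toy_encrypt(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)  # XOR is its own inverse

plaintext = b"db row: alice, 555-0100"
ciphertext = toy_encrypt(plaintext, EMBEDDED_KEY)
assert ciphertext != plaintext

# an attacker who extracts EMBEDDED_KEY from the code recovers everything
assert toy_encrypt(ciphertext, EMBEDDED_KEY) == plaintext
```

Obfuscating the code raises the cost of finding `EMBEDDED_KEY`, but it cannot remove it from the attacker's reach.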
As to whether this is good or bad advice, that depends on how expensive these things are (e.g., encrypting database fields may be very expensive if you write raw SQL calls as your primary DB interface but may be dirt cheap if you're using an ORM that has it as a built-in feature) and your local threat model (e.g., "dedicated, personalized attackers reading your source" is very different from "does it defeat automated scanners?"). You can't know whether these are good or bad ideas without that additional context.
This is something that bothered me quite a bit in Bruce Schneier's various comments on airline security. He repeatedly wrote that profiling young Arab men as likely terrorists was pointless, because if it became harder for young Arab men to get through security, terrorist organizations would simply start sending Japanese grandmothers.
But of course where it's relatively easy to find young men willing to die for a cause, it's much more difficult to find grandmothers who will do the same. And where it's relatively easy for an Islamic group based in the Middle East to connect to Arabic social networks, it's much harder for that group to connect to Japanese networks.
(All real examples)
It's about improving the odds/reducing the exposure, not achieving some theoretical absolute perfection.
If you calculate the probabilities correctly, you get very different results.
Suppose you take down a plane with a young Arab man, and then you want to take down a second plane. There is a neverending stream of similar men willing to do the job. If your strategy requires you to use elderly Korean couples, you're done after the first plane -- you'll never find a second one.
All other things being equal, the opportunity cost will shift towards targets that have less elements akin to "security theater", since it's basically 'money on the table' to de-risk the attack.
So, the real question to ask about "security theater" is not if it has a material impact on human safety with flying, but if its deterrent effect pushes risk to places we'd rather it not go or if the costs of performing it do not outweigh this deterrence benefit. Given the potentially paralyzing effect it would have on the global economy if air travel were covered in a blanket of fear of flying, it's hard to argue that "decentralizing" this risk to other targets is a bad idea.
Focusing most of the security effort on Arabs is a good way to fight the war of 19 years ago, but it leaves the air travel system vulnerable to upstart terrorist movements that see the lack of universal security as an exploitable vulnerability.
For example, there's nothing to say that America's right wing terrorist groups won't decide to switch from shootings and vehicle ramming attacks to attacks on air travel. The TSA ought to be prepared for this, or any other, emerging threat.
Ours is an industry with a lot of people "on the spectrum".
The other thing that’s harmful is relying on something to provide security, when it actually can’t. That’s actually going to have a negative impact on your threat model. People will say (they’re even saying it in this thread) that their port knocking or non-standard port usage has cut out the port scanning noise in their logs. But who cares? A properly secured ssh port isn’t going to be cracked by an automated scanning tool. But a poorly secured hidden one will be easily found and cracked by any motivated attacker. You have to implement the proper control anyway, and the obfuscation one ends up providing no benefit while simply annoying your users.
Security by obscurity is dumb; it doesn't provide any benefit. Defense in depth doesn't mean that multiple layers of controls that don't work add up to one that does. Obscurity is just a way of spending your scarce resources, and your scarce claim on your users' attention, on controls that don't work. So in reality, it always comes at the opportunity cost of controls that actually do.
I would ask by how much.
Having to perform source audits on code with obfuscated variable names added almost no time to the task.
Again, these methods work against not-so-determined attackers. If you as a defender have limited resources, where would you choose to spend it--on defending against unskilled attackers, or attackers that are more likely to cause you damage?
>but when the costs to those actors exceed the loss you may experience.
There are several problems with this logic. First, it kind of presumes that there is a symmetry in the costs for the attacker and defender. Wise defenders will use methods that have high leverage. Also, the attacker doesn't care at all about your costs. They care about what they can get from you--whether it is access to something that you aren't thinking of, or your crown jewels.
Encrypting databases is sometimes required by compliance, but is no defense against a good attack.
Sure, it increases costs for a certain subset of attackers. Instead of sending easily found and trained young Arab men, they have to put more effort into recruitment. However, in return for that, they get far reduced scrutiny.
Therein lies the problem. It is the real-world equivalent of dropping all packets from a country instead of properly analyzing the packets. You'll stop the low-cost automated garbage attacks, but you won't stop a dedicated attacker, even if the attacker is in that country.
There is still some information lost in the process:
- "let eigenvector_coefficient = 23" => "let x = 23"
A de-obfuscator isn't going to be able to recover the valuable information contained in the original name. Will it stop a determined attacker? Maybe not, but it would surely slow them down as they now need to spend an order of magnitude longer trying to understand what the code is doing.
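As a sketch of what this kind of renaming does, here is a toy identifier obfuscator. The snippet and the name mapping are invented for the demo, and ast.unparse needs Python 3.9+:

```python
# Toy identifier obfuscation: rewrite every occurrence of a name according
# to a mapping. Real obfuscators also mangle attributes, strings, etc.
import ast

class Renamer(ast.NodeTransformer):
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        # Replace the identifier if it appears in the mapping
        node.id = self.mapping.get(node.id, node.id)
        return node

def obfuscate(source, mapping):
    tree = Renamer(mapping).visit(ast.parse(source))
    return ast.unparse(tree)

print(obfuscate("eigenvector_coefficient = 23",
                {"eigenvector_coefficient": "x"}))
# prints: x = 23
```

The transform is trivially reversible as code, but the semantic information in the original name is gone for good, which is exactly the point being argued above.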
> Randomizing variable names it just a nuisance, it won't stop any competent pen tester or attacker.
Believe it or not, nuisances are enough to stop some people. A lot of would be attackers are just cruising for low hanging fruit.
Remember, the goal is to "reduce risk" and not "stop any highly skilled targeted/tailored attack". Because let's face it, even if you are the greatest crypto wizard in the world, you will fall victim to a highly sophisticated attack tailored specifically to you.
It is not "some people" that I worry about. I worry about attackers with a level of skill.
As I noted elsewhere in the thread, I have audited obfuscated code, and the obfuscation is only a speed bump. I can only presume that attackers are at least as capable as I am, so obfuscation is effectively a non-issue. And it is not an order of magnitude. This is another example of a developer thinking that this form of obscurity has real value. Reviewing the code will tell you whether eigenvector_coefficient is really what it claims to be, or whether it morphed into something the developer didn't originally intend.
Also keep in mind that code reviews approach code from a totally different angle than a developer would either developing or during a code walkthrough.
Developers often have some idealized notion that an attacker is going to need to piece their program logic back together and try to decode the purpose of each obfuscated variable in order to find a hardcoded password/value.
In reality an attacker is just going to dump strings and try them all or simply set a breakpoint just before the important syscall and let your program do the work. Code obfuscation provides little to no value for these common methods, yet we cannot resist the urge to list it as a bullet point in security meetings, leading to a false sense of security.
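The "just dump the strings" attack from the comment above can be sketched in a few lines; the binary blob and the secret in it are invented for the demo:

```python
# strings(1)-style extraction: printable-ASCII runs survive identifier
# obfuscation, so hardcoded secrets fall out immediately.
import re

def dump_strings(blob: bytes, min_len: int = 4):
    """Return printable-ASCII runs of at least min_len bytes."""
    return [m.group().decode()
            for m in re.finditer(rb"[ -~]{%d,}" % min_len, blob)]

# A made-up "compiled" blob with a made-up hardcoded secret inside it
binary = b"\x7fELF\x02\x01\x01\x00" + b"\x00payload\x00" + b"\x00S3cretPassw0rd!\x00"
print(dump_strings(binary))
```

No reverse engineering of the program logic is needed; every literal in the binary is a candidate credential to try.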
I knew nothing about this topic in general, but elsewhere in this thread there was a link to a blog post about obfuscation methods used in a piece of commercial software. One item was a function that detects a breakpoint, obfuscates its boolean return value so you can't tell if it did, and makes the program hang when it does. Pretty neat.
I think your (and my) ignorance of such methods is evidence that they probably are reasonably effective, even though when explained, they're not quantum physics.
Even OP's advice about running services on non-standard ports isn't sound. Who doesn't run a service scan? Even sites like Shodan do service discovery for you. I'm going to find whatever port you're running ssh on, if you're running it.
I still think it's a good idea. With SSH on port 22, ten thousand bots plus an attacker try to hammer it (so says fail2ban). With SSH on port 9278, zero bots plus an attacker try to hammer it. By throwing away 99.99% of the chaff, you can see the remaining wheat you care about.
Changing SSH ports isn't about saying "yep, we fixed it!" and calling it a day. It's about decreasing the amount of stuff you have to deal with, which is quite useful. It's something you can do in addition to everything else that gives a decent bang for its buck. No, it doesn't keep you out, but it does keep out those thousands of bots crawling around looking for an open 22 to pester.
> So let’s talk about security by obscurity. It’s a bad idea to use it as a single layer of defense. If the attacker passes it, there is nothing else to protect you. But it’s actually would be good to use it as an “additional” layer of defense. Because it has a low implementation cost and it usually works well.
I think it's good to do those things in addition to the other stuff. Obscurity isn't sufficient by itself, but is another layer of defense.
Not if you have to port knock before the ssh port is open to new connections.
For all you know you have to knock ports 22, 46, 1776, and 8998 to the timing of "shave and a haircut" switching between udp and icmp along the way... Good luck, the entropy you have to overcome is astronomical.
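A minimal TCP-only knock client is only a few lines. This sketch uses the hypothetical sequence from the comment above and skips the UDP/ICMP switching and timing tricks, which real knockd configurations can also demand:

```python
# Minimal TCP port-knock client (illustrative; the sequence is made up).
import socket
import time

def knock(host, sequence, delay=0.2, timeout=0.5):
    """Fire one connection attempt at each port in order.

    Refusals and timeouts are expected: the knock daemon only watches
    for the SYNs arriving in the right order.
    """
    for port in sequence:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            s.connect_ex((host, port))  # ignore the result on purpose
        time.sleep(delay)
    return list(sequence)

# knock("bastion.example.com", [22, 46, 1776, 8998])  # then connect to sshd
```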
Sure it will. Imagine that your old, unpatched Wordpress admin is at /random-gobbledygook instead of /wp-admin. An attacker would have to try to hit random alphanumeric directories of your webserver over and over again, hoping that he stumbles across a specific thing that they can attack. This is completely impractical, unless they're somehow clued in that the URL exists.
It's really about making life difficult for an attacker, so much so that they will simply give up, or find an easier target. That can be achieved by throwing up a series of difficult/obscure barriers, each which makes it less likely you'll be trivially penetrated.
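The guessing problem in the /random-gobbledygook example is easy to quantify; the 16-character length and alphanumeric alphabet here are assumptions for the sake of the arithmetic:

```python
# Search-space arithmetic for a hidden admin path.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits   # 62 symbols

def random_admin_path(length=16):
    """Generate a hard-to-guess path like /random-gobbledygook."""
    return "/" + "".join(secrets.choice(ALPHABET) for _ in range(length))

print(62 ** 16)   # ~4.8e28 candidate paths an attacker would have to enumerate
```

At even a million requests per second, exhausting that space takes longer than the age of the universe, which is why blind directory brute-forcing relies on wordlists of common paths rather than enumeration.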
You could basically encode DOS commands in the URL bar for a site running IIS and it would run remotely.
The automated attack basically replaced the index.html pages. But if you didn't use the default pages, it didn't have any effect.
Then it filters out people who are not using a deobfuscator or are less clever than I.
Then it will stop incompetent pen testers.
I don't see how your comment refutes the point made. The point is not that it makes your likelihood of attack zero, it just reduces the likelihood via adding more roadblocks.
So it does eliminate incompetent ones? That's kind of the point of the article.
So, what's the gain?
But it will stop incompetent attackers - of which there are many. In fact, they are the vast majority.
None of those 'obscurity' techniques will stop a targeted attack. That's not their function. But each of them raises the bar. The more hoops, the better.
Making sense out of code obfuscated this way is really hard for humans, but it will compile or interpret just fine so long as your obfuscator obeys the rules of your language. (We started on this at one of my early startups nearly 20 years ago, but didn't get funded soon enough for protecting the IP in our unique JS to matter. It was unique enough that we actually applied for a patent on part of it: drawing a 16-trace live strip chart of data from network sources at better than 4-10 Hz per channel was really hard with the browsers and computers of 2002!)
Yes, most people can grab some bolt cutters, snip, and bike off. Yet so many bikes remain unstolen with extremely weak locks.
The vast majority of attacks are crimes of opportunity. Hackers aren't generally trying to target a single company or computer for a bot net, they are looking to get as many as possible. Almost any amount of effort above and beyond the typical will cause them to jump past you as a target.
Back to the bike lock analogy. Again, most locks can be bypassed, but getting one that requires an angle grinder will almost certainly ensure that your bike won't be stolen (why steal that bike when there are 20 with simple wire locks?). Add 2 locks and you've got a bike that will almost never be nicked.
This video can teach you a LOT about software security.
This. Putting a tarpit on port 22 isn't going to stop an attacker, but it will slow the ssh scans down for everyone.
Security and obscurity: if you make something secure and then obscure information about that system from an attacker that can increase the security. However obscurity is often organizationally expensive and very fragile. A key can be rotated, but changing how something functions is very hard to rotate.
For instance, the port 22 example. Suppose you have a bastion host. SSHD running on port 22, root password disabled, passwords disabled (only SSH keys), no other services running, all other ports filtered/closed. It should be fairly secure, even if exposed to the internet, right?
Now you can change the port. Change the SSH banner and hide the version. Add some port knocking. And so on. None of these measures would work by itself, but they will discourage non-targeted attackers.
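A sketch of the sshd_config directives behind the bastion described above. The port number is illustrative, and note that stock OpenSSH always announces its version in the protocol banner, so VersionAddendum none only trims the suffix rather than hiding the version entirely:

```
# sshd_config sketch for a hardened bastion (port number is an example)
Port 9278                    # non-default port cuts untargeted scan noise
PermitRootLogin no           # root login disabled
PasswordAuthentication no    # SSH keys only
PubkeyAuthentication yes
VersionAddendum none         # trims the banner suffix; the OpenSSH version
                             # string itself is still sent by the protocol
```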
Someday, someone will have problems connecting and waste half a day debugging it before they realize what is up.
Or troop movements during war... Sure, the locations can be figured out, but by not broadcasting locations that's more work for the enemy and thus a bit more secure.
Obscurity is absolutely a key piece of security, because it adds the complexity of discovery.
It works for the military, for spy agencies, and governments.
If obscurity didn't have any benefit, then the military's latest weapons wouldn't be tested in the Nevada desert, or some remote island; they'd be tested in Illinois, or off the coast of Long Island.
Of course, if any of these things are known the entropy drops to zero... Just like a private ssh key that gets pwnd.
All too often I see tickets on open source projects asking for changes to allow better obfuscation, which are then denied using the mantra "obscurity is not security".
They all add bits of entropy to a security and/or threat model that maintainers ignore.
> Instead it was originally meant as "if your only security is obscurity, it's bad".
Since all security is essentially "through obscurity" somehow, I would simply reframe that into the onion model. Good security is like an onion, it has many layers. When you only have one layer, that's bad security.
A better example of security by obscurity would be to, for example:
* Flip all the SSH bits or XOR it with some long key.
* Encapsulate SSH inside another protocol, such as websockets over HTTP port 80, or embedded inside what look to an outsider as cat pictures being sent over HTTP.
* SSH over TCP over Skype video.
Incidentally, any of these methods work well for confusing China's firewall and keeping the SSH connection alive, and would probably confuse hackers as well for a little while. They could all be implemented in a router box that doesn't affect your actual deployment.
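The first bullet (XOR the stream with a long key) is simple enough to sketch; the key here is invented, and a real relay would have to carry the key offset across chunks of the stream:

```python
# Byte-wise XOR with a repeating key: a symmetric transform applied on both
# ends of the tunnel. This is obscurity only; SSH's own cryptography still
# does the real protection underneath.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

wire = xor_bytes(b"SSH-2.0-OpenSSH", b"longsecretkey")
print(wire)                                # no longer readable as an SSH banner
print(xor_bytes(wire, b"longsecretkey"))   # applying it again restores the data
```

Since the same function encodes and decodes, the wrapper can sit in a small router box on each side, exactly as the comment suggests, without touching the actual deployment.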
This last year, I found out about knockd and if that isn't some awesome shit, I dunno what is. Yet, there are plenty of articles saying, incorrectly, how it's awful. It is simply another layer of security on top of everything else you have. Like you said, security by obscurity is more about making it fucking slow, irritating, tedious, and without any sense of reward. "Aha! After only a week, I've figured out you're port knocking! Oh shit... wait, you still totally have the server properly locked down. FML." Because after each "obscure" layer there is a "real" layer of security and hopefully those all those real layers buy you the time to detect and prevent the threat.
The port knocking itself may actually be the strongest link in the chain, despite being a form of obscurity, provided the population of targets in your "value pool" is large enough that there are always plenty of others without knocking enabled: attackers will bounce to those once they realize your ports are knocked.
no, not really. what it means is: every important system has attackers trying to exploit it. finding an exploit is a series of hunches while probing the system as a black box, and you need just one; meanwhile a defender has to be methodical enough to find them all.
given the differences, obscurity removes the defender's ability to systematically analyze the system, while for an attacker it remains as much of a black box as it was before.
Kerckhoffs's principle, though, is a good way of describing how a secure cryptosystem should behave. That is what people should have in mind.
Obscuring will just add some delay as you state, but it might be irrelevant in many situations.
Obscurity instead of Security is bad too.
If that was the only security it’d be terrible.
But not having 1000s of bots pounding on the door saves a lot of headaches.
The only thing I would add is that it also needs to be maintainable - the obscurity should not impede the maintainer's understanding of the implementation.
I've seen too many instances where obscurity is used to justify weak primary layers (i.e., it's fine that we're using this single-word shared password since we have all these other layers). It can often provide a false sense of security, since it looks like a security layer when in reality it often turns out to be a minor inconvenience to an experienced attacker.
But SSH is a terrible example, because the cost to the defender of simply not having SSH vulnerabilities is the same, or even less, than the cost of obfuscating it with nonstandard ports, "port knocking", or fail2ban, which are all silly ideas.
Just use SSH keys, and disable passwords.
I think maybe it comes down to this: dialing attacker costs up incrementally can make sense if it's the most cost-effective way for a fully-informed defender to improve security. But incremental cost-increasing countermeasures aren't a substitute for sound engineering; you don't get to count "having to learn stuff" as a valid defender cost.
I know who I am arguing with here but port knocking is not silly. It's fantastic.
When I say fantastic, I don't mean it solves all of our problems and obviates any other protections ... what I mean is, for almost zero cost it adds a non-zero level of actual protection.
As a lifelong UNIX sysadmin, it is one of the few totally unalloyed security improvements that I have been able to add to my systems. I believe there are sshd vulns extant that you and I don't know about and port knocking allows me to worry less about them.
I also recommend SMS alerts on successful knocks - SMS alerts that you should never see in surprise. This is trivial, by the way, as you can put semicolons in the knock command:
/sbin/ipfw add 01021 allow tcp from %IP% to 10.0.0.10 22 setup ; /usr/local/sbin/timestamped_sms 4155551212 "knock from %IP% - "
If you believe there are unknown OpenSSH attacks, you can't coherently believe that port knocking is a real defense, since port knocking doesn't do anything to protect the SSH channel that attacks will be carried out in.
Instead, if you're actually worried about OpenSSH vulnerabilities, you shouldn't be exposing SSH to the public Internet at all. I'm not super worried about OpenSSH server vulnerabilities, but I would never recommend that teams leave SSH exposed; they should just hide that stuff behind WireGuard.
Wrong, it solves tons of them.
>adds complexity and cost
Almost zero complexity and cost. Maybe if you're bad at sysadmin work it adds cost and complexity.
>defense without corresponding increases to attacker costs.
It adds a _huge_, almost incalculable cost increase to attackers.
>If you believe there are unknown OpenSSH attacks, you can't coherently believe that port knocking is a real defense, since port knocking doesn't do anything to protect the SSH channel that attacks will be carried out in.
Looks like you don't understand the concept of 0-days. Several CVEs were listed elsewhere. I suggest researching 0-day exploits so you understand how port knocking mitigates them.
Port knocking mitigates 0-days.
>Instead, if you're actually worried about OpenSSH vulnerabilities, you shouldn't be exposing SSH to the public Internet at all.
I don't disagree here; a VPN is a great solution. Nonetheless, for some shops simple port knocking on a bastion host solves a lot of these issues and removes the complexity that VPNs add.
>I'm not super worried about OpenSSH server vulnerabilities, but I would never recommend that teams leave SSH exposed; they should just hide that stuff behind WireGuard.
No one is super worried about things like Shellshock, Heartbleed, etc. until they happen.
Port knocking solves a lot of problems, protects you from zero-days, and makes SSH noise a non-issue (huge signal-to-noise gains).
Used in production for years. It's fantastic.
All it takes is me somehow being able to listen in on your traffic - not even decrypt it - and now I know the knock sequence. I know that you have SSH listening on that server. I know you are actively doing something on it.
vs. a VPN where... all I know is you are communicating over a VPN. With DPI I might be able to determine what type of traffic you're sending, but not where it is ultimately going.
Port knocking is not the christ child that will wash away all of our sins ... and therefore is not worth implementing.
It doesn't add that much. But it's non-zero and has almost zero cost. It's very elegant, in my mind, and it makes me very happy.
I do agree, nobody should be going to sleep at night, relying solely on obscurity as their source of protection. But these commenters are offering it as an additional layer of indirection. They're not touting it as _the_ solution, full stop.
At the most basic level, would you refute the claim that port knocking or alternate ports are adding additional friction for an attacker, or no?
Myself, I would prefer to run a simple, (hopefully) set-and-forget daemon on my server if it really did add an extra layer of obscurity to my secured SSH service.
I guess I just fail to see why it's one against the other.
Foremost, there is an opportunity cost to setting it up. The time you spend setting up port knocking could be spent setting up another form of security. I believe it is a sound argument that a VPN provides more security for a similar level of effort. With no public SSH, an attacker cannot learn from a port scan that SSH is running on the server, because it simply isn't listening. It also lets you reduce the attack surface: you can add more and more servers that you need to SSH into, but public access only ever goes through your VPN, so you have fewer potential ingress points and can ratchet up your security and auditing commensurately. And if your VPN concentrator is owned, you should have set things up so that nothing implicitly trusts someone just because they are on the VPN, so you still have all of your usual security measures in place.
In that case, there's just not much point. You could also enable port knocking, but I don't think it provides much benefit.
That brings us to the next part. Port knocking is a "weird" thing. It's idiosyncratic and not in standard use. Documenting and understanding it is additional overhead, and it's something you have to manage and worry about on every server that uses it. Additionally, both standard and SPA implementations are vulnerable to man-in-the-middle attacks, though most SPA-based implementations require an active MITM that blocks the initial packet, rather than one that just replays a knock sequence. So: extra complexity, less security, and an oddity on the network that you have to document and explain to new team members, etc.
If you're a single person managing a single server, well, honestly you're probably fine just turning off password auth. And you can feel free to do port knocking and whatever else. It probably doesn't matter.
It sounds like port knocking and VPNs, while starkly different in design, have some overlap in the threats they mitigate.
Wireguard et al are much better equipped to handle the needs of an organization, while port knocking's value trends to smaller teams, or even individuals.
I wouldn't want to manage knock rotations for 600 employees, for example.
* A Java, Python, or Ruby app server
* Stock nginx
----- starts to get really unlikely right here ----
* The Linux IP stack
I can't comment on that.
I'm not a UNIX sysadmin because it's a rewarding career path with excellent opportunities for advancement.
I'm a UNIX sysadmin because I truly love doing it and always have. I would do it for free.
I think the reason I continue to prefer (and evangelize) port knocking is that the intersection of (modest) security gain and simplicity/robustness hits a sweet spot for me.
Again, 10+ years in production on many hosts, worldwide, and never so much as a blip. If knockd were to fail, it would fail in a very boring way. VPNs, on the other hand, are far more complex, and they fail in fascinating ways.
I am a sysop turned sysadmin - this is my life's work. I prefer simple, unixy tools that fail in boring ways :)
Simple systems tend to fail in boring ways. Complex systems tend to fail in interesting ways. Learning more about a complex system, while rewarding in many ways, will not change that identity.
While I fully agree that portknocking doesn’t provide the same layer of protection or flexibility a VPN does - but with the original article in mind: if your reason for deploying a VPN is because you fear to expose unknown bugs in sshd to the Internet the same could be said about every vpn solution.
Therefore port knocking is (or would be) indeed more elegant, because:
- it makes no promises to be secure (as in: as secure as a VPN)
- one could argue: if you use it, you know port knocking is just an additional security layer, and maybe you don't get lazy the way you might with a VPN
- a misconfiguration, a bug, or an attacker might expose sshd on your hosts; a misconfigured VPN, at least in a somewhat sizeable deployment, can lead to countless attack surfaces
Having said that, that only will work if the rest of the sshd security is in check and your password isn’t hunter2
>If your reason for deploying a VPN is because you fear to expose unknown bugs in sshd to the Internet the same could be said about every vpn solution.
Yeah. You might have a VPN zero day - but then you still have to get into the other SSH servers. Two zero days simultaneously active for openssh and your VPN solution? Pretty unlikely, especially public ones. Someone burning two private zero days on you means you're an incredibly high value target and neither of these would suffice as your sole defense to begin with.
The rest of your argument, if I'm understanding it correctly, is that you think people will get more lax with securing SSH on a box only reachable via VPN than if it was reachable by port knocking? It's possible, but I don't know that the evidence really shows that - lots of comments on this article are along the lines of "i set up port knocking and I've never even seen a malicious ssh connection attempt since then!" - no details of the rest of the security measures they've got in place.
And yeah, going from 'I set up a VPN to connect to my web servers via ssh' to 'I have VPN access to a whole network with all sorts of things running on it' is a big step up, but I don't think it's really in the boundaries of this discussion. Port knocking was never going to be a replacement for a larger VPN deployment, and when you're opening up network access to a wider range of things then how you approach things definitely needs to change.
So yes, if we want to be fair, we have to compare an in-host defense system like port knocking (which has one job: secure sshd) to an in-host VPN setup more like the often-mentioned WireGuard.
And in this configuration I completely agree. I still think a VPN is more likely to expose security-critical bugs than knockd is, but as you said, that should only grant access to your next layer of defense (namely sshd). And if you're a really valuable target and a three-letter agency is willing to throw every weaponized exploit they have at you, then you're even more correct: they would have a far easier time intercepting your port-knock sequence and throwing all their quantum computation power against your sshd keys.
> The rest of your argument, if I'm understanding it correctly, is that you think people will get more lax with securing SSH on a box only reachable via VPN
The argument I was trying to make is that while a VPN (the way we described it here, as an in-host security layer) is in every way a really good idea, I have yet to see it rolled out that way.
I come from a more traditional sysadmin setting, and most sysadmins I worked with would find implementing this "correctly" too tedious and would either
a) terminate the VPN connection at the rack or co-location "border" and shove a bunch of servers down a single VPN connection, or
b) terminate every server's VPN connection at a single VPN concentration point.
Regardless of which, in virtually all cases that I know of, no thought was ever given to intra-VPN firewall rules or to allowing only certain ports on the VPN. Most of the time you take the servers that are somewhat related, shove them into a subnet, expose that subnet via VPN, and you're golden.
And so from my practical experience, I would think that a compromised VPN in my reality would be worse than an exploited knockd, but only because it isn't scoped to the same level.
On a sidenote: I'll guess that modern orchestration tools make it pretty easy to roll out knockd and / or wireguard pretty easily in the discussed fashion - it's just I don't get to play with those.
That was a lot of text, just to say I agree with you - but hey, I guess agreeing on something on the internet is somewhat nice so have a great day.
VPN doesn't magically fix all problems.
wireguard is good so far, but the kernel implementation is in C, so who knows.
I do agree with the author of the original article that security should come in layers
Once something is secured with SSH and a VPN, you've got that many more actual layers - you now need a CVE that allows access or credential leak for both the VPN and SSH. (And many of those CVEs don't necessarily allow a random attacker to arbitrarily gain access)
https://news.ycombinator.com/item?id=24446919 has my list of what the bare minimum SSH protections should be for anything where you are storing customer/user data in my opinion, as well as additional best practices that I have employed.
Can a theoretical attacker intercept a port-knocking sequence? Maybe. Would a script kiddie running a new ssh 0day against the entire internet be able to do this? No.
If it's your private pet server - sure. In larger networks you have to document the access, manage the allowed ports on the network, configure security groups or equivalent on instances, provide alternative steps for people with unusual clients (for example database UI app proxying over SSH), etc. The cost suddenly becomes very non-trivial.
That's interesting, that's the first time I've heard a justification for port knocking that actually makes sense to me.
I'm curious for others' thoughts here -- are non-public vulnerabilities something you consciously try to mitigate? So that, for example, using 2 different 8-character passwords that are implemented with different technologies, is therefore fundamentally more secure than a single 16-character password? Precisely so that a vulnerability in one is still protected by the other?
To me this feels like it's really only applicable if you need to protect your data from hostile governments targeting you specifically, who might actually have zero-days they have weaponized.
However, if you're just trying to protect yourself from everyday hackers or even targeted corporate espionage, is unknown vulnerabilities really something that's realistically worth protecting oneself from? (Assuming you're always installing all security patches.)
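For the brute-force half of that two-passwords question, the arithmetic is easy to check. This assumes a 95-symbol printable-ASCII alphabet and, crucially, that each 8-character password can be verified independently of the other:

```python
# Worst-case guessing cost: one 16-char password vs. two independent 8-char
# passwords (95 = printable-ASCII alphabet size, an assumption).
ALPHABET = 95

one_16 = ALPHABET ** 16        # exhaust a single 16-char password
two_8  = 2 * ALPHABET ** 8     # exhaust two 8-char passwords one at a time

print(one_16 // two_8)         # the single password is ~3.3e15 times costlier
```

So against pure guessing, splitting the secret is dramatically weaker; the case for two mechanisms rests entirely on vulnerability diversity, not on entropy.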
Sure, a small percentage of adversaries are in neither category, and a random hacker dedicated to hitting your specific server may suspect port knocking and could try to circumvent it, but most companies don't have an adversary like that, and even if they do, you've made it harder for them for a small cost.
Does it solve anything else?
> I believe there are sshd vulns extant that you and I don't know about and port knocking allows me to worry less about them.
Wouldn't you need to worry about vulnerabilities in knockd?
CVE-2001-0144 - SSH1 CRC-32 compensation attack detector allows remote attackers to execute arbitrary commands on an SSH server or client via an integer overflow
CVE-2008-0166 - OpenSSL 0.9.8c-1 up to versions before 0.9.8g-9 on Debian-based operating systems uses a random number generator that generates predictable numbers, which makes it easier for remote attackers to conduct brute force guessing attacks against cryptographic keys.
I had a machine almost get compromised from the 1st vulnerability ( noexec on /tmp broke their script ).
When the 2nd came out I was using non standard ports and or port knocking. Despite having vulnerable keys I was safe until I could upgrade.
If a SSH RCE 0day was released:
* every "Just use SSH keys, and disable passwords" box sitting on the internet with ssh on port 22 will get compromised within hours.
* The boxes using fail2ban will get compromised within hours.
* The majority of boxes on nonstandard ports would likely be ok, at least for some time.
* The boxes using port knocking would be safe.
Having said that: I don't like exposing SSH services either! Which is why I try to keep them behind WireGuard, at least on prod networks that I care about.
In contrast to an actual VPN, port-knocking and (heh) nonstandard SSH ports shield you only from casual attackers; both give a middlebox attacker all the access they need to launch the attack.
Port knocking isn't a terrible idea but I generally prefer locking down the networks (or, these days, using AWS SSM / GCP IAP to avoid listening publicly at all) since having something on the internet means you're just one mistake away from problems and need to staff monitoring accordingly.
The other thing to remember here is that we're talking about one general CVE in two decades. Almost any other running service has been much worse so while SSH is important to protect I don't know that I'd make the argument that further pushing that one service is really the best bang for your buck.
Possibly.. It does depend on the port. 222 and 2222 often are scanned with 22. 2200-2299 is probably common now. I was using 2221 for a bit but after a few years that started seeing some auth attempts.
I mostly watched entire /16s, not single hosts.. the scan patterns for a large netblock are very interesting. It takes as much effort to scan the entire internet on port 22 as it does to scan all ports on a /16.. attackers simply do not do that.
The benefit of some of the port knocking systems is that the attack surface is almost nothing and they are easy to audit. I used it a few jobs ago on my management system/bastion host. I couldn't rely on the VPN since I was the one that managed the VPN, so I needed a way to securely login remotely that did not go through the VPN, and did not end up having sshd exposed to the world.
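For reference, a minimal sketch of what such a setup can look like with knockd (one common implementation); the sequence, timeout, and paths here are placeholders, not a recommendation:

```
# /etc/knockd.conf (sketch -- sequence and paths are placeholders)
[options]
    logfile = /var/log/knockd.log

[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```

A real deployment would also want a matching close sequence or a timeout rule so the hole doesn't stay open forever.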
These days I run sshd at home behind https://www.tarsnap.com/spiped.html
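A typical spiped front-end for sshd looks roughly like this (the ports, hostname, and key path are illustrative):

```shell
# Generate a 256-bit shared key and copy it to both ends
dd if=/dev/urandom bs=32 count=1 of=/etc/spiped/key

# Server: decrypt traffic arriving on 8022 and hand it to the local sshd
spiped -d -s '[0.0.0.0]:8022' -t '[127.0.0.1]:22' -k /etc/spiped/key

# Client: encrypt local connections to 2222 and send them to the server
spiped -e -s '[127.0.0.1]:2222' -t 'server.example.com:8022' -k /etc/spiped/key
ssh -p 2222 user@localhost
```

To anyone without the key, the listening port just speaks noise, so sshd itself is never directly reachable.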
Ubiquitous wireguard may change things.. we'll see.
Not in my experience, I would even say that full range port scanning is extremely rare. Botnets (again, in my experience) seem to only be interested in vanilla installations and will test standard ports exclusively. But of course, if you are in charge of some very tempting target (eg a cryptocurrency exchange) your experience will be totally different than mine.
No, safer. It is entirely possible to brute-force port knocking, or to eavesdrop on the knock sequence, since that information is not encrypted. Is it harder? Of course, a lot. But if you think scanning 65k ports on every host on the internet is reasonable, then defeating a port knock is very much reasonable too.
It's incredibly unlikely - there is probably more chance of the sun imploding tomorrow. And if you're the type to install port knocking, you've almost certainly also installed something like LFD, which will temporarily block IPs for port scanning.
Also, without inside information, how would you even know that a server was using port knocking?
But brute forcing in general? Not a chance. There are 18446744073709551616 possible 4-port sequences.
Here, for comparison:
281474976710656 - total ports in IPv4 space
18446744073709551616 - 4 port combinations
1267650600228229401496703205376 - my estimation for 22 in IPv6
I’d say cracking that is… infeasible.
That also assumes you know the existence of a server on which there is ssh under an unknown combination of port knocking of length 4.
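The first two figures above check out; a quick sketch of where they come from (the IPv6 line is the parent commenter's own estimate, so it's omitted here):

```python
# Key-space arithmetic for the figures quoted above.
ports = 2 ** 16  # 65536 possible TCP ports

ipv4_port_space = 2 ** 32 * ports  # every port on every IPv4 address
print(ipv4_port_space)             # → 281474976710656 (2^48)

knock_sequences = ports ** 4       # unknown 4-port knock sequence
print(knock_sequences)             # → 18446744073709551616 (2^64)
```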
Moving the port to some obscure random one cut the number of attempts from several thousand per hour to a few per day. Definitely an improvement by any measure: suddenly you can analyze the attacks if necessary.
I run fail2ban on top of it, because why not? In case someone would attempt to really target my system, any obstacle is good to take. And who knows what ssh vulnerabilities exist; any protection is good to take.
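For anyone wanting the same belt-and-suspenders setup, a minimal jail sketch looks something like this (the port number is a placeholder for whatever obscure port sshd listens on, and the thresholds are illustrative):

```
# /etc/fail2ban/jail.local (sketch -- port 40022 is a placeholder)
[sshd]
enabled  = true
port     = 40022
maxretry = 3
findtime = 600
bantime  = 3600
```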
I usually have several cloud servers running with a normally secured SSHD running. There's some failed login attempts yeah. I've never seen even 1% CPU usage from them. I doubt even posting my server address on every hacking forum I could find and daring them to try and hack me would result in getting enough failed SSH login attempts to blip my CPU usage. I have no idea how that could even happen, aside from somebody intentionally targeting your server with a really weird attack for whatever reason.
Reducing log clutter alone probably does confer some small indirect benefit, since it's less likely a more sophisticated attempt or successful breach would go unnoticed when inspecting logs. (Assuming there's some SIEM log forwarding or that it's not a situation where an attacker was able to or wise enough to wipe logs.)
* Use RSA keys instead of passwords -> This will eliminate most risk, except for exploits in sshd itself,
* Change the default port from 22 to something in the 40k+ range, which will keep you from being scanned, and
* Whitelist IP addresses that can connect to port xx on your server -> This will eliminate 95% of remaining risk
* Using a 'clean' bastion server to access other systems via agent-forwarding, preventing malware on admin workstations from being able to propagate over SSH.
So, no, you're never going to be 100% secure; that's just unreasonable. But like you said, the cost can be increased to the point that all but the most determined state-sponsored APT groups will give up.
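The first two bullets boil down to a couple of sshd_config lines (the port is a placeholder; the IP whitelist from the third bullet lives in the firewall, e.g. an iptables or security-group rule, not here):

```
# /etc/ssh/sshd_config (excerpt -- 40022 is a placeholder)
Port 40022
PasswordAuthentication no
PubkeyAuthentication yes
```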
I'm replying to these suggestions all over this item because I think it's important, so I apologize if you've since seen this comment elsewhere, but:
This introduces new security risks. Non-privileged users can bind on ports in the 40k+ range and cannot bind on 22. If you restart sshd for a software upgrade or some other reason, or the iptables rules you're using to remap the ports get flushed, the malicious non-privileged user can now bind to the port people were communicating with your sshd on, and if they ignore the host key mismatch, everything they send can be captured by the malicious user.
Older openssh clients have default configurations that can result in the leak of the whole private key, if you use password auth or 2FA they can outright steal those, perhaps their fake sshd will do more than just steal credentials and will actually mimic a shell and let them gain more understanding of how the system ticks, etc.
Is this level of attack something most people are going to run into? No. But neither is an attack more sophisticated than brute force password attempts. It's definitely information people should be keeping in mind when making these sorts of decisions, too.
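The privilege boundary at the heart of this warning is easy to see: binding a port below 1024 requires root, while any local user can grab a high one. A minimal illustration (binding port 0 asks the kernel for a free ephemeral port, which on Linux is always a high one):

```python
import socket

# Any unprivileged user can bind a high port; this is why a nonstandard
# high sshd port can be hijacked locally if sshd ever stops listening.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))   # 0 = any free ephemeral port; no root needed
port = s.getsockname()[1]
print(port >= 1024)        # → True
s.close()
```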
My firewall does port mapping so externally it's not 22, but internally it is.
There's still public access to SSH, so you're still at risk from a zero day, weak credentials, etc., so I don't think it's quite to ideal levels where you are employing a VPN, disallowing all public access, etc., but at least you're not introducing new potential attack vectors :)
To gain access to the server via SSH they now need both a way in to the VPN and a way in to SSH, vs. just needing a way in via SSH.
It doesn't do much if someone just gives up the keys for the VPN and SSH, but it would mean that you would need two simultaneous exploits for the VPN and SSH to gain access.
In scenario 1, you do not gate access via VPN. Things are accessible via the public internet.
In scenario 2, you do gate access via VPN. Things are not accessible via the public internet. Someone compromises the VPN. They now have as much access as if there was no VPN and things were accessible to the public internet.
In scenario 2, you are more secure than in scenario 1 until the VPN is compromised. You are then just as secure as you were in scenario 1.
If you are not restricting access to a VPN in the first place, how would compromising a theoretical VPN result in greater access?
Of course if the setup is VPN -> firewall -> SSH to make sure only the SSH is exposed through VPN, then I agree you'd be more secure with VPN+SSH.
In the discussion we're having, we're going from a setup where there is no equivalent to a private network because everything is public, to having a private network that only allows you access to the things that were previously public.
So either only SSH is exposed to the public, or only VPN is exposed. Without an additional firewall after the VPN, how is my LAN more protected with the VPN vs SSH?
With your configuration, all that needs to exist is an SSH 0 day to gain access to the server. With a VPN, they need that AND a 0 day for the VPN software to gain access to the server.
You can have a more complex setup with a VPN, but that isn't the discussion here - the discussion is securing SSH. If you want to provide VPN access to an array of other services, or as access to a corporate LAN or similar, then that's another conversation that has to involve the specifics of those services and that configuration. It's not what is being recommended here.
The other thing to consider is that there could be exploits in OpenSSH itself. There hasn't been a truly critical vulnerability in a very long time, but low severity or non RCE vulnerabilities aren't exactly rare: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=openssh
Almost barfed from sheer intensity of tech corpobabble.
Its for blocking 0-days dummy!
I'd say that matters. Think about all the secrets (tls keys, whatever) a server has in memory.
If you can't connect to the sshd daemon, you can't attack it.
I’m not sure there is any such good example though. Every obscurity control I’ve ever seen has imposed costs upon the users, administrators, engineers... but I’ve never seen one that I would rely on to improve security posture in any meaningful way.
I’ve certainly never seen an obscurity control that was worth its opportunity cost. I can think of dozens of actually useful controls where even a marginal improvement in operational performance would be worth more than every conceivable obscurity control combined.
Exactly! Especially when you can create a cost asymmetry: low cost for you to add, high cost for the attacker to bypass.
Agree that the SSH examples aren't the best. I would have picked DRM.
I would like to see a list of suggestions of "low cost" ways to obscure systems that are (relatively) harder to counteract. But I guess as soon as anyone publishes such a list then hackers will start checking for them.
Or even use a good password, if you don't have many untrusted users. It works perfectly.
Changing the port is not silly, it increases the SNR in logs, that's already a worthy goal.
You, uh, do know who you're replying to, right? https://sockpuppet.org/me/ if not - I don't mention this to go "lol he must be right because of who he is", but calling a well respected security researcher with plenty of real world street cred ignorant is a bit much.
>SPA port knocking is cryptographically secure and does not suffer from replay attacks.
SPA port knocking doesn't suffer from passive replay attacks, but it does suffer from block and replay attacks. An active MITM can still get you.
His suggestion hasn't been "if you care about security just don't do port knocking", his suggestion has been "if you care about security just throw up a VPN it'll be more secure and just as much work"
Care to elaborate? Not even fwknop documentation claims to be secure from all mitm attacks:
>Automatic resolution of external IP address via cipherdyne.org/cgi-bin/myip (this is useful when the fwknop client is run from behind a NAT device). Because the external IP address is encrypted within each SPA packet in this mode, Man-in-the-Middle (MITM) attacks where an inline device intercepts an SPA packet and only forwards it from a different IP in an effort to gain access are thwarted.
If I'm MITM'ing you from the same Starbucks or am otherwise behind the same NAT as you, I don't care if you've got the IP encrypted in the packet when I forward it on.
>Not the same amount of work, so no, wrong. If I had a dollar for every billion dollar unicorn that that didn't have a corporate VPN, I'd have a lot of dollars.
There's not enough billion dollar unicorns out there to actually have a lot of dollars, even if 100% of them lacked corporate VPNs :D
Regardless, you don't even need a full on corporate VPN. You can throw up a tiny VM for your VPN in the same private subnet as your servers, only listen on 22 on the private IPs for the servers. You can do this in less than an hour with Wireguard. Super easy.
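A minimal sketch of that setup (the addresses and keys below are placeholders; each side's key pair comes from `wg genkey | tee privatekey | wg pubkey > publickey`):

```
# /etc/wireguard/wg0.conf on the VPN VM (placeholders throughout)
[Interface]
Address    = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vm-private-key>

[Peer]
PublicKey  = <your-laptop-public-key>
AllowedIPs = 10.0.0.2/32
```

Bring it up with `wg-quick up wg0`, then ssh to the servers' private IPs over the tunnel; nothing but UDP 51820 is exposed publicly.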
You made the claim. You prove it with documentation.
>If I'm MITM'ing you from the same Starbucks or am otherwise behind the same NAT as you, I don't care if you've got the IP encrypted in the packet when I forward it on.
That is by definition NOT a MITM attack.
>There's not enough billion dollar unicorns out there to actually have a lot of dollars, even if 100% of them lacked corporate VPNs :D
The example is only billion dollar ones. If I include $10m+ ones, I'd have enough dollars to buy a new laptop ;D!
>Regardless, you don't even need a full on corporate VPN. You can throw up a tiny VM for your VPN in the same private subnet as your servers, only listen on 22 on the private IPs for the servers. You can do this in less than an hour with Wireguard. Super easy.
You just described a bastion host, and port knocking makes sense on those as well LOL. Wireguard currently only supports UDP, which can be and has been a limitation in the past.
I... er, did?
>That is by definition NOT a MITM attack.
You're intercepting the packet and blocking it by being in the path.
>You just described a bastion host, and port knocking makes sense on those as well LOL. Wireguard only currently supports UDP, which can and had been a limitation in the past.
Bastion hosts are generally SSH/RDP/VNC type affairs. SSH in to the bastion and then you have access to the other servers. This is actually how I set things up in production environments - the VPN concentrator only allows access to the jumphosts, and then there's extensive logging and auditing there.
I'm not sure why Wireguard only supporting UDP would be a problem - you can pass whatever type of traffic inside of the tunnel.
You... Ugh... Didn't? You claimed that it suffers from MITM attack. You are not able to prove that it suffers from any MITM attack (the docs specifically outline a way to mitigate a specific MITM attack, but do not outline any others). Unless you have a source that states otherwise, you're wrong.
>You're intercepting the packet and blocking it by being in the path.
Wrong, that is by definition not a MITM attack.
>Bastion hosts are generally SSH/RDP/VNC type affairs. SSH in to the bastion and then you have access to the other servers.
Correct, and you set up port knocking for these. Thanks for proving my point.
>This is actually how I set things up in production environments - the VPN concentrator only allows access to the jumphosts, and then there's extensive logging and auditing there.
There should be extensive logging and auditing on the bastion host. Port knocking reduces the noise to effectively 0.
>I'm not sure why Wireguard only supporting UDP would be a problem - you can pass whatever type of traffic inside of the tunnel.
There have been multiple instances where UDP has been blocked at sites in the past. Looks like you're ignorant of this. Look up why OpenVPN supports TCP.