What is the basis for your conclusion that port knocking has made your network substantially safer? Just curious how you measure the effectiveness of something like this.
Could someone point to a contest where an attacker has managed to bypass port knocking? Surely someone has tested its effectiveness at creating more work for attackers.
How can anyone measure the effectiveness of the advice given by a security consultant? It saved a client from disasters that did not happen? How can anyone prove that such disasters would have happened? Answer: Proof of concept.
To question the effectiveness of port knocking, one needs some real stories of real disasters where port knocking was being relied on. Proof of concept. Otherwise all the discussion is still just speculation.
I wonder if it would be interesting to run an ssh honeypot that feeds off the logs, accepts the logins, and then just dumps the IP + input to a file.
The honeypot could basically run in some sort of isolation layer (like Sandboxie or jails) and then self-destruct after the automated script is gone... and then you slam the door on that user/IP combo for good.
I can't help but think this would be interesting...
One of my university courses offered an opportunity for a project like this and I did it with some classmates.
We started by altering the ssh daemon to disallow all logins and to log every username and password attempted. After a week we had gathered thousands of attempts to brute-force the honeypot. Interestingly enough, the passwords used were a combination of the very commonly used ones and ones that were clearly harvested from other popped boxes.
After a week or so of this we altered the ssh daemon again. This time it would log all attempts but also grant access on the 3rd attempt, no matter what the credentials were. The few bots that managed to get in all tried to install various rootkits on the machine, all of which were targeted at a different Linux distro than we were using, so they mostly just busted up our shell's output.
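For the curious, the first phase can be sketched in a few dozen lines with the paramiko library rather than patching sshd as we did (this is just the shape of the idea, not our actual code):

    # A minimal credential-logging SSH honeypot sketched with paramiko.
    # Our project patched sshd itself; this is only an approximation.
    import socket
    import threading
    import paramiko

    HOST_KEY = paramiko.RSAKey.generate(2048)  # throwaway host key

    class LoggingServer(paramiko.ServerInterface):
        def __init__(self, peer_ip):
            self.peer_ip = peer_ip

        def get_allowed_auths(self, username):
            return "password"  # advertise password auth to the bots

        def check_auth_password(self, username, password):
            with open("attempts.log", "a") as f:
                f.write(f"{self.peer_ip}\t{username}\t{password}\n")
            # Phase two of our project returned paramiko.AUTH_SUCCESSFUL
            # on the third attempt; the safe default is to always refuse.
            return paramiko.AUTH_FAILED

    def handle(client, addr):
        transport = paramiko.Transport(client)
        transport.add_server_key(HOST_KEY)
        try:
            transport.start_server(server=LoggingServer(addr[0]))
            transport.join(30)  # give a bot time to try a few passwords
        finally:
            transport.close()

    def main():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", 2222))  # not the port your real sshd uses
        srv.listen(100)
        while True:
            client, addr = srv.accept()
            threading.Thread(target=handle, args=(client, addr), daemon=True).start()

    if __name__ == "__main__":
        main()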
It looks like IPv6 matching has been supported since late 2017 (version 10.0 [0]), although the changelog states that "not all ban actions are IPv6-capable now". Beyond that, I don't have any recent experience with the software's IPv6 support.
Couldn't you just ban the /64 and call it good? It's not like they get a random selection of addresses; they're all going to be in the same CIDR. Or am I overlooking something here?
The point of the honeypot is that there's no heuristic causing any delay and no elusive attacker being missed (e.g. a botnet trying once per IP). You don't need any processing time, nor even a complete SYN/SYN-ACK/ACK handshake: any TCP connection attempt to that port triggers an instaban.
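Concretely, with iptables + ipset it's a couple of rules (port 2200 and the set name are placeholders; for the /64 question in the sibling comment, an ipset of type hash:net with family inet6 lets you ban whole prefixes the same way):

    # drop everything from sources already in the ban set
    ipset create tarpit hash:ip timeout 604800
    iptables -A INPUT -m set --match-set tarpit src -j DROP
    # any SYN to the honeypot port adds the source to the set
    iptables -A INPUT -p tcp --dport 2200 --syn -j SET --add-set tarpit src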
Wouldn't a reasonable rebuttal be that the log filtering rule that discards these failed login attempts would accomplish the same security goal, with less mechanism?
i prefer knowing that nothing is happening rather than filtering out things i don't like.
also, lots of people sing the praises of fail2ban, which needs to watch log files to operate, but you can get most of its utility by simply rate-limiting using iptables' session tracking without any additional mechanisms.
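e.g. the classic recipe with the recent module, no daemon required (thresholds here are arbitrary):

    # track new ssh connections per source; drop a source that opens
    # a 4th new connection within 60 seconds
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
        -m recent --name ssh --set
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
        -m recent --name ssh --update --seconds 60 --hitcount 4 -j DROP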
I can tell you right now what's happening and compress all those log records into a single record:
Every exposed 22/tcp on the Internet is being continuously probed by automated SSH scanners running from thousands of points on the Internet. There is nothing you can reasonably do to prevent it (the sources are so diverse you couldn't even realistically block a determined scanner), and, if you've turned off password authentication --- which you must do anyway --- the probes aren't a meaningful threat.
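For the record, that posture is three lines of sshd_config (KbdInteractiveAuthentication was called ChallengeResponseAuthentication in older OpenSSH releases):

    # sshd_config: keys only, no passwords
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    PubkeyAuthentication yes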
From a systems-administration standpoint, it's nice to be able to easily sift out the random bots and drive-bys from the more personal probes that maybe have a human at the other end.
There are myriad bits and bobs of software and services that promise to do anomaly detection in logs, but an easier approach for now is to just move ssh off 22. It affords basically zero extra real security but makes any subsequent login attempts more worthwhile to look at.
Sure. Direct answer: search logs across all services and nodes on the network for activity from that source or netrange. See if it's targeting particular usernames, especially anything that shouldn't be guessable. Treat it first as a source of information: did I screw something up somewhere? Have we had an incident I don't know about? If it looks like a nothingburger, ignore it and get on with the day.
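The crude version of that first pass, with 203.0.113.7 standing in for the source (widen the pattern for a netrange, or use whatever log aggregator you have):

    # search current and rotated auth logs for one source address
    zgrep -h '203\.0\.113\.7' /var/log/auth.log* | sort | less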
Broader answer after reading some of your other comments in this thread: in your ideal, highly secured network environment, none of this is necessary because everything's wired up tighter than a gnat's ass hole. Unfortunately I've never had the pleasure of working for one of those places.
I guess that's fair. But: if you buy into this idea, there's a much, much better thing to do: look into Canarytokens. Canarytokens, unlike port knocking, are criminally underused. Really, you should do something similar for your off-port SSH service; don't actually _run_ SSH there, just run a stateless unprivileged service that spoofs a bit of SSH protocol and generates loud alerts.
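The decoy can be almost embarrassingly dumb; a sketch of the idea in Python (not Canarytokens' actual implementation, and the port is a placeholder):

    # A stateless SSH canary: speak just enough protocol to look like
    # sshd, and treat any connection at all as a loud alert.
    import socket
    import syslog

    def main():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", 2222))  # wherever you claim ssh lives
        srv.listen(16)
        while True:
            conn, (ip, port) = srv.accept()
            # any connection here is suspicious by construction
            syslog.syslog(syslog.LOG_ALERT, f"ssh-canary hit from {ip}:{port}")
            try:
                conn.sendall(b"SSH-2.0-OpenSSH_9.6\r\n")  # spoofed banner
            finally:
                conn.close()

    if __name__ == "__main__":
        main()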
The networks I'm describing aren't "ideal" or "highly secured". I am describing table stakes. While I was at Latacora, most of the clients we engaged with were already at this level of maturity when we joined up.
> The networks I'm describing aren't "ideal" or "highly secured". I am describing table stakes. While I was at Latacora, most of the clients we engaged with were already at this level of maturity when we joined up.
One of the most valuable things you do here is describe things that you believe are table stakes to people and organizations that have never heard of them. Companies like Latacora tend to self-select for clients that are at least aware that security should be a sensible line item in their quarterly budget. There are many, many more organizations for whom moving ssh or even port knocking amounts to a real improvement to their infrastructure. :-(
But the concept is really simple: come up with any kind of thing you'd want to tripwire --- the AWS key is particularly slick --- and put it somewhere in your infra, then wait for alerts.
To defend Canarytokens here (and I’m totally biased because we make it), some tokens can’t be easily avoided.
I.e. if the token is a Slack/AWS/something API key, then the only way for the attacker to profit is to use it, and the moment they do, they tip their hand.
The joy of Canarytokens is getting the alerting win with very little effort, without having to set up any infrastructure.
Not "knocking" the idea but to be honest using someone else's API key and expecting that no one is going to notice sounds really dumb. I guess if that is the level of intelligence you are up against, then "winning" can indeed be quite easy. Although I would argue if the goal is to restrict access and they managed to gain access then regardless of what they do next, whether it is smart or stupid, they have "won".
I don't even understand the question. We were talking about the value of moving SSH to a different port to cut down on logged probes. That at least has the value of giving you a weak signal about IP sources that are somewhat determined to break in. Port knocking doesn't even do that.
Maybe it is not supposed to. Port knocking is not a substitute for anything. Yet most criticisms of it, like yours, seem to assume it is going to be used as a replacement for something else.
i mean, you're in the security industry, where targeted attacks are the default assumption. the overwhelming majority of web properties are not valuable enough to get this kind of attention.
even if security through obscurity is not real protection from targeted attacks, it is at minimum significant noise reduction, and quite possibly a reasonable barrier against drive-bys which assume default configurations. i'm sure you know how many scanners try to access some default php/wordpress admin route. how much of that would disappear if wordpress was simply installed behind a random 32-char route instead of /wp-admin? all it is is obscurity, sure. but is it not security?
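e.g. the shape of it in nginx, inside your server block (the prefix is made up, and real wordpress needs more care than this, but you get the idea):

    # 404 the well-known route, serve the real thing under a secret one
    location /wp-admin { return 404; }
    location /q7Vd2kXp9Lr4sTzB1mHc6NwY3fGj8AeU/wp-admin/ {
        proxy_pass http://127.0.0.1:8080/wp-admin/;
    }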
The whole "obscurity isn't security" thing is a super interesting topic. Suppose you have a static file on a public web server and you want to control access to the file... confidentiality. Many people would agree that if you come up with a long string, don't disclose it to untrusted parties, and use it as a basic auth password, then you have secured the file to some extent. And many people would claim that if you take that same undisclosed string, and make it a path segment (parent directory) for the file, then you have merely obscured the file, or perhaps secured it to a far lesser extent. The former makes it "not available to the public" while the latter makes it "available to the public, they just don't know where to look."
I call BS. These are pretty much the same thing*, which is to say: an adversary trying to get the file needs to know something that they don't know and can't reasonably guess/discover any time soon. Did you secure it or obscure it? Whatever you call it, you require a "something you know" factor.
*Obviously, there's the fact that other parts of the stack -- browser history, access logs, rate limiting, etc. -- actually treat a password payload with the respect it deserves and assume there's nothing sensitive about a URI path... kindly pretend my example doesn't have this flaw.
You may accidentally enable the directory listing.
While you may say that certain methods of withholding the information are isomorphic under certain conditions, it doesn’t mean that they are semantically equivalent.
I admit that my example is flawed for a few reasons, and autoindex is another in that list. While obscurity and security are not identical by any means, and therefore not semantically equivalent in general, I'm just pointing out how the "something you know" factor inherently utilizes what common ground they do share, whether you call it a secret, a key, a password, or a URL with a long random portion.
Turn on logging for failed connections to port 22 and you have just recreated the same log file, assuming you trust the underlying security of your sshd configuration. You have merely moved the "nothing happened" event from sshd to the firewall, and the firewall by default does not log such events.
The best argument in favor of knockd that I can think of is the same as for fail2ban: ssh connections are actually a finite resource. You can only have so many simultaneous TCP connections. Rejecting packets in the firewall has no such limit, and I have had servers suffer service disruption in the past because swarms of ssh bots used up the available TCP connections that the web server could have handled. fail2ban in my case was a less intrusive change to our workflow, so we use that rather than knockd to solve the problem.
I will add as a small note that the default of not logging firewall events has come back to bite us once in a while. Just like keeping ssh's default to log, keeping the firewall's default to not log is a trade-off between resource usage and keeping records of events.
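If sshd has moved off 22, recreating that "nothing happened" log at the firewall is two rules (iptables here; the prefix and level are arbitrary):

    # log, then drop, new connection attempts to the old ssh port
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
        -j LOG --log-prefix "ssh-probe: " --log-level info
    iptables -A INPUT -p tcp --dport 22 -j DROP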
> simply moving ssh off of port 22 cuts your drive-by log noise by 99.9%, which makes auditing clean logs much much easier.
Is there a reason you can't use some form of IP whitelisting?
I'm not a proper sysadmin, but when I play about on EC2 I always configure the security-group to block incoming connections to TCP port 22 except from whitelisted IPs.
The only time I've had to broaden the whitelist beyond just a few static IPs, is with mobile Internet tethering. They seem to frequently change my IP, so I guessed at my provider's IP range and whitelisted it with a pretty broad wildcard mask. Still much better than accepting connections from any old address.
I can see that my rather simple approach might not scale very nicely.
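For reference, the aws-cli equivalent of that security-group rule (the group id and CIDR are placeholders):

    # allow ssh only from one whitelisted range
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 \
        --cidr 198.51.100.0/24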
> I'm not a proper sysadmin, but when I play about on EC2 I always configure the security-group to block incoming connections to TCP port 22 except from whitelisted IPs.
Because I have multiple IPs I connect to, most of which are unknown. Maybe I'm connecting from a coffee shop, maybe from a tethered phone, maybe from work, maybe from an airplane, or a train, or a friend's wifi, etc.
The point is that most login attempts are automated, and the automated scripts just try port 22 and move on if they fail. If you run ssh on port 22, you see in your logs all the failed attempts from the giant sea of automated scanners, and will probably miss the one or two humans who might know something about you specifically (like your username) and are trying a targeted attack.
If you put ssh on another port, all of those automated login attempts disappear from your logs. The determined human attacker probably has port-scanned you and found ssh on another port, so you'll see just that attacker and can look into it more deeply if you feel the need. On port 22, you probably never bother to look at your failed-attempt log, because there's so much noise.
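The move itself is one line of sshd_config (22022 here is arbitrary; remember to update your firewall rules, and SELinux policy where relevant):

    # sshd_config
    Port 22022

Clients then connect with ssh -p 22022, or a Port entry in ~/.ssh/config.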
I would be interested to read about the processes and procedures of people who regularly read their failed ssh attempts, identify human attackers, and react to that information.
My question is basically: do they exist? What is the workflow, and how much time per year do they spend on it? What incidents occurred, and what value did they derive from the process? Was it cost-effective?
I don't read mine at all (talking about a personal server here[0]), because there's a huge amount of noise. The email alerts for it go to a mail folder that I literally never look at, because it gets hundreds of emails per day.
If I only got an email per week or so, I'd probably not filter them to a folder, and actually look at them.
[0] At work, other people deal with intrusion detection there, and ssh access is gated behind a VPN anyway.
About a decade and a half ago it was in vogue to constantly monitor your logs for attacks. There was sophisticated software to detect the latest attacks, and they would notify you constantly, and you'd run around chasing your tail for every unusual port scan. We spent tons of money on reporting software to try to stay on top of it all.
But we realized that as attacks increased, it was pointless to look at who or how often we were being attacked, because it wasn't making our systems more secure. What did make them more secure was actually securing them: patching systems, implementing firewall rules, blocking or slowing down multiple login attempts.
Non-sophisticated attacks are trivial to prevent, and sophisticated attacks won't show up in logs. Everyone needs to stop logging ssh attempts and just secure their boxes and move on with life.
It was a response to their main point, but the subtlety was lost on people who don't think critically.
Just because you see or don't see attacks doesn't mean you are more or less secure. It just means you've finally noticed there is risk. It doesn't mean there is more or less risk. It's the same risk, it's just more visible.
The quote is not about logs making you unsafe.
The quote is about ssh login and connection attempts on port 22 which clutter your log files.
Assume you move ssh to another port, say 23. In that case, anyone trying to connect to port 22 would find a closed port, and the connection would fail at a layer below the one where login attempts get logged.