Learn from your attackers – SSH HoneyPot (robertputt.co.uk)
194 points by robputt on Sept 19, 2017 | 137 comments

Aside: I used to run a small ISP with 200-300 dedicated and virtual machines. We set up our router to alert us if outbound SSH connections from a host went above a certain threshold, which was a super reliable way of detecting a compromised host. I think we had a near 100% success rate, because once a host is compromised, attackers use it to start trying to compromise other hosts.
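A host-side version of that check can be sketched in a few lines of shell (the threshold, the `ssh-watch` tag, and doing it on the host rather than the router are my own stand-ins for illustration):

```shell
#!/bin/sh
# Rough sketch of the alert described above, run on a host with ss(8).
# THRESHOLD is an arbitrary stand-in for "a certain threshold".
THRESHOLD=${THRESHOLD:-20}

count_ssh() {
    # Expects `ss -Htn` style lines on stdin; counts connections whose
    # peer address ends in :22 (i.e. outbound SSH).
    awk '$NF ~ /:22$/ { n++ } END { print n + 0 }'
}

n=$(ss -Htn state established 2>/dev/null | count_ssh)
if [ "$n" -gt "$THRESHOLD" ]; then
    logger -t ssh-watch "ALERT: $n outbound SSH connections (threshold $THRESHOLD)"
fi
```

In practice you would run something like this from cron and page on the log line; the router-side version the parent describes has the advantage that the compromised host can't tamper with it.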

But, we also had every customer on a VLAN, limited to only being able to send traffic from their IPs, and also blocking incoming and outgoing bogon traffic.

Years ago I attended a presentation by Evi Nemeth (RIP) related to CAIDA, and one thing they found in auditing "backbone" traffic was that some huge percentage of it was bogon traffic (I don't recall the exact number, but let's say 10% +/- 6%). Nobody wanted to filter that traffic because the pipes were less expensive than the routers to handle filtering packets at high pps rates.

You'll never really know your success rate though. You could have had machines compromised for years with small amounts of traffic.

I guess he doesn't mean that he detected all infected machines but that near 100% of machines which triggered the alarm were indeed infected.

That's a bad metric though. You could miss a lot of infected hosts.

That is the single most important metric when you want to create an alert.

Outbound traffic probably wasn't their only heuristic

High positive predictive value. Unknown sensitivity.

What I meant by that statement was that of the system compromises that we detected, nearly 100% of them were detected through the SSH outgoing connections alert.

Yes, there could have been compromises that went entirely undetected. That said, we had really high retention (most customers stayed with us for 5+ years), so we had a good window in which to detect issues.

Probably the next biggest notification of compromise was alerts about spam on our network. Mostly that was an e-mail account compromise rather than system level. But there are a ton of false positives on spam alerts, particularly AOL alerts were almost always about legitimately sent e-mails.

They could lower the threshold until they get an acceptable proportion of false positives, but I wouldn't want to be one of those false positives.

> Nobody wanted to filter that traffic because the pipes were less expensive than the routers to handle filtering packets at high pps rates

The situation is better nowadays: modern routers like Juniper's Trio-based MX platform do things like bogon filtering and reverse path checks at line rate without performance impact.

I've been using something similar for a while now. I use pam_jail on FreeBSD to drop the ankle biters using common ssh login names like test, ubuntu, oracle, etc. into a FreeBSD jail where I watch what they do and get a copy of all their tools. I rate limit the outgoing traffic from that jail to something painfully slow to prevent them from causing any major issues. But being able to fire up 'watch' on FreeBSD and snoop the tty they are on in the jail is awesome for forensics.

It's secure, they can't break out of the jail.

It's rate limited to prevent them causing much damage to anyone.

It's easy to observe every thing they type and do in the jail from the host.

That's...pretty awesome. Have any guides/blog posts/whatever on your setup?

I don't. When I was setting this up I didn't find any guides or blogs about doing this so I thought it was not of interest to anyone. So I just went about implementing it a piece at a time: installing pam_jail, then getting /etc/pam.d/sshd configured to use it (which is a one-line addition, very easy). Now by default pam_jail simply puts the user into a jail of their home directory. This is where you can go a few routes with this. If simply jailing the user on the host OS to their home directory is all you are interested in doing, you can stop there. Configure FreeBSD to not allow users to see processes not owned by their own user or group, and they won't know anyone else is on the system.



    security.bsd.see_other_uids=0
    security.bsd.see_other_gids=0

The above are the two sysctls you want to enable (via sysctl(8), or in /etc/sysctl.conf to persist across reboots).

A simple ps will show you all the shells that are in a jail.

    Tue 19 7:19PM priyanka.setecastronomy ~> ps -x
    30308  -  S     0:00.05 sshd: oracle@pts/0 (sshd)

Now you have the tty they are on (pts/0). There are many ways to get that, including ps, so use whatever method you prefer to get the tty.
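For example, one of the "many ways" (purely illustrative; the function name is mine) is to scrape the tty out of the `sshd: user@pts/N` process title that ps shows:

```shell
# Peel the tty out of an sshd process title line like the one above.
tty_of() {
    sed -n 's/.*sshd: [^@]*@\(pts\/[0-9]*\).*/\1/p' | head -n 1
}

# Usage:  ps ax | tty_of
```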

Fire up watch to snoop on the tty.

    Tue 19 7:48PM priyanka.setecastronomy ~> doas watch pts/0

And boom, you can watch the ankle biter do his/her thing.

You can do this on the host as described, because that is how pam_jail works. But if you prefer for some reason to make an actual jail with VNET etc., simply forward ssh on the host to the jail's IP. If you do rate limiting you will already be configuring pf or ipfw anyway, so an extra line to forward 22 to the jail is no big deal.
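The forward is one line of pf.conf; the interface name and jail address here are placeholders, not anyone's actual setup:

```
# /etc/pf.conf sketch - em0 and 192.0.2.10 stand in for your external
# interface and the jail's IP
ext_if = "em0"
jail_ip = "192.0.2.10"
rdr pass on $ext_if proto tcp from any to ($ext_if) port 22 -> $jail_ip port 22
```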

Yeah this is a crappy write-up but it's spur of the moment and in a HN reply post so it's worth the price paid.

I'm happy to answer questions or help you if you get stuck any way I can; just message me. It's pretty simple to do.

If this is in a corporate setting I would consult with legal before doing this. The world is a whacky place these days, and if the ankle biter does something evil from your honeypot jail you may be liable.

You know what would be cool? Piping these sessions to a web frontend and letting people watch these attempts in real time (or recorded) using a JS terminal emulator to display it all.

I would sit there and watch that.

I would pipe it to an LCD in an old fish tank.

Oh, look honey, this one's trying to use GNU options! Here's one who thinks he's on Ubuntu! Wow, I didn't know VIM could do _that_!

Take a look at http://honeycast.io/#!/0f9465db-3c01-49ba-9d81-16b1fd68627c. We are working on a new version of our honeypot software. The feature you mention will be supported.

Wow, everything I think of already exists. Nice.

That would be pretty cool!

Neat! I'd love to help!

Miss World 2000 has "Too many secrets"...

Love those references. If you have not seen it yet go watch Sneakers (1992). I am getting old - I remember when it was fresh :-)

This is great, and something everyone should know how to do. Please write a more formal write-up.

What have you seen them do?

Might have changed by now, but what they used to do was usually:

* Download a large file (e.g. XP SP3) from Microsoft's servers to test bandwidth

* Download IP scanners and try to run them (in order to find new hosts to attack)

* Download botnet code that connects to some C2 server

Tools they download were often downloaded as source, which they then tried to compile in order to run it.

Searching for e.g. 'youtube kippo' will get you videos of honeypot sessions.

Attempt to log in it would seem...

You should still be rather careful with this, as they have access to the same kernel as the host, and could potentially jailbreak through kernel vulnerabilities.

I have enough confidence in FreeBSD jails to not worry about that. If I found someone who could actually break out of a jail I would at the least buy them dinner anywhere, their choice. You can't modify a running kernel by direct access. And you can't load modules. So it would be a very interesting hack to see.

It's been done before, for example the "BadIRET" vulnerability (CVE-2015-5675) was used to jailbreak FreeBSD on the PS4 a couple of years ago.

Actually it hasn't been done before. That CVE required elevated privileges on the host already. They didn't break out of a jail.

They did break out of a jail; the vulnerability was used to gain arbitrary code execution in kernel mode, which was used to modify the cr_prison structure, thus performing a jailbreak.

Again, they already had elevated or root privs. If you've already got root privs nothing will save you.

They had root inside a jail, which isn't the same as being root outside the jail. To be able to gain arbitrary code execution in kernel mode from a jail is a security vulnerability.

Would love to read more about this setup, if you have it documented anywhere

I too would love to hear more about this, unfortunately this article on FreeBSD Jail Management Tools [1] seemed to be the most relevant thing I could come up with.

[1] https://blather.michaelwlucas.com/archives/2291

Would also love to see a write-up on this.

I just harden my sshd config based on https://github.com/arthepsy/ssh-audit and ALL bots will fail, since they do not support the more current key-exchange, host-key, encryption, and message authentication code algorithms.

To make it even more interesting: one can actually see which bot is trying by the way they fail to handshake.

I like a little noise on my ports, so I leave ssh on its default port, accept authentication only with a certificate, and am done. Works like a charm. And yes, it makes my sshd stand out like a sore thumb since 99.9% (guesstimate) of you folks don't tighten up your ssh configs :)
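For reference, hardening in that spirit boils down to a handful of sshd_config lines; this particular algorithm selection is my illustration, not the parent's exact config:

```
# sshd_config - modern algorithms only (one possible selection)
KexAlgorithms curve25519-sha256@libssh.org
Ciphers chacha20-poly1305@openssh.com
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
PasswordAuthentication no
```

Running ssh-audit against the host afterwards confirms which algorithms are still offered.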

If one were to run a honeypot like this and take every IP which connects and attempts a login and immediately ban it from your network, then if more than 1% of the IP range they are in has been banned, ban the entire range... what would the expected outcome be for a typical residential user?

I've been doing something similar to this for years on a mail server that hosts all of my mail as well as a number of customers. Contrary to the other replies, it's worked out really well, and you can rely on customers to complain pretty quickly if anything's not working right.

The bulk of the abuse comes from Russian and Chinese networks, with Brazil working hard to catch up. Those can be pretty well perma-banned without consequence. Under "folks that should be fined into oblivion", there's cogentco.com, quadranet.com, colocrossing.com, level3.com, vpls.com, softlayer.com, hostingsolutionsinternational.com, datashack.net, singlehop.com, actonsoftware.com, and a whole raft of others.

The secret sauce is a carefully tuned ban decay function that scales up with the number of abuses. Occasional hits get a network banned for an hour or less, subsequent hits get them banned a little longer, and then there's a fine line where the function goes exponential, all the way up to 6-month-plus ban periods for really big nuisances.
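A toy version of such an escalation function (all constants invented for illustration; the careful tuning is the secret sauce):

```shell
# Hypothetical escalation: 1 hour for the first offense, doubling per
# repeat offense, hard-capped at ~6 months.
ban_seconds() {
    offenses=$1
    secs=3600                  # first offense: one hour
    cap=$((86400 * 180))       # cap: ~6 months
    i=1
    while [ "$i" -lt "$offenses" ]; do
        secs=$((secs * 2))
        if [ "$secs" -ge "$cap" ]; then
            secs=$cap
            break
        fi
        i=$((i + 1))
    done
    echo "$secs"
}

ban_seconds 1    # 3600
ban_seconds 5    # 57600
ban_seconds 40   # capped at 15552000
```

The "fine line where the function goes exponential" in the parent's version presumably means the doubling only kicks in past some offense count, rather than immediately as here.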

Otherwise it's just fail2ban on some logs and some home-brew code that does whois lookups on nuisance IPs and some data mining on the whois response.
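The whois-mining step might look something like this (field names vary by registry, hence the alternation; the function name is mine):

```shell
# Pull the first network-range field out of a whois reply on stdin.
extract_range() {
    awk -F': *' 'tolower($1) ~ /^(cidr|route|inetnum|netrange)$/ { print $2; exit }'
}

# Usage (needs the whois client and network access):
#   whois 203.0.113.5 | extract_range
```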

I run a Brazilian site and the latest attack came from Europe (France and Bulgaria). I get a lot from the US and China as well.

As we are really only interested in our own state, I don't think twice about banning international IPs; they are mostly from large datacentres.

I can't ban China as our suppliers at work are partly Chinese; it's a royal PITA. I can't even whitelist them, as they have to bounce around VPNs just to get around the Great Firewall of China on their end.

If your suppliers use VPNs anyway, you can absolutely ban Chinese IPs without much consequence for them. However, if it's for a user-facing site, please don't just ban all of China. After relocating, I was quite annoyed by the number of websites that are unavailable not because they are blocked by the Great Firewall, but because they banned all of China and VPNs to stop the abuse they saw from some IPs in those ranges.

The problem is VPNs are now banned unless recognized by the state (they must be registered). You can still try to use an underground VPN, but it's not guaranteed to work. The GFC knows everything going in and out. It is one of the most acclaimed and sophisticated closed-source networking systems in the world. They can toggle it on and off in a matter of minutes, and they have a huge workforce monitoring their internet and censoring speech in no time.

Of course the Chinese government is aware of existing circumvention tools, but they still allow them because they want to capture data. They let these circumvention tools exist even though they have the brightest engineers in China working for them. I vaguely remember a talk from RealCrypto a number of years ago where someone presented how one could try to escape the GFC by sending data over different protocols, to be reassembled by the recipient. It's an ingenious circumvention that is quite hard to decipher, until someone recognizes a traffic pattern; then the Chinese government knows.

Not sure if same protocol, but iirc Dust (https://github.com/blanu/Dust) was supposed to accomplish this. Created by Brandon Wiley co-founder of freenet.

If they consistently used VPNs I would, but it seems to be intermittent, since the GFWoC is a fickle thing.

You can ask a friend to set up a VPN at home with his traditional ISP.

Would you be interested in open sourcing that?

Yup, would be great!

> Under "folks that should be fined into oblivion", there's cogentco.com, quadranet.com, colocrossing.com, level3.com, vpls.com, softlayer.com, hostingsolutionsinternational.com, datashack.net, singlehop.com, actonsoftware.com, and a whole raft of others.

What do you mean these should be fined? Level3 is a transit provider for massive portions of traffic on the Internet and Softlayer has some hundred thousand servers.

He means that is the host range he saw the most attacks from. Deduction then takes his argument to the understanding that many attackers use Level3 and Softlayer for attacking purposes.

IBM SoftLayer is a reputable cloud as far as I am concerned, just like AWS and Digital Ocean. I've seen and banned a few of the other names given, after concluding that they exclusively provide spam and malicious servers.

I do this sort of thing, but I'm listening on a few /16s.

1% is a pretty low threshold. There are some v4 networks out there that are complete garbage.. If I was going to do something like that I'd start at closer to 90%.. 230 hosts on a /24.

  $ select subnet, count(distinct(cidr)) as unique_sources from (select set_masklen(cidr,16) as subnet, cidr from stuff where why like 'SSH%' and added > '2017-08-01') as foo group by subnet order by unique_sources desc limit 20;
       subnet     | unique_sources
  ----------------+----------------
                  |          11688
                  |           8486
                  |           8454
                  |           7994
                  |           7892
                  |           7294
                  |           6384
                  |           6077
                  |           5905
                  |           5788
                  |           5620
                  |           5179
                  |           4893
                  |           4812
                  |           4266
                  |           4208
                  |           4203
                  |           3858
                  |           3836
                  |           3836
I think some of those networks are using CGN and have a much smaller number of actually compromised hosts.. ISPs generally just don't give a shit about security.

I don't see why a residential user should be worried about residential users from other ISPs being able to reach their machine usually. I suppose it would be problematic for torrenting and maybe gaming (depending on architecture of the game). But I imagine my grandmother couldn't care less if some other grandmother couldn't connect to her network directly.

90% seems really high... you'd really wait until 230 of 255 possible hosts have attempted a breakin before deciding they were on a network too dangerous to preserve your accessibility from? Are there a lot of networks where 90% of the boxes are launching attacks, but 10% have legitimate need to connect to your personal home machine?

It's problematic because plenty of people host websites and other network services on their own infrastructure at home / work, all in the residential IP space.

If you don't care about ingress connections, then you don't need blocklists anyway. You just keep everything behind NAT or a stateful firewall.

You may as well just block LACNIC IPs wholesale and save the trouble at this point

You'd be unreachable from cloud providers, major public hotspots, residential ISPs, and countries that use carrier grade NAT to stick all their traffic on one IP or one range.

Hmm, I hadn't thought about cloud providers. Public hotspots, residential ISPs, and countries behind firewalls aren't really something I expect most home users really need their home network to be accessed by. But blocking connections from cloud providers would be unworkable. I suppose lots of automated SSH attacks would be coming from cloud providers... maybe there are few enough they could be whitelisted?

With cloud providers it's likely a "bad" IP will eventually get re-assigned to a "good" customer whom you don't want to block. That's why an exponentially increasing ban is generally a good idea, I think. The more abuse you get from a given IP address, the longer you ban it each time. If it's used for a one-time attack and thrown away, you forgive it relatively quickly.

From experience, there are two kinds of cloud providers: the normal ones like AWS, OVH and Digital Ocean, and the shady ones.

The first kind doesn't attack much of anything; there is the occasional spider that tries to index your website (we had data worth scraping). The latter can be banned by entire AS without issues.

>what would the expected outcome be for a typical residential user?

Wasted time for precisely zero benefit.

There are lots of hobbyists that'll tell you otherwise, but in reality you should be using key auth and worrying about better things.

Fwknop is a much better solution than whack-a-mole blacklisting or even Fail2Ban.


In my first years I set up an ssh server behind a NAT'd network and installed fail2ban. The outcome was that nobody could access the ssh server... probably the same thing would happen here. On ssh I've seen so many login attempts even on the smallest VPS I've ever gotten; it's just ridiculous, and it's important to use non-password authentication with ssh (i.e. rsa keys, probably 4096b).

Literally the first thing I do on any new machine is disable password logins, disable root logins and then punt the SSH port to the top of the range (usually 47999).

Every remote machine gets its own keypair as well, and links between machines also get their own keypairs (key handling is a PITA; I've yet to find a method I really like). Keys for work stuff are backed up on multiple LUKS memory sticks (two in separate safes, two offsite).

I'm fortunate in that work machines are in the pet category not cattle category so we don't need to change keys that frequently (though I will rotate them out yearly anyway).
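The first two steps above are a couple of sshd_config lines (the port number is taken from the comment's own example):

```
# /etc/ssh/sshd_config
Port 47999
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
```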

Friendly advice..

It might be better not to disclose this level of detail about your professional security practices, especially when there is a trail of breadcrumbs in the HN profile on your account..

Shrug, given how hostile the internet is I fail to see how this increases the risk at all.

Yeah, I am not so sure how knowing that keys are rotated helps the wannabe attacker. The only "useful" bit would be to look for SSH port up in the range, not much help though.

My attitude is: short of you having the SSH keys, if I can't describe my entire setup and still be secure, then I'm relying on obscurity, not security.

If you do it right it doesn't matter.

The default ip(6)tables rule: drop all packets

For example, to access my least secured public-facing server you need to do the following: 17-port port-knocking to enable crypto port knocking, which enables ssh, which requires a 16K RSA key and TOTP. The only accounts that can be logged in have 18-character randomly generated usernames and 255-character passwords. The login accounts have no access to administrative functions, so you'll need to su to an administrative account and guess the 255-character password, or have a zero-day exploit that can bypass SELinux and cryptographically signed white-listing of binaries.

Just alter the ssh port; I have never received an ssh connection that I cannot personally account for.

Security through obscurity is not the solution, though. Sooner or later someone does a port scan and finds that port. I've found that it's more efficient just to whitelist the IPs or subnets you might need and denying the rest.

Edit: I didn't mean changing the port is a bad thing. You can and should do it, and it will help a bit - just that it shouldn't be the only thing to rely on for your SSH security.

> Security through obscurity is not the solution, though

Security is about layers. Nothing is foolproof. It's about implementing layers of controls to reduce your attack surface to an acceptable level, with the trade-off that many controls increase the complexity of your setup or compromise convenience for your users.

For example, for SSH, this probably includes

* changing the default port

* enforcing SSH key authentication

* enforcing passwords on SSH keys

* implementing fail2ban

* installing jump hosts for internal machines

* implementing a VPN rather than external facing hosts (and with that comes all the additional layers for the VPN)

* etc...

> * enforcing SSH key authentication

That cannot be enforced by the server because the key decryption occurs client-side. An alternative is to use Two Factor Authentication.

I think you mean the server can't enforce ssh key encryption/passphrase protection (next point down)?

And 2 or even 3 factor should maybe be on the list (key+pw, key+totp, key+pw+totp).

For keys, it's in theory possible to ease management with using ssh certificates and a CA - anyone know of a convenient way to manage totp secrets across multiple servers and users?

Yeah, I quoted the wrong line.

It is part of it in this case. You've just eliminated most non-targeted scanners. Your log is much more readable, and what is left will probably be dedicated attackers.

This might help in forensic investigation afterwards. Less crap to wade through.

Security through obscurity is definitely not the solution, but I'll tell you that I've been running SSH on a non-WKS port since the mid '90s as a standard practice, and the number of attackers attempting to connect there seems to have been 0, even during the "let's try all sorts of common logins" crazy days.

Let's face it, the sort of person who moves SSH to a different port probably is also the sort to disable password logins. So the effectiveness of searching for their SSH daemon is probably pretty low.

> Security through obscurity is not the solution, though.

It's not the only solution but something that can be used to at least drastically reduce the amount of noise in logs.

> I've found that it's more efficient just to whitelist the IPs or subnets you might need and denying the rest.

That does sound more efficient but what if someone connecting doesn't have a static IP?

We use a VPN for this reason. It works pretty well and means we can lock things down to a single IP while allowing staff to still get on from home.

I never understood this. If you allow roaming VPN sessions, what is the difference from a roaming SSH session?

I'd argue they are effectively the same thing with the same auth methods available for both.

Why do you consider VPN to be a better protocol than SSH for security/authentication?

I don't consider it better, I consider it an additional barrier to attacking the production systems. Everyone authorised to access the servers (prod or non-prod) has to have an SSH key, protected by passphrase, to access the servers. This in addition to the requirement that they are either in the office or logged into the VPN means that any attacks have to come via that funnel.

Attackers who are not specifically targeting the company are unlikely to bother or to have the skills necessary. It's too much hassle and attackers scanning around for a random victim don't even see that the ports are open since they're behind a firewall.

So, it's not about this protocol vs that protocol for authentication, there certainly shouldn't be open access just because you got past the VPN auth, it's about layering your security to shrink the surface area and make it simpler to identify the entry points.

Alright, so you use VPN as your bastion server - effectively.

I guess I never considered the need to SSH directly to machines. I completely agree security is defense in depth through layers.

I brought up the question since I was just at a client site that required VPN to ssh to their single bastion (jumpbox) ssh host, which then you used to further ssh to production machines. The VPN layer in that architecture seems rather silly - just lock down your bastion server (SSH) in the same way!

> Alright, so you use VPN as your bastion server - effectively.

No, I apologise if I wasn't clear: the bastion server is accessible only after you are either within the office or logged in via the VPN. It's an additional hoop to jump through, not a replacement. The point being, the bastion server is locked down in addition to the requirement to be logged into the VPN or be physically located at the office. IP locking is not a replacement, it's a sensible addition. I think perhaps this is why you think it's silly; you're misunderstanding that it's not a different layer, it's an additional one.

> what if someone connecting doesn't have a static IP?

Whitelist your ISP's subnet then, at the very least - there's a much lower probability of an attacker coming from just your ISP - and, of course, use other measures too; I didn't mean that to be the only solution for the problem.
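As an iptables-restore fragment (203.0.113.0/24 standing in for "your ISP's subnet"):

```
*filter
# allow SSH only from the whitelisted block, drop everyone else
-A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT
-A INPUT -p tcp --dport 22 -j DROP
COMMIT
```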

Yep, big advocate of that.


I'm withdrawing the question per the accepted answer below.


original comment:

>it's just ridiculous and important to use non password authentication with ssh

Does this follow from the assumption (that you did not state) "since I won't bother to create a high-entropy password"?

Granted, it's a fairly safe assumption if you're not trying too hard, but SSH supports passwords of arbitrary length, so I'm curious if there's any other reason besides not taking the time to create a high-entropy password.


Accepted answer below (esp "Because your users cannot be trusted. I run servers which coworkers, all developers, get access to.")

Because your users cannot be trusted. I run servers which coworkers, all developers, get access to. It's either ssh keys or 2-factor authentication, because most don't give a fuck about security and will insist on using garbage passwords if not prevented from doing so (and then they will complain), even though the use of a password manager is mandatory.

> ... will insist using garbage passwords if not prevented from doing so ...

is how I prevent co-workers from doing so (on RHEL and derivatives).

Good to know. We do our user provisioning through an internal portal which enforces everything. The ssh servers also get configured to disable password login, so the only thing passwords are used for is sudo. All web stuff is behind an SSO gateway with password+2FA.

Perfect answer actually. Accepted and I've updated my original comment.

One thing I was surprised to learn was that whatever you type as your password is actually transmitted to the server for validation! So if you typed the password for the wrong server in... then you've just leaked that other password to the server you were trying to connect to! Uh oh!

if you use keys, this doesn't happen.

Private keys never leave the place they are born. Passwords not so much.

Don't forget about ipv6 if you have that enabled.

Only exception I can think of is SSHing into things where you’re not allowed to plug stuff in or put files on, like a public terminal of some kind. Password plus a second factor would be more usable there.

I did this with fail2ban. Initially just blocking the IP, and then sweeping back through occasionally and blocking the whole subnet when I had a few offenders in the subnet. There was no negative impact for me.

+1 fail2ban (properly configured, of course)

DoS against a range provided by an ISP: assuming you could spoof the source over and over, you could get valid ranges auto-blocked for valid services by DoSing a range of IPs from any given ISP.

Reminds me of this Fishing for Hackers post https://sysdig.com/blog/fishing-for-hackers/

Great post. Honeypots behind the firewall are also a great way to pick up lateral movement and privilege escalation.

There's a ton of useful open-source tooling one can play with in the space. Here's a list:

- https://www.smokescreen.io/practical-honeypots-a-list-of-ope...

Most of the projects are also small enough that you can easily contribute back to them especially in terms of reducing the ability to fingerprint the honeypots.

Somewhat timely, The Grugq just wrote up a nice article on Counterintelligence for Cyber Defence:

- https://medium.com/@thegrugq/counterintelligence-for-cyber-d...

Question: I've had an SSH server open to the world for months on a nonstandard port. I've never gotten any attacks logged on fail2ban, to the point where I suspect fail2ban is either not logging the attacks or not operating correctly... even though it logs every login I'm aware of just fine, so it really seems like there really haven't been any attacks. I find this so unlikely. Is this normal? Does no one scan high ports ever? Are there any more plausible explanations?

Is this server running a recent version of Ubuntu and using systemd? Also, have you verified that fail2ban actually blocks you by intentionally making bad login attempts to trigger a ban? I do this with every new machine and never assume it works until I can trigger it on demand. (test using a VPN or set a very short bantime so you don't lock yourself out)

I ask because systemd may not be writing the failed login attempts to the log file fail2ban monitors, which means fail2ban may not be responding to attacks. I don't recall exactly how I resolved this, but I think I had to set "backend = systemd" in my jail.local under the [DEFAULT] section header.
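i.e., something like:

```
# /etc/fail2ban/jail.local
[DEFAULT]
backend = systemd
```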

Oh wow, thanks a ton for mentioning this. It's Ubuntu indeed but I seem to have sysvinit rather than systemd from what I can tell. I have no idea how to actually test it in a way that ensures I will trigger it though (since I don't really have other setups that I can test the same method on and ensure I'm doing it correctly). What do I have to do to trigger it? Just fail a bunch of SSH logins in a row? (I have ssh, dropbear, pam-generic, and ssh-ddos enabled.)

> What do I have to do to trigger it? Just fail a bunch of SSH logins in a row?

Yes, that's what I do. If you have SSH login via password disabled (you should), then just temporarily move your private key(s) out of your ~/.ssh directory and try logging in a few times. "ssh user@machine; ssh user@machine; ssh user@machine" will do it if you have a ban set for three failed attempts (use ";" rather than "&&", since a failed login exits non-zero and would stop an "&&" chain after the first attempt). If it's working, your target machine should no longer respond at all, and connections will time out. If it's a web server, the site should be unreachable in your web browser until the bantime passes.

Ack, I figured it out. The ban does get triggered, but it doesn't succeed because iptables doesn't work on my system -- some module isn't properly compiled for the kernel. Thanks for helping me figure this out!

No problem, and this is a good reminder to test critical systems periodically. Same goes for backups! Good luck.

Thanks, I'll give it a try :) I've gotten myself locked out before like that, but ironically from a different machine...

No, this is pretty expected. All the script kiddies are scanning port 22 and nothing else. Recently I made the mistake of changing a root password (I know, SSH root access, bad idea) to a dictionary word. In 1 minute the system was penetrated.

With a nonstandard port, bots mostly don't even bother to scan you.

Interesting, good to know, thanks for confirming. I guess security by obscurity is quite effective!

Technically, for an uninformed attacker, this is just one more level of entropy added to your password (multiply the combinations by 65535 - and then more in practice, as you have to account for the fact that this has a low probability of occurring from the viewpoint of the attacker).

It just really is a part of your password (albeit one that can be guessed easily by a more dedicated attacker, just like the username).
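As a rough back-of-the-envelope check, generously assuming the attacker must guess uniformly over all ports:

```python
import math

# Entropy added by an unknown port under the (optimistic) assumption that
# the attacker guesses uniformly among all 65535 TCP ports. In practice
# port scanners find the open port quickly, and a correct guess is
# confirmed immediately by the SSH banner, so the real gain is far smaller.
port_bits = math.log2(65535)
print(f"{port_bits:.1f} bits")  # ~16.0 bits
```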

It is not plain added entropy. You can't multiply the complexity (or add the entropy), because there is a signal when the attacker makes a partial guess: hitting the right port gets them the OpenSSH banner.

The added entropy is a bit harder to calculate, but unless your password has 16 bits of entropy¹, it will round pretty well to zero.

1 - Why TF would you run an ssh server that accepts passwords anyway? But nearly everybody I have seen who thinks changing the port is a good form of protection also sees no problem accepting them.

Right :)

That sounds like a much better solution than what I've used in the past. By simply restricting the total number of connections per IP to one, attacks on my machines dropped dramatically, though they never stopped completely.
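For anyone wanting to replicate this, iptables has a connlimit match for exactly that (rule sketched from memory -- adjust port and limit to taste):

    # Reject new SSH connections from any source IP that already has one open
    iptables -A INPUT -p tcp --syn --dport 22 -m connlimit --connlimit-above 1 --connlimit-mask 32 -j REJECT --reject-with tcp-reset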

I think it's just a matter of numbers. The IP search space is already pretty large with IPv4 on just a single port. With IPv6 it'll be more or less impossible for attackers to make any substantial progress by just searching for IP addresses to attack, let alone an IPv6 address on a non-standard port.

After reading this I'm likely going to be upgrading my technique. SSH on IPv6 only (eventually shutting down IPv4 altogether, can't wait), listening on a non-standard port, limit 1 connection per IP.

Yeah, definitely listen on a nonstandard port. I'm not sure how to limit to 1 connection per IP but it sounds rather limiting, so I wouldn't do it... I often have multiple SSH sessions from my own IP.

Give yourself some breathing room. I limit connections to four in a three minute window.

    # --set records each new connection; --update then drops sources
    # exceeding 4 attempts in a 180-second window
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 180 --hitcount 4 --rttl --name SSH --rsource -j DROP

I just run a 'screen' session to open 10 bash sessions simultaneously when I want to do anything substantial. That has never felt restrictive to me.

Can you see two screens simultaneously with screen though?

If you want this, I would suggest tmux. Tmux by default comes with windows (which behave like tabs, each holding a terminal) and panes (multiple terminals visible at once within a window), so you can split your screen vertically and horizontally however you like.

Usually, you want one full-height terminal and two about half the height of your screen.
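If you go the tmux route, splits are prefix-% (left/right) and prefix-" (top/bottom) out of the box; some people rebind them to something more mnemonic in their config, e.g.:

    # ~/.tmux.conf -- optional rebindings; the defaults are prefix-% and prefix-"
    bind | split-window -h   # split left/right
    bind - split-window -v   # split top/bottom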


But these are so much more restrictive. Right now I just open two terminal windows -- they can overlap each other, I can make it so one covers every part of the other except a tiny part whose updates I want to see, and make changes in the first one. I can kill programs in one without worrying about accidentally killing them in the other. I just don't see the appeal of forcing everything into a single terminal screen, honestly. What's the benefit?

If your network connection drops, long-running jobs keep running, with no setup.

I do most of my work on our university cluster over SSH. I can close/shutdown my laptop at any time (or have my connection drop) and as soon as I SSH back into the server all my editors, shells, running jobs, etc. are exactly how I left them.

Oh, that's not quite what I meant. The question wasn't about whether I should use {screen,tmux} to open a shell; the question was whether I should use {tmux,etc.} to manage multiple shells in the same terminal window. As opposed to having multiple instances of them open in different windows.

Mosh is also worth a look.

You want ControlMaster. man ssh_config.
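A sketch of what that looks like (the socket path and timeout here are just examples):

    # ~/.ssh/config -- reuse one TCP/auth session for subsequent ssh commands
    Host *
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        ControlPersist 10m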

I'm not sure. The need has never come up for me though.

You can also share an ssh connection between sessions.

Don't know if this is too obvious an answer, but have a look at the output of 'sudo lastb'; it quickly shows failed login attempts, which lets you verify whether attacks are actually happening and fail2ban is properly set up.

I've never heard of the command, so it's not obvious, thanks. However, I'm not sure how to interpret this. All I see is a single line:

    btmp begins Fri Sep  1 06:25:05 2017
Does this just mean nothing has happened? I'm not sure what's special about that date; the setup has certainly been running for much longer than that...

That's not a bad sign, then. Mine shows lots of attempts. lastb shows the last unsuccessful (bad) login attempts; it uses the file /var/log/btmp as input. The timestamp in "btmp begins ..." is either the creation date of the btmp file or, if the file has been rotated, that of an older version, i.e. /var/log/btmp.0.

I've been running non-standard ports for a few years and while it certainly cuts down on the non-stop brutes on 22, my servers still get smacked pretty often in 400-500 attempt blocks.

Oh, interesting. Are your servers targets of interest somehow? Or are your ports possibly still low numbers?

Definitely not interesting, but they're on port 8022, so maybe that's a common one for attack scanners. I mean, with masscan it's so fast and easy to find SSH ports, I can't imagine any non-standard port is more obscure than the next. They are hosted on a low-cost VPS provider, so maybe their ranges are common low-hanging fruit.

I wouldn't have revealed the exact port number here, but thanks :) yeah, I think your explanation sounds plausible.

A port number isn't a real secret, it just screens out the bulk of mass scans. There's no point trying to hide it from targeted attackers.

Hosting providers have well-known IP ranges, and those are scanned constantly. I wouldn't be surprised if some scanners had a rule to try 8022 and the other most common non-standard ports.

Over a decade ago, I set up a tarpit [1] (I worked at a small web hosting company) on an unused machine to catch network scans. I had the idea of using the information gleaned from that to block IP addresses at the router level, but never did get around to it.

[1] http://boston.conman.org/2006/01/07.1

I almost closed the page on mobile because I thought it's empty or broken...

It's not any less broken on desktops.

Ahhh, back when XKCD was funny...
