Tosh: Changing your SSH server's listen address every 30 seconds based on TOTP (github.com/mikroskeem)
217 points by Avamander on May 22, 2021 | 197 comments



There should be no need to do this if you have a properly configured public/private key auth setup and disable password based login. And of course keep up to date on openssh patches and security advisories. I worry that something like this will provide a false sense of security for people who might ignore other more common-sense, fundamental precautions first.

Before doing something like this I would worry a lot more about client endpoint security (exactly to what level do you fully trust all the people and workstations/laptops that are authorized to ssh to this thing?), as an overall more likely threat.

There are also lots of less esoteric ways to not have a system listen on any publicly accessible IP address whatsoever. If it's really something critical you should be looking at a combination of making it purely an intranet service, listening on an IP in an internal network block that isn't accessible from global routing tables at all. Or having it completely firewalled off from the world, and only accessible once you've authenticated yourself to your VPN. Or only reachable once you first authenticate (public/private keys, two factor crypto key auth, etc) to a bastion host, and then reach the system from the bastion.
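
For the bastion variant, OpenSSH's ProxyJump makes that nearly transparent on the client side (a sketch; the host names and address are placeholders):

  # ~/.ssh/config
  Host internal-host
      HostName 10.0.0.5
      ProxyJump bastion.example.com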


You're against this because it gives a false sense of security and might make people relax other measures, but then you suggest going with an intranet, which is widely known to create a culture of "if it's in the intranet it's safe", which is very detrimental to security as well - I thought that was curious.


I am 100% in agreement with you on that point. The most common and risky thing about building an intranet type environment is that it can breed complacency and a false sense of security. What is needed is a belt and suspenders type approach: harden the daemons and security on the individual servers and things that are within the intranet, and also put security measures in place designed to only allow authorized endpoint clients into the intranet. Essentially one needs to treat the individual servers and things that are in the private IP space as if they were still facing the public internet, even if they are not.

What you absolutely never want to do is create an environment that is metaphorically like an uncooked egg: once you get through the outer shell, everything inside is soft and squishy.


There's a principle in security called "defense in depth". Your servers shouldn't be SSHable from the public Internet, but even if that's bypassed somehow, there should still be other layers of security. Each layer of protection adds security.


There's also a principle in security that's called simplicity.

"Defense in depth" is often quoted when people want to add further complexity to a system. There are cases where adding a security mechanism that adds complexity has a benefit that is so large that it's justified (e.g. adding TLS or ASLR). But it always needs to be balanced, because complexity adds attack surface.

The system linked here seems like it's adding a whole lot of complexity and only has a very weak case to be made for what it's good for.


Not making your servers SSHable from the public Internet is absolutely worth it though, and is simpler than exposing them (which requires setting up firewall/NAT routes).

To be clear, we haven't been talking about the OP port-shuffling scheme for many posts now in this subthread. We're talking about not having your servers be externally SSHable, period.


That’s probably why it is described as a toy.


Is there any reference material about the culture of "if it's in the intranet it's safe"? I have had this problem with some enterprise clients, but I would like to have reference material that I can use as an authoritative source.

An obvious thing that springs to mind is that the default campus network design found in all the standard Cisco design guides led to basically everything being vulnerable when the last Exchange exploit hit or the last SolarWinds issue occurred. But I would like to have some sources so that I can make a better case to senior management.


90s security was about protecting perimeters and network boundaries. That kind of approach, network segregation/firewalls to keep your data secure, leads to the idea that you are magically protected across impenetrable network boundaries. Which leads people to think insecure protocols are OK on the LAN, or patching policy can be slower etc. These days you would treat the LAN as untrusted and start from there. Assume already compromised. And focus on people, processes, technology, and data. Where is the corporate network boundary these days anyway in the COVID/WFH era? People's homes with all their insecure equipment? Of course you still would have your network segmentation. But as part of defence-in-depth. You just assume it's ineffective or will be circumvented, which it often trivially is: phishing, social engineering etc.


I know all of these. I also don't need a reference to Zero Trust or beyondcorp. What I'm asking for is specifically authority that can be quoted in an enterprise context to make a case for these issues.

I would however also like to hear cases against Zero Trust and Beyondcorp. The most obvious issue I see with the old approach is that oftentimes engineers in those environments are not able to work, and when security punches holes into the old system, the whole thing becomes way more insecure than they're actually aware of.


If you just need something with the name of a low-letter-count-agency and a ".gov" in the URL, you might take a look at https://csrc.nist.gov/publications/detail/sp/800-207/final


Microsoft and Google for example have published this kind of literature.


The current reinvention of not trusting an intranet goes by "zero trust" and "beyondcorp" in the consultancy and IT management whitepaper circles, but they pile on a bunch of dynamically configured tunneling and antivirus/client pc attestation things. The previous iteration was more meme-ish, search for "there is no perimeter".


The entire concept of zero trust network.


It is, to say the least, not conventional wisdom among security engineers that simply having network segmentation is detrimental to security. The concepts you're alluding to --- "Beyond Corp" and "Zero Trust" --- are subtle, and heavily cargo culted.


Of course, if you're the one setting up the network it's always secure, it's only if someone else does it that you are vulnerable.


Avoiding something like this because the possibility exists that someone will do something else wrong is just silly.

Also, the amount of CPU taken by brute force ssh is not zero. There are plenty of good reasons for this. Even if this by itself isn't the best implementation, it's an example of how to make things much, much harder on attackers.


The amount of CPU taken by brute force SSH on any modern system is negligible - unless we're talking about traffic levels that would qualify as a DDoS. Maybe 0.02 points on a standard unix load scale. In any case you should have something like fail2ban or its equivalent that blackholes traffic from repeated failed attempts to authenticate, not just to your public facing ssh daemon, but lots of other things. The default debian fail2ban daemon configurations, easily toggled on or off to watch various log files, are quite sensible.
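
For reference, turning a jail on amounts to a few lines of override (a sketch; the values shown are illustrative rather than the shipped Debian defaults):

  # /etc/fail2ban/jail.local
  [sshd]
  enabled  = true
  maxretry = 5
  findtime = 10m
  bantime  = 1h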


The bandwidth ain’t growin’ on dem trees, son, and the likes of AWS are selling it to you for some pretty penny.


> Or is completely firewalled off from the world, and only accessible once you've authenticated yourself to your VPN. Or only reachable once you first authenticate (public/private keys, two factor crypto key auth, etc) to a bastion host, and then reach the system from the bastion.

There's a lot of attack surface in there. Port-knocking is supposed to be a way to reduce attack surface. It's a belt-and-suspenders approach to the reality that even fully patched openssh has exploitable bugs.

Using this tool, a MITM with an openssh 0day can just follow you in. KnockKnock [0] and tools like it do not suffer from this defect. This tool is conceptually similar to KnockKnock, using OTP instead of a monotonic counter. Using OTP opens it up to replay attacks.

https://github.com/moxie0/knockknock


> Port-knocking is supposed to be a way to reduce attack surface.

No, it's a bet that your port knocking tool has less (or better tested) attack surface than OpenSSH.

OpenSSH is pretty thoroughly tested by now, and the pre-auth parts run with very few privileges.

The specific port knocking tool linked to above seems to expose very little, but there's still some logging going on that wouldn't happen otherwise and the potential for logic bugs in the python stuff. It's not an obvious bet to take.


Tx for the insight.

Does the extra logging carry a risk over and above dos (which is mitigated by the `-m limit` stuff in the iptables rules)?


Not much of an insight perhaps, just an observation. Risks are notoriously hard to quantify.

But where there's an attack surface there is a risk. There's logging and parsing of logs going on here.

Does that translate to practical risk, in the sense that your system will get owned this way? Personally I wouldn't consider it very likely. A Linux box isn't likely to get popped via a plain open openssh, but probably not via this python log parser either. It's still not a bet I would take.

There's so much going on in a network stack that I would look for bugs there before the same in pre-auth openssh but one does not know for certain until after the fact.


> There should be no need to do this if you have a properly configured public/private key auth setup and disable password based login.

Maybe, but I find that simply moving off of default port 22 drops the number of people attacking me by at least two orders of magnitude. That's not nothing.


Yeah I suspect that up to date openssh with a config that passes ssh-audit's checks, with a fail2ban config, along with an ed25519 key unlocked by a yubikey will be entirely adequate security for SSH. Then time would be better spent securing VPNs and reducing internal trust.

Bit like having a front door that would withstand a C4 blast, but now all your windows are shattered.
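
For reference, the sshd side of that baseline is mostly a handful of directives (a sketch; ssh-audit would additionally flag specific ciphers/KEX/MACs to trim):

  # /etc/ssh/sshd_config (excerpt)
  PermitRootLogin no
  PasswordAuthentication no
  # (ChallengeResponseAuthentication on older OpenSSH releases)
  KbdInteractiveAuthentication no
  PubkeyAuthentication yes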


Fail2ban is also theater on properly configured SSH servers and has been since it was written.


I don't think anyone but the most uninformed would argue that fail2ban is an actual secure measure, it's more of a log file annoyance reducer. Obviously if you leave remote root enabled and the root password is one of a few dozen thousand common words that exist out there in public-domain password data sets, fail2ban isn't going to help much. As with the example of all the random botnet things out there that randomly try a popular list of dozens of common usernames (admin, root, webmaster, etc) with common passwords.

With things other than SSH it can also be effective in the most rudimentary first level filtering out of spam, various things that attempt to relay mail through my server get themselves first banned, and then banned for a longer time after they keep re-trying. Again with primarily the goal of having less cluttered postfix logs.


fail2ban is the same as having a high number of rounds on password hashes to slow down attackers, and it takes about 30 seconds to install+configure. it makes a lot more sense than the title here, but is only useful as security-in-depth and can't replace other good practices. a high number of rounds on a password hash is equally useless if you use "password123" or something like that.

i've also seen significant reductions in idle cpu by using it and sending offenders to the timeout bin for 24h.

thanks for calling me "most uninformed" though.


I wasn't calling you most uninformed, your statement is totally correct, I was referring to any persons who might argue that fail2ban is purely useless as security theater. What I said was that fail2ban is useful as an annoyance/log clutter reducer, but not something that's an actual security measure which would be suitable to protect a poorly-configured sshd.


Fail2ban is also useful if you do leave something misconfigured. Mistakes happen so having a little redundancy is OK.


Apply your redundancy to whatever is generating and safeguarding your SSH configurations.


What does this even mean? I'm the one who generates the ssh configuration for my computer clubs server...


I've often used fail2ban not as a security control, but a log hygiene solution. I don't need 10000s of failed login attempts in my logs, it's annoying. I have full faith it's not stopping a server compromise, but it absolutely keeps the noise level down.


While fail2ban might still be a good practice, I've found that simply using a non-standard SSH port is just as effective at keeping the logspam down.


I had inherited a server (Arch linux) which ran something like fail2ban (can't remember what it's called). It slowed down the machine tremendously, because the iptables lists became very big, and every packet started taking up too much CPU. I had to disable it (switching to whitelisting instead). Did you ever encounter something like this with fail2ban?


at a previous job i cleaned up after such a mess. they used to have fail2ban adding thousands of rules without ever deleting them automatically. I replaced it with a pam module that maintained an ipset for addresses with failed login attempts.


ipset is very efficient.
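
The mechanics, for anyone unfamiliar: one set plus one rule replaces thousands of individual chain entries (a sketch of the firewall side; whatever watches the auth failures just adds offenders to the set):

  # one hash set with per-entry expiry, one rule that consults it
  ipset create ssh_offenders hash:ip timeout 86400
  iptables -A INPUT -p tcp --dport 22 -m set --match-set ssh_offenders src -j DROP
  # the PAM module / log watcher then only needs:
  #   ipset add ssh_offenders 203.0.113.7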


> There should be no need to do this

The fact that there are dozens of similar solutions out there says otherwise. These aren't the kind of tools that people build for fun. They fill a need. Remember, perfect can be the enemy of good.


> And of course keep up to date on openssh patches and security advisories.

What about 0-day security breaches? Often fix preparation takes months before a patch is available to end users. All that time your system is vulnerable.


Your point seems valid and reasonable; however, looking back some years there was a situation where you would have been screwed with that setup: https://www.debian.org/security/2008/dsa-1571

That was the infamous security flaw where SSH keys generated on debian/ubuntu were always out of a set of 32768 keys due to lack of entropy in key generation. So if your SSH setup is compromised like this, the approach in the article would have provided an additional layer of security.


Make sure your time servers are configured correctly. My department had several Silicon Graphics workstations. Running IRIX, the machines would determine amongst themselves which had the most accurate time and vote that one the Timemaster. Any new machines added would take their time from the Timemaster.

The oldest machine was the Timemaster and 8.5 minutes off. Took me a week to figure out why my brand new workstation had bad time. Fun times.


TOTP issues related to bogus clocks are soooooo common that I've got my own public TOTP "secret" (so not really a secret) which I use to verify that my various devices running TOTP authenticators have the correct time (my phone, wife's phone, an airline/airgapped device running TOTP etc.).

It's so bad that Google's own authenticator has a "time synch" functionality or something like that in the very TOTP app (and it helps!). This speaks volumes as to how bad and how not-solved-at-all the issue of drifting/wrong clocks is.

I much prefer U2F.


TOTP systems are supposed to implement a sliding window to account for reasonable clock differences, usually within 5 minutes. Devices further off from that really do have a problem, and should not be trusted. I also find it hard to believe that so many devices are so far off, given the ubiquitous access to GPS (atomic clock) signals and NTP on networks. This has been a solved problem for a long time.


> TOTP systems are supposed to implement a sliding window to account for reasonable clock differences, usually within 5 minutes.

The default is 30 seconds, as per the RFC: https://datatracker.ietf.org/doc/html/rfc6238

(not sure that meaningfully changes what you were saying, but just fyi.)


30 seconds is the default “time step” (4.2), but here we're talking about the transmission delay window (5.2), for which the RFC recommends “at most 1 time step”, while also saying that validation should occur for both the previous and next time step window. In practical implementations, however, 5 minutes on both sides is typically used.
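
Concretely, the window check is just "accept the code for a few adjacent time steps". A minimal RFC 6238 sketch in Python (SHA-1, 30-second steps, one step of skew on either side by default; widen `window` to approximate the multi-minute tolerance mentioned above):

  import hmac, hashlib, struct, time

  def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
      # RFC 4226 dynamic truncation of HMAC-SHA1(secret, counter)
      mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
      offset = mac[-1] & 0x0F
      code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
      return str(code).zfill(digits)

  def verify_totp(secret: bytes, code: str, step: int = 30, window: int = 1) -> bool:
      # accept the current time step and `window` steps on either side
      now = int(time.time()) // step
      return any(hmac.compare_digest(hotp(secret, now + off), code)
                 for off in range(-window, window + 1))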


As long as an Android or iOS phone has internet access and isn't totally messed up, it'll pull time from NTP if the modem's time doesn't exist. I too have doubts that many devices are so far off.


Man. Time based bugs are some of the worst. I spent weeks trying to figure out what was wrong with some of my scripts running under WSL. Apparently wsl Linux kernel had a bug that could cause time to drift by minutes.


Oohh, are we doing time bugs? Sun T4 servers around early 2010s, and I forget which Solaris release this was, had a beaut of a random clock jump.

Every now and then, certain apps on a system would crash around the same time. We'd scrape through the logs and usually see that our app had cratered, along with some databases, ntpd, and naturally sshd. Logging of timestamps was iffy, obviously.

The ntpd was the obvious suspect because, despite keeping a nice low offset to its peers like 1ms or two, for months at a time, out of the blue it would confess something like "time offset is too large, I can't fix that so I'll exit!". After chasing Sun ntpd bug reports[1] for a while, we ruled it out when we saw a pattern in the undamaged logs that looked like

    09:59:58.000 ...
    09:59:59.123 ...
    09:09:01.345 ...
    09:09:02.123 ...
Yep the system clock really had jumped back almost an hour. That explained everything about the userspace going nuts including ntpd exiting as a symptom and not a culprit.

After some Sun support and some sunbugs searching [1 again] we found the T4 in that Solaris rev had a hardware RTC with separate registers for H, M, S etc and a write mutex protecting them, but no read mutex. It was possible to read the RTC while it was being updated, which happened when it was syncing the OS clock to the RTC, or something like that. Fixed in a later release.

1: RIP sunbugs database. It was such a mature relationship where Sun would let everyone see what they were working on and customers could participate or at least know about known issues. I would love to find an archive. Of course Oracle shut that off immediately so you had to open a ticket and ask.


My favorite time bug:

Someone on a team I was on spent probably weeks working on a way to process an inbound sensor data stream (which was generated by another bit of software also maintained by us). The data was akin to an odometer, increasing over time based on usage. The trouble was there appeared to be two parallel streams of the same name, which were always offset from each other and while the offset varied they were always roughly close. The algorithm this person came up with eventually sorted them into "thing_1" and "thing_2" streams, at which point it got turned over to me to display. It actually worked pretty well, for what it's worth.

I started asking how am I supposed to show this to a user to make use of, what does this even represent?... but never got an adequate answer. So I started looking at the whole chain, and what I found was the piece of software generating the data had a small bug: it used "hh" instead of "HH" in the timestamp, but also no am/pm. The timestamps were supposed to be 24h and looked like it, but 9:02AM and 9:02PM both came in as "09:02". To confirm, we checked the database and, sure enough, every bit of data was between 1:00am and 12:59pm. In the end we fixed the timestamp bug and threw out the processing code.
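
The same trap in strftime terms (the "hh"/"HH" tokens come from a different formatting language; the strftime analogue is %I, a 12-hour field that is ambiguous without %p):

  from datetime import datetime

  evening = datetime(2021, 5, 22, 21, 2)       # 9:02 PM
  print(evening.strftime("%H:%M"))             # '21:02', 24-hour, unambiguous
  print(evening.strftime("%I:%M"))             # '09:02', collides with 9:02 AM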

It's simultaneously a hilarious date bug, a face-palming colossal waste of time, and a lesson on how not to run technical teams (the manager had no technical skill, and the "team" of ~7 was very siloed and each operated more like teams of 1-2).


My most memorable bug (because of how long debugging took) was inconsistent use of local and UTC timezones. Learned to localize timedates as late as possible (or preferably never) and delocalize as early as possible.


as if it is a rite of passage to become an adult engineer, i'm now facing timezone issues at work. no real problems yet, but the utc vs local timezone has to be handled.

Could you please share what did you read on this topic?

Thank you!


I don't think I read anything specific on the topic. I'd simply suggest only using local time in the views. Hopefully that's possible in your use case. Be defensive in your design.


This is why mission critical systems like Boeing airliners require a reboot. At some point it’s just a lot more practical and safer.


That was done via Airworthiness Directive (a mandatory prod patch for aviation), not by original engineering design intent. Similar story with the Patriot missile defense battery.

* - https://www.federalregister.gov/documents/2020/03/23/2020-06...

* - https://barrgroup.com/sites/default/files/case-study-patriot...


A very annoying DNS over HTTPS/TLS circular dependency bug manifests itself if your device doesn't have an RTC, or the battery is dead, or the clock is sufficiently skewed.

Clock is fucked, so TLS certs don't verify due to validity times, so DNS is broken, so NTP cant look up domains, so the clock can't be set...


At the end of the day since TLS depends on correct time to trust certificates, I guess the "everything is fine" solution is to fetch the DoH server's TLS cert, inspect the start and end dates, set the system time to the exact middle, then helicopter over NTP a bit to make sure it came up and changed the time to something hopefully correct.
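
Eyeballing the validity window by hand looks roughly like this (a sketch; 1.1.1.1 / cloudflare-dns.com stand in for whichever DoH server is configured):

  echo | openssl s_client -connect 1.1.1.1:443 -servername cloudflare-dns.com 2>/dev/null \
      | openssl x509 -noout -startdate -enddate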

On the one hand there's not very much else you can do since you're pointing at bits of thin air and saying "there's the trust chain" in the first place, but on the other hand plaintext DNS is... not much better?

Of course that's when the existential "why even DoH in the first place" starts (with side servings of "this feels so wrong putting it on the security report")...

(...Why do I suddenly feel like disabling certificate verification is going to catch on in a big way in embedded ntpds, almost like a standard best practice... aaaaaaaaa)


The best and simplest solution would just be to require an administrator to input the current date manually in that circumstance.


My Windows 10 clock goes 2-5x faster when I hibernate and come back


There is a funny bug with Windows Vista in KVM/QEMU where the clock ticks about 1000x faster than it should. You can see the hour hand moving in the clock panel, and animations play at warp speed, and media playback is extremely screwy.


It all depends on your constraints and who needs SSH access, but if you're like me, you (1) have a number of personal VM instances on major cloud providers and (2) don't want to deal with anything but 'vanilla' Linux and sshd. No customizing kernels, no portknocking, no TCP wrappers, no fail2ban, no VPNs (not even tailscale which is very nice).

The approach that I've been using for about a decade is a script that gets your current (internet facing) IP and then uses a cloud API credential to add that IP to the cloud provider's firewall as a valid source IP to port 22. The API security and cloud firewall implementation is left to the major cloud providers (and they are very good at these things).
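
A rough sketch of such a script, assuming AWS (the security group ID is a placeholder; other providers have equivalent firewall APIs):

  #!/bin/sh
  # allow the current public IP to reach port 22 in an EC2 security group
  MYIP="$(curl -s https://checkip.amazonaws.com)"
  aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 22 --cidr "${MYIP}/32"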

I run it manually whenever I'm in a new location or in the rare cases my home IPs change. You can add the IP to the list or replace the list each time. Or clean it up after a trip with a separate CLI flag. I figured one day I would automate it to track IPs, run constantly, and trigger the change if it detected a new IP - but that day never came because it never annoyed me (ymmv).

This approach could be extended to teams, but there are more details to think through about API permissions, etc.


Can I ask why you don't trust running fail2ban on 'cloud' based VMs? The defaults for how long it will ban failed ssh attempts are quite sensible, and short enough to prevent you from accidentally locking yourself out for a very long period of time (automatic unban time to remove iptables rules).

If nothing else, it serves the useful purpose of stopping the log files from being cluttered up with various botnets' fully automated ssh username/password attempts that are out there, trying to gain access via well known factory default credentials.


It's not really about trust, it's about the hassle of adding and maintaining things (including keeping up with their changelogs and security issues).

As for log clutter, my approach comprehensively stops the logs files from being cluttered with failed attempts; that's one of the main reasons I do it.


That is a good point but I should also add that fail2ban as a daemon and its default configuration to watch log files for other popular daemons (example: a postfix smtp traffic log) has changed very, very little in the past 5-6 years. It's not very much of a concern in terms of keeping track of configuration changes in major system upgrades.


I prefer single packet auth with pyknock [1]. SSH config from client side looks like this:

  Host vm vm-0 vm-0.com
      User user
      HostName vm-0.com
      ProxyCommand sh -c "pyknock-client -s 0.0.0.0 -S \"\$(myip)\" open %h "$(pass my/pyknock/%h)" && sleep 1 && exec nc -4 %h %p"
      Port 1792
Where `myip` [2] is a small utility which reliably detects my external IP address.

[1] https://github.com/Snawoot/pyknock/

[2] https://github.com/Snawoot/myip


Really cool idea. If someone is interested further in the general concept it is called ”moving target defense.”


It's also known as "security audit evasion" and is the kind of thing you might do to hide a hostile system on a network.

You should never intentionally do it to your own systems. If you see something like this deployed assume it's either malice or laziness.


Compared to just deploying totp normally as a PAM module this is a horrible idea. Much harder to ratelimit, much cheaper for the attacker to bruteforce codes.

It’s neat for sure, but not a good defense.


Doesn't the TOTP in this case concern the listening port for sshd, so it doesn't actually touch the authentication in any way? Just switches the port in a TOTP’esque manner.


The point is that if the attacker somehow got hold of the primary login credentials (username + key/password), then they can easily bypass this scheme with a port/address scan. This can be done very quickly[1] and is hard to rate limit. Furthermore, an attacker that can eavesdrop on the user's connections can infer the OTP since it's being transmitted in the open, but if it were done through a PAM module they wouldn't be able to, because it's encrypted.

[1] https://nmap.org/book/synscan.html


Both scenarios assume a pretty heavy compromise already in place before the ssh control starts crumbling


there is no "easily bypassing this scheme with a port/address scan" when it comes to ipv6 /64 ranges.

If you could scan 1 million IP addresses a second on a /64 (which is absurd), it would take 600K years to scan a full /64.
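
(For reference, the arithmetic: 2^64 ≈ 1.8 × 10^19 addresses; at 10^6 probes per second that is ≈ 1.8 × 10^13 seconds, or roughly 585,000 years.)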


1 million IPs per second is doable with gigE if we're just sending a syn to port 22. I'm going to go out on a limb and assume your server has at least gigE.


However I don't think my server has 600000 years of runtime, or a sane firewall config is gonna allow 1 million connections a second.


Not sure why you're getting downvoted; I think an nmap-based attack against this kind of rotating setup sounds borderline ridiculous.


This does not use a full /64.

> If you could scan 1 million IP addresses a second on a /64 (which is absurd)

Not at all, I regularly scan the internet at well over 20Mpps.


This solution switches the listening address (IPv6 address), not the port.


Indeed you are correct.. Fruits of multitasking..


Stop it with this cloak and dagger BS. Just set a shared secret on it: https://github.com/google/tcpauth

(Yes, MD5 is safe for this use)

With tcp MD5 your connection is even secured against an active attacker who can sniff. They can't inject, or even RST the connection. Even if they can sniff and spoof everything.


> Yes, MD5 is safe for this use

Sure, it just begs the question: why in the world would you still try to find nails for the hammer called MD5 when (according to Wikipedia) cryptographers recommended upgrading to SHA-1 in 1996 already? This project's first commit was well, well beyond the deprecation of MD5. It's a bit safer than, but also not entirely unlike, putting a Windows 7 machine on the internet just because there are no known exploits in an up-to-date 7 system currently.


I'm not finding ways to use MD5. TCP MD5 is supported on Linux and BSDs, and TCP-AO isn't.

TCP MD5 has been used for decades to protect BGP, and exactly because it's still safe there's been no push to add TCP-AO.

And it inherently requires kernel support, because it's part of TCP, not the application.

I would have preferred TCP-AO (RFC5925), not TCP MD5 (RFC2385), but the former is not supported anywhere.

See more at https://blog.habets.se/2019/11/TCP-MD5.html

(I should have added this link in my comment, but I forgot I wrote a blog post on the topic)


Cryptographers usually recommend switching to function X for usecase Y.

Despite SHA1 today and MD5 for even longer being “insecure” for some use cases there are plenty of usecases for which they are still secure and will remain so.

The reason for a wide deprecation is that most people can’t evaluate a hash function or an encryption algorithm in context well so it’s easier to say simply don’t use X.

RSA is also crap for many things and it’s slowly but surely being deprecated but it doesn’t mean it’s completely broken for every usecase.

Just to be clear I’m not stating that MD5 is necessarily safe for the usecase the GP states it is.


Okay but that doesn't explain why it makes sense to still pick an algorithm that more and more holes are being found in when every library already ships more modern alternatives. It would be better to remove the old MD5 code so we can't accidentally have someone use it for use-cases for which it isn't secure, but if people are pedantic about "but for this case I can still use it! Let me just do that!" that will just never happen and we're stuck with MD5 indefinitely.


I’m not going to evaluate if MD5 is safe for “tcpauth” even if I was competent enough to do so.

If confidentiality isn’t a factor (since any hash function that is fast enough to brute force isn’t going to be particularly secure) and if integrity cannot be compromised through collisions, then the hash function is safe for this usecase.

Why use MD5? It’s relatively easy to implement securely, there are a lot of safe implementations, and it’s fast.

This is why CRC32 is still used today also.


No, "why use MD5" is because MD5 is the only one supported by kernels. And it has to be supported by kernels in order to allow any realistic use of a BGP daemon.

I thought this should be clear from the fact that it protects against RST packets. Nothing on an application layer can do that.

I wish I could edit that comment because while I expected people to go "oh, I didn't know TCP had that!", multiple commenters seem to have not read past "MD5" and assumed that this is pure application-level.


CRC32 is used for its error detection properties which exceed those of a cryptographic hash of the same length (and, to boot, it's cheaper, too).

There's no reason to choose MD5 over SHA-1. It's less secure and slower and there's plenty of free implementations of SHA-1.

Ideally you'd use SHA-256, because of (smaller) security concerns with SHA-1, but it is a small touch slower than MD5.


MD5 is quite a bit faster on my machine.

  % for i in md5 sha1 sha256 sha512; do echo -n "$i: ";  time ${i}sum test.bin > /dev/null ; done 
  md5: ${i}sum test.bin > /dev/null  1.37s user 0.13s system 99% cpu 1.501 total
  sha1: ${i}sum test.bin > /dev/null  1.84s user 0.12s system 99% cpu 1.952 total
  sha256: ${i}sum test.bin > /dev/null  4.43s user 0.16s system 99% cpu 4.593 total
  sha512: ${i}sum test.bin > /dev/null  2.69s user 0.12s system 99% cpu 2.810 total
Surprisingly, SHA256 is much slower than SHA512 here.


OK, I measured a dozen machines, and I found a very mixed picture using 'openssl speed'.

- Lots of modern x86 machines with SHA-NI that were typically 50-300% faster than MD5

- Older Intel machines, where SHA1 was generally slightly faster, with one or two exceptions where the reverse was true.

- ARM machines with good SIMD/NEON where the NEON implementation of SHA1 was 30-40% faster than MD5.

- Embedded ARM machines with bad SIMD/NEON where it was pretty much a tie.

- A few ARM machines, most notably Broadcom chipsets in the Raspberry Pi, where MD5 wins by a large margin.

- Embedded MIPS 24k, where MD5 won by 33%.

Then I found http://bench.cr.yp.to/results-hash.html , which bears out what I'd measured.

In any case, I don't think "speed" is any reason to select MD5, unless maybe if you're on MIPS 24k.


It's not about speed. It's about kernel support. And RFC2385 has kernel support and TCP-AO does not. See other comments.


>Surprisingly, SHA256 is much slower than SHA512 here.

SHA512 is expected to be faster than SHA256 on modern 64 bit architectures due to fewer rounds per byte.


Thanks. Shows how much I know! At least I am kind of handy with the shell...


> There's no reason to choose MD5 over SHA-1

What TCP stacks support TCP-AO? They do support TCP-MD5. That's why.


> when every library already ships more modern alternatives.

Kernel needs support. This cannot be done in user space.


If this is a joke, it is very well executed.


It's not a joke, BGP servers used to authenticate each other with this in the late 90s. This was also before NAT was really a thing.


a very small percentage of ISPs at some major IX points still want MD5 auth on BGP sessions across the fabric. Usually a moot point these days since the IX operator should have solid, reliable documentation of exactly what switch port and fiber patch panel assignment goes to which cage/suite/cabinet and ISP.

Or in the case of a PNI between two ISPs over their own cross connect, you absolutely want to have a mutual level of trust and cooperation between the BGP peers on both sides of the session.

And then other more modern methods of verifying that the IP blocks you're seeing from some other AS are legit, like verifying their RPKI signatures, IRR entries, etc.


Citation needed. The ISPs I've worked for run this pretty much everywhere.

I mean it's the only auth that exists for BGP, so why would you not want it?


Citation needed: I've maintained direct sessions over the fabric (not via route servers) between my AS and peers' ASes, with over a hundred ISPs, at some of the world's largest IXes. Out of those more than a hundred, maybe 3 used MD5. We also have a lot of PNIs direct with other carriers with POPs in the same building. Sorry I obviously can't provide a copy/paste of the junos config showing all the peers.


Uh, they very much still do.


Not a joke. I'm suspecting you don't understand that this is using a kernel-level feature, so if you think the joke is MD5, then please add rfc5925 to every OS so that I can switch to a better algorithm.

You use what actually exists. It's orders of magnitude better than portknocking BS.


Deploying a half-assed encrypted transport in front of a full-assed encrypted transport because you're afraid you might not know how to configure the full-assed encrypted transport is pretty funny, which is why I thought it might be a joke.

Port knocking, fail2ban, nonstandard SSH ports, all of that stuff is theater.


Well, sorta. TCP MD5 hasn't had preauth exploits. OpenSSH has.

But also, like I said in my blog post (https://blog.habets.se/2019/11/TCP-MD5.html) this isn't just about targeted attacks, but this also hides from things like Shodan, which connects to all your ports and records your headers.

It helps with wide-scale scanning and wide exploitation. Security isn't a yes/no, and getting out these databases without restricting by IP address isn't without actual security value.

E.g. if you do this then next time there's an OpenSSH bug, you won't be in Shodan and other more secret scanning databases to be picked off right away.

Port knocking is just plain overengineered silliness. If plaintext password to unlock another port is what you want, set up a UDP server. "Connecting to random ports" is just fooling yourself about what you're doing.

TCP MD5 least has the benefit that it prevents any kind of shenanigans happening to your connection.

fail2ban, agreed. The value fail2ban adds is keeping your logs more quiet.

nonstandard SSH ports, agreed. Especially after everyone and their dog scans all ports on the whole internet now anyway.


How long ago was the last viable standard-configuration OpenSSH pre-auth vulnerability? Was it within the last decade? If it's the Debian RNG vulnerability --- a platform vulnerability, not an OpenSSH vulnerability --- how much further back do you have to go to find the next one?


Sure, it was a while ago. But there are less severe bugs too. Fact remains though that something more complex is more likely to have a bug.

With TCP MD5 you don't even have to consider SYNfloods or SYNcookies. And because only people with the MD5 secret can even connect, it becomes an early tripwire if someone does have the password. Currently if you have an OpenSSH open to the world you should expect your logs to be spamming 24/7, which makes smarter attacks not stick out.

Frankly, given the option I would prefer to not even have port 22 advertise to the world exactly which OpenSSH and OS I use. Not because of security through obscurity, but just to make it slightly harder, and thus harder to get in without tripping any of the tripwires.

Then there's also people who add 6 digit OTP as a second factor. Those are pretty brute-forcable by default, so you can actually do online brute force of a user's password still. Just slower. (OpenSSH has a ratelimit, but I've gotten around TOTP this way). With a system wide good secret this can prevent brute forcing even in the presence of bad user passwords.

But if you've already decided that security is either yes or no, and that OpenSSH is marked "yes secure, and therefore can be open to the world forever, bravely taunting any attacker saying 'this far, no further'", then there's nothing I can say to convince you.

But also not everything on the Internet is (Open)SSH.


Oh, another aspect to this: OpenSSH may not have a bug, but maybe you need to interface using a PAM module to a radius or LDAP server. Suddenly the trust weakens.


There is a benefit from this "half-assed" encrypted transport, though-- someone in the middle who can inject packets can't disrupt your connection.


Whatever they did to get into a vantage point where they can predict sequence numbers gives them a bunch of other tools to disrupt connectivity.


Aside from DDoS, no not really. Not if every single accepted packet needs to be signed.


Perhaps, perhaps not.


Is this just spiped with MD5 instead of HMAC-SHA256?

https://www.tarsnap.com/spiped.html


No, this is part of the TCP layer, which is why it can even protect against RST packets.


No, spiped also encrypts data.


Heh guess I can cross this off my ideas list. 100% this is the way to go.


And all you have to do is trust that your client and server will always have synchronized clocks...


... like you need to when using TOTP for anything


Maybe I'm irrational, but it's one of the things that makes me real hesitant about where I deploy TOTP. Sometimes my cellphones randomly have a wildly wrong time -- a misbehaving (or malicious) cell-tower perhaps? And sometimes my computer gets the wrong time too -- e.g. booting between Windows and Linux screwing up the system timezone setting, or ntp failing to start properly, or when I busted up my CMOS. And I have to wonder, how secure is ntp from someone just spamming a system with the wrong times which can block me out?

I'd almost rather a combined thing where it's HOTP but it also rotates once per day like at midnight? Does anything do that, or does it even make sense? Is there a reasonable alternative -- challenge-response maybe?


> Sometimes my cellphones randomly have a wildly wrong time -- a misbehaving (or malicious) cell-tower perhaps?

If you experience that often, I would probably disable the setting to automatically set time from the network.

> booting between Windows and Linux screwing up the system timezone setting

That’s easily fixable with one registry change (RealTimeIsUniversal). You can also tell Linux to use the local time, but Linux will be less happy about that than Windows (Linux won’t write to the real-time clock automatically, for example).


The Linux/Windows timezone issue can be fixed with a registry setting [1]

If for some reason your time is off (e.g. after 3 failed attempts), it's easily detectable and fixable. Just browse to time.is [2], check whether your time is off, and set it manually if needed.

Because there's an increased dependency on accurate time, bad network time is now quite a rare occurrence in my experience. I haven't seen it happen in the last 3 years.

Once you point NTP to a trustworthy service (e.g. time.google.com [3] or time.cloudflare.com [4]), you won't have any issues.

The Google time server offers leap smear [5], and the Cloudflare one offers NTS (authenticated NTP).

1. https://wiki.archlinux.org/title/System_time#UTC_in_Microsof...

2. https://time.is/

3. https://developers.google.com/time

4. https://developers.cloudflare.com/time-services/nts/usage

5. https://developers.google.com/time/smear


It's one thing to lock yourself out of your application or admin interface when NTP breaks. It's another thing entirely to lock yourself out of recovering the server entirely when clock skew inevitably hits you.

If you really want 2FA for SSH, use something like Yubikeys that increment a counter and generate tokens based on that counter. And use it during the actual authentication session, not for figuring out which magic port the server will be listening on. You never have to worry about synchronized clocks, just a database tracking the highest counter value ever seen, so that previous values can't be reused.


Your server has NTP. Your client likely, too, if it has enough network to connect to the server. If it does not use NTP, it's easy to set the time within the minute or so required manually.


Hardware clocks are famously unreliable and inaccurate. NTP has failure modes that can result in servers being wildly out of sync with reality. Letting either one of those hose your ability to log in and diagnose/recover the system is a mistake.


It really depends on what you need. Yes, hardware clocks and NTP have failure modes[1], but how common are they? Hosing your bootloader or its ability to find the kernel is also a thing that happens where you definitely need some lower level access, which would also help with a mismatched server clock and TOTP. It depends on a lot of factors whether that is more or less common than NTP failure modes.

Would I employ this for a critical commercial server? No. Would I do so for my private server? Absolutely, without hesitation. If it really loses track of time so badly (hasn't happened yet to my knowledge), I can just log in through virtual console.

[1] I'm excluding drift and inaccuracy, since from my anecdotal experience they are usually not that bad.


I disagree that requiring the time to be correct precise to 30 seconds is "a mistake" in the general case, but it is a good point that this might happen in some weird edge case (where also changing the local clock +/-60s is not enough to fix it). Having a fallback that lets you in via a mechanism that does not depend on any external factors or uninterrupted power supply does seem like a good idea for important physical infrastructure. (For virtual stuff, in such a weird case you can always just modify the disk image to get ntp from a working host and boot again.)


From looking at the code, seems like it would be a pretty simple change to make the server accept several addresses, derived from the current time plus or minus a small number of 30-second intervals. That allows you to tolerate a small amount of clock skew, and if your clock is off by more than a minute or two, you should probably fix it anyway.


Just need to SSH in to fix it....


I'm not sure how you could really run a server environment of any size without that


I've been putting things behind wireguard. It seems to scale nicely and is more resilient against attack by the nature of the protocol.


Anyone done TOTP port-knocking to open an SSH listen port with just the originator white-listed for a short window?


I had a port-knocking ssh OpenBSD setup in the late 90's; it didn't move the port about though, it just got white-listed and opened. Used SKEY for the port sequence; it was easy to run even on a Nokia in Java in that era.

[EDIT ADD]> Actually it was some Symbian and then a Java implementation of the skey. Used a Psion Series 5 initially.


I have a machine exposed to the net on an "obscure" port, with fail2ban running on it.

In ... 5-8 years, I haven't had a single hit other than me mistyping a password. I understand the issues around doing this, but I'm actually shocked at how little (zero) incursion attempts it gets.

I should also note that since this is a personal machine, I also have entire countries blocked by CIDRs. I also do this for my wife's company's retail presence, and blocking the big 4 (China, Russia, Iran, NK) at a CIDR level stopped something like 98% of the brute force ssh attempts (the vast, vast majority of those from China).

I know it's not foolproof, and I may get some false positive blocks, but she doesn't have a business that needs to allow people from those countries.


Unlike port knocking (which folks seem to be mentioning a lot), this seems like it only has 1,000,000 possible locations that can rapidly be scanned.

Knocking has multiplicative growth and so many more possibilities.
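
(For scale: a 6-digit suffix gives 10^6 candidates, whereas even a 3-port knock sequence drawn from 65,536 ports gives 65,536^3 ≈ 2.8 × 10^14 combinations.)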

Perhaps you could include honeypots in the IPv6 range where you’re not bound that block the user, but this seems less reliable overall.

I’m sure this is just great fun, but perhaps not something to think of as secure so much as a fun idea — to make it secure you might want to use a system more like knocking.


Port knocking sucks (subject to replay attacks).

Look at Single Packet Authentication. Fwknop is a solid implementation.
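
For reference, the fwknop client side is a one-liner along these lines (a sketch; -R resolves your external IP, and the server then opens tcp/22 to that source for a short window):

  fwknop -A tcp/22 -R -D server.example.com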


I have taken a different approach to remote access in limited scenarios:

gofwd: A cross-platform TCP port forwarder with Duo 2FA and Geo-IP integration. Its use case is to help protect services when using a VPN is not possible. Before a connection is forwarded, the remote IP address is geographically checked against city, region (state), and/or country. Distance (in miles) can also be used. If this condition is satisfied, a Duo 2FA request can then be sent to a mobile device. The connection is only forwarded after Duo has verified the user.

https://github.com/jftuga/gofwd


I played with a similar concept years ago for more general secure transmission where each endpoint would mux traffic to/from multiple addresses from their respective /64 as an added layer of obfuscation. It included optional dummy traffic and intentionally out of sequence transmissions to complicate inspection.

It makes more sense when used broadly by the majority of traffic across a link, by multiple clients, but it could be a useful application of IPv6 in environments where an intermediate network is untrustworthy.


Would this potentially leak consecutive TOTPs to an attacker who connects to all 1,000,000 or so possible addresses at 30 second intervals (and records the one that responds)?


To elaborate a bit on what AlexCoventry already said:

No. The mechanism underpinning TOTP should guarantee that the internal state does not leak from any outputs produced. That is, of course, unless some security flaw is found, but that seems unlikely to ever happen at this point if you use a regular SHA-2-based TOTP. It basically does HMAC_SHA256(secret, time) where the time is known (also to the attacker) but the secret is shared between the two authenticating systems. If you could derive secret from time+output, the mechanism would serve a much more limited purpose. Part of the purpose of TOTP is that an attacker that observed a (number of) login(s) can't predict any future or past tokens for their own use.


If that were useful, the underlying HMAC would be broken.


Hm. Am I ready to build toshnet, a cloud of systems running tosh and a toshepherd, the masternode that stores the totp to find and manage them all? Mayhaps.


Looks like fun. :) I do have nftables set up so that connection to normal ssh port blackholes the access to the real ssh port for that source IP address for some time.

Not perfect, but good enough to deter people who'd first naively try to ssh to my server and then try to nmap it to find out if I have ssh running on some other port. :)

900 IP addresses blackholed in just over a one-day period. "Security" scanning is incessant.
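
The rule set for that trick is small; a rough nftables sketch (untested and IPv4-only here, with port 22 as the decoy and the real sshd on 2222 as an example; exact set syntax may vary by nftables version):

  table ip sshtrap {
      set banned {
          type ipv4_addr
          flags dynamic
          timeout 1d
      }
      chain input {
          type filter hook input priority 0; policy accept;
          tcp dport 22 add @banned { ip saddr } drop      # touch the decoy, get remembered
          tcp dport 2222 ip saddr @banned drop            # ...and refused on the real port
      }
  }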


Reminds me of frequency hopping.

https://en.wikipedia.org/wiki/Frequency-hopping_spread_spect...

Interesting side note: one of the people who re-invented this technique was the actress Hedy Lamarr, who was quite an accomplished inventor.


One difference is that, as Bluetooth demonstrates, frequency hopping can be extremely fast at basically no cost. If you have a broad enough spectrum that the attacker needs to cover, that actually makes it harder to eavesdrop by adding a real cost (either a better listener or more listening devices). TOTP, on the other hand, typically allows for many seconds of leeway between the two systems, since there is no direct radio communication between the two systems that they can use to synchronize clocks (in the sense of a clock signal rather than actual timekeeping devices).


Why not just implement Single Packet Authorization (SPA)? The port only opens to that specific source on a recent cryptographically signed request with a timestamp. https://www.linuxjournal.com/article/9621


Same thing with ports, for those stuck on IPv4

https://blog.benjojo.co.uk/post/ssh-port-fluxing-with-totp


Many of us have done something similar, so it's good to see something that's better than a few shell scripts. OTOH, it's written in Rust, which is a bit heavy for something so simple.


Nice! Surely if IPv6, you're using DNS, no? And not typing out the full changing IPv6 addresses? Even in your homelab? Making this concept a bit moot at best - awkward to manage if anything.


This reminds me of port knocking. Could be useful if you are really annoyed by those script kiddos, but in essence this is security by obscurity.


I don't think port knocking is security by obscurity. It is another layer of secret that must be known (or guessed) to gain access.


If frequency hopping IPv6 becomes popular, I wonder how long until we start having IPv6 exhaustion problems?


Never.

> The [ipv6] address space therefore has 2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses

https://en.wikipedia.org/wiki/IPv6


If for some crazy reason you wanted to give each byte of memory and storage in a supercomputer its own IPv6 address, you'd be able to scale up to 300 sextillion (301,936,594,947,653,725,323,264) nodes, each with 1 terabyte of RAM and 1 petabyte of storage, before IPv6 ran out of individual addresses.

That's not even counting the 65535 ports you could then use.


It’s not about how many devices you have. It’s about how many ports you have listening on each device and how frequently they hop addresses.


Globally yes. Each network has far fewer.


That’s a super cool concept, but doesn’t this fall a bit under “security by obscurity”?


The "security by obscurity" one-liner is one of my favorite examples of the sort of black-and-white thinking that is harmful to software engineering.

The truth is that playing defense is as much an exercise of technical design as it is economics.

Yes - if someone finds the SSH port, they have a window of opportunity, and you will be owned if you are not properly securing your server through the normal channels.

However, now they only have a small window of opportunity (say, 30 seconds). This does a few things:

- it takes time (money) to attack a target. without access to the OTP secret, randomly assigning ports dramatically increases the cost (time) of attacking you. throw in a tar pit and it's even worse. if you're not a high value target, the attacker moves on.

- now, any failed authentication attempt to your SSH server is a highly credible threat. repeat attempts are even more suspect - you are being targeted, and they probably have the OTP secret. effectively, you are able to resource your team more efficiently, because you can filter out noise.

security is not black and white. if you are a valuable enough target, someone will find a way in. defense in depth helps you manage your defense with limited resources.


I like this analysis, but I'm having a hard time seeing the advantage over port-knocking, which could also be randomized using OTP and would never reveal the SSH server to a port scan.

Zero window seems better than a 30 second window.

Excellent points otherwise.


> The "security by obscurity" one-liner is one of my favorite examples of the sort of black-and-white thinking that is harmful to software engineering.

My primary problem with people going for "security through obscurity" is that very often there are things which are intentionally obfuscated or obscured, in such a way that the implementer thinks that their method of hiding things will provide a significant measure of security. But then all of the other more common sense security precautions that should be implemented before the obscurity are ignored.

If I had a dollar for every industrial/embedded/M2M/IOT type thing that tries to be secure through obscurity but has other gaping holes in it, once you're familiar with the technical workings of the product...


> defense in depth helps you manage your defense with limited resources.

Indeed, but might there not be better/cheaper ways to secure SSH?

Something that doesn't involve custom configuration that needs to be maintained.. like VPN to a jump-host... Or..?

Configuring and maintaining custom hacks is not cheap.


agreed - I would not deploy this as is, but the idea is interesting and I could imagine the concept being given some UX love in a different application


In the same way that passwords, private keys, and safe combinations are security by obscurity, sure.


Except that passwords, private keys, and safe combinations cannot be guessed in 0.1 seconds the way that a port can (65k possible values and a SYN packet really isn't large). There is a line to be drawn between high-entropy secrets and using an unpredictable port number.


Where are people getting the idea that "listening address" means port number? The title of the page literally says IP address...


Ah, misunderstanding on my part. For what it's worth, it doesn't say IP address, just listening address, which I took as whatever place (IP, port tuple) it listens on; and unless one has a huge IP range (uncommon with typical setups), I can see how "people" take that to mean the port changing by default if they, like me, don't read carefully enough.


Huge ranges of IPv6 addresses are extremely common; virtually no ISP can be bothered to allocate smaller blocks than a /64. The minimum allocation recommended for a provider is a /32, which is 65k /48 networks.

You also know they don't mean port numbers because there's no such thing as a 6-digit port number.

From tfa: """Imagine your SSH server only listens on an IPv6 address, and where the last 6 digits are changing every 30 seconds"""


Yes, but that's only bad if it's your only security.


Layering has additional costs, like requiring additional client configuration and software and (in this case) only working over IPv6.

The number one step any public‐facing SSH server should take is to switch from password auth to keys only. Anyone who’s still concerned can put it behind a WireGuard VPN. Layers typically added beyond that (like changing port, etc.) don’t even register on the security scale, so to speak.

The tweet that inspired the post mentioned port knocking which has always been rather ridiculous given those alternatives.
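
For anyone who hasn't done the first part yet, it really is just a few sshd_config lines (on older OpenSSH, ChallengeResponseAuthentication is the earlier spelling of KbdInteractiveAuthentication):

    # /etc/ssh/sshd_config -- after copying a key over with ssh-copy-id
    PubkeyAuthentication yes
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    PermitRootLogin prohibit-password

Then run sshd -t to check the config and restart the daemon.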


The simple fact that ssh is over IPv6 already leaves out 99% of potential hackers aka bots.


I have some "cloud" VMs which have been around for several years and are attacked constantly, yet I haven't seen a single incoming IPv6 source to date.

I wonder if there's a "missed opportunity" for hackers there, given how many people presumably forget to configure their IPv6 firewall.


The search space is too enormous, which is why you (somebody with an SSH server) should do IPv6 and then forget this other weirdness. [You should do publickey and so on, but I mean the Tosh stuff is "weirdness"]

Suppose you can write code that tries to connect to the SSH server on one million IP addresses per hour; if one responds, you attempt an attack. You can try every address on the entire IPv4 Internet in a few months, even with a pretty naive algorithm.

But if you do this with IPv6 you won't ever finish trying addresses and indeed almost certainly won't find even one server (let alone successfully attack one) in your lifetime.

So immediately you need a more expensive attack method. Maybe you buy a supply of "passive DNS" (name -> address answers stripped of information about who asked; many big DNS providers sell this), which is not cheap and not well suited to this problem, but it gets you somewhere. You pull out IPv6 addresses from the supplied list and try to SSH connect to them. This could work, but now you have to hope that your potential victims revealed themselves to you; otherwise all the juicy SSH servers in the world are invisible.
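
To put rough numbers on that (same one-million-attempts-per-hour figure as above):

    rate = 1_000_000                       # connection attempts per hour
    ipv4 = 2 ** 32
    one_slash_64 = 2 ** 64

    print(ipv4 / rate / 24)                # ~179 days to sweep all of IPv4
    print(one_slash_64 / rate / 24 / 365)  # ~2.1 billion years for a single /64

So even one customer-sized IPv6 block is out of reach by a factor of billions, before you even consider the whole address space.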


Actually it is far better than having no security at all


It's another layer of security. There have been exploits of OpenSSH in past so this may be prudent.


The trouble with extra layers is that there's a point where they result in complexity. Which, in my experience, is more likely to be the root cause of a security problem.

I'm not saying this little demo is a disaster or anything. But for example, perhaps it requires an awareness of this scheme in an external firewall's rules, and maybe another machine pops up in the rather large IPv6 range that's now available.

At their extreme, these sorts of approaches can bring a lack of clarity about which layer is providing the actual security.


I actually looked into this a few months ago and if memory serves, the last default setup authentication bypass was in something like 2003. Since then, I think the worst thing has been user enumeration. And 2003 was a very different world in terms of how much we cared about hardening, so ssh being reliable throughout all that time is really quite something.


No, the TOTP is[0] a shared-secret bitstream (like a stream cypher), which provides new randomness per use (obscurity means it relies on the attacker not knowing how it works in the first place). This is very weak security, similar to a combination lock or PIN, and should not be used in place of proper SSH crypto, but it is actual security. (The point, IIUC, is to quickly exclude drive-by attackers, so that serious/targeted attacks are higher above the noise floor.)

0: assuming I'm not being overly charitable


Security by obscurity is when you hide implementation details to improve security. Secrets are not obscurity, randomness is not obscurity.


Except that if they know you are using this technique (e.g. from snooping traffic) then it is straightforward to bypass, either by tailgating onto a recent connection attempt (if they can snoop) or just brute forcing it (they can test the whole key space in seconds).

There must be better ways to leverage long term shared secrets, recent authentication success, etc. I'd like to see something like Signal's ratchet mechanism.


I think the time aspect makes it OK, otherwise TOTP itself should be abandoned due to the same principle. (SSH still has password)


security by obscurity and defense in depth are synonyms.


No they are not. It doesn't help to have obscurity in depth.


I don't know why people knock on "security by obscurity" in general; it's a great defense for many of the threat models an average individual falls under. A lot of credential-stealing viruses just look for specific folders or file names, for example, and some vulnerability scanners look for specific ports, etc.


I view it as a cost-benefit analysis between implementing security by obscurity versus actual security. If you have real security, then you don't need security by obscurity. If you have security by obscurity, you still need real security. So it's obvious which is a better ROI. That's not to say security by obscurity layered on top can't be useful, such as filtering out noise, but I think the point most people are trying to make is "this thing is not a solution to your security problem, it is a potentially dangerous distraction".


> If you have real security, then you don't need security by obscurity. If you have security by obscurity, you still need real security.

If "real security" was a real thing and anybody knew how to actually do that, things like the Colonial Pipeline ransom-ware attack, etc. wouldn't happen all the time. As people have been saying since David Lightman in 1983 "Hey, I don't believe that any system is totally secure."

> That's not to say security by obscurity layered on top can't be useful

I think that most people who would implement this (or similar schemes) realize exactly that, and are practicing "defense in depth". Could it be a "dangerous distraction?" Sure, in principle. But I don't see any particular reason that this would be more so than other elements of a "defense in depth" strategy.


"real security" means security that is not dependent on secrecy of implementation to remain secure. In the context of this post, it means configuring key-based access and disabling password-based access. If you do this, then the security-by-obscurity-based technique in the OP is unnecessary and redundant. Could key-based SSH access theoretically be cracked? Maybe (probably not, but let's say maybe for the sake of argument). But if so, a rotating listening address is probably no obstacle to an attacker of that caliber.


> Could key-based SSH access theoretically be cracked? Maybe (probably not, but let's say maybe for the sake of argument).

Of course it could. Unless you're really going to posit that there are no bugs in any widely deployed ssh server implementations. Doesn't seem very likely to me.

Anyway... if you're being specifically targeted by a highly advanced adversary, it probably doesn't matter what you do. I tend to assume that most of us, most of the time, are not in that position, and should employ a layered, "defense in depth" strategy. Whether or not this specific technique is something worth deploying or not is an open question to me. My position is simply that we shouldn't just dismiss it out of hand without deeper consideration.


It’s because they heard some security experts say it (about things like moving your open mail relay to port 7384 to ‘secure’ it) and it’s the only thing they know about security.


It's a good idea. Personally I prefer to use an SSH key.


If you're really looking for a second factor for SSH, why not just use a standard 2FA approach, like a yubikey? Or even just TOTP with PAM? It seems way more effective and way less complex.
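
For reference, the PAM route is two small, widely documented changes (assuming the common google-authenticator PAM module; each user runs its google-authenticator enrollment tool first):

    # /etc/pam.d/sshd  (add near the top)
    auth required pam_google_authenticator.so

    # /etc/ssh/sshd_config
    UsePAM yes
    KbdInteractiveAuthentication yes
    AuthenticationMethods publickey,keyboard-interactive

With AuthenticationMethods set like that, a client has to present a valid key and then a valid TOTP code.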


Thinking a bit about it, I suppose some advantages would be that there's less exposure by default. It's also going to be easy to audit for port 22 being hit, which an attacker might try. I could see some benefits.


Super cool!


This makes me wonder if you could take this further and make the connection protocol dance across a variety of ports as part of the initial handshake.


Isn't this idea port knocking in essence?


Oh cool, I hadn’t even heard of that.


Why bother? Why not just use SPA (single packet authorization) port knocking, with GPG keys per user and expiring tickets, which hides the port completely behind firewall rules unless you're authorized? fwknop is just one example, and it works for any and all services.
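
For anyone who hasn't seen it, the fwknop flow is roughly: fwknopd sits behind a default-drop firewall, and a single encrypted, authenticated UDP packet from the client opens the requested port for that source IP for a short window. Client usage looks something like this (I'm quoting the flags from memory, so check the man page):

    # ask for tcp/22, auto-detect my external IP, send the SPA packet to the server
    fwknop -A tcp/22 -R -D server.example.com
    ssh user@server.example.com   # firewall hole is open for ~30 seconds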

Also, there are these internal network creation systems called VPNs.

Sensitive ports should be guarded behind VPNs on private networks. And, the VPN port itself should be guarded with SPA port knocking.

Stop putting ssh on everything, on public IPs, on the actual public internet. Don't do this. Do you know how many weeks were spent cleaning up after idiots who did this with desktops contracting W32/Blaster? One "secured" Oracle database box on a public IP got an unknown trojan rootkit; Mark Russinovich himself would have been asking: what is this voodoo that they do? The "only" solution, since the box "couldn't ever be taken down," was to block everything it didn't explicitly need to function, including general outbound internet access. A lack of security, of idempotent configuration-management automation, and of restoration procedures caused these issues. It languished on for years with what was effectively an "endogenous retrovirus" that "couldn't" be removed.


It isn't really secure... Because an attacker on the network can see which port you're about to connect to, and which IP you're connecting from, and connect to the target milliseconds before you do.

I prefer to keep the word 'secure' for things that provide at least man-in-the-middle protection, which this approach doesn't.


Single Packet Authorization doesn't claim to do anything about encryption, authentication, integrity, auditing, or anything else.

Leave ambiguous terminology hair-splitting at the door and get to specifics.

It's securing the keyhole of the padlock. There's no sense moving the padlock around when it can be closed to begin with, and opened only with a special knock that encodes the time, the service, and which user is asking.

Port knocking on top of VPN, SSL, SSH, Wireguard, or whatever. You don't do this with telnet because common sense. Duh!


Somehow my Windows computer doesn't want to fetch the correct time. This would probably be a nightmare.


Win10?


systemd-timesyncd is also a lovely possibility of forking it up.
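
All of these time-based schemes assume both clocks agree to within the step, so it's worth actually checking sync on each end, e.g.:

    # Linux with systemd: look for "System clock synchronized: yes"
    timedatectl status

    # Windows: force a resync against the configured time source
    w32tm /resync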


Yup



