I believe privileged access management is the proper way to manage access to password-protected systems.
It is basically a terminal server, proxy, bastion or anything similar. You log in there with a federated identity (for example AD) and it logs you in to the target system with some shared or temporary user. Usually it also records the session and does other security/compliance-related things.
Examples are Delinea (formerly Thycotic) and CyberArk. To some extent, Apache Guacamole can be used as a PAM.
PAM does not preclude securing everything, as opposed to having some sort of 'hardened' bastion and 'soft' destinations.
Everything should be 'hardened'. This also means everything has to do proper authentication and authorisation, and skipping that step by letting some proxy do that just creates a bottleneck in security, reliability, availability, and performance.
It also doesn't really matter how it's done; a Kerberos ticket, X.509 client certificate, JWT or multiple credentials (i.e. username with a password and MFA token) are all plenty valid. Granted, a ticket, certificate or token can ship claims or attributes, which allows for directory-less access control, but that doesn't mean that having to do directory lookups is no longer feasible.
Most of the other things like F5, PA, Citrix, PowerBroker and Bomgar are just really shoddy software that you set up in an attempt to avoid bearing responsibility or knowing what you're doing (or it's clipboard/checklist-based security...), but that just bypasses fulfilling the actual need of a good IAM and PAM implementation. None of those products do it better than what is natively supported, and they are consistently more problematic (be it performance, cost or actual security).
They become a very high-value target though, and I have learned that "security software" devs are as fallible as all devs, sometimes more so.
Thycotic had a vulnerability with a symmetric recovery key a few years ago. But whether it's a comprehensive product like this or something you roll yourself, this kind of thing is frequent, so I'd rather do keys and certs like others suggest.
Check out FreeIPA (or Red Hat IdM if you like paying for things). It’s Kerberos and a few other utilities in a very easy-to-set-up package. It also supports OTP MFA.
MIT Kerberos supports preauth with OTP, or PKINIT (X.509 certificates); I don't know what Heimdal currently has. FreeIPA has been doing good work beyond that, on integrating FIDO, for instance, and can issue tickets on the basis of external identity providers. It certainly does more -- like a souped-up AD.
Isn't reasking MFA on purpose just a way to make people hate MFA? Shouldn't everything support a "Trust this device" option and then never ask again from it?
I'd rather reask credentials before elevating effective access level. Just like sudo reasks password. I don't mean reasking MFA to access corporate intranet website with a blog no one reads, I mean reasking for administrative access.
I don't see how setting up MIT or Heimdal Kerberos is complex compared with AD (which is more than Kerberos); they seem easy enough to me. SSO seems to me what you want, and Kerberos is the reasonable implementation.
> privileged access management is the proper way to manage access to password-protected systems
Managing access to privileged systems at scale is hard despite these tools. This isn’t a knock on the tools; it’s a people/risk problem. E.g. staying on top of those recorded sessions becomes increasingly difficult over time.
The sustainable way is to work with business and compliance and remove the need for privileged access. It’s doable in many scenarios and leaves a very small “rump” of things that do need privileged access, with all the attendant overhead.
In general, challenging teams to do without privileged access and making it a “last resort” thing is a great idea, depending upon your industry / risk profile.
I've been moving away from this model towards user-associated VPNs or (inverse) captive portals.
I used PowerBroker and CyberArk for a long time, and while they're good at their stated purpose, the integration with more flexible and modern auth systems has had a lot of friction.
The particular regulatory area I work in is also just a non-starter for federated AAA from outside the regulated systems which colors my opinion though.
Combined with command restrictions in openssh and sudo etc you end up with several wholly disjoint attack surfaces, decent logging, and granular user restrictions.
The terminology varies by vendor, but essentially there are authentication portals that users log into and receive auth tickets from. These are forwarded to network gateways, usually encrypted in a VPN tunnel, that allow traffic based on user RBAC, sometimes region or time, etc.
Captive portals are web auth pages for use cases the more structured method doesn't work for. They were envisioned as making you sign in to hotel wifi and such, but they work in the other direction as well, by forcing a web user login before allowing traffic from a host for some period of time.
> First, some vendors make it difficult to associate an SSH key with a user. Then, many vendors do not support certificate-based authentication, making it difficult to scale. Finally, interactions between public-key authentication and finer-grained authorization methods like TACACS+ and Radius are still uncharted territory
Keys (with/without certs) are the best route, but not always possible for every situation.
Honest question, unless it's mandated by your employer, or you don't personally care, why would you ever choose to use a service that doesn't offer that?
There are not many network vendors. Check the link in the first footnote for an example of how Cisco, the leader in the field, makes it difficult to deploy SSH keys. This is getting better. For example, Juniper (another network vendor) now supports SSH certificates.
I have no idea what's going on in the footnote, but deploying SSH keys on Cisco equipment is like 3 commands (conf t, user x, ssh something something) to deploy public keys, not hard at all.
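From memory it's roughly the following (a sketch only -- exact syntax differs between IOS versions, and the username and key data are placeholders):

conf t
ip ssh pubkey-chain
  username netadmin
    key-string
      AAAAB3NzaC1yc2EAAAADAQAB... (paste the public key, possibly split across lines)
    exit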
It's been a few years, but this requires manually deploying keys and adding/removing users on all your devices. Most use TACACS+ and/or Radius to centrally manage users, which don't support keys in that way (or at least didn't the last time I worked with them.)
Another possibility would be to use CA certificates for authentication and only TACACS+ for authorization and accounting. Juniper now supports CA certificates. Cisco may in 10 years.
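The OpenSSH side of that is straightforward; a rough sketch (key names, principal and validity are made up):

# one-time: create the CA key used to sign operator keys
ssh-keygen -t ed25519 -f netops_ca -C "network device CA"
# sign an operator's public key: identity "alice", principal "netadmin", valid for 1 day
ssh-keygen -s netops_ca -I alice -n netadmin -V +1d alice_key.pub
# this writes alice_key-cert.pub, which the ssh client presents alongside alice_key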
Public key authentication is actually a Must Implement for SSHv2. Since SSHv1 is long obsolete, any gear that doesn't have pubkey doesn't actually have a de jure SSH implementation.
That doesn't mean it's always easy to install and manage keys. For example, the author of the passh tool recommended by this post somehow managed to come away with the impression that OpenWRT's ssh server only supports password authentication.
Another example: Ubiquiti gateway consoles like the UDM-Pro. You can install an SSH key but these are erased on reboot. So after every reboot I have a script that uses the SSH user password to re-install an SSH key but this can’t be relied upon and I haven’t found a way to make an SSH key persist.
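A minimal sketch of what such a re-install step can look like (hostname, paths and the password file are made up, and yes, it keeps a password on disk):

#!/bin/sh
# push the public key back after a reboot wipes authorized_keys
sshpass -f /root/.udm-password ssh-copy-id -i ~/.ssh/id_ed25519.pub root@udm-pro.lan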
One good example is bringing up equipment that comes out-the-box with a default password. This is common on BMCs for example, and you have to initially provision things somehow.
Bad (ancient) design. Can you honestly say you look at that little image it generates and verify it is correct? Or actually read the acceptance of the key? No one reads that stuff or checks it, they just press yes to get to what they want to do.
Recently I tried to use an sftp script created with expect that ran fine on the command line, but the same script failed to run under cron. I made sure that all environment variables were properly set, but sftp didn't ask me for a password. I think it might have been an issue with the absence of a tty
sshpass didn't work, ended up rewriting the whole thing in Paramiko ... only to find out it doesn't respect the http_proxy environment variable.
Working with less-than-stellar tools --- ahem ROS --- has taught me how to placate commands that assume interactivity and/or a tty, by wrapping the offending command in a "fake" `tty`.
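Something along these lines, assuming util-linux `script` (the command itself is a placeholder):

# run the command under a pseudo-terminal; -q quiet, -e propagate exit code, -f flush output, -c command
script -qefc "some_interactive_command --and-flags" /dev/null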
I've actually tried script and expect, but they didn't work.
I had enabled the debug option on expect and I couldn't see the password prompt when the program ran under cron (I was redirecting the script output to a log file). It did appear when running on the prompt though.
I couldn't figure out how the sftp program was determining that it was running under cron. I suspect that it was inspecting whether stdin was connected to a terminal or not, but I gave up around 4 am.
The other month I had to wrap a commercial server program with 'screen'. 'script' wasn't enough.
If you ran it in the background ("./program &") and then exited ssh, it would also exit. It wouldn't run with cron until:
export TERM=vt100   # the program refuses to start without a terminal type set
script -c "screen /foo/bar/program" /dev/null   # script supplies a pty, screen keeps the program running
All the program did on the console was print to stdout. How do you even program shit like this!?
Avoid sftp and use rsync instead (much better support for more types of metadata, more robust error handling).
Avoid passwords and use keys instead (easier to distribute, easier to generate, lets you lock them down to a single command).
The above avoids much unnecessary thinking.
If you really have to, there's always sshpass. And ssh -t to allocate a tty even if you are running without one. But this is seldom really necessary; first try harder to do it the easy way.
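For the "lock them down to a single command" part, a restricted authorized_keys entry looks roughly like this (script path and key are placeholders):

# force a single command and disable pty/forwarding for this key
command="/usr/local/bin/nightly-backup.sh",no-pty,no-agent-forwarding,no-port-forwarding ssh-ed25519 AAAAC3NzaC1... backup-key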
I should have explored rsync better. I asked chatgpt about some use cases and it made it seem like a bad fit, because I needed to also delete some files on the destination machine. My prompt fu was probably a bit bad at the time.
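In case it helps others: deleting on the destination is exactly what rsync's --delete is for. A sketch with made-up paths:

# mirror the local directory to the remote one, removing remote files that no longer exist locally
rsync -az --delete ./reports/ user@remote.example.com:/srv/reports/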
> Avoid passwords and use keys instead.
The vendor only provides password authentication in this case
> If you really have to, there's always sshpass.
I've tried it, but sftp, when running under a cron script, detects that it is not running in interactive mode and does not issue the password prompt. The problem might have been caused by the TERM environment variable not being set, as another reader suggested.
> And ssh -t to allocate a tty even if you are running without one. But this is seldom really necessary
I've used -o RequestTTY=force, but it also didn't work. Granted, it was close to 4 am and I might have missed a key aspect.
> first try harder to do it the easy way.
Paramiko ended up being easier once I understood how to use the proxy command to interact with the company's http proxy.
Read the man page instead of spending time on text generating tools. There is a batch mode in sftp which you should use. And sshpass will absolutely solve the missing tty in sftp. (Missing TERM is only a problem when you have a real tty, which you haven't.)
Then again, don't do this. Use rsync. It is more robust and will avoid other problems in the future. And don't accept that a vendor only supports password-interactive authentication, it is unlikely to be the case, nobody implements their own ssh and every standard implementation of ssh accepts more ways of authentication.
Paramiko is a perfectly workable solution, but it's way more complex and requires more maintenance for your eventual successor. Please don't be that guy.
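Concretely, the combination being suggested is roughly this (password file and batch file are placeholders):

# -b runs sftp from a batch file with no prompting; sshpass feeds the password from a file
sshpass -f /path/to/pwfile sftp -b upload.batch user@host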
Point taken about rsync, but you are assuming incorrectly that I haven't read it and that I haven't tried batch mode, but it still failed to run under cron inside an expect script.
It's not my orgunit that manages the contract with the vendor. I will absolutely use paramiko instead of trying to explain a technical problem to two middle managers in another orgunit so they can take it up with a non-technical person on the vendor side whenever they feel like it, when my deadline is the end of the year.
Same. As soon as I read the title: "Surely `sshpass` still works in this instance? Yup."
Even if avoiding good practices with PKI was defensible (and it definitely is not), further avoiding `sshpass` in favor of this more contorted trickery is (imho) probably the wrong choice.
On the other hand, text-based line-oriented APIs are a force multiplier that lets one solo dev/sysadmin be capable of administering (troubleshooting and building out and keeping alive) far more infrastructure than can be reasonably expected. And it's not sysvinit you should critique, but runit (fast and bulletproof - two things the other modern alternative doesn't really do).
> Ignoring failures because they're moderately unlikely is the hallmark of a bad developer.
I dunno I think something like JSON would still allow solo dev/sysadmins to get a lot done without the risk and effort of having to hand-roll separate parsers for every API. Then you could also offer something like Fuchsia's FIDL for programs to interface with - it allows generating the interface code fully typed in whatever language you want. No need to hand-roll a parser at all!
> runit
First I've heard of that. Looks interesting, but it doesn't seem like it has nearly enough features to run a modern desktop system? It just starts daemons and keeps them running as far as I can see.
I'm not familiar with Fuchsia's FIDL system, but it looks intriguing. Though it's still pretty hard to beat using pipes and cut to ingest a line-oriented stream of text. If Fuchsia had beaten out UNIX in some parallel universe, it does look like we'd all be happier. Ha.
Runit is pretty good - it's the default in the Void Linux OS that I use on my personal machines. Definitely good enough for a modern desktop system, though it's not aimed at the same crowd as Ubuntu and Fedora.
> Though it's still pretty hard to beat using pipes and cut to ingest a line-oriented stream of text.
Well, a) that's fine for interactive use but awful for unattended use because it's so likely to go wrong, and b) it is actually pretty easy to beat that - JSON and jq is much easier, nicer and more reliable. Or Nushell/Powershell structured pipes. Or an autogenerated properly typed Python interface. Take a look at /proc/self/status before you tell me you'd rather use some awk/cut monstrosity than `jq .ppid`.
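To make that concrete (the second variant is hypothetical, since /proc is plain text today):

# today: scrape the parent pid out of a text blob
awk '/^PPid:/ {print $2}' /proc/self/status
# in a structured-output world it would be something like: status | jq .ppid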
> If Fuchsia had beaten out UNIX in some parallel universe, it does look like we'd all be happier.
I'd definitely be happier - I wouldn't spend so much time debugging why Linux stuff breaks!
1) How are shell scripts janky? Could you elaborate on that? Obviously if someone doesn't know shell very well they will write suboptimal scripts. Personally I use runit init, which uses shell scripts for services and it works well.
2) No idea where you got that from, I never had a problem with this (as a full-time Linux user)
3) Mixed opinions on this one. There's a ton of various info in /proc that could theoretically be exposed via different syscalls, or maybe a single syscall? But having a text-based API in this case isn't a big issue really.
1. Sounds like you have written very few shell scripts if you don't know the issues and think you can just "don't make mistakes" (a classic fallacy). You can Google it and there are a ton of articles. Here's the first one I found which is decent but doesn't even mention some big issues like quoting: https://pythonspeed.com/articles/shell-scripts/
2. Try putting a space in your home directory and let me know how that goes...
3. Most programs don't resort to reading /proc or /sys because it is such a pain! I bet there's a ton of undiscovered vulnerabilities in programs that do.
1. The real fallacy here is assuming "it requires experience so it's impossible to do". Everything described in that article is trivial and anyone with experience in writing shell scripts knows about these things. People just instinctively go "this doesn't do what I expect therefore it's bad" instead of trying to understand the reasoning behind that. tl;dr Classic skill issue
2. Another fallacy. Obviously there will be badly written programs that don't handle paths properly. And I'm sure that if I put a space there it will screw things up. But it doesn't mean that most programs do that?
3. It's not a pain. It's absolutely trivial to do, and people whose code is vulnerable in this case are most likely just bad at programming and shouldn't be writing at such a low level as to cause vulnerabilities anyway. These are the type of people referred to as "developers" instead of "programmers". (Source: I personally parsed /proc entries without third party libraries, can't say it was hard)
> anyone with experience in writing shell scripts knows about these things
Who wants to have to be an expert in shell scripts to have to write them? In any case you're still wrong. Experts aren't immune to footguns. Post a complex shell script you've written. Let's see.
> Obviously there will be badly written programs that don't handle paths properly.
Like GNU Make?
> It's absolutely trivial to do
Trivial to do so that it works for you. Absolutely not trivial to do so that it always works.
The types of people who know the difference between those are referred to as "senior" instead of "junior". (Source: I personally parsed /proc entries too without third party libraries and there were plenty of footguns that a junior developer would skip over.)
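One example of that kind of footgun, documented in proc(5) and still routinely missed: the comm field in /proc/<pid>/stat may itself contain spaces and parentheses, so naive whitespace splitting breaks. A sketch ($pid is some process of interest):

# naive: field 3 is the state -- until a process is named something like "my (cool) app"
cut -d' ' -f3 /proc/$pid/stat
# safer: strip everything through the last ')' before splitting the remaining fields
sed 's/^.*) //' /proc/$pid/stat | cut -d' ' -f1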
> Who wants to have to be an expert in shell scripts to have to write them?
If you aren't good with shell scripts, why bother? Go use Python or something else entirely. And it's not like you would use shell scripts to write all your programs; they aren't suitable for anything more complex than stitching a bunch of programs together anyway, yet people still try. And then those people complain that shell scripts are full of "footguns" when they are misusing them.
> Trivial to do so that it works for you. Absolutely not trivial to do so that it always works.
No idea what you are on about, literally every single detail you need to correctly parse /proc entries is in proc manpage. And yes, if you wrote your parser correctly it will still work between kernel versions, because /proc is considered a stable API between the kernel and userspace (same as syscalls). There are no "footguns" and no mysteries to it. It's a solved problem. Provide a concrete example of a footgun if there is one, and by "footgun" I mean something that isn't clearly stated in the manual that everyone dealing with /proc reads (right?)
You have a fundamental misunderstanding of footguns, and humans. Explaining them in the manual doesn't mean they no longer exist. People often don't read the manual. Nor should they have to.
> If you aren't good with shell scripts, why bother? Go use Python or something else entirely
I agree! And given that essentially nobody is "good with shell scripts" we can simplify that advice to "don't write shell scripts".
> People often don't read the manual. Nor should they have to.
That's some kind of logic right there. If people don't read the manual, they have every right to expect that things will break. It's not surprising or shocking, it's just user error.
> we can simplify that advice to "don't write shell scripts".
No, we cannot simplify it like that. Shell scripts are a brilliant tool for a certain kind of problem, and they work well when used for that class of problem. When used for anything else, they work poorly. Same applies for literally any other tool. It's like saying "essentially nobody is good with a CNC machine so don't use it"
What bit me recently was trying to set up non-interactive use of password-protected ssh keys. (The use case is wrapping the ssh functionality inside another program.) Turns out, if you want to script the use of an ssh key that requires a decryption password, you can’t do it without ssh-agent (which isn’t really the best solution for a multi-user program).
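The agent-based workaround looks roughly like this (OpenSSH 8.4+ for SSH_ASKPASS_REQUIRE; the passphrase handling here is just a placeholder), which is exactly the dependency I'd rather avoid:

# sketch: hand the key passphrase to ssh-add via an askpass helper instead of a tty
eval "$(ssh-agent -s)"
printf '#!/bin/sh\necho "$KEY_PASSPHRASE"\n' > /tmp/askpass && chmod +x /tmp/askpass
KEY_PASSPHRASE='correct-horse' SSH_ASKPASS=/tmp/askpass SSH_ASKPASS_REQUIRE=force ssh-add ~/.ssh/id_ed25519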
Thanks for the link. It's ironic that in the name of security, that solution is probably one of the best available. SSH is so protected against footguns that legitimate use cases are forced to use demonstrably worse security practices, just because some people might shoot themselves in the foot. I'm stuck with either that option, expect, or a total misuse of ssh-agent.
Depending on your use case it might be better to just store the key unencrypted. There’s not really much point encrypting it if you’re storing the passphrase on disk alongside the key anyway.
Right (what's the threat model)? The possibilities of restricted passphrase-less keys are under-appreciated for non-interactive use, or even interactive use. I'd rather mint an ephemeral key for an endpoint I control than type credentials or, worse, forward the agent, if I have to call out of an untrusted system (like an HPC login node).
I mean, the use case is I want my GUI wrapper to interactively prompt the user for the decryption password. It’s not getting saved to disk; I just want ssh capabilities (including password protected ssh keys) inside an interactive desktop app.
I can't tell what that involves but, for instance, the two GUI things I typically use with SSH are Emacs (openssh) and x2go (libssh), and they don't do that. Surely you want the agent anyway.
I’ve worked at more than one place where you SSH into a Linux host (often just for that datacenter) using certificate-based authentication, only to be printed a JIT (just in time) password for TACACS-based usage in that datacenter, and which is only valid for a few minutes.
Workarounds are many for network devices it seems!
I was waiting for the article to mention why the author chose not to employ that option. Though the author mentions in passing that one solution is brittle because it requires parsing output, I don't see why that's a problem. It's exactly what expect was designed to do.
You may wish to edit your comment to clarify that the author of https://github.com/clarkwang/passh claims sshpass is broken by design, not the author of the linked article.
Yeah, but that's my question, why is all that stuff about TTYs bad? The examples basically say:
bad:
bash-4.4# tty
/dev/pts/18 // the bash's stdin is also connected to pts/18
good:
bash-4.4# tty
/dev/pts/18 // the bash's stdin is connected to the new pts/36
...and stuff about controlling terminals and missing job control, but why are these things bad?
And yes, if I use either sshpass or passh, the password will have to be "on my computer" (i.e. in a script or text file), that's the whole point of it: accessing devices that don't do public-key authentication non-interactively
I’ve always been told that ssh is not supposed to work non-interactively. Which is the whole reason for sshpass: to work non-interactively. I.e. broken by design.
I can argue that SSH password auth only makes sense as an interactive affair; for non-interactive auth cases, there are public keys, certificates, smart cards, etc.
I totally agree, but sometimes you have no other choice (e.g. devices that only offer password auth), and in that case it is claimed that sshpass is bad ("broken by design" according to the passh author) and passh is good. And that's what I'm confused about.
Wrapping SSH requires handling a lot of different exceptions. If you want to avoid bugs and errors, find someone else's library for handling SSH that deals with things like host key prompts, changed fingerprints, filesystem permissions, connection errors and reconnects. If you're running commands, some may need things like proper PTY handling.
Password authentication for SSH is often paired with OTP (by people who think that it's somehow more secure to force me to store two lots of credentials for convenience or non-interactive use than keys, and won't do Kerberos or certificate authN). There's a version of sshpass supporting that: https://github.com/dora38/sshpass
What about: this is now somewhat builtin into SSH, so you don't need an additional tool (or more accurately, this additional tool could be far simpler and more robust).
Ok. That is helpful, but ssh server or client? No way I would bother upgrading the server on all those boxes just to get to a point that is already working with sshpass.
Client. OpenSSH 8.4 has SSH_ASKPASS_REQUIRE that allows one to always invoke SSH_ASKPASS, notably when run from a terminal (previously, it was only invoked when there was no terminal to prompt the password).
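A sketch of what that looks like (helper path and host are made up):

# OpenSSH 8.4+: always use the askpass helper for the password prompt, even with a tty present
printf '#!/bin/sh\ncat /etc/device-password\n' > /usr/local/bin/pw-helper
chmod +x /usr/local/bin/pw-helper
SSH_ASKPASS=/usr/local/bin/pw-helper SSH_ASKPASS_REQUIRE=force ssh admin@switch1.example.net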
I kinda like the way ansible does it. There is a concept of a vault. You put all the passwords in that file and they are all encrypted. You use one password when running the command or playbook and all of the secrets are decrypted as needed.
I don't know if that is efficient for 30K machines though.
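For reference, the workflow is roughly this (file names made up):

# encrypt secrets into a vault file that lives in the repo like any other vars file
ansible-vault create group_vars/all/vault.yml
# at run time, one vault password decrypts them as needed
ansible-playbook site.yml --ask-vault-pass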
It isn't. I ended up building a small golang binary that ran as root, and I could hit it with http calls to execute whatever I wanted. Built a message queue that would work through all the machines for eventual consistency. Worked great.
That's the whole idea of using a system based on declarative state. As soon as the system is back up, the agent can resolve state again. You also keep a central copy of the state of every agent.
You can absolutely do this by writing your own agent (or by writing a family of bash scripts, but they tend to grow pretty complex over time); ansible is just a framework to write that in a standardized way. It will also handle, out of the box, a number of common system states such as running services and sysctl triggers.
There are a number of similar systems such as puppet or salt, which are all variations of the same basic idea. 30k hosts are a lot, and will need sizing the system appropriately, but it's not an unusual configuration by any means.
That was the benefit of my system over what you are talking about, there was no dependency on a global state or centralized control surface. Each worker was autonomous and self contained and had enough intelligence to bring itself to the desired state on its own. All you had to do was one line curl|bash install my service and it would take care of the rest without any other external dependencies. No worries about having to have ansible try to connect over and over again until things were working. It would just magically fix itself.
Again, many ways to skin the cat, but at the end of the day, this solution really worked extremely well. I would do it again in a heartbeat.