Ask HN: Developers, how do you keep SSH and web URLs organized at work?
So I'm sure many of you have dozens of web application URLs across multiple development, QA, and production environments, and as many URLs for the tools and SSH endpoints you use all the time.

My system for keeping all this organized is to sift through bash history and browser history and abortive attempts at keeping Excel spreadsheets or Evernote pages full of this stuff.

Does anyone have any pointers for tools they like to make this easier?

Thanks!




For SSH, using your ssh config file[0] is probably the most idiomatic way to go about it. For web applications I rely on plain old bookmarks, using bookmark folders to keep related URLs together. It's a fairly simple system, but it keeps everything right where I'm most likely to use it.

[0] https://www.digitalocean.com/community/tutorials/how-to-conf...
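
For anyone new to it, a minimal ~/.ssh/config along those lines (the hostnames, usernames and key paths below are made up):

    Host qa-web
        HostName qa-web-01.internal.example.com
        User deploy
        IdentityFile ~/.ssh/id_work

    Host prod-*
        User readonly
        IdentityFile ~/.ssh/id_prod

After that, `ssh qa-web` just works, and `scp`/`rsync` understand the same aliases.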


Second this; it's also my method.

Cygwin does a good job with the ssh config file on Windows, if anyone is looking for tool suggestions.

This wasn't in OP's question, but I have to add: if possible, put your SSH public key in the server's authorized_keys file so you get secure passwordless login, which combined with the ssh config file on your machine is fantastic. I just do "ssh $SERVER" in Cygwin, where $SERVER is my defined alias pointing to an IP, and that's it. Takes literally five seconds.
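
(For reference, the passwordless part is just copying your public key across; the alias name and key path are examples:)

    # append your public key to the server's ~/.ssh/authorized_keys
    ssh-copy-id -i ~/.ssh/id_rsa.pub myserver

    # with the alias defined in ~/.ssh/config, it's then just:
    ssh myserver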


If you have the potential for tens or more unique SSH hosts (particularly if they aren't conducive to wildcards), ASSH[0] is a rather handy utility. It allows you to break up your SSH config into multiple files and use inheritance and variables in your host configs.

[0] https://github.com/moul/advanced-ssh-config


What does this give you over the `include` directive?
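
(For context, OpenSSH 7.3 and later can already split the config with `Include`; relative paths are resolved against ~/.ssh:)

    # at the top of ~/.ssh/config
    Include config.d/*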


+1. If you are somehow stuck on Windows, MobaXterm is quite nice for achieving a similar storage of SSH logins and passwords.


Putty, Moba, etc. usually have Session dialogs to save this stuff: https://puttysm.sourceforge.io/images/psm-folders.jpg


Isn’t git bash and its ssh config file equivalent to that on *nix systems?


Yeah, the Git Bash environment, or even WSL Bash if you can get it on your machine, will give you the ability to use the OpenSSH config file.


Microsoft added a native SSH client (and server) recently. https://arstechnica.com/gadgets/2017/12/microsoft-quietly-sn...


mRemoteNG is very good for many remote sessions.


+1 for mRemoteNG (although it uses PuTTY for SSH); it supports a lot more: RDP, VNC, Telnet, raw sockets.


Yep. ~/.ssh/config for ssh stuff, ~/home.html for web links (that file is my browser home page, it also contains a few different forms for searches on various systems, such as google, DDG, our internal ticket system, etc).


I'm like you, but I use Pinboard for URLs.


For SSH, I actually wrote a tool to keep track of all of the hosts (and host groups) that I connect to on a regular basis:

https://github.com/Comcast/ssh-to

I'm responsible for a few hundred servers at this point, and the tool has scaled nicely for connecting to them, and occasional "run this one-off command across a series of servers". (But if it's more than once, you'll seriously want to use Ansible for that...)


This looks nice. I wasn't able to install it using Brew, which I'm not familiar with. When I installed Brew on Ubuntu, brew doctor gave me warnings, and it also failed to install the repository...

    paul@ubuntu:~/src/wimo$ brew tap comcast/opensource github.com:Comcast/homebrew-opensource.git
    ==> Tapping comcast/opensource
    Cloning into '/home/paul/.linuxbrew/Library/Taps/comcast/homebrew-opensource'...
    Permission denied (publickey).
    fatal: Could not read from remote repository.

    Please make sure you have the correct access rights and the repository exists.
    Error: Failure while executing: git clone github.com:Comcast/homebrew-opensource.git /home/paul/.linuxbrew/Library/Taps/comcast/homebrew-opensource --config core.autocrlf=false --depth=1

So, I just plopped the ssh-to shell script into my ~/bin folder.


Brew == homebrew, and it’s mainly for macOS


Oops, that's my bad. I goofed up on the URL. I fixed it in the README and feel free to try again with Homebrew. :-)


> This looks nice. I wasn't able to install it using Brew, which I'm not familiar with. When I installed Brew on Ubuntu, brew doctor gave me warnings, and it also failed to install the repository...

Why are you using Homebrew on Ubuntu? Homebrew is a macOS package manager.


Linuxbrew [0] is a fork of Homebrew that runs on Linux.

[0] http://linuxbrew.sh/


Thanks for mentioning this. Real nifty.


I don't usually SSH into things anymore because deployments are completely automated and logs are collected centrally. Depending on the app, the CI pipeline managing the deployment is either stored in the app repository itself or has a repo of its own. Credentials should be stored outside the repositories in a credential manager supported by the CI, or at least in encrypted files.

URLs for environments sound like application-related documentation and could be part of a README file in the app repository.


Keepass2: https://keepass.info/

You can configure custom URL handlers so that it opens a terminal and SSHes right into the server you clicked on.

With a Firefox extension, it lets you autofill credentials.

A single file to back up or sync. Always encrypted. Cross-platform. Free software. Robust. Proven. And a fugly UI that looks corporate enough for your pointy-haired boss.

The only thing missing is that I'm using a command-line OTP generator for one-time passwords, and it doesn't have KeePass support yet.


But does it have RBAC, revocation, central auditing, corporate compliance approval, a corporate-approved supplier, and a support contract spanning at least 5 years?

Otherwise the pointy-haired boss will bitch.


I'm sure I can provide those for a price :)


A key thing is having a standard for server names and URLs, e.g. $projectkey-$env-$dc-$type-$count. So if I want to hit the dev DB server for the London cluster of the Ikea project, I'd use (SSH or web or whatever): ikea-dev-lon-db-01

SSH: autocomplete using known_hosts so I can easily tab-complete any server name I've SSH'ed to in the past (rough sketch below).

Web: Just bookmarks OR Environment Handbook pages on a wiki
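
A rough sketch of the known_hosts completion (assuming HashKnownHosts is off, so hostnames are stored in plain text):

    # ~/.bashrc: tab-complete ssh with hostnames seen in known_hosts
    _ssh_known_hosts() {
        local hosts
        hosts=$(cut -d' ' -f1 ~/.ssh/known_hosts | tr ',' '\n' | sed 's/^\[\(.*\)\]:.*/\1/' | sort -u)
        COMPREPLY=($(compgen -W "$hosts" -- "${COMP_WORDS[COMP_CWORD]}"))
    }
    complete -F _ssh_known_hosts ssh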


Webpages - Bookmark folders organized by Application/System then Environment then functional use. Site specific info/creds in LastPass

Ssh - ssh config named as <system>_<environent>

rest APIs - postman collections by system


Hah, this is literally my exact setup too, right down to using LastPass and Postman.

Although, my work recently took away direct SSH access for all devs in all environments, we now have to SSH to a bastion host, then SSH to servers from there. Makes my ssh config less valuable.


You can use SSH agent forwarding, however that causes some holes to open up as well, so ymmv.


Don't do that. Do this instead (taken from my ~/.ssh/config):

  # Sample bastion host configuration without ForwardAgent
  Host bastion
          User bastion_user
          Hostname 10.11.12.13
          IdentityFile ~/.ssh/id_bastion

  Host server
          User server_user
          Hostname 192.168.0.1
          IdentityFile ~/.ssh/id_server
          ProxyCommand ssh -q -W %h:%p bastion
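
On OpenSSH 7.3 and newer, the same thing can also be written with ProxyJump, which is a little shorter:

  Host server
          User server_user
          Hostname 192.168.0.1
          IdentityFile ~/.ssh/id_server
          ProxyJump bastion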


Try to avoid the need for sshing into things and manually making changes. Your infrastructure should be as hands-off and automated as possible.

For URLs, you can use bookmarks or a custom homepage.


I made a script to feed URLs to fzf so that I can select and open the URL with partial keywords.

For example, you can enter 'prod region1 bestservice' to interactively open the URL.

Here is the script:

    #!/usr/bin/env fish

    function +fzf_open_url
        [ -z "$FZF_OPEN_URL_LIST" ]; and echo "Please configure \$FZF_OPEN_URL_LIST first." >&2; and return

        set -l result ( \
            cat $FZF_OPEN_URL_LIST \
                | sort \
                | uniq \
                | grep -ve '^$' \
                | fzf --bind "enter:execute-silent:open {}"  \
        )
    end
Gist: https://gist.github.com/hktonylee/ca56920dd0323c1c40726764d1...


I am an ~/.ssh/config user. I have all the servers I connect to organized and use a convention like this:

  Host $company-$project-$location-$server
  	HostName $IP
  Host *$company*
  	# Gateway Settings
You can specify specific configs first, then have them waterfall down to general configs. This allows you to customize usernames, identity files, proxies, etc. This file is known to fish autocompletions, so I can just type `ssh` and use tab completion until I have all the pieces filled in. I also use a SOCKS proxy by specifying `-D $port`, so I can access the server from inside Firefox with a network config (I use Firefox solely for separate SOCKS proxy access, to avoid having to proxy all data in the macOS system prefs. Not sure if Safari has that ability).


Here is my system:

- 1Password for teams is awesome. It has first-class support for things like database connections, credit cards, etc. You can also add custom fields to any object. It's a great way to share things with your team.

- I use .ssh/config to store my SSH connections with names I can easily remember.

- In Chrome, I have a few links in the bookmark bar. I open a new tab and click once to get my calendar, email, JIRA, etc. Things I use multiple times a day and usually open first thing in the morning.

- Documentation in Github repos, usually on the main README. This is for things like staging URLs.

I do sometimes rely on bash/browser history, but I always have everything I use in one of those four places.


Not sure what your platforms are or what your workflows would be, but if you have potentially dozens or hundreds of these, some level of deterministic target names and, like me, the memory of a dead elephant, then customizing a wee wrapper or three can go a long way. I am pretty averse to switching apps/UIs for doing single-task items of this sort. Very annoying.

For instance, for ssh I use a small combo wrapper, 'ot' (open tab(s)), that opens a terminal tab and parses my args to start batches of ssh sessions in terminal tabs. Your wrapper could be as simple or complex as you need. I parse my args as necessary, like:

  ot lead1(app) lead2(envid) [lead3(tier)] [lead4(instance/role)]

like so:

  cmdprmpt% ot coolapp dev1 app app1   # opens a single term tab to the coolapp app tier, instance 1

Leaving off the instance/role will open a tab session to each instance (ot coolapp dev1 db); leaving off the tier will open a tab to each node on every tier for that app env (ot coolapp dev1); leaving off the envid doesn't work, because that would just be a lot of tabs.
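
(For the curious, a very rough bash sketch of the idea, assuming hostnames follow an app-envid-tier-instance pattern and live one per line in an inventory file; the real thing would open terminal tabs rather than looping:)

    #!/usr/bin/env bash
    # ot-style sketch: ssh to every inventory host matching the given pieces.
    # ~/hosts.txt is assumed to list one hostname per line, e.g. coolapp-dev1-app-app1
    app=$1 envid=$2 tier=$3 inst=$4
    pattern="^${app}-${envid}${tier:+-$tier}${inst:+-$inst}(-|$)"
    mapfile -t hosts < <(grep -E "$pattern" ~/hosts.txt)
    for host in "${hosts[@]}"; do
        # swap this line for your terminal's "open new tab and run" command
        ssh "$host"
    done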

Barring that type of approach, aliases are definitely your friend. The same could be done for opening browser sessions and/or curl/wget, whatever, etc.

Barring that, and for anything over, say, 100 targets (or excruciatingly non-deterministic target names), I'd probably have a single file with a bunch of key/value pairs and wrap against that (or some sort of wee DB for 1000s).


What I'd like to see, but haven't yet, is a way to teach a browser extension about my environments: what they are named and what their URL patterns are, such that I can get a border color indicator of "hey, you're in prod!" and a quick switcher to view the same page I'm on now, but in a different environment (this page looks weird in test; was it already like that on prod? wonder if I can debug it in my local dev) -- and export this configuration to share with newbs.


Stylus, a CSS manager, can apply arbitrary styles (say, a border or overlay) based on domains, subdomains, specific URLs, or regexes.

An example of an overlay, though not aimed at environments specifically:

https://plus.google.com/104092656004159577193/posts/ASRaMF5x...

https://pastebin.com/raw/eFQrTJaY


We do the color coding in the web applications themselves, usually through the background color.

There used to be a "Domain Switcher" extension for Firefox, but it doesn't seem to work in a post-Quantum world. Chrome has such an addon: https://chrome.google.com/webstore/detail/domain-switcher/lb...


Firefox's container tabs can do the colour coding if your environments are on different domains. What you describe would indeed be useful though.


Does a sub domain count as a different domain?


It can do.


Might not be what you're looking for, but what about something like Stylebot to give your production sites a red border?



You could try Firefox tab containers, though I don't know if you can share them with others.


You could add the styles to your profile's userStyle or userChrome files.


For SSH, I use one key per service, and manage them via https://github.com/ccontavalli/ssh-ident - to ensure that each connection only has access to the key it should, according to the configuration, have access to.

I can't stress enough the importance of doing this - using either this tool, or something similar (like manually managing ssh agents).

Using one key only would mean that any server you ssh to has (assuming you forward the ssh auth/roaming) the ability to "forward" that connection, and do evil things with it. See https://blog.filippo.io/ssh-whoami-filippo-io/ for more about it.

Having multiple keys and only using an agent with the "right" key for the "thing" means that each "thing" you SSH to would, in the above scenario, not be able to inflict any damage at all.

That is, ssh'ing into your personal dev box wouldn't give the dev box access to your work github account.
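
(If you'd rather not add a tool, plain OpenSSH can approximate the per-host key separation with IdentitiesOnly, though it doesn't give you ssh-ident's agent isolation; the hosts and key names below are examples:)

    Host github.com
        IdentityFile ~/.ssh/id_github_work
        IdentitiesOnly yes

    Host devbox.example.com
        IdentityFile ~/.ssh/id_devbox
        IdentitiesOnly yes
        ForwardAgent no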

It's not "easy" to use ssh-ident, or create and remember which identities should be used where, but it's IMVHO easy _enough_.

If anyone knows of a better tool, or a better method, I'm all ears. I've thought about using multiple yubikeys or the like, but all methods I've so far found fall very, very short on usability.


Have you thought about using a YubiKey with touch-to-use enabled? That way the remote cannot directly use the key; you have to touch the blinking YubiKey to authenticate.


I mentioned thinking about using multiple yubikeys, but they fall short on usability - I'd have to "juggle" many of them. :/


I meant YubiKey 4 touch described here: https://developers.yubico.com/PGP/Card_edit.html


     ~/.ssh/config
is really simple; I've never needed anything else.

For URLs, I use bookmarks, that's it. You might actually need more, or you might think you might need more :D but I don't think so.

Also different URL collections have different places, for example I collected all our developer documentation URLs in a single wiki (Confluence) page with a permalink and put it under an easy-to-remember internal domain.


Keep living documentation by using DNS directly. It's your directory! Get a domain for each of your stages: Say foo.cats for integration, foo.pony for staging, and foo.io for prod. Then have a shorthand for every project and deploy consistently to bar.foo.cats, bar.foo.pony, and bar.foo.io. Deploy to a project name, not a server name. These names are also separate from the public name bar.com used to reach the service on bar.foo.io.

If you get NXDOMAIN on bar.foo.cats, you'll know there's no integration-server for that project. If you want to move to a different server, you update your DNS records, not your deploy-scripts. If you want to change the public name of a service you can do it without changing anything about how you access it.

Others have mentioned .ssh/config which is one piece of the puzzle. I have shorthands named bar-int, bar-stage, bar-live that point to the respective domains. We work towards having all SSH usernames being the same as the project shorthand. At that point the SSH aliases can be generated automatically from a list of projects.
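
A sketch of that generation step (assuming a projects.txt with one shorthand per line, the foo.cats/foo.pony/foo.io scheme above, and an "Include config.d/*" line in ~/.ssh/config):

    #!/usr/bin/env bash
    # Regenerate ~/.ssh/config.d/projects from a list of project shorthands.
    # Assumes the SSH username equals the project shorthand, as described above.
    mkdir -p ~/.ssh/config.d
    : > ~/.ssh/config.d/projects
    while read -r p; do
        for env in int:cats stage:pony live:io; do
            printf 'Host %s-%s\n\tHostName %s.foo.%s\n\tUser %s\n\n' \
                "$p" "${env%%:*}" "$p" "${env##*:}" "$p" >> ~/.ssh/config.d/projects
        done
    done < projects.txt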


For web urls, I actually just built something for this! https://www.homepagr.com

It replaces the "new tab" page. Basically it's an organized grid of links by category. A chrome extension lets you add bookmarks.

I built it for myself, and then I put the stripe api on top of it. (No users though :/)


I'm trying this out. I really like the simplicity of it so far.


Awesome! Been working on it for months (though I realize it looks as though a back-end engineer built it. which I suppose is true!) Feel free to ping me with questions/bugs/suggestions!


I created aliases in .bashrc. For instance, instead of 'ssh -i [key] user@devenvurl' I aliased it to 'ssh-dev'


I believe this can also be done in your SSH config file, which would confer the added benefit of having the alias available to scp and rsync.
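
(Something like this, with the hostname and key path made up:)

    Host ssh-dev
        HostName devenvurl.example.com
        User user
        IdentityFile ~/.ssh/dev-key.pem

Then `ssh ssh-dev`, `scp file ssh-dev:` and `rsync file ssh-dev:` all pick up the same settings.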


We have an in house "go links" [0, 1] tool, which I update heavily as I work, so everything is always a few key strokes away. So if you want to access "foo", you just type "go/foo". If you want to access to the dev version "go/foo-dev". If the link does not exist you are automatically put into a UI to make the link then and there.

If you have an issue tracking or code review system you customize it so you can do "go/T123" and go to issue "T123". Or for common apps you can put in place simple patterns like "go/foo/id" and it opens that application to looking at that id.

0 - https://github.com/kellegous/go

1 - https://www.golinks.io


If you want a cool way to manage bookmarks, check out my project zap: https://github.com/issmirnov/zap

It reads in a YAML config file and allows you to shorten URLs, with recursive expansion.

For example: https://subdomain.corp.company.com/staging --> http://s/staging, or even http://s/s

The nice thing with Zap is you don't need to click through bookmarks, and you only need to sync one file. There's also useful info in the README on how to set this up on your internal DNS server.


If your password manager supports it, make a secure note with all the URLs you use regularly. My work keeps a big wiki of documentation, including frequently accessed servers, which seems to work really well - it also allows us to document each individual server.

When in doubt, write it down, preferably somewhere secure to prevent leaking anything if handling sensitive data. Excel spreadsheets work pretty well too, but formatting might get a bit distracting.

At a previous job we ran a lot of client-facing servers and kept most of the URLs in our server monitoring software. It wasn't great and there wasn't a lot of documentation, but frequently accessed servers were all in a central place (save for our test/Ansible servers).


List of URLs and little important notes - on the internal wiki (e.g. Confluence, SharePoint...). Previously these pages were basically filled with credentials as well, which wasn't great; they had become a dumping ground for sensitive information. It's a lot better now, as the page serves as a starting point for the various environments.

The credentials themselves - in a shared KeePass2 KDBX (with the KeePassHTTP extension for filling or the ability to press Ctrl+U), so that anyone can add or modify entries when needed.

For SSH, we use KeePass2 + KeeAgent to serve SSH keys. If there are too many, then again Ctrl+U can launch a bash terminal with the ssh command and the corresponding path the temp key gets written to.


For web application URLs and things like it (server names, IP addresses, things I need to keep track of), I keep it all in a vimwiki tech.md file in my ~/notes directory. I used to use org-mode, but it was overkill for me and after 20+ years of using vi, switching to emacs (even spacemacs) was a tough adjustment.

For SSH endpoints, I more or less have everything aliased in my ~/.ssh/config file. If it's just a server, I can just ssh hostname. If it's a host I'm doing port forwarding on, I'll have it aliased to ssh hostname-tunnel or ssh hostname-SOCKS5.
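
(The port-forwarding aliases look roughly like this in ~/.ssh/config; the hostname and ports are made up:)

    Host hostname-tunnel
        HostName hostname.example.com
        LocalForward 8080 localhost:8080

    Host hostname-SOCKS5
        HostName hostname.example.com
        DynamicForward 1080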


SSH: ~/.ssh/config

Note that you can specify the remote user and details such as crypto protocols, and many shell-completion tools autoexpand these.

Highlighting the host and username in shell prompts also helps.

Webpages / URLs: bookmarks, though I'm relying increasingly on an old-school homepage, written in Markdown (or pick your alternative LWML) managed in git, and published via Makefile.

For team efforts, this goes in a wiki, though with easy edit access. I increasingly prefer distinct authoring/reading environments. That is, shell tools (vim, emacs, pandoc, make, git, etc.) for authoring, browser to read.


For URLs I have a single HTML page that I use as a starting page for my "dev" browser, which I edit manually when I add a new environment/tool.

For ssh I just use bash history, but I don’t have a ton of environments


Full disclosure, I work for the company behind this tool. We've had the same problem with URLs in the past, and so built https://gurn.io. It lets us map URLs to keywords/phrases, so you can add "side project dev" and "side project prod". This is really useful for things like Elastic Beanstalk URLs, or even just navigating AWS these days.

You can also share your resource lists as well if this is a team wide issue for you.


Like others have said, for SSH I just use the config file; I have a simple bash script for managing it. I then combine it with aliases and other bash/PowerShell files inside my projects so I don't have to think too hard. For URLs I have an HTML file in my projects that links all the needed URLs and typically ties into my project's config file, so if I change the dev config I don't have to worry about updating my HTML as well.

Simple rule of life: if you ever do something a second time, turn it into a script.


We use Racktables [0] (currently migrating to Netbox [1]) and a naming scheme like $product-$environment$servertype-$optDetails0[1..n] with a well known set of abbreviations.

So if I need to check the production nameserver it's ns-pdap-01.internal.tld and so on.

[0] https://www.racktables.org/

[1] https://github.com/digitalocean/netbox


As far as bookmarks go, since I’ve recently moved almost everything to a self-hosted NextCloud instance, I also looked into managing bookmarks.

The bookmarks app available in NextCloud is supposed to sync across browsers (with an extension for each browser), though I only use Firefox these days, and it can sync stuff across devices out of the box, of course.

From what I've seen it was a little bit confusing, since it seems to add its own prefix, and the UI in NextCloud is not very interesting. Still, it might be worth your while to check out.


Can NextCloud handle duplicate bookmarks yet? Last I tried, if Firefox has the same URL bookmarked into two folders, then the Nextcloud server will not accept the second bookmark.


Use SSH config for SSH.

I built a database on AWS DynamoDB + API Gateway which has the "url" information. Also many other things, like which newrelic accounts are used by each environment. This is intended to be used by automation, but humans can just curl | jq these.
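
(A hypothetical example of the human-facing side; the endpoint and field names are made up:)

    # look up the staging URL for the "foo" application
    curl -s https://envdb.example.internal/apps/foo/staging | jq -r '.url'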

That said, SSH access (by humans) should be very rare. If you have to SSH to a production system, ask yourself what is it that you are trying to accomplish, and would you still be doing that if you had another tool to accomplish the same operation?


Mac? I like Shuttle for SSH organization: https://github.com/fitztrev/shuttle


This one looks really good!


I keep URLs organized in groups, each one in a separate text file: one URL per line, plus bash-style comments for lines beginning with a hash. Unfortunately browsers don't accept a file of URLs as a command-line parameter (as e.g. wget does), so I have a custom shell script that takes the file and opens each URL in a new tab.
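
(A minimal sketch of that kind of script, assuming xdg-open on Linux; on macOS it would be open instead:)

    #!/usr/bin/env bash
    # Open every non-empty, non-comment URL from a file in the default browser.
    # Usage: openurls urls.txt
    grep -Ev '^[[:space:]]*(#|$)' "$1" | while read -r url; do
        xdg-open "$url"    # use "open" on macOS
    done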


We do something similar. I have a directory structure like:

    environments/
      development/
        readme.md
        services/
          service1/
            readme.md
          service2/
            readme.md
      preproduction/
        ...
      production/
        ...
And store that in a git repository and publish it. We're starting to look at documentation generation tools (like Sphinx and Read the Docs) to help make a pretty website for end users.


I do 2 things:

1) As others have already written, I use ssh config to organize the urls and have simpler host names.

2) I use 2/3 letter codes as aliases which ssh into the specific host.

For example (environment + tool):

  sw = staging webserver
  pw = production webserver
  pj = production jenkins

This allows me to open a new terminal window and type 2/3 characters to ssh into a box.
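
(i.e. something like this in ~/.bashrc, with the host aliases themselves defined in ~/.ssh/config; the names are examples:)

    alias sw='ssh staging-web'
    alias pw='ssh prod-web'
    alias pj='ssh prod-jenkins'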


Depends on the task, but the tools I use are:

On Windows I use mRemoteNG: https://mremoteng.org/

On Mac I use YummyFTP: https://www.yummysoftware.com/


For web pages, folders on my bookmark bar in chrome. A few major ones as links on the bar, the rest in folders. E.g. "services" for links to swagger files, "stack" for Jenkins, docker, admin UI's. "Topics" for folders of related areas.


Keep it simple

SSH: ~/.ssh/known_hosts and ~/.ssh/config

Web URLs - Use bookmarks or write yourself a small local html file with a bit more organisation. Or use a text file with an editor that makes it easy to Cmd+Click a link.

Less simple, but great for teams, use a wiki page with the appropriate links.


I create folders full of shortcuts to the URLs of my webapps. One folder might be called "IT Apps" and another called "Development Apps", and they each contain relevant shortcuts for locally installed or web-based apps.


For my ssh environments, I use iTerm2 profiles with keyboard shortcuts to launch. However, I only have about 5 servers that I need to ssh to.

I find chrome bookmarks sufficient for my website bookmarks. They sync across computers and can be organized in folders.


I memorize everything and try to reuse the same patterns everywhere. I put stuff I rarely use in text files. When it's finished, I copy everything onto a USB stick. I destroy all accounts I don't use (when possible).


I use one bash script per URL so that I can easily invoke the correct browser per URL (it reduces cognitive load). And of course I have it all in GitHub (Git Bash as the interpreter, for command-line 'URL' completion).


Just append aliases to .bash_profile? E.g.

  alias connect-amazon-example="ssh -i ~/.ssh/amazon-example-key.pem ubuntu@123.123.123.123"

Then just 'connect-amazon-example' at the terminal?


Is that best practice?

I've never really understood how you should keep SSH keys. I kept them all in a folder and set zsh to iterate through them all and run ssh-add on startup, until I realised it can't cope after adding a certain number; now I'm back to doing it manually.


One thing I did that makes my day easier is to make a shell completion script that does a dump of our DNS zone info, so that when I ssh, I can tab-complete any hostname in our data center.

Edit: autocorrect
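
(Roughly like this, assuming zone transfers are allowed from your workstation; the zone and nameserver are examples:)

    # dump A records from the internal zone into a host list (requires AXFR access)
    dig @ns1.internal.example.com internal.example.com axfr \
        | awk '$4 == "A" {print $1}' | sed 's/\.$//' | sort -u > ~/.ssh/dc_hosts

    # in ~/.bashrc: tab-complete ssh against that list
    complete -W "$(cat ~/.ssh/dc_hosts)" ssh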


Once upon a time I used a sandbox (just a simple locally hosted web app) to manage my work URLs and web tools. Now I just use bookmarks because it's simpler. Also config files for SSH.


I use KeePass, the password manager. No password associated? Then don't fill in that field. It lets you classify and name entries, and add metadata.


A custom version of EtherCalc (a web spreadsheet built on SocialCalc, by VisiCalc inventor Dan Bricklin) running in NW.js so that it can start processes.


Have a few SSH commands handy and set up aliases for them so you can just type `$ server_name` and boom, you're there! Tab completion and all that, too.


I have a text file with headings. Text files still rule. In Notepad++ it identifies HTML links in the text, and I can just Ctrl+click on a link.


You should be able to discover SSH end-points and URLs by reading documentation. If you can't, then documentation needs writing.


On Windows, WinSCP has a nice internal folder structure for saving all your links. I tend to keep a .txt file of all my URLs, though.


Most of my ssh connections are aliased via my dotfiles. I use bookmarks in browser.


I use ~/.ssh/config for SSH, and Dynalist.io when I need to save URLs.


We have a wiki, but I personally keep everything in an Org-mode file.


Ctrl+R search in bash has always worked well enough for me.


SSH:

Windows: mRemoteNG

Linux: Gnome connection manager

Mac: iTerm2 profiles (any better options?)


SSH endpoints go into ~/.ssh/config


TextExpander, which is more or less like an alias.


need one time ssh config for new host/user/key



