

SSH into your EC2 instances with ease - ozkatz
http://ozkatz.github.io/ssh-into-your-ec2-instances-with-ease.html

======
jwilliams
Perhaps different use case - but I prefer to use a VPC with internal
addressing and DNS. Particularly if you're using more than just a few
instances.

Then have a bastion host in a DMZ that forwards to the actual instances (I
prefer the 172.16.0.0/12 range as it tends to avoid clashing with wifi
networks). This does cost you an m1.small Amazon instance, but if you reserve
it the cost is negligible.

Even better, you can do this automagically with ssh by putting a suitable
`ProxyCommand ssh <bastion> "nc %h %p"` in your ssh config. So you just `ssh
172.0.0.10` or `ssh my-internal-name.blah` and it tunnels straight in for you.
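
A `~/.ssh/config` stanza along these lines does it (a sketch; the bastion
hostname, the username, and the `172.16.*` pattern are placeholders for your
own setup):

```
# Anything in the internal range gets tunnelled through the bastion
Host 172.16.*
    ProxyCommand ssh bastion.example.com "nc %h %p"
    User ec2-user
```

Then `ssh 172.16.0.10` transparently hops through the bastion with no extra
typing.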

You can pair this with internal DNS if you want to get really fancy - although
it's a bit fiddly. From what I read internal DNS is pretty high up on the
Route 53 feature request list.

~~~
throwaway2048
SSH now has a -W netcat mode, so your ssh config can be simplified to:

    ProxyCommand ssh -W %h:%p <bastion>

This also means that netcat (or even a shell, or the ability to execute
commands at all) is not required on the remote bastion host.

~~~
jwilliams
Ah! I wasn't aware of that. I'll update my scripts.

------
jperras
Hi! I'm the author of the blog post that you linked in your article. Glad you
found it useful; it surprises me to this day, almost 2 years later, just how
many pageviews it continues to generate.

However, I would like to point out that the correct solution to this problem
is DNS, as others here have indicated. Couple Route53 with something like
Zonify ([http://nerds.airbnb.com/easy-aws-inventorying-with-dns/](http://nerds.airbnb.com/easy-aws-inventorying-with-dns/)) by the fine
folks at AirBnB, and you've got something quite powerful that is diff'able via
your normal tools, and can be easily versioned for sanity and safety.

Don't let my comments (or the comments of others here) detract from the pretty
clever approach that you took. I think it's the fate of every ops/devops to,
at some point in their careers, create a host address storage/querying system
that contains an ad hoc, informally-specified, bug-ridden, slow implementation
of half of DNS without realizing it the first time around.

~~~
merlincorey
I'll even admit that I once had a particular client whose hosts I managed with
a Makefile that grepped a HOSTS file. I am ashamed and my neckbeard is
burning!

------
duey
Could also just name your hosts appropriately and use something like puppet to
create and update DNS records for your hosts automatically.

~~~
merlincorey
Yeah, as a neckbearded network and systems engineer I really don't understand
this thing at all - it should be a non-issue. Specifically, because the OP
states the machines are mostly _dedicated_ machines, there should be
absolutely no reason why they don't have _dedicated_ hostnames in DNS, either
locally or globally.

At organizations where I have held the above roles, DNS was one of the first
things to be put into service if it was not already in use. Connecting to the
IP address of a dedicated machine by a nice name is pretty much what DNS was
invented for. DNS is not static, and if you are changing a host's IP often,
you can set things up so that DNS always responds with the current, correct
address.

OP's script is nice but... it's solving an already solved problem, IMO.

~~~
vacri
Confused me as well, since this is pretty much what DNS is for - I spent some
time trying to figure out the problem being solved...

~~~
DrStalker
The problem is he doesn't know how to use DNS. :-)

~~~
hobs
Another great way is to just modify ~/.ssh/config to something like this:

    Host alias
        HostName thehostname
        User your_username

I like the work in the article, but many times when I think I need to whip up
a sweet script to fix some problem in the terminal I find the old neckbeards
already wrote it, polished it, and left it on the shelf for me to use.

------
ctur
This is a pretty complicated solution. There are a ton of easier ones, but
probably the easiest is to just use ec2-ssh. It lets you apply tags to your
ec2 instances and ssh to them by very simple names.

[https://pypi.python.org/pypi/ec2-ssh](https://pypi.python.org/pypi/ec2-ssh)
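
If it helps, usage is roughly as follows (a sketch from memory of the tool;
the `ec2-host` companion command, the credential variables, and the `nginx1`
tag value are assumptions to check against the package docs):

```
# ec2-ssh resolves instances by their EC2 "Name" tag via the AWS API
export AWS_ACCESS_KEY_ID=<your key id>
export AWS_SECRET_ACCESS_KEY=<your secret key>

ec2-host           # list instances and their Name tags
ec2-ssh nginx1     # ssh to the instance tagged Name=nginx1
```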

~~~
Terretta
Discussed and dismissed as too slow in the article.

------
josephruscio
If you're looking for something a little more packaged and not averse to
installing a Ruby gem, this will manage multiple AWS accounts and allow you to
ssh/scp using AWS instance IDs as the target:
[https://github.com/mheffner/awsam](https://github.com/mheffner/awsam)

------
imperialWicket
DNS is a good solution here, but it also seems appropriate to highlight that
you shouldn't need to be connecting to these instances directly. As others
said, some type of configuration tool should be in place, logs should be
centralized, storage should be centralized, queues should be elsewhere.
Painful ssh config is a symptom of a different issue.

------
anuraj
what is the problem with

    ssh -i <yourkey> ec2-user@<yourinstance>.amazonaws.com

isn't that about as easy as it gets?

~~~
bas
Welcome to Hacker News, where things that were obvious ten years ago require
blog posts (and page traffic) today. Sigh.

------
bitskits
This is a complicated solution to a simple problem. How about just editing
/etc/hosts or adding it to your DNS?
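
For the /etc/hosts route, a single entry is all it takes (the IP and the
names here are made up for illustration; substitute your instance's elastic
IP):

```
# /etc/hosts
54.210.10.20   web1.internal web1
```

Then `ssh ec2-user@web1` just works, at the cost of editing the file by hand
whenever the address changes.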

------
TallboyOne
Just use the elastic IP? I have all of my instances in a subnet and just
connect to each one's elastic IP. Easy.

------
dmourati
No, just no. Please make it stop.

