

Amazon Updates Route 53 DNS Service - acremades
http://techcrunch.com/2013/05/31/amazon-updates-route-53-dns-service-to-make-hosting-high-availability-sites-on-ec2-easier/

======
thezilch
Source of the paraphrasing: http://aws.typepad.com/aws/2013/05/amazon-route-53-elb-integration-dns-failover.html

~~~
mpclark
It is sad to see sites that were once well focused now covering anything and
everything, presumably in desperate pursuit of a few more page views.

I'm sure this is of interest to some people, but Techcrunch? Really?

------
al3xdm
Multi-region failover is seriously inhibited by their current VPC setup, as
cross-region traffic is ridiculously complicated to get working.

We are on the "EC2 Original" setup with a Cassandra cluster operating across
regions via Elastic IPs. We tried to set up a multi-region cluster in their
"EC2 VPC" mode and gave up. A VPC can only span a single region, and there is
currently no easy way to manage traffic between VPCs in different regions. We
looked at getting a VPN connection, but the cost of a decent connection is
prohibitive.

~~~
rdl
What do you mean by a VPN connection? Doing Direct Connect and then your own
transport, or just the AWS IPsec tunnel stuff over their normal transport?

~~~
thelarry
I am curious about this as well... Why can't you connect between different
VPCs?

------
jread
On this topic, today HP Cloud opened access to their Akamai Anycast DNS
service with 75 edge locations. Pricing is about the same as Route 53
($0.35/domain + $0.55/million queries). Not as easy to use as Route 53 yet -
configuration is via API only.

<https://www.hpcloud.com/products/DNS>

~~~
UnoriginalGuy
Can anyone explain what Anycast is and why it is so huge now? I just looked at
the Wikipedia article and it is quite Computer Science-y rather than explaining
in specific terms how it works or what the benefits are.

I understand how DNS works: a UDP packet goes to the server, which returns the
IP via UDP, and/or requests the DNS record from a different DNS server with
more information about the domain (all the way up to the root servers).

~~~
jread
It routes a single IP address to different DNS servers depending on a user's
location. It generally provides lower latency/faster DNS queries by routing
users to closer DNS servers.
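
To illustrate the latency point (a rough sketch using dnspython; the resolver
address and domain here are just examples, not anything tied to HP or Route 53):

    import time
    import dns.resolver  # pip install dnspython

    def time_query(nameserver, name="example.com"):
        # Time a single A-record lookup against one specific nameserver.
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        start = time.perf_counter()
        resolver.resolve(name, "A")
        return (time.perf_counter() - start) * 1000  # milliseconds

    # 8.8.8.8 is an anycast address: BGP routes the packet to whichever
    # site announcing that IP is closest to you, so the same script run
    # from different continents talks to different physical servers and
    # sees consistently low latency.
    print("anycast resolver: %.1f ms" % time_query("8.8.8.8"))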

------
jasongill
Now if only they would lower the price; the issue that I had with Route 53 was
that you pay not only for successful queries, but also for failed lookups
(NXDOMAIN). So if someone does a lookup for
"nonexistantsubdomain.yourdomain.com", you pay for the query - even though
Amazon didn't serve anything.

Not a huge deal, but we had a few domains that were getting hammered with
spam/bruteforcing, and with Route 53 there is no way to find or block them -
you just gotta keep paying for the queries.

~~~
jxf
Isn't that just the way the cookie crumbles, though? Route 53 still has to
look at a DNS query to decide what to do next, and some of them will be bad
lookups.

------
programminggeek
I'm not sure how well this is going to work out in practice, but the concept
of multi-region failover could get you that much closer to almost entirely
bulletproof infrastructure (if you are willing to spend the money).

~~~
recuter
Not really. This is no different from running your own, say, HAProxy with a
heartbeat between two boxes next to each other on the same rack, or some such.

That Amazon, with a bit of fiddling, will now fail over to another region if
your ELB instance is down or it detects errors is great - but hardly
bulletproof.

What if your ELB instance is OK but your app is returning garbage? It's still
returning something, right? And ELB will happily forward that traffic without
thinking anything is wrong.

~~~
colmmacc
Full disclosure: Route 53 developer here. There is one interesting difference:
when using DNS failover in combination with Latency Based Routing, Route 53
supports partition-mode failures.

For example, if a customer has endpoints available in both the AWS Sydney
region and an AWS US region, and Australian international connectivity is
impaired, then users within Australia will still go to Sydney. At the same
time, a user in New Zealand who would ordinarily go to Australia (as it's
closer) may now find themselves served by US endpoints, because reachability
to Australia from New Zealand is impaired while reachability to the US is OK.
It's a small part of the availability story, but one difference in how a DNS
failover may handle an event.
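
To make that concrete, a latency record with a health check attached looks
roughly like this via the API (a sketch using boto3; the zone ID, hostnames,
and IP are placeholders made up for illustration):

    import uuid
    import boto3  # assumes AWS credentials are already configured

    route53 = boto3.client("route53")

    # Hypothetical health check against the Sydney endpoint's status URL.
    hc = route53.create_health_check(
        CallerReference=str(uuid.uuid4()),
        HealthCheckConfig={
            "Type": "HTTPS",
            "FullyQualifiedDomainName": "syd.example.com",
            "ResourcePath": "/health",
            "RequestInterval": 30,
            "FailureThreshold": 3,
        },
    )

    # Latency-based record for the Sydney endpoint; an equivalent record
    # would exist for the US endpoint with its own health check. When the
    # Sydney check fails, Route 53 stops answering with this record and
    # latency routing falls through to the next-closest healthy region.
    route53.change_resource_record_sets(
        HostedZoneId="Z_EXAMPLE",  # placeholder zone ID
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "sydney",
                    "Region": "ap-southeast-2",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                    "HealthCheckId": hc["HealthCheck"]["Id"],
                },
            }]
        },
    )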

The "returning garbage" problem can be a hard one - both Route 53 and ELB can
be configured to check a particular url for health-status, and it's important
that that url's health be indicative of the overall stack's health, but it's
definitely a challenge sometimes as an application operator to maintain a good
"deep" check. For example, on my own personal Wordpress installation I check
if the DB is reachable and answering before returning 200 from my status URL,
but I've seen service owners do much much more comprehensive checks including
inspecting some metrics, counting overall 500s, measuring response times, and
so on.
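
For what it's worth, a minimal "deep" status endpoint along those lines might
look something like this (a sketch in Flask against a hypothetical MySQL
backend; the hostnames and credentials are made up, and a real check would
exercise whatever the stack actually depends on):

    from flask import Flask
    import pymysql  # hypothetical backing database

    app = Flask(__name__)

    @app.route("/health")
    def health():
        # Return 200 only if the app can actually do useful work.
        try:
            # Deep check: confirm the DB is reachable and answering,
            # not just that the web process is up.
            conn = pymysql.connect(host="db.internal", user="app",
                                   password="secret", database="appdb",
                                   connect_timeout=2)
            with conn.cursor() as cur:
                cur.execute("SELECT 1")
            conn.close()
        except Exception:
            # Health checkers treat non-2xx as unhealthy, so failover
            # kicks in when the stack can't serve real requests.
            return "database unreachable", 503
        return "ok", 200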

