
Modern cloud architecture on AWS: server fleets and databases - colemorrison
https://start.jcolemorrison.com/understanding-modern-cloud-architecture-on-aws-server-fleets-and-databases/
======
malisper
Although there are a ton of AWS services, there are only a few core services
that I recommend:

  * EC2 - You need a server.
  * RDS - You need a database.
  * S3 - You need to store files.
  * Lambda - You are building an API with short-lived requests.

These services are all very high quality and are excellent at what they do.
Once you get outside of these core services, the quality quickly drops. You're
probably better off using the non-AWS versions of those services.

For a few quick examples, you should be using Datadog over CloudWatch,
Snowflake over Redshift or Athena, and Terraform over CloudFormation.

~~~
scarface74
Why would you ever use Terraform over CloudFormation? So many parts of AWS use
CF: you can modify the getting-started templates that services like CodeStar
generate, or export a SAM template from your Lambda functions.

Before someone comments on how TF is “cross platform”, all of the providers
are vendor-specific.

As far as what other services to use: if you are hosting your own services on
AWS instead of using AWS managed services, you're kind of missing the point of
AWS.

But a few other services we use all of the time are CodeBuild, ElastiCache
(hosted Redis), Elasticsearch, Route 53, load balancers, autoscaling groups,
SSM (managing the few “pets” until we can kill them), ECS, ECR, Fargate, SNS,
SQS, DynamoDB, SFTP, CloudTrail, Microsoft AD, the recently announced Device
Farm/Selenium service (which we are experimenting with), Step Functions,
Athena, Secrets Manager, and a few more I'm probably forgetting.

~~~
viraptor
> Why would you ever use Terraform over CloudFormation?

1. You're using Terraform already for resources outside of AWS (CDN,
monitoring, DNS, anything else) and want to stay with a common tech.

2. You're running into cases that CF doesn't support and have to generate
your templates externally, or use SparkleFormation hacks.

3. You want to manage a new AWS service. (CloudFormation support lags behind
Terraform; new services don't get CF resources for months.)

~~~
scarface74
In cases two and three it’s just as easy to write a custom resource....

~~~
viraptor
You mean just as easy to write/test/deploy a custom resource as it is to use a
ready-made one? I disagree. I think the difference is a few days of work in
that case.

~~~
scarface74
Actually, no.

Examples for creating them in Java, Python and Node are here

[https://github.com/stelligent/cloudformation-custom-resources](https://github.com/stelligent/cloudformation-custom-resources)

Just add a few lines of code for create, update and delete for your resource.

For Node and Python, you can write them in the web console, test them, copy
the code to your git repo and export the SAM CF template for your CI/CD
process.
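Those examples all follow the same custom-resource protocol: the Lambda
receives an event whose RequestType is Create, Update, or Delete, does its
work, and PUTs a JSON result back to the pre-signed ResponseURL that
CloudFormation is waiting on. A stripped-down sketch of that shape in Python
(the provisioning bodies are placeholders you'd replace with real logic):

```python
import json
import urllib.request


def provision(request_type: str, properties: dict) -> dict:
    """Dispatch to create/update/delete logic; each branch returns
    the data CloudFormation will expose via Fn::GetAtt."""
    if request_type == "Create":
        return {"Message": "created"}   # placeholder create logic
    if request_type == "Update":
        return {"Message": "updated"}   # placeholder update logic
    if request_type == "Delete":
        return {"Message": "deleted"}   # placeholder delete logic
    raise ValueError(f"unknown RequestType: {request_type}")


def handler(event, context):
    """Lambda entry point for a CloudFormation custom resource."""
    try:
        data = provision(event["RequestType"],
                         event.get("ResourceProperties", {}))
        status, reason = "SUCCESS", ""
    except Exception as exc:
        data, status, reason = {}, "FAILED", str(exc)

    body = json.dumps({
        "Status": status,
        "Reason": reason,
        "PhysicalResourceId": event.get("PhysicalResourceId",
                                        "custom-resource"),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data,
    }).encode()

    # CloudFormation blocks the stack operation until this PUT
    # lands on the pre-signed S3 URL it handed us.
    req = urllib.request.Request(event["ResponseURL"], data=body,
                                 method="PUT")
    urllib.request.urlopen(req)
```

Failing to send the response at all (e.g. an uncaught crash before the PUT)
is the classic pitfall: the stack hangs until it times out.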

------
Jonnax
At what scale would you want to use RDS rather than using an EC2 instance with
Postgres installed?

Assuming that the operator has the skills to manage Postgres.

It's not like RDS does something complex like Geodistribution, right?

Also what is the scaling like? Is it automatic? How quickly can you handle
more connections? Because my understanding was that it was slow.

I did have a play with their RDS Postgres months ago, and I somehow managed
to crash it, requiring a restore from snapshot. Also, their smallest instance
was quite expensive for the performance.

~~~
makmanalp
I'd ask the opposite question - at what scale would you want to have your own
custom setup rather than RDS? Managing your own database infrastructure for
workloads beyond "a few queries a second" is hard work with a lot of
pitfalls, and you'd better be at a size where there's some benefit (high
levels of customization, use-case-specific tuning, economies of scale, etc.).
As a person who does exactly this for a living, I'd rather shell out for RDS
or a similar offering than run my own setup most of the time. Especially at
first, before you discover what exactly you /don't/ like about it or what
you'd want done differently.

~~~
tmpz22
Is it hard work, though? In a couple of hours you should be able to set up
automatic backups and practice going through the recovery process a couple of
times. That's all there is to it for most small-business setups, but if you
are daring you can now do whatever you want with the config file, install
extensions, and set up basic system monitoring (CPU/RAM usage, disk usage,
etc.). GCP/Digital Ocean let you look at node resource usage automatically,
and since Postgres is probably the only process, you don't even need to set
that up!

~~~
malisper
> In a couple of hours you should be able to set up automatic backups and
> practice going through the recovery process a couple of times.

Unfortunately there's a lot more to it than that. You need to handle the
backup job failing or dying, have a process for deleting old backups, etc.
Not just that, but if you have multiple Postgres instances, you need to do
this work for each machine. I've seen firsthand how this kind of stuff
becomes a huge distraction. It's often worth paying AWS a bit more in
exchange for not worrying about it.

~~~
tmpz22
> Unfortunately there's a lot more to it than that.

Is there though? Consider what I would argue to be the "average" case:

* Your database never exceeds 40% resource usage

* You serve fewer than 1M queries/day

* You never burst above 1K queries/minute

* You have a script tied to a cronjob that backs up the database, with basic error handling that sends you a Slack DM if it fails

* You have a script tied to a cronjob which deletes old backups, with basic error handling that sends you a Slack DM if it fails

What percentage of companies need more than that?
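The deletion cronjob above really can be a few lines. A sketch of the
retention logic in Python, assuming a hypothetical naming convention of
daily dumps called pg_YYYY-MM-DD.dump (the Slack-DM-on-failure part would
wrap a call to this in a try/except that posts to your webhook):

```python
import datetime


def backups_to_delete(filenames, today, keep_days=14):
    """Return the dump files older than `keep_days`, assuming the
    backup cronjob names files pg_YYYY-MM-DD.dump."""
    cutoff = today - datetime.timedelta(days=keep_days)
    stale = []
    for name in filenames:
        # Parse the date out of the filename.
        date = datetime.date.fromisoformat(
            name.removeprefix("pg_").removesuffix(".dump"))
        if date < cutoff:
            stale.append(name)
    return stale


if __name__ == "__main__":
    files = ["pg_2024-01-01.dump", "pg_2024-01-20.dump"]
    # Jan 1 is more than 14 days before Jan 21, so it is stale.
    print(backups_to_delete(files, datetime.date(2024, 1, 21)))
    # -> ['pg_2024-01-01.dump']
```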

~~~
sciurus
Any that can't afford more than a couple minutes of downtime when a server
fails.

~~~
viraptor
That's definitely not an "average" company. It's also a _really_ small number
of companies that truly can't afford that, as opposed to just "earning less
money than usual".

------
kamilafsar
I keep reading all these horror stories about Aurora (especially PostgreSQL).
Is there anyone out there with an alternative story?

~~~
Roritharr
Using Aurora MySQL for over a year now in prod, purrs like a kittycat.

Just don't use the AWS Database Migration Service if you can help it; that
thing has a couple of badly documented pitfalls. (E.g. tables can't have ENUM
fields.)

~~~
etaioinshrdlu
AWS Database Migration Service had a shockingly large gap between how it was
marketed and how well it actually performed.

It had so many gotchas and broken features. I'd be amazed if anyone got it
really working on large applications without weeks of time invested.

It would be so cool, though, if it worked seamlessly. It addresses one of the
hardest tasks in DB management, namely zero-downtime server migration.

------
ramoz
"Modern" architectures can get complex quite fast at scale and in involved
use cases. This is merely a simple introduction to the basic components of
modern cloud architecture.

~~~
pm90
I do agree with you, and I was lured into reading it because of that. However,
this seems like a nice introduction for beginners. Maybe it should be tagged
as such.

------
root-z
As someone who has spent a fair amount of time working with AWS, I appreciate
how approachable this tutorial is, as the official docs are usually far more
arcane.

