
How to Run Wordpress on AWS - Tomte
https://twitter.com/QuinnyPig/status/1250910042246660096/photo/1
======
laumars
There is a lot of scoffing in this thread, and I agree AWS does make some
things overly complicated, but the scoffing here is unfair because the
architecture looks far more complicated than it actually is.

First of all, let's bear in mind this is talking about larger, enterprisey
installs. So what would you need if you hosted on-prem:

\- a DB with a read replica in case your main DB died (RDS can actually build
read replicas for you pretty easily so this part of the deployment would be
much easier than doing the same MySQL replication config on-prem)

\- Redis for caching (my experience with WP is a couple of years out of date
now, but when I last managed a highly popular WP install (several, in fact)
Redis or Memcached caching was a necessity to get the kind of page load times
our visitors expected)

\- Shared storage (bear in mind you're running more than one WP web server -
if this was on-prem you'd be hosting those files off a SAN)

\- A load balancer (I don't think I need to explain a justification for that
one)

\- And probably a CDN too (reduce the stress on your web-servers and internet
gateway)

That's literally the bare minimum any large Wordpress application would need,
and that's all that AWS architecture is outlining. Sure, there are a few more
items on there like NAT gateways, but those are just network stacks that are
deployed as part of your VPC (or in on-prem terms, that's just the VLAN and/or
subnet management that you'd assign to your load balancer).

If you don't need multiple web servers or high availability then your
architecture becomes drastically simpler (no load balancer, no shared storage,
no replica DB, etc), but you'd also run the risk of downtime and even data
loss. That might not be an issue for some people, but then you're not buying
any of the selling points of AWS. So you'd be better off with shared hosting or
a managed WP service -- or to put it another way, AWS was never going to be a
good fit.

edit: Personally I think the issue here is more WP -- or rather the classical
CMS design -- than it is AWS. A static site generator style of CMS would make
for a much simpler cloud design and run a lot cheaper too. WP was designed in
an era when web shops had this infrastructure set up by default.

~~~
momokoko
But everything you described would port, with some adjustments, to a different
provider for another decade. This design is all about hosting on one specific
provider. The money spent on planning and architecture is a one-time cost and
it is very high. Not to mention the much higher cost of ownership over time
for hosting on AWS.

AWS feels like hosting on IIS back in the day. It was extremely expensive and
pointless for more than 50% of their customers, but people kept saying “but
we’re enterprise!” and throwing money away until the world finally woke up.

~~~
Spooky23
AWS is just not a toolset optimized for Wordpress in an SMB/Startup/Hobbyist
org. And organizations who do enterprisy stuff on AWS aren't optimized for IT
practices that look sane to others.

You need this complexity in an enterprise environment because the web teams
don't have access to databases, and the DBAs aren't allowed to manage
databases outside of the data network, etc. That's why enterprise content
management exists. Big companies and public sector value repeatability and
separation of duties over cash and time.

~~~
closeparen
Separation of duties is the root of so much evil. I'm very glad to work in an
environment where anyone can _submit_ a change to anything - it just has to go
through code review. Same security property: malice requires conspiracy. But
also safety against mistakes.

------
cbg0
While this is indeed overly complicated, you have to take into account the
fact that there are websites of all shapes and sizes powered by WordPress
these days, and not just blogs, but things like eCommerce, booking, dating
sites, forums and more, some of which would actually benefit from an overly
complicated setup like this.

If you're just running a simple WP blog, you could host it on a cheap shared
hosting package and not worry about devops, and as long as you use a cache
plugin that will serve your entire site as static content, you'll be able to
handle a large volume of traffic easily.
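
The full-page-cache trick those plugins rely on is simple enough to sketch.
This is a toy version of the idea, not any particular plugin's code, and the
cache path and TTL are made-up placeholders:

```python
import os
import time

CACHE_DIR = "/tmp/wp-page-cache"  # hypothetical cache location
TTL = 300  # seconds a cached page stays fresh

def cache_path(url_path):
    # Map a URL path to a flat file, e.g. /blog/post -> blog_post.html
    safe = url_path.strip("/").replace("/", "_") or "index"
    return os.path.join(CACHE_DIR, safe + ".html")

def serve(url_path, render_with_php):
    """Serve from the static cache if fresh; otherwise fall back to PHP
    and store the rendered page for the next visitor."""
    path = cache_path(url_path)
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < TTL:
        with open(path) as f:
            return f.read()  # static hit: no PHP, no database
    html = render_with_php(url_path)  # the expensive part
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "w") as f:
        f.write(html)
    return html
```

The first visitor pays the PHP + database cost; everyone after them gets a
plain file read, which is why a cheap shared host can take a traffic spike.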

~~~
zerkten
>> While this is indeed overly complicated, you have to take into account the
fact that there are websites of all shapes and sizes powered by WordPress
these days, and not just blogs, but things like eCommerce, booking, dating
sites, forums and more, some of which would actually benefit from an overly
complicated setup like this.

It's amazing what runs on Wordpress and how relatively easy it is to scale
with straightforward infrastructure. I think the diagram is aimed at
enterprises, which adore the complexity (because the enterprise architects
won't be doing any of the maintenance), and it conditions people to solutions
with a high consumption cost (because your money is best left with whatever
cloud provider you choose.)

That doesn't mean you shouldn't go all in on taking advantage of your cloud
provider when you have a good model of costs and performance. However, you are
best starting with the needs of the application and working from that point as
you allude to.

~~~
folkhack
> I think the diagram is aimed at enterprises which adore the complexity

Bingo. Anecdotal experience: I saw this happen at my last position where they
did an over-complicated/fragile WordPress setup with K8s.

We went over to K8s for "performance optimization", "security", and "system
stability"... what bothers me the most as a WordPress expert (specifically
with HUGE/redundant rollouts) is that none of these reasons/bottlenecks were
studied before designing the infrastructure. K8s is great, but if we start
throwing arbitrary infra at our problems all we see is increased complexity
without addressing the core tech issues.

When I pushed to load test the new infra it was met with crickets - likely
because that was never the goal in the first place. The goal was to jam a sexy
technology into our stack so someone's quarterly goals were met, courtesy of a
pretty-looking, SF-deluded engineering director who touts that not knowing
development/architecture makes him better at his role.

Honestly - it's just job security, chasing a dragon, and cargo-culting for
most of these orgs... if we were _actually_ on the hook for "more with less"
you would _NEVER_ see an atrocity like what was posted. KISS is sure out of
style when it comes to these fools' outrageous egos/budgets.

~~~
zerkten
> KISS is sure out of style when it comes to these fools' outrageous
> egos/budgets.

We will see how it survives this potential long-term economic downturn we are
in, or approaching. I laughed out loud when you mentioned K8s for "security".

------
yllus
This is ridiculously over the top.

I run our organization's WordPress sites on Elastic Beanstalk (just do an "eb
deploy" from your Git repo of the site and it gets up there), plus RDS (Amazon
Aurora) and CloudFront. EB and Aurora auto-scale and CloudFront does its CDN
thing.

I highly, highly recommend Elastic Beanstalk (
[https://aws.amazon.com/elasticbeanstalk/](https://aws.amazon.com/elasticbeanstalk/)
) to anyone who wants their org to concentrate on creating value (new/better
code) and being cost-efficient (use the compute power needed that minute
instead of overprovisioning) instead of fiddling with custom server configs
and wondering if you're fully patched up. It was a game changer for us.

~~~
headcanon
Sounds like your setup is only 2 steps away from what OP describes:

1\. If you need multiple WP instances, you'll need to duplicate that setup
with read replica + load balancer

2\. If you want to do things enterprise-proper, you'll want to set up a VPC,
which means you need the internet gateway + some routing setup

And voila, that's the above architecture. If you don't need it you don't need
it. But none of this is particularly complicated to set up if you know what
you're doing, especially if you're using Terraform.

~~~
neurostimulant
> 1\. If you need multiple WP instances, you'll need to duplicate that setup
> with read replica + load balancer

Don't forget that WordPress is not stateless. It needs write access to its
code directory, which complicates scaling. AWS has EFS, which supports up to
1,000 clients, but at that point you'd probably be better off rewriting your
website with something that's easier to scale on the cloud.

------
dadarepublic
Holy smokes!

And I just did a WordPress demo site using AWS Lightsail for $3.50 USD per
month (obv the lowest tier).

Took me less than an hour to get it all rolling.

Just check out the feature list:
[https://aws.amazon.com/lightsail/features/](https://aws.amazon.com/lightsail/features/)

There are some oddball finicky things about it (like not seeing the instance
except in the Lightsail interface unless you manually fix or migrate it) but
it was easy as pie.

~~~
acomjean
I set up a WordPress site using DigitalOcean. They have a template that makes
it pretty easy.

I used to use AWS when I was at a startup. This Lightsail seems similar to
DigitalOcean's built-in templates.

I appreciate the easy setup of the database and code, and that it gives you a
sane path to HTTPS.

[https://marketplace.digitalocean.com/apps/wordpress](https://marketplace.digitalocean.com/apps/wordpress)

------
bluedino
When I hear things like "Wordpress Powers 85% of websites", I wonder how many
of them are running on a $5/month shared host

~~~
antoineMoPa
You can handle a surprising amount of traffic with a $5/month Linode.

------
drchopchop
Provisioning a single Wordpress box is very straightforward using AWS
Lightsail and a Bitnami Wordpress image. You can push it up to the equivalent
of an m5.2xlarge instance. Unless you really need auto-scaling, this seems
like overkill?

~~~
gtsteve
EBS volumes have a ~99.9% reliability rating so on a long enough timeline
you're going to need to restore from snapshots.
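
Back-of-the-envelope, treating that ~99.9% as an annual per-volume survival
rate (which is an assumption about what the figure means):

```python
# Probability of at least one EBS volume failure, assuming a 0.1%
# annual failure rate per volume (i.e. ~99.9% annual reliability).
def p_any_failure(volumes, years, annual_failure_rate=0.001):
    p_survive_one = (1 - annual_failure_rate) ** years
    return 1 - p_survive_one ** volumes

# One volume over 10 years: roughly a 1% chance of needing that snapshot.
single = p_any_failure(volumes=1, years=10)

# A fleet of 50 volumes over 10 years: roughly a 39% chance at least one dies.
fleet = p_any_failure(volumes=50, years=10)
```

Small per-volume odds, but run enough volumes for long enough and a restore
stops being a hypothetical.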

Writing to EFS however means that you can just treat your EC2 instance as a
stateless front-end server that can be swapped out whenever needed.

Couple that with an autoscaling group and when the instance inevitably dies,
it'll automatically come back up on a new host.

I don't have an ops team so I prefer AWS to be responsible for stuff like this
wherever possible.

~~~
dannyw
I have a Wordpress blog that has been continuously up, barring two host server
migrations (~3 hours), since 2004.

It is powered by a Linode. I have literally never needed to manage the server
unless I was actively working on something, like changing the theme.

People overcomplicate things these days. You could spend X hours creating some
autoscaling, autorecovering setup; but for 99% of projects, it'd maybe
encounter an outage that requires your intervention once a decade.

~~~
brodouevencode
Fair, but how many customers do you serve? The example provided was
enterprise-level. TBH I would not have chosen Wordpress as the application of
choice, but the architecture is what would be recommended for an enterprise
solution.

Too many people are getting hung up on it being WP. AMZN doesn't care what the
application is and has no vested interest (that I'm aware of) in WP.

EDIT: clarity

------
vital101
I personally wouldn't build my own infrastructure. I'd use something like
[https://convesio.com/](https://convesio.com/) and let them handle the
scaling.

~~~
harryf
Exactly. Wordpress (plus plugins) is too much of a liability to host yourself.
I personally use wpengine.com, which comes with tools to keep Wordpress
updated with the latest security patches automatically.

------
api
Amazon has found a way to attach a profit model to sophomore developers'
tendency to over-engineer absolutely everything. It's brilliant. Imagine if
the authors of all those object oriented design patterns books had found a way
to charge people by the minute for each abstraction in a
FactoryFactorySingleton!

~~~
folkhack
> Amazon has found a way to attach a profit model to sophomore developers'
> tendency to over-engineer absolutely everything.

Amen. It's also becoming the "safe" thing to do to put all of your eggs in the
Amazon basket. Also just think about all of that data we're giving them...
gah.

They're laughing all the way to the bank.

~~~
api
Meanwhile for companies like ours _not_ doing this is a competitive advantage.
People's jaws drop when we tell them how insanely little we spend on cloud for
hundreds of thousands of concurrent users and a lot of stuff going on.

The secret: a la carte bare metal (packet.net, datapacket.com) for high-load
and especially high-bandwidth stuff, combined with a simple
not-over-engineered services deployment in Google GKE with instances tuned for
our workload (high CPU, low memory) so we are not paying for RAM we do not
need. We are really only using Kubernetes to keep stuff up and auto-scale. We
are not even close to using all of K8S's features.

GKE is most of the price tag. We could go down to about 1/4 the monthly cost
if we rolled our own Kubernetes cluster with Rancher or something at
packet.net, but the added labor might negate the savings. We may still do this
if we grow significantly, since at larger scales the added labor may be
justified by geometric cost savings.

In general the bigger you are the more sense it makes to roll your own. Small
and growing: use managed stuff. Bigger: consider bare metal and DIY clusters.
Even bigger: co-location can start to make sense. HUGE: build your own data
center! It's always a pure spreadsheet decision though. Also keep in mind that
you can sometimes split your stack and do managed for one part and bare metal
for another. For us the split is between things that are micro-service based,
database backed, and need to auto-scale vs. dumb simple bandwidth and CPU
pumping services that are trivial to deploy and require little attention. The
latter is what goes on bare metal for massive operational cost savings. The
former goes in GKE for _labor_ cost savings.

... and yes, it's very stable, and probably more stable than finicky over-
engineered Rube Goldberg machine AWS deployments that engineers constantly
have to fiddle with.

~~~
folkhack
> We could go down to half that price or less if we rolled our own Kubernetes
> cluster, but the added labor might negate the savings.

It's so refreshing to see someone taking this sort of complexity into account
when planning new infrastructure. There's a HUGE cost to training and
supporting a new infra paradigm that I often see decision makers ignore
completely... in their defense it's a difficult thing to predict but that's
not a reason to ignore the overhead!

~~~
api
Always try to do a spreadsheet and estimate the actual cost.

The last time we did the math it was kind of a toss-up between a roll-your-own
bare metal cluster and GKE when labor costs were considered, but those are
_estimated_ labor costs and we also didn't want to deal with the hassle. GKE
meanwhile destroys Amazon on price, so all the big clouds are not equal.

We decided we might make the leap when the savings for bare metal over GKE are
2X or more, since that leaves a good margin for under-estimating labor (which
is probably the case).
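
That spreadsheet fits in a few lines. All the figures here are made-up
placeholders purely to show the shape of the comparison, not real quotes:

```python
# Toy monthly cost comparison: managed Kubernetes vs. DIY on bare metal.
# Every number below is a hypothetical placeholder.
def monthly_cost(infra, extra_ops_hours, hourly_labor_rate):
    # Total cost = raw infrastructure spend + the ops labor it demands.
    return infra + extra_ops_hours * hourly_labor_rate

managed = monthly_cost(infra=8000, extra_ops_hours=5, hourly_labor_rate=100)
diy = monthly_cost(infra=2000, extra_ops_hours=40, hourly_labor_rate=100)

# Only switch when the savings leave a margin for underestimated labor.
savings_ratio = managed / diy
worth_switching = savings_ratio >= 2.0
```

With these invented numbers the raw infra is 4x cheaper DIY, but once labor is
priced in the ratio drops under the 2x threshold, so you stay managed.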

I edited the parent to add a point about rolling your own making progressively
more sense the bigger you get. That's because modern devops lets you leverage
a skilled admin across a big deployment, so the labor cost becomes somewhat
fixed and allows you to leverage the economy of scale and the cost advantages
of bare metal (to a point).

But do your own estimates! There are many details. One detail is that bare
metal bandwidth (especially egress) is _radically, ridiculously, nonsensically
cheaper_ than any of the big cloud providers. I italicized those because damn
do the big cloud providers ever gouge you on bandwidth. The same is true to a
less absurd degree for raw CPU power. It mystifies me when I see people doing
deep learning on AWS. Storage on the other hand is _far_ cheaper on Amazon and
other major cloud providers, especially when you factor in the cost of
achieving high reliability and uptime. If you are warehousing a ton of data,
you might never want to leave these (unless you actually get big enough to
build your own racks!).

Edit: caveat on storage: check out Backblaze B2 and using it from a nearby
bare metal data center if you are mostly warehousing and not heavily
accessing. Again: there are many many devils in these details, so do your own
shopping and cost estimates for your unique workload and factor in everything:
compute, storage, bandwidth, labor, uptime requirements, agility, rapid auto-
scaling needs, geographic location and/or multi-location needs, etc.
Hosting/cloud is a huge market with a dizzying array of choices each making
sense for different mixtures of customer profiles. Expect to spend a day or
two just surfing around and shopping.

FYI the bare metal hosts we've had good luck with are packet.net,
datapacket.com, and OVH, in approximately that order. (For OVH we have only
used their bare metal hosts and can't comment on their other services.) There
are others. We had lots of odd issues with Hetzner but they are dirt cheap
partly because they cut corners like using desktop-grade hardware. Hetzner may
actually make sense if you need a ton of super-cheap _compute_ and have a load
that is very tolerant of node failures, since as far as I can see nobody sells
CPU as cheap as they do. If I were doing a ton of deep learning model training
I'd consider using Hetzner for cheap raw compute power, and I'd treat the
nodes operationally like on-demand/spot instances.

You might look at Digital Ocean and Vultr too. DO is adding some managed
services and apparently they're decent. In the past the simple "we give you a
VM" hosts have occupied an uncanny valley though: not as cheap in
price/performance terms as bare metal "rent-a-box" hosting, and not rich
enough in services and value-adds to compete with big cloud.

~~~
folkhack
> One detail is that bare metal bandwidth (especially egress) is radically,
> ridiculously, nonsensically cheaper than any of the big cloud providers.

I work in the ISP space specifically with folks who help place businesses in
datacenters. This is 100% accurate - sometimes it can literally be pennies on
the dollar for egress.

> Storage on the other hand is far cheaper

Agree - S3 will always be a backbone component of my toolbox when building out
infrastructure. It's been solid from day-1 when I was in my early 20's using
it on remix competition sites (wavs are big... 10-20 wavs per-song oh god).
The awe still hasn't worn off for me as to how quick I can scale large-scale
web-friendly/secure storage with tools like that.

\---

On the VPS hosts... I'm weirdly loyal to them and have kept infra on Linode
for _years_. I also have had extraordinary positive experiences with DO as
well. I find they have the customer service that solutions like AWS lack for
the little guys... if we're talking about a big spend then yea I will totally
go to a more formal provider, but for my stuff I prefer knowing that there's
solid engineers on the hosting side that I can actually talk to.

I'm REALLY excited to see folks like Linode/DO support things like
S3-compatible APIs, K8s, etc. I think that's important for those solutions to
stay relevant.

\---

Thanks for such a great response - conversations like these are invaluable,
and I've definitely learned from this! I am VERY interested in Vultr - totally
could have used them 5-6 months ago for GIS crunching (fiber maps, lit
building lists, geocoding, census blocks, etc). Once I open that up as a
service to my users I'll likely be on the hunt for SSD-backed high-compute on-
demand instances.

~~~
kyuudou
DO also has great overall documentation quality and standards, IMO.

------
adamsvystun
The question of whether this is overly complicated is the wrong question.
While in most cases this might be too much, there are situations where it is
necessary. It all depends on the use case.

I for one welcome the description of a high-traffic, production-level
architecture. It's usually hard to find a real setup description on the
internet. Most architecture examples are simple, not something you would use
in production yourself. Plus, simplifying an architecture is always easier
than figuring out how to make it properly scalable.

------
john-shaffer
I would not use Wordpress to serve a large site directly unless I had no other
choice. You can create a much simpler, faster, scalable and more secure
architecture using WP2Static
([https://wp2static.com/](https://wp2static.com/)) to publish static files to
an S3 Bucket/CloudFront deployment (or your storage+CDN of choice). WP isn't
designed for this type of deployment, but the right plugins (e.g., WP
Serverless Forms, WP Offload Media Lite) make it work.
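
The publish step in that approach boils down to walking the static export and
pushing each file to S3 with the right key and Content-Type. A sketch of the
planning half, with the actual upload (e.g. a boto3 `put_object` call) left
stubbed out so this isn't tied to any credentials:

```python
import mimetypes
import os

def plan_s3_sync(local_dir):
    """Walk a static export and compute the S3 key and Content-Type for
    each file. The real upload call is deliberately omitted."""
    plan = []
    for root, _dirs, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            # S3 keys use forward slashes regardless of local OS.
            key = os.path.relpath(path, local_dir).replace(os.sep, "/")
            ctype = mimetypes.guess_type(name)[0] or "application/octet-stream"
            plan.append((key, ctype))
    return sorted(plan)
```

Getting Content-Type right matters here, since CloudFront serves whatever
metadata the object was uploaded with.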

If you run the Wordpress instance locally or otherwise secure it from public
access, you don't have to worry about keeping WP and its plugins updated,
which saves considerable time and expense. This is my go-to solution whenever
a WP install is requested.

------
gtsteve
As it happens my company has hired a marketing company who wants to use
Wordpress for our new website.

This design really isn't so far out from what I planned but the NAT gateways
aren't necessary and I was going to use Fargate instead of EC2 to host it.
Also, I was going to host it in a single availability zone as the impact of
downtime is minimal.

EFS seems like a natural choice also for upload directories, etc, and you can
attach EFS volumes to Fargate containers now.

I'd be interested to hear if anyone else has done this and what their
experiences are.

~~~
tilolebo
You can aim to run a truly stateless wordpress container using
[https://github.com/roots/bedrock](https://github.com/roots/bedrock) for the
wordpress install together with Composer and
[https://wpackagist.org/](https://wpackagist.org/) to manage your plugins and
dependencies.

Then there's [https://github.com/deliciousbrains/wp-amazon-s3-and-
cloudfro...](https://github.com/deliciousbrains/wp-amazon-s3-and-cloudfront)
to offload all media files to S3 (consider putting a Cloudfront distribution
in front to get a custom domain + HTTPS).

I've been there and I honestly can't recommend it. You'll be on your own.
Also, it really, really feels like Wordpress wasn't made to be deployed this
way. If I were you I'd rather use a good hosting provider like WPEngine and
spend my time on more valuable things.

~~~
folkhack
Yep - I run my own composer-driven WP setup based on the roots design and can
honestly say that it's an anti-pattern with how WP wants to handle
plugins/themes/core updates/etc.

I say it's an anti-pattern because we're talking about WP - a platform that
allows you to arbitrarily update PHP files from a web admin, install plugins,
etc... It's this sort of flexibility that our clients/end users want, and when
we start designing that BS out for security reasons it typically ends up with
a user yelling at me: "WHY CAN'T I JUST INSTALL A PLUGIN LIKE THE WORDPRESS
TUTORIAL SAYS?!"

When you come back with "security" and "I have to review that code" it almost
always falls on deaf ears. When you start designing the flaws out of WP infra
you start to realize those same flaws are why marketing is hellbent on keeping
it as a CMS - they hate working with/waiting on devs to get things done.

------
neurostimulant
WordPress is popular because it's very easy to set up. Just throw it into your
htdocs folder, install some theme and plugins, and you're good to go. Make the
install harder and you lose a big reason to use WordPress.

I think the only reason someone would do this is when a company that's using
WordPress wants to migrate to AWS to use "cloud" technology but doesn't want
to invest in rewriting their website, which is odd. If they have the
engineering capability to move their WordPress site to AWS with a complicated
architecture, surely they have the chops to rebuild the website with something
more compatible with cloud technology?

~~~
kyuudou
If it's low traffic enough and the footprint is small, couldn't the whole
thing be thrown in a couple S3 buckets? I'm still learning this stuff.

~~~
neurostimulant
I don't think you can run PHP off an S3 bucket. You certainly can sync the
upload folder (static files) to S3 as a form of CDN via some 3rd-party
plugins, but it doesn't solve the main issue when you try to run WordPress in
a multi-instance, load-balanced setup (I assume the diagram is intended to
address this, hence the complexity). For low-traffic sites, you probably won't
need to worry about this yet.

The real headache is when you need a load-balanced setup. You'll need to keep
the whole WordPress directory (not just the upload directory) synced between
instances, which you can't do over S3. NFS is a popular option, but NFS
servers usually have a hard limit on the number of connected clients, which
ultimately limits the number of your instances (e.g. if you need 20 instances
but your NFS server craps out with more than 12 clients connected, you're out
of luck). Amazon EFS is supposed to solve this since it supports up to 1,000
connected clients, but people often complain about slow performance.

That being said, scaling WordPress to handle high traffic is a headache unless
you write your own theme and plugins. The majority of WordPress sites use
those visual editor plugins and mega themes, which are a nightmare when you
get hit by high traffic, as they are often impossible to cache (crazy stuff
such as CSS generators that vary their output depending on user agent, etc.)
and often rely on WordPress's statefulness (writing arbitrary files to
arbitrary locations, which means a typical load-balanced/autoscaling
architecture will not work, as it assumes your application is stateless).
Personally, I'd rather rewrite the whole site with something else than try to
set up a load-balanced WordPress installation.

------
aussieguy1234
During a hack day a small team and I got WordPress running on Lambda,
serverless.

We custom-compiled PHP from source to run in the Lambda environment, then got
Node to pass the requests to the PHP CLI.

Took some tinkering but we got it working.
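
The fiddly part of that setup is translating the Lambda event into the
CGI-style environment a PHP process expects. A rough sketch of that layer
(Python here rather than Node, the script path is a made-up placeholder, and
the actual PHP invocation is stubbed out):

```python
def event_to_cgi_env(event):
    """Map an API Gateway proxy event to CGI-style environment variables,
    roughly what a php-cgi process expects."""
    qs = event.get("queryStringParameters") or {}
    headers = event.get("headers") or {}
    env = {
        "REQUEST_METHOD": event.get("httpMethod", "GET"),
        "SCRIPT_FILENAME": "/var/task/wordpress/index.php",  # hypothetical
        "QUERY_STRING": "&".join(f"{k}={v}" for k, v in qs.items()),
        "CONTENT_TYPE": headers.get("Content-Type", ""),
    }
    # HTTP headers become HTTP_* variables, dashes turned to underscores.
    for name, value in headers.items():
        env["HTTP_" + name.upper().replace("-", "_")] = value
    return env

def handler(event, context):
    env = event_to_cgi_env(event)
    # Real version: spawn the custom-compiled PHP binary with this env
    # (e.g. via subprocess) and return its output as the response body.
    return {"statusCode": 200, "body": f"would run PHP with {len(env)} vars"}
```

The real handler would shell out to the bundled PHP binary and relay stdout
back through API Gateway; everything else is this bookkeeping.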

------
ausjke
Unless I'm a Fortune 500 company, why do I need AWS at all? I can use Linode,
DigitalOcean, etc. to set up stuff fast and easily, instead of spending all
that bandwidth and then getting lost in the AWS maze.

~~~
btian
You can get an AWS VM for $3.50/mo.

And you can keep using AWS until you're part of the Fortune 500.

~~~
system2
Using AWS always gives me anxiety. It starts at $3.50, then some random thing
happens, you click a random Amazon page, and bam, you pay $350 that month
before you realize it. That might not apply to the $3.50 you're talking about,
but I've had enough with AWS. Unless the client really needs something global,
I think you can always get away with a DigitalOcean or Linode type of VPS.
Heck, I've been using CloudWays for basic projects; it takes 10 seconds to
launch the app and manages DigitalOcean for me. For Magento and other apps
that need root, I use a LEMP image with DigitalOcean. AWS development and
debugging are not for small players.

------
minimaul
And that design (the Amazon one) will be unusably slow for any actual uncached
access!

WordPress on EFS (and they explicitly mention PHP files in their EFS
description) is unbearably slow!

------
vr46
That is an extremely simple architecture design. Reverse proxy to load balance
two instances. Two instances with a cache talking to RDS and an EFS volume to
ensure uploads and files stay consistent.

The networking is the bit that most people don't care about or see, but it's
there, and that's it.

------
vangelis
There's always the Lambda PHP runtime...

------
ptrenko
Next: How to run wordpress on ENIAC

------
romille
Just use lightsail? Literally what it’s for

~~~
robjan
Good for your personal site, bad if your revenue depends on it

