
In The Works – Amazon Aurora Serverless - polmolea
https://aws.amazon.com/blogs/aws/in-the-works-amazon-aurora-serverless/
======
superasn
Last week I created a small framework called lambdaphp[1].

My aim was to host a WordPress or Laravel site on AWS Lambda without paying
any monthly hosting charges. I got everything running (sessions, fs, requests,
etc.), except that of course I still had to use RDS, and I think this takes
care of that too. So now I can expect to run a full site that is billed only
for the resources it actually consumes.

Of course my project was just for my own amusement, but I think this is how
it's going to be done soon, or at least where AWS is heading. Seems pretty
nifty!

[1] https://github.com/san-kumar/lambdaphp

~~~
meritt
AWS will likely add PHP support, yes, but they absolutely will not do what
this project does: run a NodeJS HTTP server that launches a PHP binary and
runs local scripts.

~~~
superasn
That's just one way to make it happen until we get proper support. I wrote to
AWS support and they said that they will consider PHP support for AWS Lambda,
as many people have requested it (they did not give me an ETA, though).

The funny thing is that the response time is quite good despite running it
through a NodeJS server that launches a PHP binary (340ms, faster than 98% of
sites tested, per Pingdom[1]).

[1] https://tools.pingdom.com/#!/ex9izm/https://www.lambdaphp.host/

~~~
meritt
I understand your options are limited until AWS adds first-class PHP support,
but there are still plenty of superior ways you could arrange this. You're
running the PHP CLI rather than using a persistent process daemon and speaking
to it over one of the other SAPIs (e.g. FastCGI). You're also completely
defeating the inherent async properties of NodeJS by launching a synchronous
PHP process, and you're completely missing the benefit of "pre-warming"
servers. Only the NodeJS aspect gets to pre-warm; a PHP process must launch
from cold for every single request.

340ms is not a good response time at all. You need to elevate your
expectations.
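The CLI-versus-daemon distinction can be sketched in a few lines of Python (used here in place of the NodeJS/PHP pair purely for illustration; the uppercasing worker is a made-up stand-in for a PHP script). The first handler spawns a fresh interpreter for every request; the second keeps one worker alive and feeds it requests over a pipe, roughly the way FastCGI talks to php-fpm:

```python
import subprocess
import sys

# Cold path: spawn a fresh interpreter per request -- analogous to
# launching the PHP CLI for every single Lambda invocation.
def handle_cold(payload: str) -> str:
    out = subprocess.run(
        [sys.executable, "-c", "import sys; print(sys.stdin.read().upper())"],
        input=payload, capture_output=True, text=True,
    )
    return out.stdout.strip()

# Warm path: one long-lived worker process, fed over a pipe -- analogous
# to php-fpm sitting behind FastCGI with code already parsed and cached.
class PersistentWorker:
    def __init__(self) -> None:
        self.proc = subprocess.Popen(
            [sys.executable, "-u", "-c",
             "import sys\nfor line in sys.stdin: print(line.strip().upper())"],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
        )

    def handle(self, payload: str) -> str:
        self.proc.stdin.write(payload + "\n")
        self.proc.stdin.flush()
        return self.proc.stdout.readline().strip()
```

The cold path pays interpreter startup on every call, which is exactly the overhead a persistent SAPI avoids.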

~~~
lozenge
You can't launch a persistent process on AWS Lambda.

~~~
meritt
You most definitely can. Lambda freezes the state of the container and thaws
it for the next request (assuming the next request comes relatively soon;
otherwise it deletes the frozen instance entirely -- this is the entire
premise behind keeping Lambda functions "warm"), including any background
processes that might be running.

It's not persistent in the sense of running between requests, but for the use
case here it's exactly what is needed. He would save significant response time
by having php-fpm already running, code parsed, opcodes cached, and just
issuing a new FastCGI request to php-fpm instead of relaunching the PHP CLI
each and every time.
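In Lambda's Python runtime (used here in place of NodeJS just to sketch the idea; a sleeping dummy process stands in for php-fpm), the pattern looks roughly like this -- the restart-if-dead check is the essential part:

```python
import subprocess
import sys

# Module scope survives freeze/thaw while the container stays warm, so the
# daemon started here is launched once per cold start, not once per request.
_daemon = None

def get_daemon():
    global _daemon
    # poll() is None while the process is alive; anything else means it
    # exited (e.g. the container was recycled), so relaunch it.
    if _daemon is None or _daemon.poll() is not None:
        _daemon = subprocess.Popen(  # dummy stand-in for php-fpm
            [sys.executable, "-c", "import time; time.sleep(3600)"]
        )
    return _daemon

def handler(event, context):
    daemon = get_daemon()  # warm invocations reuse the running process
    # ...a real version would issue a FastCGI request to the daemon here...
    return {"daemon_pid": daemon.pid}
```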

------
sologoub
Really wish "serverless" also meant that it works with AWS Lambda
efficiently. As it is, each function invocation would try to open a
connection, making the overall overhead extremely high and stressing the DB.

~~~
k__
Can't you open the connection outside of the function?

So as long as the function is hot, it won't reconnect.

~~~
mjb
That's right. Make connections to databases (and most other things) the first
time your Lambda handler runs, and stash them in a static/global variable for
re-use on future runs. That allows you to amortize the cost of forming the
connection over many executions of your function, which improves latency,
reduces cost, and reduces load on the backend.
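A minimal Python sketch of that pattern (sqlite3 stands in here for an RDS/Aurora client; any driver with a connect call works the same way):

```python
import sqlite3

# Module-level cache: survives across invocations while the container is
# warm, so only the cold-start invocation pays the connection cost.
_conn = None

def get_connection():
    global _conn
    if _conn is None:
        # sqlite3 is a stand-in for e.g. a MySQL/Postgres driver's connect()
        _conn = sqlite3.connect(":memory:")
    return _conn

def handler(event, context):
    conn = get_connection()  # reused on every warm invocation
    (n,) = conn.execute("SELECT 1").fetchone()
    return {"result": n}
```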

~~~
sologoub
Haven't heard of this approach yet. Do you have a write up I could reference
to try it out?

~~~
k__
I never read anything official, but there's some material by framework makers
(Serverless, Apex Up).

edit: https://medium.com/@tjholowaychuk/aws-lambda-lifecycle-and-in-memory-caching-c9cd0844e072

~~~
sologoub
Thanks, but I think this is very different from connection pooling on a DB,
say pgbouncer.

Doing some searching, I did find this, which seems much closer:
http://blog.rowanudell.com/database-connections-in-lambda/

TL;DR: you can define the DB connection outside the scope of a given function,
so it's scoped to the container and can be reused as long as the container is
not recycled. Seems promising!

~~~
k__
That's basically what I wrote.

------
brootstrap
3 months later... AWS announces a serverless, databaseless database system.

~~~
ignoramous
Which might be when their marketing team decides to re-launch S3?

~~~
noobiemcfoob
I'd merrily laugh at that ad :)

------
dhd415
Given the existing architecture of Aurora, the independent scaling of CPU and
storage seems pretty straightforward. What is much harder to scale up and
(especially) down is a warmed-up buffer pool, which is critical for consistent
query performance. I wonder if that is what they mean in the article when they
say that scaling happens via a pool of "warm" instances. If so, I'd be very
interested in more details on how that works.

------
bpicolo
I wonder how the cost differs, given that they have to keep hot standbys.
Perhaps a lot of behind-the-scenes prediction to make it cost effective? It
takes a while to create a standby from scratch for large DBs.

Either way, sweet tech. Seems like a fun thing to build.

~~~
zwily
Aurora doesn’t work that way - it has one storage layer shared by all the
nodes. Adding a new replica is very fast no matter your data set size.

~~~
bpicolo
Ahh, makes sense. Thanks for the clarification there!

------
mrep
Is this for the master or the read replicas?

If it is for the master, that would be amazing, and I would wonder whether you
can scale past the single-instance size max (currently r4.16xl).

------
drej
Cool... but.. how?

~~~
jeffbarr
Read the post!

~~~
drej
I have, I'm just excited and baffled at the same time :-)

