Do your persistence elsewhere, probably in an API wrapped around RDS.
SSM parameters or other secrets that need fetching and decrypting at runtime should use the AWS-recommended method of caching that data in the global scope (at least that's how it works in Node), but this should feel icky, because it is icky.
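The global-scope pattern looks roughly like this. This is a sketch, not real SSM code: `fetchSecret` is a hypothetical stand-in for a `GetParameter` call with decryption, so the caching logic can be shown without an AWS dependency.

```typescript
// Counter so we can see how many "network" fetches actually happen.
let fetchCount = 0;

// Hypothetical stand-in for an SSM GetParameter call with decryption.
async function fetchSecret(name: string): Promise<string> {
  fetchCount++; // pretend this is a network round trip to SSM
  return `secret-value-for-${name}`;
}

// Module (global) scope: this Map survives across warm invocations
// of the same Lambda container.
const secretCache = new Map<string, Promise<string>>();

function getSecret(name: string): Promise<string> {
  // Cache the promise itself, so concurrent calls during a cold
  // start all share one in-flight fetch.
  let cached = secretCache.get(name);
  if (!cached) {
    cached = fetchSecret(name);
    secretCache.set(name, cached);
  }
  return cached;
}

// Simulated handler: only the first invocation pays the fetch cost.
export async function handler(): Promise<string> {
  return getSecret("db-password");
}
```

Subsequent warm invocations hit the cache, which is exactly the statefulness-in-a-stateless-thing ickiness being described.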
Mixing statefulness into things that are inherently intended to be stateless probably indicates you've chosen the wrong tool for the job.
Each Lambda has some state which, if one is sensible, can be used to cache things. Having a DB connection cached is perfectly sensible, assuming there is some retry logic.
Making an API around RDS is just a boatload of tech debt when it's trivial to cache a connection object. Not only that, it will add latency, especially as the go-to API fabric of choice is HTTP. So a 20ms call (worst case) can easily balloon up to 80ms (average).
It's now possible to have SSM secrets referenced directly in Lambda environment variables, so I don't see the need for the complications of doing it yourself.
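If memory serves, this is done with CloudFormation dynamic references, resolved at deploy time rather than at runtime. A rough fragment (the parameter name and function are made up for illustration, and note that secure-string parameters have restrictions on where they can be resolved):

```yaml
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      Environment:
        Variables:
          # Resolved from SSM Parameter Store when the stack is deployed
          DB_HOST: '{{resolve:ssm:/myapp/db-host:1}}'
```

The function then just reads `process.env.DB_HOST`, with no SSM SDK calls or global-scope caching in the handler at all.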
Lambda is just cgi-bin, for the 21st century. There is nothing magic to it.
* Cold start, connection is established.
* Code runs and returns.
* Lambda stops the process entirely but retains the container.
* DB detects the socket is closed and kills the connection.
* Warm start.
* Code runs and fails because the connection is invalid.
The issue is that actions taken by stateful clients can't always be retried.
You can detect these kinds of errors, but it's kludgey, because these drivers generally assume that sockets dying is relatively uncommon, and it's painful to test this behavior when FaaS doesn't give you direct control over it.
So you typically wind up either with a proxy, or with decorating your handlers to refresh the connection ahead of time, to make sure you have a fresh object.
Re: the connection issue, it would be great if there were a service or RDS feature that could do DB connection pooling behind a standard AWS API, so that we don't have to manage connections in Lambda, and so that we can query the DB from the public internet with standard AWS auth without having to expose the DB itself.
Google's recent Cloud Run service feels like a much more generally useful Serverless platform. Even AWS Fargate, despite not having the whole "spin the containers down when they're not serving requests" feature, is how I envision most customers adopting Serverless in the medium term (especially with their recent huge price reduction, it's actually competitive with EC2 instances now).
AWS needs to get Fargate support added to EKS. Stat. It was promised 18 months ago at re:Invent and is still listed on their "In Development" board on GitHub. That'll change the game for Kubernetes on AWS, because right now it's kind of rough.
...that says it’s REALLY slow.