I use AWS extensively, so I can elaborate on AWS's approach to these problems. Deployment is straightforward with CloudFormation; VPCs, security groups, KMS, etc. provide well-documented security features; and CloudWatch gives you logging and monitoring out of the box. Integration testing is definitely important, but it becomes a breeze if your whole stack is modeled in CloudFormation (just deploy a testing stack...). CloudFormation also makes regionalization much easier.
The most painful part has been scaling up the complexity of the system while maintaining fault tolerance. At some point you start running into more transient failures, and recovering from them gracefully can be difficult if you haven't designed your system for them from the beginning. This means everything should be idempotent and retryable--which is surprisingly hard to get right. And there isn't an easy formula to apply here--it requires a clear understanding of the business logic and what "graceful recovery" means for your customers.
Lambda executions occasionally fail and need to be retried; you'll occasionally get duplicate SQS messages; eventual consistency can create hard-to-find race conditions; edge cases in your code paths can inadvertently create tight loops that spin out of control and cost you serious $$$; and whales can create lots of headaches that affect availability for other customers (hot partitions, throttling by upstream dependencies, etc.). These are the real time-consuming problems with serverless architectures. Most of the "problems" in this article are relatively easy to overcome--and, not coincidentally, easy to understand and sell solutions for.
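The "tight loops that cost you $$$" failure mode is usually an unbounded retry. A sketch of the standard defense--capped exponential backoff with jitter and a hard attempt limit (names like `TransientError` and the parameter values are illustrative, not any particular SDK's API):

```python
import random
import time

class TransientError(Exception):
    """Stands in for a retryable failure (throttle, timeout, 5xx...)."""

def call_with_backoff(fn, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Retry fn on transient errors with capped exponential backoff + jitter.

    The attempt cap is what keeps a persistent failure from becoming a
    tight retry loop; the jitter keeps a crowd of retrying clients from
    hammering a recovering dependency in lockstep.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # give up and surface the failure instead of looping forever
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))  # "full jitter" variant
```

Even this only helps if the wrapped operation is idempotent--otherwise each retry is a chance to double-apply the side effect, which is why the two concerns have to be designed together.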