Scaling applications, rolling out updates, and keeping configs consistent are all a great deal easier for me in k8s than with Ansible or any other config management tool.
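To make the scaling point concrete — a minimal sketch, assuming a hypothetical deployment named "web" (the name and numbers are illustrative, not from my actual setup):

```shell
# Resize a deployment to 5 replicas with one declarative command
kubectl scale deployment/web --replicas=5

# Or hand it to the cluster: keep 2-10 replicas, targeting 70% average CPU
kubectl autoscale deployment/web --min=2 --max=10 --cpu-percent=70
```

The equivalent in Ansible means writing and maintaining playbooks that provision hosts, install the app, and wire up a load balancer yourself.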
In the end, it's a single platform that one can build tooling against, which lets an organization abstract away the infrastructure. My team has done exactly that (on top of k8s). As a result, a developer can spin up a new environment with the click of a button, deploy whatever code they like, scale the environment, etc., with little to no training. Those capabilities were a tremendous accelerator for my organization.
Sure, you can build something similar with Ansible on AWS, but then you're married to AWS, and you have to worry about instance sizing and the cost of idle instances. In my experience, it's just a great deal more overhead.
I'm running a production service on EKS, also tried it on GKE. Both take away most of the cluster management pain.
Kubernetes is great, but you're not being honest with yourself if you can't acknowledge the difficulty in going from 0 to production-ready. There is a ton of complexity and lots of grief on the path to a fully functional k8s environment.
Migrating between the two isn’t even all that difficult if you change your mind later.
Creating a cluster is a one-liner, no Ansible required. A node pool comes with it, which provisions and configures the hosts for you. Databases are all created in-cluster with a Helm chart.
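For example — a hedged sketch assuming GKE and the Bitnami PostgreSQL chart; the cluster name, zone, and node count are placeholders, and EKS via eksctl works much the same way:

```shell
# Create a cluster; the default node pool provisions the underlying hosts
gcloud container clusters create demo --num-nodes=3 --zone=us-central1-a

# Run a database in-cluster via a Helm chart (Bitnami PostgreSQL as an example)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/postgresql
```

That's the entire bootstrap: no inventory files, no SSH key distribution, no host hardening playbooks.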
Many people will advise against running stateful workloads in k8s. It's tricky, sure, but the benefits are still there.
As a result of the orchestration my team has built on top of k8s, any developer in my organization can clone any production environment at any time, with production data. Once created, those cloned environments can be configured to receive streaming updates from production.
Developers can test bug fixes and features on live streaming production data with absolute certainty that they won't break anything. This capability is immensely valuable.