Well, there's not going to be much detail here because it would violate NDA, but nothing is 'elastic'.
Somewhere, someone has to buy a set number of servers based on a running capacity projection and build those into usable machines. The basis of a datacenter is an inventory system, a DHCP server, a TFTP server, and a DNS server that get used to manage the lifecycle of hardware servers. That's what everyone did at one point, and the best of them built themselves tooling.
What Amazon has is built on what was available at the time, both for tooling and for the existing systems they'd have to integrate with. You almost certainly don't have to build anything that complex. Additionally, you can get an off-the-shelf DCIM that integrates with your DHCP and DNS servers, and trigger Ansible runners in your boot sequences to handle the lifecycle steps. It's considerably easier to do now than it was 15 years ago.
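To make the "inventory plus DHCP/TFTP/DNS" idea concrete, here's a minimal sketch of the glue between an inventory record and a PXE boot sequence. The `Host` class and `render_pxe_config` function are invented for illustration; the MAC-based config filename convention (`01-` plus the MAC with dashes) is pxelinux's real lookup scheme, and the lifecycle trick is simply that hosts flagged for install netboot an installer while everything else boots from local disk.

```python
# Hypothetical glue between an inventory system and a PXE/TFTP boot step.
from dataclasses import dataclass


@dataclass
class Host:
    mac: str               # e.g. "AA:BB:CC:DD:EE:FF"
    state: str             # lifecycle state from the inventory: "install" or "ready"
    kickstart_url: str = ""


def pxe_filename(mac: str) -> str:
    """pxelinux looks for a config file named after the NIC's MAC,
    prefixed with the ARP hardware type (01 = Ethernet)."""
    return "01-" + mac.lower().replace(":", "-")


def render_pxe_config(host: Host) -> str:
    """Hosts marked 'install' netboot the OS installer with a kickstart;
    everything else falls through to the local disk."""
    if host.state == "install":
        return (
            "DEFAULT install\n"
            "LABEL install\n"
            "  KERNEL vmlinuz\n"
            f"  APPEND initrd=initrd.img ks={host.kickstart_url}\n"
        )
    return "DEFAULT local\nLABEL local\n  LOCALBOOT 0\n"


if __name__ == "__main__":
    h = Host(mac="AA:BB:CC:DD:EE:FF", state="install",
             kickstart_url="http://deploy.example/ks/web.cfg")
    print(pxe_filename(h.mac))   # 01-aa-bb-cc-dd-ee-ff
    print(render_pxe_config(h))
```

A DCIM or a cron job would write these files into the TFTP root and flip the inventory state after the install callback, which is most of the lifecycle management right there.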
While Amazon doesn't use AWS itself for a lot of internal stuff, the internal tooling can still build thousands of boxes an hour, though they don't really pay for UI work on that tooling.
You can put one or more hosts in a fleet, tell it which software sets you want installed, click go, and you'll have a fleet when you come back. So don't think that what you're being asked to build is impossible, or that something like it isn't running under every single major cloud or VPS provider.
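The "hosts plus software sets, click go" step can be surprisingly little code. This is a hypothetical sketch: the fleet and software-set names are invented, but `ansible-playbook`'s `-i` flag and its trailing-comma inline-inventory convention are real.

```python
# Hypothetical glue: expand a fleet definition (hosts + software sets)
# into the ansible-playbook runs that would build it.

SOFTWARE_SETS = {                 # assumed mapping: set name -> playbook
    "web": "playbooks/web.yml",
    "monitoring": "playbooks/monitoring.yml",
}


def plan_fleet(hosts: list[str], sets: list[str]) -> list[list[str]]:
    """Return the ansible-playbook commands that would converge the fleet."""
    host_pattern = ",".join(hosts) + ","   # trailing comma = inline inventory
    return [
        ["ansible-playbook", "-i", host_pattern, SOFTWARE_SETS[s]]
        for s in sets
    ]


if __name__ == "__main__":
    for cmd in plan_fleet(["web01", "web02"], ["web", "monitoring"]):
        print(" ".join(cmd))
```

Wrap that in a loop fed by the inventory system and you have the "come back later to a built fleet" behavior; a real version would run the commands via `subprocess` and record results back into the inventory.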
The slightly harder part is deciding what you're going to give devs for a front end. Are you providing raw hosts, VMs, container fleets, all of it? How are you handling multi-zone or multi-region setups? How are you billing or throttling resources between teams?
The beauty of this is that you get a lot of stuff for free these days. You can build out a fleet, provide a few build scripts that can be pulled into a CI/CD pipeline in your code forge of choice, and you don't really need to build a UI.
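As an illustration of the "pipeline instead of UI" point, the forge's existing pipeline runner becomes the front end. This is a hypothetical GitLab-CI-style fragment; the job, script, and fleet names are all invented, but the `rules`/`if` branch gating is standard syntax.

```yaml
# Hypothetical pipeline fragment: the forge's runner is the "UI".
deploy-fleet:
  stage: deploy
  script:
    - ./scripts/build-fleet.sh --fleet web --sets base,web   # your glue script
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```

Devs get self-service provisioning through merge requests and pipeline logs, and you never wrote a line of front-end code.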
Provisioning tooling is hard, but it's a lot easier now than it was 15 or 20 years ago, and all the parts are there. I've built it several times on very small teams. I would have loved to have 10 devs to build something like that, but the reality is that you can get 80% of the way with a little glue code and a few open source servers.