@llcoolv has responded and the response is correct.
For scenarios where you create a lot of collections/tables, we recommend provisioning throughput on the database and sharing it among all the containers, to save on costs. You can change throughput at the database level exactly as you would at the container level: it is the same API, supported via the CLI, SDKs, REST API, and the Portal.
The entry point for database-level throughput is 400 RU/s ($24 per month), and you can add any number of collections/tables that share this throughput. You can scale up and down in increments of 100 RU/s ($6 per month).
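Using only the rates quoted above, the monthly cost for a given throughput works out as follows. This is a rough sketch; the helper name and the strict validation are mine, not part of any SDK:

```python
def monthly_cost_usd(rus: int) -> float:
    """Monthly cost of database-level provisioned throughput,
    at the rates quoted above: $6/month per 100 RU/s, 400 RU/s minimum."""
    if rus < 400 or rus % 100 != 0:
        raise ValueError("throughput must be >= 400 RU/s, in 100 RU/s steps")
    return rus / 100 * 6.0

print(monthly_cost_usd(400))   # 24.0 -- the entry point
print(monthly_cost_usd(1000))  # 60.0
```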
Please see the following links for details:
Please see the sample here for moving data:
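For the SDK route mentioned above, here is a rough sketch using the current azure-cosmos Python package (v4, which is newer than the SDK this thread originally discussed). The endpoint, key, and database name are placeholders; treat this as an illustration of the shape of the calls, not verified production code:

```python
def provision_shared_database(endpoint: str, key: str, db_name: str) -> None:
    # Imported inside the function so this sketch stays importable
    # even without the azure-cosmos package installed.
    from azure.cosmos import CosmosClient

    client = CosmosClient(endpoint, credential=key)

    # Create a database with shared (database-level) throughput;
    # every container created in it draws from this shared pool.
    database = client.create_database_if_not_exists(
        id=db_name, offer_throughput=400
    )

    # Scale the shared throughput up or down later, in 100 RU/s steps.
    database.replace_throughput(500)
```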
I am aware that one can provision per database, but as another poster mentioned, you can't change that once the database is created, and you also need to specify something called a partition key, which can't be done through the MongoDB driver. If I recall correctly, there aren't even examples for that in the Cosmos DB Python API.
I mean yeah - it's possible. But it is also a pain in the ass.
Yes, for a shared-throughput database you will need to create a new database.
- You can create MongoDB API collections with shard keys defined. This is fully supported by the MongoDB driver
- Support for MongoDB collections without a shard key specified is coming in May
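To make the first bullet concrete: over the MongoDB wire protocol, a Cosmos DB collection gets its shard key by sending the `shardCollection` command. The sketch below only builds that command document (the database, collection, and key names are hypothetical); with pymongo you would pass it to `Database.command(...)` against a live account:

```python
def make_shard_command(db: str, collection: str, key_field: str) -> dict:
    """Build the shardCollection command for a Cosmos DB MongoDB API account.
    Hypothetical helper; only the command shape follows the documented form."""
    return {
        "shardCollection": f"{db}.{collection}",  # "<database>.<collection>"
        "key": {key_field: "Hashed"},             # the shard (partition) key
    }

cmd = make_shard_command("mydb", "orders", "user_id")
# With pymongo (not executed here):
#   from pymongo import MongoClient
#   client = MongoClient("<connection-string>")
#   client["mydb"].command(cmd)
```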
Thank you for the feedback on the docs and the missing code samples. We will add those in the docs.