
Lessons learned using single-table design with DynamoDB and GraphQL - ranman
https://servicefull.cloud/blog/dynamodb-single-table-design-lessons/
======
delive
>> In our use case, having the attribute model as the primary key in one of the
GSIs (Global Secondary Indexes), which always indicated the type of row, was
very helpful. With a simple query where model was a hashKey we could get all
Members, Channels, Roles, Audiences, etc.

Won't this cause problems by putting every item of a type on a single node in
AWS, since the hash keys are the same? Or are they suggesting that the GSI uses
a KEYS_ONLY projection, so even though every item lives on one node, only the
keys count toward its size? (Even then, I don't see how that's very useful
short of counting the number of items.)
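For context, the access pattern described in the quote can be sketched as a single Query against a GSI whose hash key is the `model` attribute. This is a hypothetical sketch: the table name, index name, and attribute names are illustrative assumptions, not from the article.

```python
# Sketch of the "query by model" pattern from the quoted article.
# A GSI keyed on the item's "model" attribute lets one Query fetch
# every item of a given type. Names below are assumptions.

def query_params_for_model(model: str) -> dict:
    """Build the request dict that would be passed to DynamoDB's
    low-level Query API (e.g. boto3 client.query(**params))."""
    return {
        "TableName": "AppTable",            # assumed table name
        "IndexName": "byModel",             # assumed GSI name
        "KeyConditionExpression": "#m = :model",
        "ExpressionAttributeNames": {"#m": "model"},
        "ExpressionAttributeValues": {":model": {"S": model}},
    }

# e.g. fetch all Member rows in one query:
params = query_params_for_model("Member")
```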

~~~
haolez
I think this doesn’t apply to Global Secondary Indexes, but I might be wrong.

~~~
erik_seaberg
Yeah, partition size is only limited if you create a local index, because
those are strongly consistent.

~~~
delive
Ah interesting, it looks like you are right. If any partition grows beyond
10 GB on the main table or a GSI, that partition splits into sub-partitions,
with the sort key folded into the hashing function. If there is no sort key,
the partitioning scheme distributes items equally across partitions, so all
partitions subdivide at the same time.

https://stackoverflow.com/questions/40272600/is-there-a-dynamodb-max-partition-size-of-10gb-for-a-single-partition-key-value
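The split behavior described above can be illustrated roughly in code. This is an assumption about DynamoDB internals (per the linked Stack Overflow answer), not documented behavior; the hash function and shard-picking scheme here are purely illustrative.

```python
import hashlib

# Illustrative sketch (an assumption, not the real DynamoDB algorithm):
# once a partition exceeds ~10 GB, items can be redistributed across
# sub-partitions by hashing the sort key along with the hash key, so
# items sharing one hash key no longer all land on one sub-partition.

def sub_partition(hash_key: str, sort_key: str, n_splits: int) -> int:
    """Deterministically pick a sub-partition for an item."""
    digest = hashlib.md5(f"{hash_key}#{sort_key}".encode()).digest()
    return digest[0] % n_splits

# Items that share the "Member" hash key spread out via their sort keys:
placements = {sub_partition("Member", f"MEMBER#{i}", 4) for i in range(100)}
```

The point is only that the sort key gives the system something to hash on; without one, every item of the hash key maps to the same place and the split has nothing to distribute.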

~~~
ro_sharp
Couldn’t this still hit partition throughput limits though?

~~~
erik_seaberg
It's pretty opaque. My impression is that each shard (a storage backend quorum)
gets a roughly equal share of the capacity you pay for, and items in the same
partition tend to live on the same shard to keep range queries cheap (and
local indexes require single-shard partitions). They've made improvements in
lending cold shards' unused capacity to hot shards, but they still recommend
avoiding hot partitions and keeping load roughly even.

