
Can you explain your use case a bit more? I'm having a hard time imagining something that does ~430M DB writes/day but can't easily afford to pay $120 for those writes.
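
Back-of-the-envelope for the $120 figure (a sketch, assuming items under 1KB and a launch rate of $0.01 per hour for every 10 units of write capacity; the rate is an assumption, so check the pricing page):

    import math

    WRITES_PER_DAY = 430_000_000
    SECONDS_PER_DAY = 86_400

    # Assuming items under 1KB, each write consumes one write capacity unit.
    write_units = math.ceil(WRITES_PER_DAY / SECONDS_PER_DAY)  # ~4,977 units

    # Assumed rate: $0.01/hour per 10 units of write capacity.
    daily_cost = (write_units / 10) * 0.01 * 24
    print(f"{write_units} write units -> ${daily_cost:.0f}/day")  # ~$120/day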



Remember that the throughput is per item, not per query. For instance, we have an indexed query that returns ~1,500 rows each time. Just running that query a couple of times per second would create that kind of throughput requirement.

-----


The number of read units consumed by a query is not necessarily proportional to the number of items. It is equal to the cumulative size of the processed items, rounded up to the next kilobyte. For example, if you have a query returning 1,500 items of 64 bytes each, you'll consume 94 read units, not 1,500.
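
A minimal sketch of that rounding rule (the helper name is illustrative, not an AWS API):

    import math

    def query_read_units(item_count, item_size_bytes):
        # Query: cumulative size of all processed items, rounded up to the next KB.
        total_bytes = item_count * item_size_bytes
        return math.ceil(total_bytes / 1024)

    print(query_read_units(1500, 64))  # 96,000 bytes -> 94 read units, not 1,500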

-----


If that's the case then it's a completely different ball-game. I was about to abandon the whole idea of using DynamoDB due to the pricing of throughput. This makes it a whole lot more interesting!

-----


The official documentation seems to clearly contradict you. The pricing calculator doesn't let you specify a value of less than 1KB. Who's right? Or maybe I'm just not understanding what either you or the official pricing doc is saying :)

-----


From the pricing page (http://aws.amazon.com/dynamodb/pricing):

---

If your items are less than 1KB in size, then each unit of Read Capacity will give you 1 read/second of capacity and each unit of Write Capacity will give you 1 write/second of capacity. For example, if your items are 512 bytes and you need to read 100 items per second from your table, then you need to provision 100 units of Read Capacity.

---

Looks like 1KB is the minimum for calculations.

-----


Agreed, but Amazon's CTO said something different, hence my question.

-----


Werner is right. The query operation can be more efficient than GetItem and BatchGetItems. To calculate how many units of read capacity a query will consume, take the total size of all items combined and round up to the next whole KB. For example, if your query returns 10 items that were each 1KB, you will consume 10 units of read capacity. If your query returns 10 items that were each 0.1KB, you will consume only 1 unit of read capacity.

This is currently an undocumented benefit of the query operation, but we will be adding that to our documentation shortly.
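
To make the difference concrete, a small sketch of the two rounding rules as described above (function names are illustrative, not AWS APIs):

    import math

    def get_item_read_units(item_sizes_bytes):
        # GetItem/BatchGetItems: each item rounds up to a whole KB individually.
        return sum(math.ceil(size / 1024) for size in item_sizes_bytes)

    def query_read_units(item_sizes_bytes):
        # Query: the combined size of all returned items rounds up once.
        return math.ceil(sum(item_sizes_bytes) / 1024)

    ten_small_items = [102] * 10                 # ten ~0.1KB items
    print(get_item_read_units(ten_small_items))  # 10 units, one per item
    print(query_read_units(ten_small_items))     # 1 unit: 1,020 bytes rounds to 1KB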

-----


We're a small mobile marketing company jumping into the wild wild west of real-time bidding. It would be used mostly for logging impression requests for later analysis. Our bidder would need to handle upwards of 5,000 bid requests per second. These requests can be throttled down, but naturally the more data we can collect the better. That also doesn't include the associated costs of querying the data, which would add up quickly.

Now, I'm not sure this would be the ideal solution for such a thing (in fact it probably is not), but it's just the first thing that came to mind. In the grand scheme of things, sure, that may seem like a trivial amount for the use case, but we're still very much a startup, where dropping ~$3k/month on the data store alone makes me cringe a little when we have other expenses to account for too. :)
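
Roughly where the ~$3k/month comes from (a sketch, again assuming sub-1KB items and an assumed rate of $0.01 per hour for every 10 units of write capacity):

    BID_REQUESTS_PER_SEC = 5_000  # one sub-1KB logged write per bid request (assumed)

    write_units = BID_REQUESTS_PER_SEC    # 1 write unit per sub-1KB write/second
    hourly = (write_units / 10) * 0.01    # assumed $0.01/hr per 10 write units
    monthly = hourly * 24 * 30
    print(f"${monthly:,.0f}/month")       # ~$3,600/month, before read costs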

-----


$3k/month = $36k/year

Consider this cost relative to the cost of a trustworthy ops person, plus the capex & opex of running your own reliable & scalable DB.

-----



