
DynamoDB Compatibility Layer for Apache Cassandra - hannahlee
https://github.com/datastax/dynamo-cassandra-proxy/blob/master/docs/Summary.md
======
i_am_nomad
Interesting that this comes out a day after ScyllaDB announced DynamoDB
compatibility. It wouldn’t be a bad thing if DynamoDB’s API (or something
else) became a sort of ODBC for NoSQL databases. (Edit: corrected spelling)

~~~
jjirsa
Git history suggests it’s been around for a year. It’s not super interesting
really; clearly there’s a poke here, in that someone felt this feature was
worth a press release and their more established competitor just “meh,
here”’d them.

Also, CQL is now implemented by something like 5-6 NoSQL databases and is
becoming the implicit standard (Cassandra, Cosmos DB, DataStax, YugabyteDB,
etc).

------
redact207
Is Dynamo affordable yet? When I did my AWS certs a few years ago they seemed
to be pushing it hard for almost every non-relational data persistence use
case. In practice, throughput provisioning meant you either knew exactly what
your read/write load was and that it was constant, or you had to set it to
peak levels and pay a fortune. If not, Dynamo would simply reject your
request, and it was up to you to build retry logic back up the chain.

This got a little better when autoscaling went from a workaround to an actual
feature; but there's still a fair bit of ramp-up time so you still have to
handle request limits.

I haven't paid much attention to it since, but was wondering if these things
are a non-issue nowadays?
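The retry logic the parent describes usually ends up as exponential backoff with jitter around the throttled call. A minimal sketch in Python, using a stand-in `ThrottledError` and a fake client in place of DynamoDB's real `ProvisionedThroughputExceededException` (all names here are illustrative, not the SDK's):

```python
import random
import time


class ThrottledError(Exception):
    """Stand-in for DynamoDB's ProvisionedThroughputExceededException."""


def put_with_backoff(put_item, item, max_retries=5, base_delay=0.05):
    """Retry a throttled write with exponential backoff and jitter."""
    for attempt in range(max_retries + 1):
        try:
            return put_item(item)
        except ThrottledError:
            if attempt == max_retries:
                raise
            # Sleep 2^attempt * base_delay, scaled by random jitter,
            # so concurrent clients don't retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))


# Fake client that throttles the first two calls, then succeeds.
calls = {"n": 0}

def flaky_put(item):
    calls["n"] += 1
    if calls["n"] <= 2:
        raise ThrottledError()
    return "ok"

print(put_with_backoff(flaky_put, {"pk": "1"}))  # succeeds on the third attempt
```

The AWS SDKs do a version of this for you automatically, but once retries are exhausted the error still surfaces to your code, which is the "back up the chain" part.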

~~~
awinder
You can now pay a per-request rate and skip the autoscaling / WCU & RCU math.

[https://aws.amazon.com/dynamodb/pricing/on-demand/](https://aws.amazon.com/dynamodb/pricing/on-demand/)

~~~
ledauphin
last I ran the numbers this was still roughly 10x more expensive than 100%
utilization of provisioned capacity. But for many use cases it will still end
up roughly equivalent, since you won't have to over-provision.
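For a back-of-the-envelope version of that comparison (the prices below are assumed us-east-1 list prices from around on-demand's launch, so check the current pricing page): one write-capacity unit at 100% utilization performs 3,600 writes per hour.

```python
# Illustrative cost comparison; prices are assumptions and may be stale.
provisioned_wcu_per_hour = 0.00065   # $ per WCU-hour (us-east-1, assumed)
on_demand_per_million_writes = 1.25  # $ per million write request units (assumed)

writes_per_hour = 3600               # one WCU fully utilized = 1 write/sec
provisioned_cost = provisioned_wcu_per_hour
on_demand_cost = writes_per_hour / 1_000_000 * on_demand_per_million_writes

print(f"provisioned: ${provisioned_cost:.5f}/hr, on-demand: ${on_demand_cost:.5f}/hr")
print(f"ratio: {on_demand_cost / provisioned_cost:.1f}x")
```

With these figures the write-side multiple comes out closer to 7x than 10x; the exact number depends on item size, read/write mix, and region, but it lands in the same ballpark as the parent's estimate.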

~~~
Dunedan
I'd argue that running at 100% of provisioned capacity is something that only
makes sense for a very, very small subset of DynamoDB users. Actually I'd only
do it if my access patterns were really predictable and if it's fine if
requests get throttled.

The more common use case is that you over-provision your capacity to be able
to handle spikes in the number of accesses well. While there was already
DynamoDB Autoscaling, I suggest everybody using it check out DynamoDB
On-Demand, as that might (depending on the access patterns) still save a lot
over Autoscaling and is overall way less hassle, because you simply don't
have to care about scaling the provisioned capacity at all anymore.
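Switching an existing table over is a single configuration change; with the AWS CLI it looks something like this (table name is a placeholder):

```shell
# Switch an existing table from provisioned to on-demand billing.
# "mytable" is a placeholder; the change can be reversed with
# --billing-mode PROVISIONED plus a --provisioned-throughput argument.
aws dynamodb update-table \
    --table-name mytable \
    --billing-mode PAY_PER_REQUEST
```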

~~~
victoriadave
That assumes you don't write more than 40k ops per second. See AWS's
documentation:
[https://docs.aws.amazon.com/amazondynamodb/latest/developerg...](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html)

~~~
mritun
40K is just a soft limit. You can contact support to increase the limit to any
number you like.

In OnDemand mode tables have practically no limits on throughput or storage
(just like provisioned mode).

Disclaimer: I work for DynamoDB but comment is my own.

------
segmondy
I wish to see more of this: compatibility layers for all AWS products. Let's
have enough of them to make it easy to run an "AWS app" elsewhere. I figured
Kubernetes was that path, but AWS has so many custom APIs/products that it's
still easy to be baked in.

~~~
bdcravens
S3 has the open-source MinIO, and it seems every cloud provider has adopted
the S3 API for object storage.

------
brianmhess
This is a pretty nice video (for those of us who prefer watching over
reading)...
[https://twitter.com/syllogistic/status/1172537068783898624](https://twitter.com/syllogistic/status/1172537068783898624)

