
Lambda Store: Serverless Redis - iampims
https://medium.com/lambda-store/serverless-redis-is-here-34c2fa335f24
======
pachico
Don't get me wrong, I celebrate any competitor of AWS, even though I use it
massively, but Redis is a tricky one.

For starters, you want Redis to be as close as possible to your application. It
is often used as a cache, and it makes no sense to have long latencies to your
cache layer (Redis is frequently even deployed in the same pod as the app
using it, precisely because you want it nearby). And if your infrastructure is
already in AWS (why would you choose ElastiCache otherwise?), you would be
paying egress for all the data going from AWS to your external
Redis-as-a-service provider, and that might cost much more than what you
expected to save in the first place.
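
That egress concern is easy to put rough numbers on. A back-of-the-envelope sketch; the ~$0.09/GB internet-egress rate is an assumption based on AWS's first published pricing tier, so check current pricing:

```python
# Back-of-the-envelope egress cost for talking to an off-AWS Redis provider.
# The ~$0.09/GB rate is an assumed figure (AWS's first internet-egress tier
# at the time of writing), not an exact quote.

def monthly_egress_cost(avg_payload_bytes, requests_per_sec, price_per_gb=0.09):
    seconds_per_month = 30 * 24 * 3600          # ~30-day month
    total_bytes = avg_payload_bytes * requests_per_sec * seconds_per_month
    return total_bytes / 1e9 * price_per_gb     # bytes -> GB -> dollars

# e.g. 2 KB responses at 500 req/sec:
cost = monthly_egress_cost(2_000, 500)          # ~ $233/month, egress alone
```

That is often more than a small ElastiCache node costs in the first place, which is the commenter's point.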

To be honest, AWS ElastiCache is not even an expensive service (t3.micro
instances work just great and require no upfront reservations, which you
failed to use as a comparison for obvious reasons).

Really, I don't think Redis is a problem that needs solving, and I'd put my
money on someone offering cheaper DocumentDB alternatives, or Redshift
alternatives, or managed ClickHouse services, etc. Those are the real killers!

Anyhow, sorry for being a bummer, and I wish you the best of luck!!!

~~~
edaemon
The egress costs this would incur with AWS Lambda did give me immediate pause
when looking at the product. It seems like it would be a good idea to add some
information on that for prospective users.

~~~
svennnaa
(Disclaimer: I work for lambda store) Right, we need to mention this more
explicitly.

------
jedberg
> Younger Team

This is not a selling point to me. When it comes to storage, I prefer
experience over motivation, just like I prefer durability over speed (cough,
MongoDB).

Otherwise though this looks pretty cool.

I do wonder what their scaling story looks like. How can they maintain a
profit at such low prices and handle sudden load spikes?

~~~
redisman
If you're not paying by the hour, then I'm guessing the business model is that
you're sitting on a Redis instance shared with other customers, which sounds a
bit scary for many reasons. Just guessing though.

~~~
mattiabi
This is Mattia from Lambda Store.

We are not using shared Redis instances. We have our own Redis server
implementation with a tiered storage model that keeps the hot entries in
memory, to make better use of it. This lets us run on smaller instances most
of the time and migrate a Redis database to a bigger machine within a few
seconds when needed.
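
Lambda Store hasn't published the internals, but the general idea of a two-tier store (hot entries in a bounded in-memory LRU, everything else spilled to a cheaper tier) can be sketched in toy form. This is only an illustration of the concept, not their implementation:

```python
from collections import OrderedDict

# Toy two-tier key-value store: hot entries live in a bounded in-memory LRU;
# evicted entries spill to a "cold" tier (a dict here, standing in for
# disk-backed storage). Illustrative only -- not Lambda Store's design.

class TieredStore:
    def __init__(self, hot_capacity):
        self.hot_capacity = hot_capacity
        self.hot = OrderedDict()   # LRU order: most recently used at the end
        self.cold = {}             # stand-in for a disk-backed tier

    def set(self, key, value):
        self.cold.pop(key, None)
        self.hot[key] = value
        self.hot.move_to_end(key)
        self._evict()

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)
            return self.hot[key]
        if key in self.cold:                     # promote on access
            self.set(key, self.cold.pop(key))
            return self.hot[key]
        return None

    def _evict(self):
        while len(self.hot) > self.hot_capacity:
            k, v = self.hot.popitem(last=False)  # drop least recently used
            self.cold[k] = v                     # ...into the cold tier
```

Scaling up would then amount to re-provisioning the hot tier, which is cheaper than moving a whole dataset.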

------
sairamkunala
This is a great solution to an important problem that IaaS companies are not
trying to solve. It looks very similar to Cloudflare's Workers KV (in terms of
offering): [https://www.cloudflare.com/products/workers-kv/](https://www.cloudflare.com/products/workers-kv/)

Currently, the service looks limited to a few AWS regions, so it may not be a
good fit for those outside the AWS ecosystem. The main reason users reach for
cache stores is to remove the latency of hitting a database or recomputing
results, and that may not work out for apps on DigitalOcean or Azure
endpoints.

Also, within the AWS ecosystem, power users may not adopt it since it bypasses
IAM, gives no insight into failures caused by connection limits, and offers no
SSL in the free tier.

A note about accessibility on the page: table data is presented as an image,
and the Twitter screenshots should be embeds. As it stands, not all users can
read the content.

------
klohto
Nice offering for hobbyists, but not usable for any private data handling or
enterprise applications.

I would like to use something like this, but with no SOC 2 or other security
certification, storing customers' data outside your cloud provider is a no-go
for me. On top of that, there are no ACLs or any fine-grained access control,
and no audit log, SSO, or similar features.

So I wonder what market this is trying to capture. If you're not happy with
DynamoDB and you need even better latency, it means you're running this in
production and probably handling sensitive data. Would this service be viable
for you? Just curious what the HN crowd thinks.

EDIT: A comment below me said this:

> TOS says not to upload data that "contains personally sensitive information"
> so that may limit some use cases

That, for me, concludes this is aimed at casual use.

~~~
davestore
Hey, Dave from Lambda Store here. In the short term, we plan to support the
ACLs of Redis 6. In the long term, we are planning a version that will run in
the customer's VPC for enterprise customers with high security requirements.
We are applying GDPR practices, and we are planning to obtain security
certifications in the future, at which point we can revisit our terms. Also
note that “personally sensitive information” requires extra procedures as part
of GDPR.
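
For reference, Redis 6 ACLs use standard commands like these (the user name, password, and key pattern below are made up for illustration; this is upstream Redis syntax, not anything Lambda-Store-specific):

```
# Create a user limited to GET/SET on keys matching cache:*
ACL SETUSER app-cache on >s3cretpass ~cache:* +get +set
# Inspect the rules attached to that user
ACL GETUSER app-cache
# Clients then authenticate with user + password (two-argument AUTH, new in Redis 6)
AUTH app-cache s3cretpass
```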

------
sytse
Congrats on launching. It must have been a lot of work to make Redis
serverless. If you feel comfortable, it would be interesting to know more
about the technical underpinnings.

~~~
mattiabi
Thanks. Mattia from Lambda Store here.

Sure, we are already planning to publish blog posts about our architecture
details. You can follow us on Medium: [https://medium.com/lambda-store](https://medium.com/lambda-store)

~~~
sytse
Awesome!

------
alexpapworth
So, this is just RaaS? (Redis as a Service)

~~~
bdcravens
Somewhat, though there are some missing features:

[https://docs.lambda.store/docs/overall/rediscompatibility/](https://docs.lambda.store/docs/overall/rediscompatibility/)

From a customer's perspective, "serverless" means consumption-based pricing as
opposed to reservation-based, regardless of the actual technology being used.

~~~
mattiabi
This is Mattia from Lambda Store. We started with the most used commands, and
we are planning to add missing features gradually.

------
tyingq
"Serverless" + "obviously stateful thing" feels click-baity to me. Does this
link get right down to brass tacks? (It didn't for me.) Is there a "ctrl-f"
term I can search for that explains?

~~~
scarface74
Serverless by AWS standards doesn’t mean “stateless”. It means that you don’t
have to deal with the underlying servers and worry about scaling them up and
scaling them down.

Classic Aurora for instance is not “stateless”. You still have to
appropriately size the underlying server, you pay for it whether you are using
it or not, you might need to reboot it, etc.

DynamoDB on the other hand is considered Serverless by AWS because you don’t
size the “DynamoDB server”, and it can scale write and read capacity
automatically.

~~~
tyingq
_Serverless by AWS standards doesn’t mean “stateless”_

Willing to be proved wrong, but what I've seen so far does equate serverless
with stateless, at least at the compute tier. Aurora has no "Lambda" or
"serverless" branding, right? Any stateful stuff is sort of called out as an
integration.

~~~
scarface74
There is regular Aurora where you are responsible for sizing the servers and
it doesn’t scale dynamically.

And there is Serverless Aurora

[https://aws.amazon.com/rds/aurora/serverless/](https://aws.amazon.com/rds/aurora/serverless/)

There are also many stateful serverless products listed here:

[https://aws.amazon.com/serverless/](https://aws.amazon.com/serverless/)

It’s more about compute than storage.

You can now have stateful storage attached to AWS Fargate (serverless Docker)
by attaching your containers to EFS.

[https://www.sdxcentral.com/articles/news/aws-adds-direct-storage-to-ecs-fargate/2020/04/](https://www.sdxcentral.com/articles/news/aws-adds-direct-storage-to-ecs-fargate/2020/04/)

~~~
tyingq
Not sure I get the difference. This is sort of the core issue: at some point,
some service is stateful. Why all the hoopla and nuance around naming? What,
exactly, is new since the 1960s? Stateful and stateless are as old as
computing.

~~~
scarface74
It’s not the data it’s the compute. In the 60s, you couldn’t just bring in an
IBM mainframe when you needed it and not pay for it when you didn’t based on
the server load.

With regular Aurora, whether you are using it or not, you’re always paying for
both the server and the storage and you have to provision the server for peak
workload. With Aurora Serverless, if you don’t connect to the database, you
only pay for storage.

With Lambda and, to a lesser extent, Fargate, you don’t pay for the underlying
server at all until you actually need to run something; with Lambda you can
then scale up to as many instances as your account allows (a soft limit; you
can ask for more anytime) and pay nothing when you don’t.

With EC2 you have a server sitting there listening for events whether or not
anything is sending you an event.

~~~
tyingq
Interesting example. Mainframes were one of the first to deploy lots of
capacity to your data center floor you hadn't yet paid for. Upgrades were
often just a phone call and a remote (often, zero down time) re-config.

------
petercooper
I really like the idea because Redis could make a _really_ good lightweight
data store for all sorts of things (especially where longevity is not required
or the data isn't urgent) but the pricing of Redis services is oddly high (due
to the memory requirements, I assume).

I'm going to give it a go! The only thing that jumped out at me, though, was
in the TOS which says not to upload data that "contains personally sensitive
information" so that may limit some use cases.

Update: I've created a basic free database for now. Using it with redis-cli
with no real issues so far but not stressed it yet.
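
For anyone curious what that looks like: it's just standard redis-cli against the hosted endpoint. The host and password below are placeholders, not a real endpoint; use the values from the console:

```
$ redis-cli -h us1-example-12345.lambda.store -p 6379 -a 'YOUR_PASSWORD'
us1-example-12345.lambda.store:6379> SET greeting "hello"
OK
us1-example-12345.lambda.store:6379> GET greeting
"hello"
```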

~~~
davestore
Hey, Dave from lambda.store here. Thanks for trying our service; it's
appreciated. We would like to hear more feedback if you play with our product
further. On your TOS note: we are applying GDPR practices, and storing
personally sensitive information requires extra procedures, which is why it is
not allowed for now. In the future we are planning to obtain security
certifications (like SOC 2), at which point we can revisit our terms. Again,
thanks for your comment!

------
gingerlime
That's really cool. Would definitely switch to this for hobby projects like my
A/B testing backend[0].

The thing is, however, as someone else touched on, the pricing of Redis Labs
is still reasonable, and despite feeling outdated, it's also stable and a
safer bet... so I don't really know how many organizations are willing to
accept higher risk for cost savings/coolness, at least while the service is
still new and not well established.

How do you plan to address these concerns?

[0] [https://github.com/alephbet/lamed](https://github.com/alephbet/lamed)

~~~
svennnaa
(Disclaimer: I work for Lambda Store.) The problem with Redis Labs is that we
see them moving more and more into the enterprise space. As an example, they
still do not support TLS for paid Essentials plans. And their pricing is per
reserved memory, so you have to pay even if you do not actually use the
database. I am aware we are new, but we believe that in a short time the
quality and stability of our service will earn our users' trust.

------
Serow225
Neat! Are you taking requests for Azure regions? : ) FYI, all of the mid-level
doc links are returning 403, like:
[https://docs.lambda.store/docs](https://docs.lambda.store/docs) and
[https://docs.lambda.store/help](https://docs.lambda.store/help). It would be
nice if those headers on the left side of the nav were links; I kept wanting
to click on them. And when I tried to go up one level from the leaf pages in
the address bar, I ran into the 403s. Cheers!

~~~
svennnaa
(Disclaimer: I work for lambda store) Azure, not right now. We are planning to
support Azure and GCP. Thanks for reporting the docs issue; will fix it asap.

~~~
Serow225
Thanks! The product pricing seems off to me, though. If you have a service
that averages a continuous 10 req/sec, that would be like $100 a month plus
storage costs?

~~~
svennnaa
Storage is billed per month, so if your data is 10 GB it will be $1.50 per
month.

When you have steady traffic, it makes more sense to move to reserved pricing.
Currently the reserved pricing plans start at 500 req/sec; we may need to add
more plans to cover smaller-throughput cases.
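
For what it's worth, the arithmetic behind that kind of estimate is easy to sketch. The pay-as-you-go rate below (about $0.40 per 100K requests) is an assumption that roughly reproduces the parent's $100/month figure, not the published price:

```python
# Back-of-the-envelope monthly request cost for a steady workload.
# The per-100K-request rate is an assumed figure, not the official price list.

def monthly_request_cost(req_per_sec, price_per_100k=0.40):
    requests = req_per_sec * 30 * 24 * 3600    # ~30-day month
    return requests / 100_000 * price_per_100k

cost = monthly_request_cost(10)   # ~ $104/month, before storage
```

A steady 10 req/sec is ~26 million requests a month, which is why per-request pricing crosses over with reserved pricing fairly quickly.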

------
Legogris
Great to see some new entrants in the IaaS market! Marketing feedback: for me,
using the word "serverless" in this context is a huge turnoff. It signals
either a lack of understanding, or knowing full well that it's a misnomer that
can easily be interpreted as something other than what it is. Especially as a
headline or tagline. It's a buzzword that was dead right around the time it
was hot.

Just my 5c.

~~~
jedberg
Serverless is just the term people use to mean "you don't manage the servers".
It's pretty well established and I don't think it's going away.

~~~
cactus2093
It arguably has a more specific definition, like "you only interact with it on
a per-invocation level" or maybe "you don't manage the servers or the runtime
environment". With other systems that are not considered Serverless (like
Heroku or Kubernetes) it can also be true that you don't manage the servers.

~~~
Legogris
"Fully managed", "pay-as-you-go (PAYG)". This is the language AWS uses to
market services such as Lambda, and IMO it is a lot clearer.

------
29athrowaway
Redis is often used in low-latency scenarios (e.g. caching), and adding an
intermediate service increases latency.

------
staticautomatic
Will you be able to support modules in the future?

~~~
mattiabi
Mattia from Lambda Store here.

Since we are not using the OSS Redis code (it would be very difficult to adapt
it to the serverless model), we cannot support Redis modules directly. But
after completing the missing Redis commands, we can work on module support
too.

