
We also published it as a crate https://crates.io/crates/segcache
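
A minimal sketch of what usage could look like from Rust follows; the builder and method names are illustrative guesses on my part, so please check the crate docs for the actual API:

    // Hypothetical usage sketch; the builder/method names below are
    // illustrative, see https://crates.io/crates/segcache for the real API.
    use std::time::Duration;

    fn main() {
        // Build a cache with a bounded heap (assumed builder API).
        let mut cache = segcache::Segcache::builder()
            .heap_size(64 * 1024 * 1024) // 64 MiB
            .build();

        // Every item carries a TTL; Segcache groups items into segments
        // by TTL so expiration can reclaim whole segments at once.
        cache.set(b"user:42", b"hello", Duration::from_secs(300));
        assert!(cache.get(b"user:42").is_some());
    }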


What’s your billing model?


We tried really hard to come up with a simple model that is pay-as-you-go and has only one dimension: 15 cents/GB transferred. We also have a free tier: the first 50 GB are free, which is sufficient for many production applications. The free tier has the same availability and security features as the paid tier. You can build a production app on it!
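
To make the numbers concrete, here is the arithmetic as a tiny Rust snippet (assuming the free tier applies per month; the pricing page has the authoritative details):

    // Worked example of the pricing above: the first 50 GB are free,
    // then $0.15/GB transferred. Assumes the free tier applies per
    // month; the pricing page has the authoritative details.
    fn monthly_bill(gb_transferred: f64) -> f64 {
        const FREE_GB: f64 = 50.0;
        const RATE_PER_GB: f64 = 0.15;
        (gb_transferred - FREE_GB).max(0.0) * RATE_PER_GB
    }

    fn main() {
        // 500 GB in a month: (500 - 50) * 0.15 = $67.50
        println!("${:.2}", monthly_bill(500.0));
    }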


You can check out our public pricing page in our dev docs as well!

https://docs.momentohq.com/docs/pricing


Congrats! Although I look back fondly on the many years I spent debugging cache incidents (https://danluu.com/cache-incidents/), I don't think the rest of the world should go through the same bumpy road.

It’s truly interesting to see a new generation of applications not only doing away with using bare metal hardware, but continuing to be built on increasingly higher level abstractions.

Of course, the key is for those abstractions to be dependable in production as well as easy to use. I think Momento actually takes runtime predictability quite seriously.


Thank you Yao. We have been studying the lessons learned around the industry from caching outages as well as techniques employed by the giants to run great caching fleets. Our mission is to make those benefits available to everyone without requiring any effort. We have certainly learned a ton from you and your team at Twitter.


It was supposed to be a bit tongue-in-cheek... Obviously nobody should create a pingserver just for its own sake, with the specs we had. But as a halfway marker to a production-ready cache service, it's good to have something that lets you test the parts that will account for well over 50% of the CPU time in production.
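
For context, a pingserver just answers PING with PONG. A bare-bones sketch of that request/response shape, using only the standard library (nothing like the optimized real thing), could look like:

    // Minimal pingserver sketch using only the standard library; the real
    // pelikan_pingserver is far more optimized, this just shows the
    // request/response shape that dominates CPU time in production.
    use std::io::{Read, Write};
    use std::net::TcpListener;

    fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:12321")?;
        for stream in listener.incoming() {
            let mut stream = stream?;
            let mut buf = [0u8; 64];
            while let Ok(n) = stream.read(&mut buf) {
                if n == 0 {
                    break; // client closed the connection
                }
                // Real servers buffer and pipeline requests; this assumes
                // one request per read for simplicity.
                if buf[..n].starts_with(b"PING") {
                    stream.write_all(b"PONG\r\n")?;
                }
            }
        }
        Ok(())
    }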


Yea definitely, 90% of the time it was meant to be positive; I was just totally shocked by how badly engineered Juicero was.


They are not quite the same thing. Pelikan has the building blocks for an RPC server, storage, wire protocols, data structures, and whatever code is needed to glue these together to provide functionality as a service (which is loosely defined; it can be a proxy too).

CacheLib, in terms of the role it plays, is closer to the modules under src/storage, but it is obviously a lot more sophisticated in design, handling tiered memory/SSD storage.

We actually have been in touch with the CacheLib team since before their public release. And we remain interested in integrating CacheLib into Pelikan as a storage backend as soon as their Rust bindings become functional :P


I'm a big fan of their fountain pens and the piston filling mechanism.


Project creator here, didn't expect to see the link on HN :P

Here's the latest code architecture diagram: https://raw.githubusercontent.com/pelikan-io/pelikan-io.gith...

And a slightly outdated but mostly accurate description of our motivation: https://www.pelikan.io/2019/why-pelikan.html

#AMA We've done a few research-y and go-to-production projects in the last few years.


I’m always more curious about the operational story, but new projects tend to focus on the low-level implementation.

I love Redis, but managing HA is a pain and requires a good bit of engineering on its own. I think this is how RedisLabs stays in business.

This seems to separate the backend and front end, maybe so you could use more appropriate storage for your use case?

But what would a production-ready deployment look like? How would you handle failover for patching or… failure?

Adding a front end can sometimes double the problem, needing one failover setup for it and one for the backend. If I were to use the slab storage (it looks really ideal for most of our workloads), how would that work?

Too much to answer here, but stuff I’d like to see as it matures. It’s the unsexy stuff, I know. Way more fun to get into the bits.


Hopefully we can open source our deployment "generator" sometime soon. It's not gonna work in AWS (Twitter isn't a cloud-first company), but you will get the basic idea.

Yes, a swappable storage backend is a big driver of the design. Segcache for TTL-centric workloads, maybe some simple allocator-backed data structures for many popular Redis features, tiered storage for large time series... these were all internally discussed and sketched out to varying extents.
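
To make "swappable" concrete, the shape is roughly a narrow storage trait that the front end stays generic over; this is a hypothetical sketch, not Pelikan's actual trait:

    // Hypothetical sketch of a swappable storage backend; Pelikan's actual
    // traits differ, this only illustrates the shape of the idea.
    use std::collections::HashMap;

    trait Storage {
        fn get(&mut self, key: &[u8]) -> Option<Vec<u8>>;
        fn set(&mut self, key: &[u8], value: Vec<u8>);
    }

    // A trivial HashMap backend standing in for Segcache, a slab
    // allocator, or tiered storage.
    struct HashBackend(HashMap<Vec<u8>, Vec<u8>>);

    impl Storage for HashBackend {
        fn get(&mut self, key: &[u8]) -> Option<Vec<u8>> {
            self.0.get(key).cloned()
        }
        fn set(&mut self, key: &[u8], value: Vec<u8>) {
            self.0.insert(key.to_vec(), value);
        }
    }

    // The front end (protocol parsing, event loop) stays generic over
    // Storage, so a different engine can slot in per workload.
    fn handle_set<S: Storage>(storage: &mut S, key: &[u8], value: Vec<u8>) {
        storage.set(key, value);
    }

    fn main() {
        let mut backend = HashBackend(HashMap::new());
        handle_set(&mut backend, b"k", b"v".to_vec());
        assert_eq!(backend.get(b"k"), Some(b"v".to_vec()));
    }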

Failure handling is very, very context dependent, both in terms of what the product does (which drives the ROI analyses) and where it runs (which determines the ways things can fail). Still figuring out how to talk about the fishing, not the fish. Will give this more thought.


Thanks!

My dream world is to run the “same thing” in a public cloud, datacenter, and (limited resource) remote facility.

The latter wouldn’t need to be failure tolerant, just quickly recoverable as a service (not necessarily the data).

If I could provide 1 app usage pattern and 1-3 operational patterns that would solve so much.

This is hard for every data service, partly because of technical debt/entrenchment and because data has gravity. But cache is the most flexible.

The joys of having a portfolio of 1000s of apps ranging from COBOL to “cloud native.”

Not a demand, just sharing the "needs" of my big traditional company for perspective. I feel that oftentimes IT as an enabler vs. IT as the product gets lost in the shuffle.

And so much assumes more human resources can be dedicated than a company that sees IT as overhead can spare.

All that said, constraints + scale can be a fun problem to solve. Making the "right" way the easiest is always better than rules.


Wow! It dawned on me that you have been working on problems in this space for ten years now! It's amazing to see how far this has come. I still remember the days of proxying memcache and kernel modules to increase file descriptor limits on running memcache instances!


Hello! I still have your cookie jar LOL


I have a question for someone with your expertise. Why not use the CLOCK-Pro cache eviction algorithm in a system like this?


I assume you meant this as a comparison against Segcache?

We touched upon CLOCK-Pro briefly in the related work section of the paper (https://www.usenix.org/system/files/nsdi21-yang.pdf, Section 6.1). I haven't done a deep dive, but I can see two general issues.

1. CLOCK-Pro is designed for fixed-size elements and doesn't work naturally for mixed-size workloads. Naively evicting "cold keys" may not even meet the memory size and contiguity requirements of the new key. This is why memcached's slab design focuses primarily on size, and uses a secondary data structure (LRUq) to manage recency. Something like CLOCK-Pro could be modified to handle mixed sizes, but I suspect that would significantly complicate the design.

2. An algorithm without the notion of a TTL obviously cannot take advantage of the strong hint a TTL provides. Given the workloads we have, we concluded that TTL is an excellent signal for making memory-related decisions, and hence went for a TTL-centric design.
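
As a rough illustration of point 2 (this is a simplification of the Segcache idea, not its actual code): grouping objects into segments by approximate TTL lets expiration reclaim a whole segment at once, instead of doing per-object recency bookkeeping.

    // Simplified illustration of TTL-bucketed segments (not Segcache's code).
    // Objects with similar TTLs share a segment; once the bucket's deadline
    // passes, the whole segment is reclaimed in one step.
    use std::time::{Duration, Instant};

    struct Segment {
        expires_at: Instant,
        items: Vec<(Vec<u8>, Vec<u8>)>, // (key, value) pairs
    }

    // Round TTLs up to a coarse bucket (here: 60s granularity), so items
    // with similar lifetimes land in the same segment.
    fn bucket_ttl(ttl: Duration) -> Duration {
        Duration::from_secs((ttl.as_secs() + 59) / 60 * 60)
    }

    // Expiration is O(#segments), with no per-object bookkeeping.
    fn expire(segments: &mut Vec<Segment>, now: Instant) {
        segments.retain(|seg| seg.expires_at > now);
    }

    fn main() {
        let now = Instant::now();
        let mut segments = vec![Segment {
            expires_at: now + bucket_ttl(Duration::from_secs(90)), // 120s bucket
            items: vec![(b"k".to_vec(), b"v".to_vec())],
        }];
        expire(&mut segments, now);
        assert_eq!(segments.len(), 1); // not expired yet
        assert_eq!(segments[0].items.len(), 1);
    }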


Thanks a lot. I didn't realize that the presence of a TTL made this kind of optimization possible.


This is really interesting. Thanks for open sourcing it!


What is momento?


It's a SaaS startup offering a managed cache (like ElastiCache). They have their own protocol but use Pelikan-segcache as one of their backends.


Disclosure: I'm maintainer/author of Pelikan.

I wrote about Pelikan, a unified cache framework, and its relationship to Memcached and Redis in this blog from 2019: https://twitter.github.io/pelikan/2019/why-pelikan.html

Specifically talked about what we would like to change about each.


What’s the “clean thread” model that’s mentioned in that article? The link for it 404s.


Oh the link broke when I changed the permanent URL format after the post was published. Here is the correct link: https://twitter.github.io/pelikan/2016/separation-concerns.h...

I've fixed the post as well, thanks for finding this bug.


Here’s a blog version of the paper for anybody who isn’t into 12-page double-column PDF files:

https://twitter.github.io/pelikan/2021/segcache.html


Internally I’ve written a generator to spit out a deploy config based on input vectors such as (qps, data size, connections). But it assumes Twitter’s container environment.

Publicly I think I want to do two things: 1. write a blog post about cache operations and how to configure Pelikan properly in general; 2. create some templates for common deploy mechanisms. What do you use for deploying services? What are your cache requirements? I can produce an example and put it in the repo/docs.
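
In the meantime, here's a toy sketch of the generator idea (the heuristics and numbers are made up for illustration; the real one is tied to Twitter's container environment):

    // Toy sketch of a deploy-config generator: map workload vectors
    // (qps, data size, connections) to an instance count and sizing.
    // All heuristics/numbers here are made up for illustration.
    struct Workload {
        qps: u64,
        data_gb: u64,
        connections: u64,
    }

    struct Deploy {
        instances: u64,
        memory_gb_per_instance: u64,
        worker_threads: u64,
    }

    fn div_ceil(a: u64, b: u64) -> u64 {
        (a + b - 1) / b
    }

    fn generate(w: &Workload) -> Deploy {
        // Assume each instance comfortably serves ~100k qps / 16 GB of data.
        let instances = div_ceil(w.qps, 100_000)
            .max(div_ceil(w.data_gb, 16))
            .max(1);
        Deploy {
            instances,
            memory_gb_per_instance: div_ceil(w.data_gb, instances).max(1),
            worker_threads: (w.connections / instances / 10_000).max(1),
        }
    }

    fn main() {
        let d = generate(&Workload { qps: 500_000, data_gb: 64, connections: 200_000 });
        println!(
            "{} instances, {} GB each, {} worker threads",
            d.instances, d.memory_gb_per_instance, d.worker_threads
        );
    }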

