Pelikan, Twitter’s framework for building caches (github.com/twitter)
110 points by MrBuddyCasino on Oct 4, 2022 | 21 comments



Project creator here, didn't expect to see the link on HN :P

Here's the latest code architecture diagram: https://raw.githubusercontent.com/pelikan-io/pelikan-io.gith...

And a slightly outdated but mostly accurate description of our motivation. https://www.pelikan.io/2019/why-pelikan.html

#AMA We've done a few research-y and go-to-production projects in the last few years.


I’m always more curious about the operational story, but new projects tend to focus on the low-level implementation.

I love Redis, but managing HA is a pain and requires a good bit of engineering on its own. I think this is how RedisLabs stays in business.

This seems to separate backend and front end, maybe so you could use a more appropriate storage for your use case?

But what would a production-ready deployment look like? How would you handle failover for patching or… failure?

Adding a front end can sometimes double the problem: you need one failure-handling setup for it and one for the backend. If I were to use the slab storage (it looks really ideal for most of our workloads), how would that work?

Too much to answer here, but stuff I’d like to see as it matures. It’s the unsexy stuff, I know. Way more fun to get into the bits.


Hopefully we can open source our deployment "generator" sometime soon. It's not gonna work in AWS (Twitter isn't a cloud first company) but you will get the basic idea.

Yes, a swappable storage backend is a big driver of the design. Segcache for TTL-centric workloads, maybe some simple allocator-backed data structures for many popular Redis features, tiered storage for large time series... these were all internally discussed and sketched out to various extents.
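To make the swappable-backend idea concrete, here's a rough Rust sketch of the kind of narrow interface a front end could code against. This is purely illustrative: the trait, types, and backend here are made up for the example, not Pelikan's actual API.

    use std::collections::HashMap;
    use std::time::Duration;

    // Hypothetical storage trait: the front end only talks to this, so a
    // Segcache-style store, a simple hashtable, or a tiered store could be
    // swapped in behind it without touching the RPC layer.
    trait Storage {
        fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
        fn set(&mut self, key: Vec<u8>, value: Vec<u8>, ttl: Option<Duration>);
        fn delete(&mut self, key: &[u8]) -> bool;
    }

    // Trivial in-memory backend (ignores TTL), standing in for a real one.
    struct HashStorage {
        map: HashMap<Vec<u8>, Vec<u8>>,
    }

    impl Storage for HashStorage {
        fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
            self.map.get(key).cloned()
        }
        fn set(&mut self, key: Vec<u8>, value: Vec<u8>, _ttl: Option<Duration>) {
            self.map.insert(key, value);
        }
        fn delete(&mut self, key: &[u8]) -> bool {
            self.map.remove(key).is_some()
        }
    }

    fn main() {
        // Which backend sits behind the trait object is a deployment-time choice.
        let mut store: Box<dyn Storage> = Box::new(HashStorage { map: HashMap::new() });
        store.set(b"k".to_vec(), b"v".to_vec(), Some(Duration::from_secs(60)));
        assert_eq!(store.get(b"k"), Some(b"v".to_vec()));
    }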

Failure handling is very, very context dependent, both in terms of what the product does (which drives the ROI analyses) and where it runs (which determines the ways things could fail). Still figuring out how to talk about the fishing rather than the fish. Will give this more thought.


Thanks!

My dream world is to run the “same thing” in a public cloud, datacenter, and (limited resource) remote facility.

The latter wouldn’t need to be failure tolerant, just quickly recoverable as a service (not necessarily the data).

If I could provide 1 app usage pattern and 1-3 operational patterns, that would solve so much.

This is hard for every data service, partly because of technical debt/entrenchment and partly because data has gravity. But cache is the most flexible.

The joys of having a portfolio of 1000s of apps ranging from COBOL to “cloud native.”

Not a demand, just sharing the "needs" of my big traditional company for perspective. I feel that, oftentimes, the distinction between IT as an enabler and IT as the product gets lost in the shuffle.

And so much assumes more human resources can be dedicated than a company that sees IT as overhead is willing to spare.

All that said, constraints + scale can be a fun problem to solve. Making “right” easiest is always better than rules.


Wow! It dawned on me that you have been working on problems in this space for ten years now! It's amazing to see how far this has come. I still remember the days of proxying memcache and kernel modules to increase file descriptor limits on running memcache instances!


Hello! I still have your cookie jar LOL


I have a question for someone with your expertise. Why not use the CLOCK-Pro cache eviction algorithm in a system like this?


I assume you meant this as a comparison against Segcache?

We touched upon CLOCK-Pro a bit in the related work section of the paper (https://www.usenix.org/system/files/nsdi21-yang.pdf, Section 6.1). I haven't done a deep dive, but I can see two general issues.

1. CLOCK-Pro is designed for fixed-size elements and doesn't work naturally for mixed-size workloads. Naively evicting "cold keys" may not even satisfy the size and contiguity requirements of the new key. This is why memcached's slab design focuses primarily on size and uses a secondary data structure (the LRU queue) to manage recency. Something like CLOCK-Pro could be modified to handle mixed sizes, but I suspect that significantly complicates the design.

2. An algorithm without the notion of a TTL obviously can't take advantage of the strong hint a TTL provides. Given the workloads we have, we concluded that TTL is an excellent signal for memory-related decisions, and hence went for a TTL-centric design.
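For anyone curious what "TTL-centric" means in practice, here is a heavily simplified Rust sketch of the idea from the paper: objects are appended to segments grouped by approximate TTL, so expiration reclaims whole segments at once instead of tracking recency per object. Names and structure are made up for illustration; this is not Segcache's actual code.

    use std::time::{Duration, Instant};

    // A segment holds many objects that all expire by the same deadline.
    struct Segment {
        expires_at: Instant,
        items: Vec<(Vec<u8>, Vec<u8>)>,
    }

    // Objects with similar TTLs land in the same bucket, newest segment last.
    struct TtlBucket {
        ttl: Duration,
        segments: Vec<Segment>,
    }

    impl TtlBucket {
        fn insert(&mut self, key: Vec<u8>, value: Vec<u8>, now: Instant) {
            if self.segments.is_empty() {
                self.segments.push(Segment {
                    expires_at: now + self.ttl,
                    items: Vec::new(),
                });
            }
            self.segments.last_mut().unwrap().items.push((key, value));
        }

        // Expiration reclaims whole segments in one shot; no per-object
        // recency bookkeeping is needed.
        fn expire(&mut self, now: Instant) -> usize {
            let before = self.segments.len();
            self.segments.retain(|s| s.expires_at > now);
            before - self.segments.len()
        }
    }

    fn main() {
        let now = Instant::now();
        let mut bucket = TtlBucket { ttl: Duration::from_secs(60), segments: Vec::new() };
        bucket.insert(b"k".to_vec(), b"v".to_vec(), now);
        // 61 seconds later the entire segment (and everything in it) is gone.
        assert_eq!(bucket.expire(now + Duration::from_secs(61)), 1);
    }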


Thanks a lot. I didn't realize that the presence of TTLs opened up this kind of optimization.


This is really interesting. Thanks for open sourcing it!


What is Momento?


It's a SaaS startup offering a managed cache (like AWS ElastiCache). They have their own protocol but use Pelikan's segcache as one of their backends.


I like this bird pun. Another project at Twitter with a bird pun for a name: Summingbird.

https://github.com/twitter/summingbird


Almost all projects at Twitter are based on some sort of bird pun :)


Would like to see a comparison between this and Facebook's Cachelib:

https://github.com/facebook/CacheLib


They are not quite the same thing. Pelikan has the building blocks for an RPC server, storage, wire protocols, data structures, and whatever code is needed to glue these together to provide functionality as a service (loosely defined; it can be a proxy, too).

CacheLib, in terms of the role it plays, is closer to the modules under src/storage, but is obviously a lot more sophisticated in design, since it handles tiered memory/SSD storage.

We have actually been in touch with the CacheLib team since before their public release, and we remain interested in integrating CacheLib into Pelikan as a storage backend as soon as their Rust bindings become functional :P


For a moment I thought this would be about stationery.


I'm a big fan of their fountain pens and the piston filling mechanism.


> pelikan_pingserver_rs: an over-engineered, production-ready ping server useful as a tutorial and for measuring baseline RPC performance

> an over-engineered...

Ever since I saw the Juicero teardown by AvE [1], I've never been able to see the term "over-engineered" in a positive light. Usually it means something is sub-optimal and not engineered well: built in a hurry, with a broad-brush approach to safety margins. Juicero was so over-engineered, it was embarrassing.

[1] https://www.youtube.com/watch?v=_Cp-BGQfpHQ


It was supposed to be a bit tongue-in-cheek... Obviously nobody should build a ping server with those specs just for its own sake. But as a halfway marker toward a production-ready cache service, it's good to have something that lets you test the parts that will account for well over 50% of the CPU time in production.
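For context, the protocol itself is trivial: read "PING", reply "PONG". Here's a minimal sketch of that shape (not pelikan_pingserver_rs itself, just an illustration of what such a benchmark isolates); everything a real server layers on top of this, like the event loop, buffering, and metrics, is the baseline RPC cost being measured.

    use std::io::{BufRead, BufReader, Write};
    use std::net::TcpListener;

    fn main() -> std::io::Result<()> {
        // Hypothetical port chosen for the example.
        let listener = TcpListener::bind("127.0.0.1:12321")?;
        for stream in listener.incoming() {
            let mut stream = stream?;
            let mut reader = BufReader::new(stream.try_clone()?);
            let mut line = String::new();
            // One blocking connection at a time, to keep the sketch short.
            while reader.read_line(&mut line)? > 0 {
                if line.trim_end() == "PING" {
                    stream.write_all(b"PONG\r\n")?;
                }
                line.clear();
            }
        }
        Ok(())
    }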


Yeah, definitely; 90% of the time it's meant to be positive. I was just totally shocked by how badly engineered Juicero was.




