
Olric: Distributed in-memory data structures in Go - mastabadtomm
https://github.com/buraksezer/olric/releases/tag/v0.3.0-beta.1
======
ddorian43
Distributed in-process caches can make things very fast. See Vimeo with
groupcache: [https://medium.com/vimeo-engineering-blog/video-metadata-caching-at-vimeo-a54b25f0b304](https://medium.com/vimeo-engineering-blog/video-metadata-caching-at-vimeo-a54b25f0b304)

Or in-process distributed rate limiting:
[https://github.com/mailgun/gubernator](https://github.com/mailgun/gubernator)

[https://github.com/golang/groupcache](https://github.com/golang/groupcache):
groupcache is in production use by dl.google.com (its original user), parts of
Blogger, parts of Google Code, parts of Google Fiber, parts of Google
production monitoring systems, etc.

There was also a paper by google that I can't find right now.

Edit: Found it [https://blog.acolyer.org/2019/06/24/fast-key-value-stores/](https://blog.acolyer.org/2019/06/24/fast-key-value-stores/)

~~~
rrdharan
Maybe you’re thinking of the RAMCloud paper?

[https://web.stanford.edu/~ouster/cgi-bin/papers/ramcloud.pdf](https://web.stanford.edu/~ouster/cgi-bin/papers/ramcloud.pdf)

~~~
ddorian43
Nope. It's "Fast key-value stores: An idea whose time has come and gone":

[https://blog.acolyer.org/2019/06/24/fast-key-value-stores/](https://blog.acolyer.org/2019/06/24/fast-key-value-stores/)

~~~
FridgeSeal
This is such a fascinating idea, and I dig that blog. Anyone know of any
similar blogs/sites?

------
hu3
Perhaps a better link would be the README, which explains what the project is
about and its features.

[https://github.com/buraksezer/olric/blob/master/README.md](https://github.com/buraksezer/olric/blob/master/README.md)

------
whalesalad
Very curious about the origins of this tool and how the creator uses it.
Looks very comprehensive, but some real-world examples of usage and
performance would be great.

------
didip
I have been curious about Olric for a while now.

* It is embeddable.

* Seems easy to install, even without Kubernetes.

I am just missing performance numbers to answer questions like: Is it
significantly faster than etcd? Is it faster than TiKV? And finally, is it
faster than Redis Cluster?

~~~
lnsp
I don't think it should be compared to etcd since it only offers best-effort
consistency while etcd is strictly consistent.

------
pojntfx
Played around w/ olric a while back as a kv store for a routing graph, really
enjoyed it even though I ended up home-brewing it later (the network topology
meant that olric wasn’t the right tool for the job). Highly recommend it ;)

------
hellofunk
I’m curious how you would actually access this. Would you need some kind of
RPC library to use in conjunction with it? The page doesn’t really go far
enough to answer basic usage questions.

What would be the alternative to using a library like this? I’ve been looking
for a good way to distribute workload across many machines and don’t want to
invent it myself or jump into a very heavy system.

~~~
harikb
This example gives a flavor of the client-server mode. It appears to use a
custom binary protocol, which seems like a surprising choice.

[https://github.com/buraksezer/olric#golang-client](https://github.com/buraksezer/olric#golang-client)

    
    
      var clientConfig = &client.Config{
        Addrs:       []string{"localhost:3320"},
        DialTimeout: 10 * time.Second,
        KeepAlive:   10 * time.Second,
        MaxConn:     100,
      }
    
      // Renamed from "client" to "c" to avoid shadowing the package name.
      c, err := client.New(clientConfig)
      if err != nil {
         return err
      }
    
      dm := c.NewDMap("foobar")
      err = dm.Put("key", "value") // "=" not ":=": err is already declared
      // Handle this error
------
meddlepal
Reminds me of a stripped-down version of Hazelcast.

------
dang
Also discussed 6 months ago:
[https://news.ycombinator.com/item?id=22297507](https://news.ycombinator.com/item?id=22297507)

------
mancini0
So would this be similar to Apache Ignite?

------
Thaxll
Looks like Hazelcast kind of solution.

------
jwineinger
Maybe it's just a personal thing, but going to a GitHub repo and seeing a
"build failing" badge and a low test-coverage icon is usually a turn-off for
me to continue reading.

~~~
mholt
Why? It's normal for builds to fail between releases. And (warning:
controversial opinion inbound) code coverage is hardly a useful metric -- one
can write a single test case that gets 100% coverage, but doesn't test
anything; what you _want_ is _assertion coverage_ but I don't know how
possible that is.

~~~
sneak
Is it normal for builds to fail on master between releases?

I would think a green build CI should be required for merge to master, even if
perhaps the tests don’t all pass.

~~~
mholt
Those badges are also often useless. For example, here's their current failure
reason:

    
    
        Bad response status from coveralls: 422
    
        {"message":"service_job_id (716381595) must be unique for Travis Jobs not supplying a Coveralls Repo Token","error":true}
    

Has nothing to do with the actual compilation status.

In projects I'm involved with, the _vast_ majority of CI errors were due to
stupid things, not actual code problems. For example, last week our tests were
failing because the east coast IBM data center that ran the tests was offline
due to extended power outages from the weather.

~~~
jnwatson
Like all metrics, badges are just a model. However, for open source projects,
optics matter, and failing badges are a hint that perhaps quality isn't the
top priority.

------
bellwether
Love this idea, can’t wait to try it out!

------
phpjsnerd
Thanks man!

