
Ring: Advanced cache interface for Python - youknowone
https://ring-cache.readthedocs.io/
======
suvelx
Every example seems to follow this pattern:

    
    
      client = pymemcache.client.Client(('127.0.0.1', 11211))  # create a client

      # save to memcache client, expire in 60 seconds.
      @ring.memcache(client, expire=60)
      def get_url(url):
          return requests.get(url).content
    
    

How are you supposed to configure the client at 'runtime' instead of 'compile
time' (when the code is executed and not when it's imported)?

Careful placement of imports in order to correctly configure something just
introduces delicate pain points. It'll work now, but an absent-minded import
somewhere else later can easily lead to hours of debugging.

~~~
vngzs
You can use a closure to pass in the configuration.

    
    
        def configure_memcache(client_ip, port):
            client = pymemcache.client.Client((client_ip, port))
            @ring.memcache(client, expire=60)
            def get_url(url):
                return requests.get(url).content
    
            return get_url
    

Then in your code which imports the above library:

    
    
        get_url = configure_memcache('127.0.0.1', 11211)
        result = get_url('https://www.google.com')

~~~
coleifer
I'd rather have a sane API.

    
    
        def configure_ring():
            if DEBUG:
                return Ring(backend='debug')
            else:
                return Ring(backend='memcache', ...)
    
        ring = configure_ring()
        
        @ring.cache(expire=60)
        def get_url(...):
            ...
    

Tons of other libraries out there that implement this exact pattern.

~~~
youknowone
agree

------
coleifer
Extremely poor design:

* Not DRY. What if I want to use a cache for production but disable caching in development? And I have 10s or even 100s of functions that rely on the cache? Because the decorators contain implementation/client-specific parameters, I now have to add another entire layer of abstraction over this.

* Implementation is tied to the decorator, e.g. `ring.memcache` -- seriously? Why does it matter?

* What about setting application defaults, such as an encoding scheme, a key prefix/namespace, a default timeout?

I'm sorry but this is over-engineered garbage and good luck to anyone who uses
it.
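
The application-defaults point above (an encoding scheme, a key prefix, a default timeout) can be sketched as a small facade that owns the configuration, so the decorators never mention a backend. All names here are hypothetical illustrations, not Ring's actual API:

```python
# Hypothetical facade holding application-wide defaults (key prefix,
# default timeout) so decorators stay free of backend specifics.
class Cache:
    def __init__(self, backend, prefix='', default_expire=60):
        self.backend = backend            # any dict-like store, for this sketch
        self.prefix = prefix              # application-wide key namespace
        self.default_expire = default_expire

    def cache(self, expire=None):
        if expire is None:
            expire = self.default_expire  # application default kicks in
        def decorator(func):
            def wrapper(*args):
                key = '%s%s:%r' % (self.prefix, func.__name__, args)
                if key not in self.backend:
                    self.backend[key] = func(*args)
                return self.backend[key]
            return wrapper
        return decorator

# One place decides the backend; the decorated functions never do.
cache = Cache(backend={}, prefix='myapp:')

@cache.cache(expire=120)
def add(a, b):
    return a + b

result = add(1, 2)
```

Swapping the production backend for a dict (or a no-op) in development then means changing one constructor call, not every decorator.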

~~~
youknowone
I agree those features are missing, but they are easy goals with a small
refactoring, so they should be solved soon. Issue #129 is about application
defaults. After that, a dry run is just a matter of replacing the default
action from 'get_or_update' to 'execute'.

------
tyingq
Is there a Python equivalent to PHP's APCu? APCu, in the PHP world, leverages
mmap to provide a multi-process KV store with fast, built-in serialization.
So it's simple and very fast for single-server, multi-process caching.

~~~
bpicolo
It's not necessary in Python (or many other server frameworks), because Python
doesn't typically follow a process-per-request model. You can just stick it
in memory available to all of your threads.

~~~
tyingq
I imagine there are Python users doing multi-process, aren't there? (since the
GIL is limiting for some use cases).

~~~
bpicolo
Yeah - if necessary you can use something like uWSGI's caching interface
([https://uwsgi-docs.readthedocs.io/en/latest/Caching.html](https://uwsgi-docs.readthedocs.io/en/latest/Caching.html)).
For most of these sorts of things you'll typically be fine caching the object
in question once per process, because the processes are still persistent. If
you truly need shared memory between processes (rather than just an in-memory
version of an object used for a lot of requests) there are other options, and
it's infrequent that you need something that's shared between request
processes but not shared between other web servers.

------
kristoff_it
Great project. There is only one angle that I feel is missing: multiple
requests for the same resource can cause duplicated work, especially if the
value-generating function is slow.

I wrote a sample solution to that problem, feel free to reach out if you ever
consider adding a similar feature, I'd be happy to contribute. (fyi: the
current implementation is in Go)

[https://github.com/kristoff-it/redis-memolock](https://github.com/kristoff-it/redis-memolock)
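
The idea (often called single-flight or dogpile protection) can be sketched in plain Python with a per-key lock, so concurrent callers for the same key wait for one computation instead of repeating it. This is a toy in-process illustration, not the redis-memolock implementation:

```python
import threading
import time

class SingleFlightCache:
    """Only one caller computes a missing value per key;
    concurrent callers for that key block and reuse the result."""

    def __init__(self):
        self._values = {}
        self._locks = {}
        self._guard = threading.Lock()

    def get(self, key, compute):
        with self._guard:
            if key in self._values:
                return self._values[key]
            lock = self._locks.setdefault(key, threading.Lock())
        with lock:                       # one computation per key at a time
            with self._guard:
                if key in self._values:  # another caller finished while we waited
                    return self._values[key]
            value = compute()
            with self._guard:
                self._values[key] = value
            return value

calls = []
cache = SingleFlightCache()

def slow():
    calls.append(1)   # count how often the slow path really runs
    time.sleep(0.05)  # simulate a slow value-generating function
    return 42

threads = [threading.Thread(target=cache.get, args=('k', slow)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All five callers got the value, but slow() ran only once.
```

redis-memolock generalizes this across machines by using a Redis lock plus pub/sub notification instead of an in-process `threading.Lock`.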

~~~
youknowone
Actually, this is a common request from users, but it hasn't been solved yet.
I will check out the project, thanks!

------
bsdz
Looks extensive and I'll likely try using the module at some point.

One thing: why not stash all the function methods under a "ring" or "cache"
attribute, e.g.

    
    
      @ring.lru()
      def foo():
          ...

      foo.cache.update()
      foo.cache.delete()
      ...
    

This might be less likely to clash with any existing function attributes (if
you're wrapping a 3rd-party function, say).
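
One way such a namespaced attribute could be implemented, as a generic sketch with a plain dict backend (not Ring's code):

```python
import functools

class CacheControls:
    """Control methods grouped under a single attribute so they
    can't clash with the wrapped function's own attributes."""

    def __init__(self, storage):
        self._storage = storage

    def delete(self, *args):
        self._storage.pop(args, None)   # invalidate one entry

    def clear(self):
        self._storage.clear()           # drop everything

def cached(func):
    storage = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in storage:
            storage[args] = func(*args)
        return storage[args]

    wrapper.cache = CacheControls(storage)  # one namespaced attribute
    return wrapper

@cached
def square(x):
    return x * x

square(3)                 # populates the cache
square.cache.delete(3)    # invalidate one entry
square.cache.clear()      # drop everything
```

`functools.lru_cache` takes a similar approach, exposing `cache_clear()` and `cache_info()` as attributes on the wrapper rather than separate top-level functions.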

~~~
youknowone
Thanks for the great advice. I never thought about this problem.

------
mrlinx
Like this a lot.

How would you invalidate everything related to a specific
client/customer/account?

I wonder how they cascade these invalidations in bigger and more complex
systems.

~~~
youknowone
It doesn't have any cascading feature for now. @ring.redis_hash can be helpful
for certain cases, but it is not a generic solution.

In the future, there is a plan for indirect invalidation, which will use
another key to decide expiration. Though it is not designed for cascading, it
will probably cover a part of these cases.
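
The indirect-invalidation idea described above is commonly built with a version key: each account has a version counter embedded in every cache key, so bumping the counter orphans all of that account's entries at once. A hypothetical sketch of the pattern, not Ring's planned API:

```python
store = {}     # stand-in for a real cache backend
versions = {}  # account -> current version number

def cache_key(account, name):
    # The account's current version is part of every key.
    return '%s:v%d:%s' % (account, versions.get(account, 0), name)

def put(account, name, value):
    store[cache_key(account, name)] = value

def get(account, name):
    return store.get(cache_key(account, name))

def invalidate_account(account):
    # One write makes every key built with the old version unreachable.
    versions[account] = versions.get(account, 0) + 1

put('acct1', 'profile', {'plan': 'pro'})
before = get('acct1', 'profile')   # {'plan': 'pro'}
invalidate_account('acct1')
after = get('acct1', 'profile')    # None: old entries are orphaned
```

The orphaned entries are never deleted explicitly; with a TTL-based backend they simply expire on their own, which is what makes this cheap.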

------
ergo14
The API doesn't seem to be as fleshed out as dogpile.cache yet.

Normally you don't want to pass a cache backend instance to decorators at
module level.

------
TeeWEE
How does this compare to dogpile?

~~~
youknowone
I reviewed a few cache libraries, but this is a new one I hadn't checked.

Roughly, Ring consists of six key features: sub-functions, a universal
decorator, data coders, asyncio support, consistent and readable key
generation, and abstract-transparent back-end access. I will check dogpile
soon, thanks.

~~~
tfaruq
Any blog post or link to your review? Thanks!

~~~
youknowone
Because I didn't write one before, I made a new one with the projects I
remember: [https://github.com/youknowone/ring/tree/feature-table#feature-table](https://github.com/youknowone/ring/tree/feature-table#feature-table)

------
mychael
>Cache is a popular concept widely spread on the broad range of computer
science but its interface is not well developed yet.

This sentence is grammatically incorrect. Replace "Cache" with "Caching".

~~~
youknowone
Thanks, I will fix it

------
alexeiz
I needed something like this that allows access to and manual manipulation of
the cache, and I ended up forking functools.lru_cache code. This library
definitely fits the bill.

------
tomnipotent
> Memcached itself is out of the Python world

Don't know why this bothers me so much... but it's actually from Perl. It was
born at LiveJournal, a well-known Perl shop.

~~~
jteppinette
I actually read this as _outside_.

~~~
youknowone
Would it be the correct expression if I fix it to 'outside'?
[https://github.com/youknowone/ring/pull/133/files](https://github.com/youknowone/ring/pull/133/files)

~~~
jteppinette
Yeah, I commented on the PR as such.

------
merlincorey
To me, mocking of the caches for testing is super important and missing.

I searched the article, the linked "Why Ring?", and this page of responses for
"mock", but no results.

Maybe it's just me!

~~~
youknowone
Thanks. I didn't think of adding that to the why page. For now, actual
projects work like:

    
    
      if DEBUG:
          ring_cache = functools.partial(ring.dict, {}, default_action='execute')
      else:
          ring_cache = functools.partial(ring.redis, client)
    
      @ring_cache(...)
      def ...
    
    

Which is not a very good solution at all. I will fix the design and properly
document it. Thanks for suggesting the why page and mock sections.

------
Dowwie
no dogpile lock support?

~~~
youknowone
I want to say "not yet". It is a shame that I didn't know about the dogpile lock.

