Welcome to aiocache’s documentation!¶
Installing¶
pip install aiocache
pip install aiocache[redis]
pip install aiocache[memcached]
pip install aiocache[redis,memcached]
Usage¶
Using a cache is as simple as
>>> import asyncio
>>> loop = asyncio.get_event_loop()
>>> from aiocache import Cache
>>> cache = Cache()
>>> loop.run_until_complete(cache.set('key', 'value'))
True
>>> loop.run_until_complete(cache.get('key'))
'value'
Here we are using the SimpleMemoryCache, but you can use any other cache listed in Caches. All caches share the same minimum interface, which consists of the following functions:
add
: Only adds key/value if key does not exist. Otherwise raises ValueError.
get
: Retrieve value identified by key.
set
: Sets key/value.
multi_get
: Retrieves multiple key/values.
multi_set
: Sets multiple key/values.
exists
: Returns True if key exists, False otherwise.
increment
: Increments the value stored in the given key.
delete
: Deletes key and returns number of deleted items.
clear
: Clears the items stored.
raw
: Executes the specified command using the underlying client.
You can also set up cache aliases, similar to Django settings:
import asyncio
from aiocache import caches, Cache
from aiocache.serializers import StringSerializer, PickleSerializer
caches.set_config({
'default': {
'cache': "aiocache.SimpleMemoryCache",
'serializer': {
'class': "aiocache.serializers.StringSerializer"
}
},
'redis_alt': {
'cache': "aiocache.RedisCache",
'endpoint': "127.0.0.1",
'port': 6379,
'timeout': 1,
'serializer': {
'class': "aiocache.serializers.PickleSerializer"
},
'plugins': [
{'class': "aiocache.plugins.HitMissRatioPlugin"},
{'class': "aiocache.plugins.TimingPlugin"}
]
}
})
async def default_cache():
cache = caches.get('default') # This always returns the same instance
await cache.set("key", "value")
assert await cache.get("key") == "value"
assert isinstance(cache, Cache.MEMORY)
assert isinstance(cache.serializer, StringSerializer)
async def alt_cache():
# This generates a new instance every time! You can also use `caches.create('alt')`
# or even `caches.create('alt', namespace="test", etc...)` to override extra args
cache = caches.create(**caches.get_alias_config('redis_alt'))
await cache.set("key", "value")
assert await cache.get("key") == "value"
assert isinstance(cache, Cache.REDIS)
assert isinstance(cache.serializer, PickleSerializer)
assert len(cache.plugins) == 2
assert cache.endpoint == "127.0.0.1"
assert cache.timeout == 1
assert cache.port == 6379
await cache.close()
def test_alias():
loop = asyncio.get_event_loop()
loop.run_until_complete(default_cache())
loop.run_until_complete(alt_cache())
cache = Cache(Cache.REDIS)
loop.run_until_complete(cache.delete("key"))
loop.run_until_complete(cache.close())
loop.run_until_complete(caches.get('default').close())
if __name__ == "__main__":
test_alias()
In the examples folder you can check different use cases:
Contents¶
Caches¶
You can use different caches according to your needs. All the caches implement the same interface.
Caches are always working together with a serializer which transforms data when storing and retrieving from the backend. It may also contain plugins that are able to enrich the behavior of your cache (like adding metrics, logs, etc).
This is the flow of the set
command:
Let’s go with a more specific case. Let’s pick Redis as the cache with namespace “test” and PickleSerializer as the serializer:
- We receive set("key", "value").
- Hook pre_set of all attached plugins (none by default) is called.
- "key" becomes "test:key" when calling build_key.
- "value" becomes an array of bytes when calling serializer.dumps because of PickleSerializer.
- The byte array is stored together with the key using the set command in Redis.
- Hook post_set of all attached plugins is called.
By default, all commands are covered by a timeout that triggers an asyncio.TimeoutError when exceeded. The timeout can be set at instance level or when calling the command.
The supported commands are:
- add
- get
- set
- multi_get
- multi_set
- delete
- exists
- increment
- expire
- clear
- raw
If you feel a command is missing here, do not hesitate to open an issue.
BaseCache¶
-
class
aiocache.base.
BaseCache
(serializer=None, plugins=None, namespace=None, key_builder=None, timeout=5, ttl=None)[source]¶ Base class that aggregates the common logic for the different caches that may exist. The available cache-related options are:
Parameters: - serializer – obj derived from
aiocache.serializers.BaseSerializer
. Default isaiocache.serializers.StringSerializer
. - plugins – list of
aiocache.plugins.BasePlugin
derived classes. Default is an empty list. - namespace – string to use as default prefix for the key used in all operations of the backend. Default is None.
- key_builder – alternative callable to build the key. Receives the key and the namespace as params and should return something that can be used as a key by the underlying backend.
- timeout – int or float in seconds specifying the maximum time for the operations to last. By default it is 5. Use 0 or None if you want to disable it.
- ttl – int, the expiration time in seconds to use as a default in all operations of the backend. It can be overridden in specific calls.
-
add
(key, value, ttl=<object object>, dumps_fn=None, namespace=None, _conn=None)[source]¶ Stores the value in the given key with ttl if specified. Raises an error if the key already exists.
Parameters: - key – str
- value – obj
- ttl – int, the expiration time in seconds. Due to memcached restrictions, use int if you want compatibility. In case you need milliseconds, redis and memory support float ttls
- dumps_fn – callable alternative to use as dumps function
- namespace – str alternative namespace to use
- timeout – int or float in seconds specifying maximum timeout for the operations to last
Returns: True if key is inserted
Raises: - ValueError if key already exists
asyncio.TimeoutError
if it lasts more than self.timeout
-
clear
(namespace=None, _conn=None)[source]¶ Clears the cache in the cache namespace. If an alternative namespace is given, it will clear the keys under that namespace instead.
Parameters: - namespace – str alternative namespace to use
- timeout – int or float in seconds specifying maximum timeout for the operations to last
Returns: True
Raises: asyncio.TimeoutError
if it lasts more than self.timeout
-
close
(*args, _conn=None, **kwargs)[source]¶ Perform any resource cleanup necessary to exit the program safely. After closing, command execution is still possible but you will have to close again before exiting.
Raises: asyncio.TimeoutError
if it lasts more than self.timeout
-
delete
(key, namespace=None, _conn=None)[source]¶ Deletes the given key.
Parameters: - key – Key to be deleted
- namespace – str alternative namespace to use
- timeout – int or float in seconds specifying maximum timeout for the operations to last
Returns: int number of deleted keys
Raises: asyncio.TimeoutError
if it lasts more than self.timeout
-
exists
(key, namespace=None, _conn=None)[source]¶ Checks whether the key exists in the cache.
Parameters: - key – str key to check
- namespace – str alternative namespace to use
- timeout – int or float in seconds specifying maximum timeout for the operations to last
Returns: True if key exists otherwise False
Raises: asyncio.TimeoutError
if it lasts more than self.timeout
-
expire
(key, ttl, namespace=None, _conn=None)[source]¶ Sets the ttl for the given key. Setting it to 0 disables expiration.
Parameters: - key – str key to expire
- ttl – int number of seconds for expiration. If 0, ttl is disabled
- namespace – str alternative namespace to use
- timeout – int or float in seconds specifying maximum timeout for the operations to last
Returns: True if set, False if key is not found
Raises: asyncio.TimeoutError
if it lasts more than self.timeout
-
get
(key, default=None, loads_fn=None, namespace=None, _conn=None)[source]¶ Get a value from the cache. Returns default if not found.
Parameters: - key – str
- default – obj to return when key is not found
- loads_fn – callable alternative to use as loads function
- namespace – str alternative namespace to use
- timeout – int or float in seconds specifying maximum timeout for the operations to last
Returns: obj loaded
Raises: asyncio.TimeoutError
if it lasts more than self.timeout
-
increment
(key, delta=1, namespace=None, _conn=None)[source]¶ Increments value stored in key by delta (can be negative). If key doesn’t exist, it creates the key with delta as value.
Parameters: - key – str key to check
- delta – int amount to increment/decrement
- namespace – str alternative namespace to use
- timeout – int or float in seconds specifying maximum timeout for the operations to last
Returns: Value of the key once incremented. -1 if key is not found.
Raises: asyncio.TimeoutError
if it lasts more than self.timeout
Raises: TypeError
if value is not incrementable
-
multi_get
(keys, loads_fn=None, namespace=None, _conn=None)[source]¶ Get multiple values from the cache; values not found are None.
Parameters: - keys – list of str
- loads_fn – callable alternative to use as loads function
- namespace – str alternative namespace to use
- timeout – int or float in seconds specifying maximum timeout for the operations to last
Returns: list of objs
Raises: asyncio.TimeoutError
if it lasts more than self.timeout
-
multi_set
(pairs, ttl=<object object>, dumps_fn=None, namespace=None, _conn=None)[source]¶ Stores multiple values in the given keys.
Parameters: - pairs – list of two element iterables. First is key and second is value
- ttl – int, the expiration time in seconds. Due to memcached restrictions, use int if you want compatibility. In case you need milliseconds, redis and memory support float ttls
- dumps_fn – callable alternative to use as dumps function
- namespace – str alternative namespace to use
- timeout – int or float in seconds specifying maximum timeout for the operations to last
Returns: True
Raises: asyncio.TimeoutError
if it lasts more than self.timeout
-
raw
(command, *args, _conn=None, **kwargs)[source]¶ Send the raw command to the underlying client. Note that by using this command you will lose compatibility with other backends.
Due to limitations of the aiomcache client, args have to be provided as bytes. For the rest of the backends, use str.
Parameters: - command – str with the command.
- timeout – int or float in seconds specifying maximum timeout for the operations to last
Returns: whatever the underlying client returns
Raises: asyncio.TimeoutError
if it lasts more than self.timeout
-
set
(key, value, ttl=<object object>, dumps_fn=None, namespace=None, _cas_token=None, _conn=None)[source]¶ Stores the value in the given key with ttl if specified
Parameters: - key – str
- value – obj
- ttl – int, the expiration time in seconds. Due to memcached restrictions, use int if you want compatibility. In case you need milliseconds, redis and memory support float ttls
- dumps_fn – callable alternative to use as dumps function
- namespace – str alternative namespace to use
- timeout – int or float in seconds specifying maximum timeout for the operations to last
Returns: True if the value was set
Raises: asyncio.TimeoutError
if it lasts more than self.timeout
Cache¶
-
class
aiocache.
Cache
[source]¶ This class is just a proxy to the specific cache implementations like
aiocache.SimpleMemoryCache
, aiocache.RedisCache and aiocache.MemcachedCache. It is the preferred way of instantiating new caches over using the backend-specific classes. You can instantiate a new one using the
cache_type
attribute like:

>>> from aiocache import Cache
>>> Cache(Cache.REDIS)
RedisCache (127.0.0.1:6379)
If you don’t specify anything, Cache.MEMORY is used.
Only Cache.MEMORY, Cache.REDIS and Cache.MEMCACHED types are allowed. If the type passed is invalid, it will raise an aiocache.exceptions.InvalidCacheType exception.
-
MEMORY
¶ alias of
aiocache.backends.memory.SimpleMemoryCache
-
classmethod
from_url
(url)[source]¶ Given a resource uri, return an instance of that cache initialized with the given parameters. An example usage:
>>> from aiocache import Cache
>>> Cache.from_url('memory://')
<aiocache.backends.memory.SimpleMemoryCache object at 0x1081dbb00>
A more advanced usage, using query params to configure the cache:
>>> from aiocache import Cache
>>> cache = Cache.from_url('redis://localhost:10/1?pool_min_size=1')
>>> cache
RedisCache (localhost:10)
>>> cache.db
1
>>> cache.pool_min_size
1
Parameters: url – string identifying the resource uri of the cache to connect to
-
RedisCache¶
SimpleMemoryCache¶
-
class
aiocache.
SimpleMemoryCache
(serializer=None, **kwargs)[source]¶ - Memory cache implementation with the following components as defaults:
- serializer:
aiocache.serializers.JsonSerializer
- plugins: None
- serializer:
Config options are:
Parameters: - serializer – obj derived from
aiocache.serializers.BaseSerializer
. - plugins – list of
aiocache.plugins.BasePlugin
derived classes. - namespace – string to use as default prefix for the key used in all operations of the backend. Default is None.
- timeout – int or float in seconds specifying the maximum time for the operations to last. By default it is 5.
MemcachedCache¶
Serializers¶
Serializers can be attached to backends in order to serialize/deserialize data sent to and retrieved from the backend. This allows you to apply transformations to data in case you want it saved in a specific format in your cache backend. For example, imagine you have your Model
and want to serialize it to something that Redis can understand (Redis can’t store python objects). This is the task of a serializer.
To use a specific serializer:
>>> from aiocache import Cache
>>> from aiocache.serializers import PickleSerializer
>>> cache = Cache(Cache.MEMORY, serializer=PickleSerializer())
Currently the following are built in:
NullSerializer¶
-
class
aiocache.serializers.
NullSerializer
(*args, encoding=<object object>, **kwargs)[source]¶ This serializer does nothing. It is only recommended for use with
aiocache.SimpleMemoryCache
because it stores the data as is, which produces incompatible data for other backends unless you work only with str types. DISCLAIMER: Be careful with mutable types and memory storage. The following behavior is considered normal (same as
functools.lru_cache
):

cache = Cache()
my_list = [1]
await cache.set("key", my_list)
my_list.append(2)
await cache.get("key")  # Will return [1, 2]
StringSerializer¶
-
class
aiocache.serializers.
StringSerializer
(*args, encoding=<object object>, **kwargs)[source]¶ Converts all input values to str. All return values are also str. Be careful because this means that if you store an
int(1)
, you will get back ‘1’. The transformation is done by just casting to str in the
dumps
method. If you want to keep Python types, use
PickleSerializer
. JsonSerializer
may also be useful to keep the types of simple Python values.
PickleSerializer¶
JsonSerializer¶
-
class
aiocache.serializers.
JsonSerializer
(*args, encoding=<object object>, **kwargs)[source]¶ Transforms data to a JSON string with json.dumps and retrieves it back with json.loads. Check https://docs.python.org/3/library/json.html#py-to-json-table for how types are converted.
ujson will be used by default if available. Be careful with the differences between the built-in json module and ujson:
- ujson dumps supports bytes while json doesn’t
- ujson and json outputs may differ sometimes
MsgPackSerializer¶
If the current serializers do not cover your needs, you can always define your own custom serializer, as shown in examples/serializer_class.py
:
import asyncio
import zlib
from aiocache import Cache
from aiocache.serializers import BaseSerializer
class CompressionSerializer(BaseSerializer):
# This is needed because zlib works with bytes.
# this way the underlying backend knows how to
# store/retrieve values
DEFAULT_ENCODING = None
def dumps(self, value):
print("I've received:\n{}".format(value))
compressed = zlib.compress(value.encode())
print("But I'm storing:\n{}".format(compressed))
return compressed
def loads(self, value):
print("I've retrieved:\n{}".format(value))
decompressed = zlib.decompress(value).decode()
print("But I'm returning:\n{}".format(decompressed))
return decompressed
cache = Cache(Cache.REDIS, serializer=CompressionSerializer(), namespace="main")
async def serializer():
text = (
"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt"
"ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation"
"ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in"
"reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur"
"sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit"
"anim id est laborum.")
await cache.set("key", text)
print("-----------------------------------")
real_value = await cache.get("key")
compressed_value = await cache.raw("get", "main:key")
assert len(compressed_value) < len(real_value.encode())
def test_serializer():
loop = asyncio.get_event_loop()
loop.run_until_complete(serializer())
loop.run_until_complete(cache.delete("key"))
loop.run_until_complete(cache.close())
if __name__ == "__main__":
test_serializer()
You can also use marshmallow as your serializer (examples/marshmallow_serializer_class.py
):
import random
import string
import asyncio
from marshmallow import fields, Schema, post_load
from aiocache import Cache
from aiocache.serializers import BaseSerializer
class RandomModel:
MY_CONSTANT = "CONSTANT"
def __init__(self, int_type=None, str_type=None, dict_type=None, list_type=None):
self.int_type = int_type or random.randint(1, 10)
self.str_type = str_type or random.choice(string.ascii_lowercase)
self.dict_type = dict_type or {}
self.list_type = list_type or []
def __eq__(self, obj):
return self.__dict__ == obj.__dict__
class MarshmallowSerializer(Schema, BaseSerializer):
int_type = fields.Integer()
str_type = fields.String()
dict_type = fields.Dict()
list_type = fields.List(fields.Integer())
# marshmallow Schema class doesn't play nicely with multiple inheritance and won't call
# BaseSerializer.__init__
encoding = 'utf-8'
def dumps(self, *args, **kwargs):
# dumps returns (data, errors), we just want to save data
return super().dumps(*args, **kwargs).data
def loads(self, *args, **kwargs):
# dumps returns (data, errors), we just want to return data
return super().loads(*args, **kwargs).data
@post_load
def build_my_type(self, data):
return RandomModel(**data)
class Meta:
strict = True
cache = Cache(serializer=MarshmallowSerializer(), namespace="main")
async def serializer():
model = RandomModel()
await cache.set("key", model)
result = await cache.get("key")
assert result.int_type == model.int_type
assert result.str_type == model.str_type
assert result.dict_type == model.dict_type
assert result.list_type == model.list_type
def test_serializer():
loop = asyncio.get_event_loop()
loop.run_until_complete(serializer())
loop.run_until_complete(cache.delete("key"))
if __name__ == "__main__":
test_serializer()
By default cache backends assume they are working with str
types. If your custom implementation transforms data to bytes, you will need to set the class attribute encoding
to None
.
Plugins¶
Plugins can be used to enrich the behavior of the cache. By default all caches are configured without any plugins, but you can add new ones in the constructor or after initializing the cache class:
>>> from aiocache import Cache
>>> from aiocache.plugins import HitMissRatioPlugin, TimingPlugin
cache = Cache(plugins=[HitMissRatioPlugin()])
cache.plugins += [TimingPlugin()]
You can define your custom plugin by inheriting from BasePlugin and overriding the needed methods (the overrides NEED to be async). All commands have pre_<command_name>
and post_<command_name>
hooks.
Warning
Both pre and post hooks are executed by awaiting the coroutine. If you perform expensive operations in the hooks, you will add more latency to the command being executed, and thus a higher chance of raising a timeout error. If a timeout error is raised, be aware that previous actions won’t be rolled back.
A complete example of using plugins:
import asyncio
import random
import logging
from aiocache import Cache
from aiocache.plugins import HitMissRatioPlugin, TimingPlugin, BasePlugin
logger = logging.getLogger(__name__)
class MyCustomPlugin(BasePlugin):
async def pre_set(self, *args, **kwargs):
logger.info("I'm the pre_set hook being called with %s %s" % (args, kwargs))
async def post_set(self, *args, **kwargs):
logger.info("I'm the post_set hook being called with %s %s" % (args, kwargs))
cache = Cache(
plugins=[HitMissRatioPlugin(), TimingPlugin(), MyCustomPlugin()],
namespace="main")
async def run():
await cache.set("a", "1")
await cache.set("b", "2")
await cache.set("c", "3")
await cache.set("d", "4")
possible_keys = ["a", "b", "c", "d", "e", "f"]
for t in range(1000):
await cache.get(random.choice(possible_keys))
assert cache.hit_miss_ratio["hit_ratio"] > 0.5
assert cache.hit_miss_ratio["total"] == 1000
assert cache.profiling["get_min"] > 0
assert cache.profiling["set_min"] > 0
assert cache.profiling["get_max"] > 0
assert cache.profiling["set_max"] > 0
print(cache.hit_miss_ratio)
print(cache.profiling)
def test_run():
loop = asyncio.get_event_loop()
loop.run_until_complete(run())
loop.run_until_complete(cache.delete("a"))
loop.run_until_complete(cache.delete("b"))
loop.run_until_complete(cache.delete("c"))
loop.run_until_complete(cache.delete("d"))
if __name__ == "__main__":
test_run()
BasePlugin¶
-
class
aiocache.plugins.
BasePlugin
[source]¶ -
-
post_add
(*args, **kwargs)¶
-
post_clear
(*args, **kwargs)¶
-
post_delete
(*args, **kwargs)¶
-
post_exists
(*args, **kwargs)¶
-
post_expire
(*args, **kwargs)¶
-
post_get
(*args, **kwargs)¶
-
post_increment
(*args, **kwargs)¶
-
post_multi_get
(*args, **kwargs)¶
-
post_multi_set
(*args, **kwargs)¶
-
post_raw
(*args, **kwargs)¶
-
post_set
(*args, **kwargs)¶
-
pre_add
(*args, **kwargs)¶
-
pre_clear
(*args, **kwargs)¶
-
pre_delete
(*args, **kwargs)¶
-
pre_exists
(*args, **kwargs)¶
-
pre_expire
(*args, **kwargs)¶
-
pre_get
(*args, **kwargs)¶
-
pre_increment
(*args, **kwargs)¶
-
pre_multi_get
(*args, **kwargs)¶
-
pre_multi_set
(*args, **kwargs)¶
-
pre_raw
(*args, **kwargs)¶
-
pre_set
(*args, **kwargs)¶
-
TimingPlugin¶
-
class
aiocache.plugins.
TimingPlugin
[source]¶ Calculates average, min and max times each command takes. The data is saved in the cache class as a dict attribute called
profiling
. For example, to access the average time of the operation get, you can docache.profiling['get_avg']
-
post_add
(client, *args, took=0, **kwargs)¶
-
post_clear
(client, *args, took=0, **kwargs)¶
-
post_delete
(client, *args, took=0, **kwargs)¶
-
post_exists
(client, *args, took=0, **kwargs)¶
-
post_expire
(client, *args, took=0, **kwargs)¶
-
post_get
(client, *args, took=0, **kwargs)¶
-
post_increment
(client, *args, took=0, **kwargs)¶
-
post_multi_get
(client, *args, took=0, **kwargs)¶
-
post_multi_set
(client, *args, took=0, **kwargs)¶
-
post_raw
(client, *args, took=0, **kwargs)¶
-
post_set
(client, *args, took=0, **kwargs)¶
-
HitMissRatioPlugin¶
-
class
aiocache.plugins.
HitMissRatioPlugin
[source]¶ Calculates the ratio of hits the cache has. The data is saved in the cache class as a dict attribute called
hit_miss_ratio
. For example, to access the hit ratio of the cache, you can docache.hit_miss_ratio['hit_ratio']
. It also provides the “total” and “hits” keys.
Configuration¶
The caches module allows you to set up cache configurations and then use them, either through an alias or by retrieving the config explicitly. To set the config, call caches.set_config
:
-
classmethod
caches.
set_config
(config)¶ Set (override) the default config for cache aliases from a dict-like structure. The structure is the following:
{
    'default': {
        'cache': "aiocache.SimpleMemoryCache",
        'serializer': {
            'class': "aiocache.serializers.StringSerializer"
        }
    },
    'redis_alt': {
        'cache': "aiocache.RedisCache",
        'endpoint': "127.0.0.10",
        'port': 6378,
        'serializer': {
            'class': "aiocache.serializers.PickleSerializer"
        },
        'plugins': [
            {'class': "aiocache.plugins.HitMissRatioPlugin"},
            {'class': "aiocache.plugins.TimingPlugin"}
        ]
    }
}
The ‘default’ key must always exist when passing a new config. The default configuration is:
{
    'default': {
        'cache': "aiocache.SimpleMemoryCache",
        'serializer': {
            'class': "aiocache.serializers.StringSerializer"
        }
    }
}
You can set your own classes there. The class params accept both str and class types.
All keys in the config are optional; if they are not passed, the defaults for the specified class will be used.
If a config key already exists, it will be updated with the new values.
To retrieve a copy of the current config, you can use caches.get_config
or caches.get_alias_config
for an alias config.
The next snippet shows an example usage:
import asyncio
from aiocache import caches, Cache
from aiocache.serializers import StringSerializer, PickleSerializer
caches.set_config({
'default': {
'cache': "aiocache.SimpleMemoryCache",
'serializer': {
'class': "aiocache.serializers.StringSerializer"
}
},
'redis_alt': {
'cache': "aiocache.RedisCache",
'endpoint': "127.0.0.1",
'port': 6379,
'timeout': 1,
'serializer': {
'class': "aiocache.serializers.PickleSerializer"
},
'plugins': [
{'class': "aiocache.plugins.HitMissRatioPlugin"},
{'class': "aiocache.plugins.TimingPlugin"}
]
}
})
async def default_cache():
cache = caches.get('default') # This always returns the same instance
await cache.set("key", "value")
assert await cache.get("key") == "value"
assert isinstance(cache, Cache.MEMORY)
assert isinstance(cache.serializer, StringSerializer)
async def alt_cache():
# This generates a new instance every time! You can also use `caches.create('alt')`
# or even `caches.create('alt', namespace="test", etc...)` to override extra args
cache = caches.create(**caches.get_alias_config('redis_alt'))
await cache.set("key", "value")
assert await cache.get("key") == "value"
assert isinstance(cache, Cache.REDIS)
assert isinstance(cache.serializer, PickleSerializer)
assert len(cache.plugins) == 2
assert cache.endpoint == "127.0.0.1"
assert cache.timeout == 1
assert cache.port == 6379
await cache.close()
def test_alias():
loop = asyncio.get_event_loop()
loop.run_until_complete(default_cache())
loop.run_until_complete(alt_cache())
cache = Cache(Cache.REDIS)
loop.run_until_complete(cache.delete("key"))
loop.run_until_complete(cache.close())
loop.run_until_complete(caches.get('default').close())
if __name__ == "__main__":
test_alias()
When you do caches.get('alias_name'), the cache instance is built lazily the first time. Subsequent accesses return the same instance. If, instead of reusing the same instance, you need a new one every time, use caches.create('alias_name'). One of the advantages of caches.create is that it accepts extra args that are then passed to the cache constructor. This way you can override args like namespace, endpoint, etc.
-
classmethod
caches.
add
(alias: str, config: dict) → None¶ Add a cache to the current config. If the key already exists, it will overwrite it:
>>> caches.add('default', {
...     'cache': "aiocache.SimpleMemoryCache",
...     'serializer': {
...         'class': "aiocache.serializers.StringSerializer"
...     }
... })
Parameters: - alias – The alias for the cache
- config – Mapping containing the cache configuration
-
classmethod
caches.
get
(alias: str)¶ Retrieves the cache identified by alias. It will always return the same instance.
If the cache was not instantiated yet, it will do it lazily the first time this is called.
Parameters: alias – str cache alias
Returns: cache instance
-
classmethod
caches.
create
(alias=None, cache=None, **kwargs)¶ Create a new cache. Either alias or cache params are required. You can use kwargs to pass extra parameters to configure the cache.
Deprecated since version 0.11.0: Only creating a cache passing an alias is supported. If you want to create a cache passing explicit cache and kwargs use
aiocache.Cache
.Parameters: - alias – str alias to pull configuration from
- cache – str or class cache class to use for creating the new cache (when no alias is used)
Returns: New cache instance
Decorators¶
aiocache comes with a couple of decorators for caching results from asynchronous functions. Do not use these decorators on synchronous functions; it may lead to unexpected behavior.
cached¶
-
class
aiocache.
cached
(ttl=<object object>, key=None, key_builder=None, cache=<class 'aiocache.backends.memory.SimpleMemoryCache'>, serializer=None, plugins=None, alias=None, noself=False, **kwargs)[source]¶ Caches the function's return value into a key generated from module_name, function_name and args. The cache is available in the function object as
<function_name>.cache
.In some cases you will need to send more args to configure the cache object. An example would be endpoint and port for the Redis cache. You can send those args as kwargs and they will be propagated accordingly.
Only one cache instance is created per decorated call. If you expect high concurrency of calls to the same function, you should adapt the pool size as needed.
Extra args that are injected in the function that you can use to control the cache behavior are:
cache_read
: Controls whether the function call will try to read from the cache first. Enabled by default.
cache_write
: Controls whether the function call will try to write to the cache once the result has been retrieved. Enabled by default.
aiocache_wait_for_write
: Controls whether the call of the function will wait for the value in the cache to be written. If set to False, the write happens in the background. Enabled by default.
Parameters: - ttl – int seconds to store the function call. Default is None which means no expiration.
- key – str value to set as key for the function return. Takes precedence over key_builder param. If key and key_builder are not passed, it will use module_name + function_name + args + kwargs
- key_builder – Callable that allows building the key dynamically. It receives the function plus the same args and kwargs passed to the function.
- cache – cache class to use when calling the
set
/get
operations. Default isaiocache.SimpleMemoryCache
. - serializer – serializer instance to use when calling the
dumps
/loads
. If its None, default one from the cache backend is used. - plugins – list plugins to use when calling the cmd hooks Default is pulled from the cache class being used.
- alias – str specifying the alias to load the config from. If alias is passed, other config parameters are ignored. Same cache identified by alias is used on every call. If you need a per function cache, specify the parameters explicitly without using alias.
- noself – bool if you are decorating a class function, by default self is also used to generate the key. This will result in same function calls done by different class instances to use different cache keys. Use noself=True if you want to ignore it.
import asyncio
from collections import namedtuple
from aiocache import cached, Cache
from aiocache.serializers import PickleSerializer
Result = namedtuple('Result', "content, status")
@cached(
ttl=10, cache=Cache.REDIS, key="key", serializer=PickleSerializer(),
port=6379, namespace="main")
async def cached_call():
return Result("content", 200)
def test_cached():
cache = Cache(Cache.REDIS, endpoint="127.0.0.1", port=6379, namespace="main")
loop = asyncio.get_event_loop()
loop.run_until_complete(cached_call())
assert loop.run_until_complete(cache.exists("key")) is True
loop.run_until_complete(cache.delete("key"))
loop.run_until_complete(cache.close())
if __name__ == "__main__":
test_cached()
|
multi_cached¶
-
class
aiocache.
multi_cached
(keys_from_attr, key_builder=None, ttl=<object object>, cache=<class 'aiocache.backends.memory.SimpleMemoryCache'>, serializer=None, plugins=None, alias=None, **kwargs)[source]¶ Only supports functions that return dict-like structures. This decorator caches each key/value pair of the dict returned by the function. Note that the function name is not prefixed in the stored key, so if another function returns a dict with the same keys, the values will be overwritten. To avoid this, use a distinct namespace in each cache decorator or pass a key_builder.
The cache is available in the function object as
<function_name>.cache
. If key_builder is passed, each key is transformed by it before being stored.
If the attribute specified to be the key is an empty list, the cache will be ignored and the function will be called as expected.
Only one cache instance is created per decorated function. If you expect high concurrency of calls to the same function, you should adapt the pool size as needed.
Extra args injected into the function, which you can use to control the cache behavior, are:
cache_read
: Controls whether the function call will try to read from the cache first. Enabled by default.
cache_write
: Controls whether the function call will write the result to the cache once it has been computed. Enabled by default.
aiocache_wait_for_write
: Controls whether the function call will wait for the value to be written to the cache. If set to False, the write happens in the background. Enabled by default.
Parameters: - keys_from_attr – arg or kwarg name from the function containing an iterable to use as keys to index in the cache.
- key_builder – Callable that allows changing the format of the keys before storing. Receives the key, the function, and the same args and kwargs as the called function.
- ttl – int seconds to store the keys. Default is 0, which means no expiration.
- cache – cache class to use when calling the
multi_set
/multi_get
operations. Default is aiocache.SimpleMemoryCache
. - serializer – serializer instance to use when calling the
dumps
/loads
. If it is None, the default one from the cache backend is used. - plugins – plugins to use when calling the cmd hooks. Default is pulled from the cache class being used.
- alias – str specifying the alias to load the config from. If alias is passed, the other config parameters are ignored. The same cache identified by alias is used on every call. If you need a per-function cache, specify the parameters explicitly without using alias.
import asyncio

from aiocache import multi_cached, Cache

DICT = {
    'a': "Z",
    'b': "Y",
    'c': "X",
    'd': "W"
}


@multi_cached("ids", cache=Cache.REDIS, namespace="main")
async def multi_cached_ids(ids=None):
    return {id_: DICT[id_] for id_ in ids}


@multi_cached("keys", cache=Cache.REDIS, namespace="main")
async def multi_cached_keys(keys=None):
    return {id_: DICT[id_] for id_ in keys}


cache = Cache(Cache.REDIS, endpoint="127.0.0.1", port=6379, namespace="main")


def test_multi_cached():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(multi_cached_ids(ids=['a', 'b']))
    loop.run_until_complete(multi_cached_ids(ids=['a', 'c']))
    loop.run_until_complete(multi_cached_keys(keys=['d']))
    assert loop.run_until_complete(cache.exists('a'))
    assert loop.run_until_complete(cache.exists('b'))
    assert loop.run_until_complete(cache.exists('c'))
    assert loop.run_until_complete(cache.exists('d'))
    loop.run_until_complete(cache.delete("a"))
    loop.run_until_complete(cache.delete("b"))
    loop.run_until_complete(cache.delete("c"))
    loop.run_until_complete(cache.delete("d"))
    loop.run_until_complete(cache.close())


if __name__ == "__main__":
    test_multi_cached()
Warning
This was added in version 0.7.0 and the API is new. This means it is open to breaking changes in future versions until the API is considered stable.
Locking¶
Warning
The implementations provided are NOT intended for consistency/synchronization purposes. If you need a locking mechanism focused on consistency, consider implementing your mechanism based on dedicated tools like https://zookeeper.apache.org/.
There are a couple of locking implementations that can help you protect against different scenarios:
RedLock¶
-
class
aiocache.lock.
RedLock
(client: aiocache.base.BaseCache, key: str, lease: Union[int, float])[source]¶ Implementation of Redlock with a single instance because aiocache is focused on single instance cache.
This locking has some limitations and shouldn't be used in situations where consistency is critical. These locks are aimed at performance use cases where occasionally failing to lock is acceptable. TL;DR: do NOT use this if you need real resource exclusion.
A couple of considerations about the implementation:
- If the lease expires and there are calls waiting, all of them will pass (blocking just happens for the first time).
- When a new call arrives, it will always wait at most the lease time. This means the call could end up blocked longer than needed in case the lease from the blocker expires.
Backend specific implementation:
- Redis implements the redlock algorithm correctly. It sets the key only if it doesn't exist. To release, it checks that the stored value matches the instance trying to release and, if it does, removes the lock. If not, it does nothing.
- Memcached follows the same approach with one difference. Because memcached lacks a way to execute the get and delete commands atomically, any client is able to release the lock. This is a limitation that can't be fixed without introducing race conditions.
- The memory implementation is not distributed; it only applies to the running process. If you have 4 processes running APIs with aiocache, the locking applies only per process (still useful to reduce load within each process).
Example usage:
from aiocache import Cache
from aiocache.lock import RedLock

cache = Cache(Cache.REDIS)

async with RedLock(cache, 'key', lease=1):  # Calls will wait here
    result = await cache.get('key')
    if result is not None:
        return result
    result = await super_expensive_function()
    await cache.set('key', result)
In the example, the first call will start computing
super_expensive_function
while consecutive calls will block for at most 1 second. If the blocking lasts longer than 1 second, those calls will proceed to compute the result of
super_expensive_function
as well.
OptimisticLock¶
-
class
aiocache.lock.
OptimisticLock
(client: aiocache.base.BaseCache, key: str)[source]¶ Implementation of optimistic lock
Optimistic locking assumes multiple transactions can happen at the same time; they only fail if, before finishing, conflicting modifications by other transactions are found, producing a rollback.
Finding a conflict raises an aiocache.lock.OptimisticLockError exception. A conflict happens when the value in storage differs from the one retrieved when the lock started.
Example usage:
cache = Cache(Cache.REDIS)

# The value stored in 'key' will be checked here
async with OptimisticLock(cache, 'key') as lock:
    result = await super_expensive_call()
    await lock.cas(result)
If any other call sets the value of
key
before the
lock.cas
is called, an
aiocache.lock.OptimisticLockError
will be raised. A way to make the same call fail would be to change the value inside the lock:

cache = Cache(Cache.REDIS)

# The value stored in 'key' will be checked here
async with OptimisticLock(cache, 'key') as lock:
    result = await super_expensive_call()
    await cache.set('key', 'random_value')  # This will make the `lock.cas` call fail
    await lock.cas(result)
If the lock is created with a key that does not exist, there will never be conflicts.
Testing¶
It’s really easy to cut the dependency on aiocache functionality:
import asyncio

from asynctest import MagicMock

from aiocache.base import BaseCache


async def async_main():
    mocked_cache = MagicMock(spec=BaseCache)
    mocked_cache.get.return_value = "world"
    print(await mocked_cache.get("hello"))


if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(async_main())
Note that we are passing BaseCache as the spec for the mock (you need to install asynctest).
Also, for debugging purposes, you can run AIOCACHE_DISABLE=1 python myscript.py to disable caching.