
How to set/get pandas.DataFrame to/from Redis?

After setting a DataFrame to redis, then getting it back, redis returns a string and I can't figure out a way to convert this str to a DataFrame.

How can I do these two operations appropriately?

set:

redisConn.set("key", df.to_msgpack(compress='zlib'))

get:

pd.read_msgpack(redisConn.get("key"))

I couldn't use msgpack because of Decimal objects in my dataframe. Instead I combined pickle and zlib together like this, assuming a dataframe df and a local instance of Redis:

import pickle
import redis
import zlib

EXPIRATION_SECONDS = 600

r = redis.StrictRedis(host='localhost', port=6379, db=0)

# Set
r.setex("key", EXPIRATION_SECONDS, zlib.compress( pickle.dumps(df)))

# Get
rehydrated_df = pickle.loads(zlib.decompress(r.get("key")))

There isn't anything dataframe-specific about this approach.

Caveats

  • the other answer using msgpack is better -- use it if it works for you
  • pickling can be dangerous -- your Redis server needs to be secure or you're asking for trouble

For caching a dataframe, use this:

import pyarrow as pa
import redis

def cache_df(alias, df):
    pool = redis.ConnectionPool(host='host', port='port', db='db')
    cur = redis.Redis(connection_pool=pool)
    context = pa.default_serialization_context()
    df_compressed = context.serialize(df).to_buffer().to_pybytes()

    res = cur.set(alias, df_compressed)
    if res:
        print('df cached')

For fetching the cached dataframe, use this:

import pandas as pd
import pyarrow as pa
import redis

def get_cached_df(alias):
    pool = redis.ConnectionPool(host='host', port='port', db='db')
    cur = redis.Redis(connection_pool=pool)
    context = pa.default_serialization_context()
    all_keys = [key.decode("utf-8") for key in cur.keys()]

    if alias in all_keys:
        result = cur.get(alias)
        dataframe = pd.DataFrame.from_dict(context.deserialize(result))
        return dataframe

    return None
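
Note that pa.default_serialization_context() is deprecated as of pyarrow 2.0 (see the last answer below). A minimal sketch of the same serialize/deserialize step using pyarrow's IPC stream format instead, assuming the same Redis connection setup; the helper names here are just for illustration:

import pandas as pd
import pyarrow as pa

def df_to_bytes(df):
    # Serialize the dataframe via Arrow's IPC stream format,
    # the supported replacement for the serialization context
    table = pa.Table.from_pandas(df)
    sink = pa.BufferOutputStream()
    with pa.ipc.new_stream(sink, table.schema) as writer:
        writer.write_table(table)
    return sink.getvalue().to_pybytes()

def bytes_to_df(data):
    # Read the IPC stream back into a pandas DataFrame
    return pa.ipc.open_stream(data).read_all().to_pandas()

With these, cur.set(alias, df_to_bytes(df)) and bytes_to_df(cur.get(alias)) replace the serialize/deserialize calls above.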
Alternatively, round-trip through JSON:

import pandas as pd
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

df = pd.DataFrame([1, 2])
r.setex('df', 100, df.to_json())

# redis returns bytes, so decode before parsing
df = pd.read_json(r.get('df').decode('utf-8'))

to_msgpack is not available in recent versions of pandas.

import redis
import pandas as pd

# Create a redis client
redisClient = redis.StrictRedis(host='localhost', port=6379, db=0)
# Create a dataframe
dd = {'ID': ['H576', 'H577', 'H578', 'H600', 'H700'],
      'CD': ['AAAAAAA', 'BBBBB', 'CCCCCC', 'DDDDDD', 'EEEEEEE']}
df = pd.DataFrame(dd)
data = df.to_json()
redisClient.set('dd', data)
# Retrieve the data (redis returns bytes, so decode before parsing)
blob = redisClient.get('dd')
df_from_redis = pd.read_json(blob.decode('utf-8'))
df_from_redis.head()

Output:

     ID       CD
0  H576  AAAAAAA
1  H577    BBBBB
2  H578   CCCCCC
3  H600   DDDDDD
4  H700  EEEEEEE

It's 2021, which means df.to_msgpack() is deprecated AND pyarrow has deprecated their custom serialization functionality as of pyarrow 2.0 (see the "Arbitrary Object Serialization" section on pyarrow's serialization page).

That leaves good & trusty msgpack to serialize objects such that they can be pushed/stored into redis.

import msgpack
import redis

# ...Writing to redis (already have data & a redis connection client;
# data must be a msgpack-serializable structure such as a dict or list)
redis_client.set('data_key_name', msgpack.packb(data))

# ...Retrieving from redis
retrieved_data = msgpack.unpackb(redis_client.get('data_key_name'))
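
A DataFrame is not msgpack-serializable directly, so a full round trip has to go through a plain structure. A minimal sketch via a dict, assuming a local Redis instance:

import msgpack
import pandas as pd
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)
df = pd.DataFrame({'ID': ['H576', 'H577'], 'value': [1, 2]})

# Pack a plain-dict representation of the dataframe
r.set('df', msgpack.packb(df.to_dict(orient='list')))

# Unpack and rebuild the dataframe
df_back = pd.DataFrame(msgpack.unpackb(r.get('df')))

Like the JSON approach, this drops the index and non-basic dtypes (e.g. Decimal), so it suits simple tabular data.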

