Query binary data using sqlalchemy with PostgreSQL

I have a simple database that stores attachments as blobs.

CREATE TABLE public.attachment
(
  id integer NOT NULL,
  attachdata oid,
  CONSTRAINT attachment_pkey PRIMARY KEY (id)
)

-- Import a file
INSERT INTO attachment (id, attachdata) VALUES (1, lo_import('C:\temp\blob_import.txt'))
-- Export back as file. 
SELECT lo_export(attachdata, 'C:\temp\blob_export_postgres.txt') FROM attachment WHERE id = 1

I'm able to read this file back directly using psycopg2.

from psycopg2 import connect
con = connect(dbname="blobtest", user="postgres", password="postgres", host="localhost")
cur = con.cursor()
cur.execute("SELECT attachdata FROM attachment WHERE id = 1")
oid = cur.fetchone()[0]
obj = con.lobject(oid)
obj.export('C:\\temp\\blob_export_psycopg.txt')
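
For reference, the same large object can also be read into memory as bytes instead of being exported to a file; a minimal sketch reusing con and oid from above:

obj = con.lobject(oid, 'rb')  # open the existing large object read-only, binary mode
data = obj.read()             # whole contents as bytes
obj.close()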

When I try the same thing using sqlalchemy, attachdata is a bytestring of zeros. I have tested the code below with types such as BLOB, LargeBinary and BINARY. The size of the attachdata bytestring seems to be the OID value.

from sqlalchemy import create_engine
from sqlalchemy import Column, Integer, Binary
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()
Session = sessionmaker()

engine = create_engine('postgresql://postgres:postgres@localhost:5432/blobtest', echo=True)
Base.metadata.create_all(engine)
Session.configure(bind=engine)

class Attachment(Base):
    __tablename__ = "attachment"
    id = Column(Integer, primary_key=True)
    attachdata = Column(Binary)

session = Session()
attachment = session.query(Attachment).get(1)
with open('C:\\temp\\blob_export_sqlalchemy.txt', 'wb') as f:
    f.write(attachment.attachdata)

I've searched the sqlalchemy documentation and various other sources, but couldn't find a solution for exporting the binary data with sqlalchemy.

I had the same problem. There seems to be no way to get the large object data through the ORM, so I combined the ORM and the psycopg2 engine like this:

from sqlalchemy import create_engine
from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.dialects.postgresql import OID

Base = declarative_base()
session_factory = sessionmaker()

engine = create_engine('postgresql+psycopg2://postgres:postgres@localhost:5432/postgres', echo=True)
Base.metadata.create_all(engine)
session_factory.configure(bind=engine)
Session = scoped_session(session_factory)


class Attachment(Base):
    __tablename__ = "attachment"
    id = Column(Integer, primary_key=True)
    oid = Column(OID)

    @classmethod
    def insert_file(cls, filename):
        conn = engine.raw_connection()
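        # lobject(0, 'wb', 0) creates a new large object and opens it for binary writing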
        l_obj = conn.lobject(0, 'wb', 0)
        with open(filename, 'rb') as f:
            l_obj.write(f.read())
        conn.commit()
        conn.close()
        session = Session()
        attachment = cls(oid=l_obj.oid)
        session.add(attachment)
        session.commit()
        return attachment.id

    @classmethod
    def get_file(cls, attachment_id, filename):
        session = Session()
        attachment = session.query(Attachment).get(attachment_id)
        conn = engine.raw_connection()
        l_obj = conn.lobject(attachment.oid, 'rb')
        with open(filename, 'wb') as f:
            f.write(l_obj.read())
        conn.close()


if __name__ == '__main__':
    my_id = Attachment.insert_file(r'C:\path\to\file')
    Attachment.get_file(my_id, r'C:\path\to\file_out')

Not very elegant, but it seems to work.

UPDATE:

I'm now using events:

from sqlalchemy import create_engine, event
from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, scoped_session, object_session
from sqlalchemy.dialects.postgresql import OID

Base = declarative_base()
session_factory = sessionmaker()

engine = create_engine('postgresql+psycopg2://postgres:postgres@localhost:5432/postgres', echo=True)
Base.metadata.create_all(engine)
session_factory.configure(bind=engine)
Session = scoped_session(session_factory)

class Data(Base):
    __tablename__ = "attachment"
    id = Column(Integer, primary_key=True)
    oid = Column(OID)


@event.listens_for(Data, 'after_delete')
def remove_large_object_after_delete(_, connection, target):
    raw_connection = connection.connection
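    # mode 'n' returns a reference to the large object without opening it, which is enough to unlink it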
    l_obj = raw_connection.lobject(target.oid, 'n')
    l_obj.unlink()
    raw_connection.commit()


@event.listens_for(Data, 'before_insert')
def add_large_object_before_insert(_, connection, target):
    raw_connection = connection.connection
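    # oid 0 lets the server create a brand-new large object; its assigned oid is stored on the row below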
    l_obj = raw_connection.lobject(0, 'wb', 0)
    target.oid = l_obj.oid
    l_obj.write(target.ldata)
    raw_connection.commit()


@event.listens_for(Data, 'load')
def inject_large_object_after_load(target, _):
    session = object_session(target)
    conn = session.get_bind().raw_connection()
    l_obj = conn.lobject(target.oid, 'rb')
    target.ldata = l_obj.read()

if __name__ == '__main__':
    session = Session()
    # Put
    data = Data()
    data.ldata = b'your large data'
    session.add(data)
    session.commit()

    id = data.id

    # Get
    data2 = session.query(Data).get(id)
    print(data2.ldata)  # Your large data is here

    # Delete
    session.delete(data)
    session.delete(data2)
    session.commit()

    session.flush()
    session.close()

It has worked fine so far.
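
Tying this back to the original question, exporting the loaded bytes to a file then becomes plain Python. A minimal usage sketch on top of the Data class above (assuming a row with id 1 exists; the path is only an example):

session = Session()
data = session.query(Data).get(1)
with open(r'C:\temp\blob_export_sqlalchemy.txt', 'wb') as f:
    f.write(data.ldata)
session.close()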

I don't understand why Postgres large objects are so neglected nowadays. I use them a lot. Or let's say I'd like to, but it's challenging, especially with asyncio...
