Postgres export large table to another database
What is the problem? I have a table in my Postgres database with about 56 million rows, roughly 20 GB in total. It is stored on a local machine with 16 GB RAM and an i7-7700 3.6 GHz CPU. To manage my databases I use DataGrip, with several database server connections open at once. I need to export the table from one server to another, but when I try a simple drag-and-drop (from the local server to the remote one), I get the following error: "Database client process needs more memory to perform the request".
DataGrip allows exporting/importing tables. The DataGrip advisor says: to configure this, open the "PostgreSQL 10 - postgres@localhost" data source properties, go to the "Advanced" tab, and add "-XmxNNNm" to the "VM options" field, where NNN is a number of megabytes (for example -Xmx256m).
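For instance, to give the client process a 2 GB heap instead of the default, the "VM options" field would contain a value like the following (the exact number is whatever your machine can spare; 2048 here is just an illustration):

```
-Xmx2048m
```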
I tried several values for the VM options (256, 1024, 8048) and also tuned the configuration of my local Postgres server, but that did not solve the problem. Here is the configuration:
#effective_cache_size = 8GB
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
#shared_buffers = 4GB # min 128kB
# (change requires restart)
#huge_pages = try # on, off, or try
# (change requires restart)
#temp_buffers = 256MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
#work_mem = 4MB # min 64kB
#maintenance_work_mem = 1024MB # min 1MB
#replacement_sort_tuples = 150000 # limits use of replacement selection sort
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB # min 100kB
dynamic_shared_memory_type = windows # the default is the first option
# supported by the operating system:
# posix
# sysv
# windows
# mmap
# use none to disable dynamic shared memory
# (change requires restart)
# - Disk -
#temp_file_limit = -1 # limits per-process temp file space
# in kB, or -1 for no limit
# - Kernel Resource Usage -
#max_files_per_process = 1000 # min 25
# (change requires restart)
#shared_preload_libraries = '' # (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0 # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 0 # measured in pages, 0 disables
# - Asynchronous Behavior -
#effective_io_concurrency = 0 # 1-1000; 0 disables prefetching
#max_worker_processes = 8 # (change requires restart)
#max_parallel_workers_per_gather = 2 # taken from max_parallel_workers
#max_parallel_workers = 8 # maximum number of max_worker_processes that
# can be used in parallel queries
#old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate
# (change requires restart)
#backend_flush_after = 0 # measured in pages, 0 disables
DataGrip loads the entire table into RAM and then tries to export it. For the best performance with a table this size, it is better to use the native PostgreSQL tools instead. See the relevant DataGrip help topics on exporting data.
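As a sketch of the native-tools approach (database, table, host and user names below are placeholders, not taken from the original post), `pg_dump` can stream a single table directly into `psql` connected to the target server. Both ends process the data as a stream, so neither one needs to hold the full 20 GB table in memory:

```shell
# Hypothetical names: adjust database, table, hosts and users to your setup.
# pg_dump emits the table as SQL/COPY data on stdout; psql replays it on the
# remote server. --data-only assumes the table already exists there; drop that
# flag to have the dump create the table as well.
pg_dump -h localhost -U postgres -d mydb \
        -t big_table --data-only \
  | psql -h remote.example.com -U postgres -d mydb
```

If you prefer to avoid `pg_dump`, the same streaming idea works with two `psql` `\copy` commands piped together (`\copy big_table TO STDOUT` on the source, `\copy big_table FROM STDIN` on the target).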