
How can I create Percona XtraDB cluster with Docker?

  • I need to create a Percona XtraDB cluster with a "star" topology: one master node (where I insert data) and two slave nodes (changes on the master must be applied to the slaves).

  • Also, I need to use Docker for this.

What I do:
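(The exact docker run command isn't included in the question; a typical way to bootstrap the first node with this image looks roughly like the sketch below. The environment values here are assumptions based on the log output, not the command actually used.)

# hypothetical reconstruction, not the actual command from the question
docker run -d --name xtradb \
  -e MYSQL_ALLOW_EMPTY_PASSWORD=yes \
  -e CLUSTER_NAME=cluster \
  -e XTRABACKUP_PASSWORD=xtrabackuppass \
  percona/percona-xtradb-cluster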

I got this output in my console:

Running --initialize-insecure on /var/lib/mysql/
total 8.0K
drwxr-xr-x  2 mysql mysql 4.0K Dec  6 17:49 .
drwxr-xr-x 18 root  root  4.0K Oct 29 00:12 ..
2016-12-06T17:49:10.703651Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-12-06T17:49:10.703793Z 0 [Warning] WSREP: Node is running in bootstrap/initialize mode. Disabling pxc_strict_mode checks
2016-12-06T17:49:11.082540Z 0 [Warning] InnoDB: New log files created, LSN=45790
2016-12-06T17:49:11.166052Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2016-12-06T17:49:11.219667Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 4b5149ee-bbdc-11e6-9aa1-0242ac110002.
2016-12-06T17:49:11.246135Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2016-12-06T17:49:11.489440Z 0 [Warning] CA certificate ca.pem is self signed.
2016-12-06T17:49:11.860045Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2016-12-06T17:49:12.488607Z 1 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:12.488721Z 1 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:12.488935Z 1 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:12.488977Z 1 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:12.489358Z 1 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
Finished --initialize-insecure
MySQL init process in progress...
2016-12-06T17:49:15.343012Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-12-06T17:49:15.344705Z 0 [Note] mysqld (mysqld 5.7.14-8-57) starting as process 41 ...
2016-12-06T17:49:15.347186Z 0 [Note] WSREP: Read nil XID from storage engines, skipping position init
2016-12-06T17:49:15.347427Z 0 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/galera3/libgalera_smm.so'
2016-12-06T17:49:15.351457Z 0 [Note] WSREP: wsrep_load(): Galera 3.17(r447d194) by Codership Oy <info@codership.com> loaded successfully.
2016-12-06T17:49:15.351682Z 0 [Note] WSREP: CRC-32C: using hardware acceleration.
2016-12-06T17:49:15.352398Z 0 [Warning] WSREP: Could not open state file for reading: '/var/lib/mysql//grastate.dat'
2016-12-06T17:49:15.352545Z 0 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1
2016-12-06T17:49:15.356038Z 0 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = 172.17.0.2; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_count = 0; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false
2016-12-06T17:49:15.399119Z 0 [Note] WSREP: Service thread queue flushed.
2016-12-06T17:49:15.399261Z 0 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
2016-12-06T17:49:15.399486Z 0 [Note] WSREP: wsrep_sst_grab()
2016-12-06T17:49:15.399647Z 0 [Note] WSREP: Start replication
2016-12-06T17:49:15.399804Z 0 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
2016-12-06T17:49:15.399924Z 0 [Note] WSREP: protonet asio version 0
2016-12-06T17:49:15.400319Z 0 [Note] WSREP: Using CRC-32C for message checksums.
2016-12-06T17:49:15.400411Z 0 [Note] WSREP: backend: asio
2016-12-06T17:49:15.400657Z 0 [Note] WSREP: gcomm thread scheduling priority set to other:0
2016-12-06T17:49:15.400797Z 0 [Warning] WSREP: access file(/var/lib/mysql//gvwstate.dat) failed(No such file or directory)
2016-12-06T17:49:15.401003Z 0 [Note] WSREP: restore pc from disk failed
2016-12-06T17:49:15.403105Z 0 [Note] WSREP: GMCast version 0
2016-12-06T17:49:15.405058Z 0 [Note] WSREP: (4dcf57f4, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
2016-12-06T17:49:15.405301Z 0 [Note] WSREP: (4dcf57f4, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
2016-12-06T17:49:15.409642Z 0 [Note] WSREP: EVS version 0
2016-12-06T17:49:15.410146Z 0 [Note] WSREP: gcomm: connecting to group 'Theistareykjarbunga', peer ''
2016-12-06T17:49:15.411849Z 0 [Note] WSREP: start_prim is enabled, turn off pc_recovery
2016-12-06T17:49:15.412522Z 0 [Note] WSREP: Node 4dcf57f4 state prim
2016-12-06T17:49:15.412731Z 0 [Note] WSREP: view(view_id(PRIM,4dcf57f4,1) memb {
        4dcf57f4,0
} joined {
} left {
} partitioned {
})
2016-12-06T17:49:15.413170Z 0 [Note] WSREP: save pc into disk
2016-12-06T17:49:15.413448Z 0 [Note] WSREP: gcomm: connected
2016-12-06T17:49:15.413841Z 0 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
2016-12-06T17:49:15.413924Z 0 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
2016-12-06T17:49:15.414182Z 0 [Note] WSREP: Opened channel 'Theistareykjarbunga'
2016-12-06T17:49:15.417075Z 0 [Note] WSREP: Waiting for SST to complete.
2016-12-06T17:49:15.421054Z 0 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 1
2016-12-06T17:49:15.429569Z 0 [Note] WSREP: Starting new group from scratch: 4dd38c07-bbdc-11e6-9e05-ff24cdfdc605
2016-12-06T17:49:15.432205Z 0 [Note] WSREP: STATE_EXCHANGE: sent state UUID: 4dd3f402-bbdc-11e6-badb-ca345528032f
2016-12-06T17:49:15.432821Z 0 [Note] WSREP: STATE EXCHANGE: sent state msg: 4dd3f402-bbdc-11e6-badb-ca345528032f
2016-12-06T17:49:15.433725Z 0 [Note] WSREP: STATE EXCHANGE: got state msg: 4dd3f402-bbdc-11e6-badb-ca345528032f from 0 (6814cd5862dd)
2016-12-06T17:49:15.434021Z 0 [Note] WSREP: Quorum results:
        version    = 4,
        component  = PRIMARY,
        conf_id    = 0,
        members    = 1/1 (joined/total),
        act_id     = 0,
        last_appl. = -1,
        protocols  = 0/7/3 (gcs/repl/appl),
        group UUID = 4dd38c07-bbdc-11e6-9e05-ff24cdfdc605
2016-12-06T17:49:15.434743Z 0 [Note] WSREP: Flow-control interval: [16, 16]
2016-12-06T17:49:15.434898Z 0 [Note] WSREP: Restored state OPEN -> JOINED (0)
2016-12-06T17:49:15.435307Z 1 [Note] WSREP: New cluster view: global state: 4dd38c07-bbdc-11e6-9e05-ff24cdfdc605:0, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 3
2016-12-06T17:49:15.435407Z 0 [Note] WSREP: SST complete, seqno: 0
2016-12-06T17:49:15.437118Z 0 [Note] InnoDB: PUNCH HOLE support available
2016-12-06T17:49:15.437801Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2016-12-06T17:49:15.438830Z 0 [Note] InnoDB: Uses event mutexes
2016-12-06T17:49:15.439745Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2016-12-06T17:49:15.442794Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.8
2016-12-06T17:49:15.443451Z 0 [Note] InnoDB: Using Linux native AIO
2016-12-06T17:49:15.443796Z 0 [Note] InnoDB: Number of pools: 1
2016-12-06T17:49:15.444197Z 0 [Note] InnoDB: Using CPU crc32 instructions
2016-12-06T17:49:15.445949Z 0 [Note] WSREP: Member 0.0 (6814cd5862dd) synced with group.
2016-12-06T17:49:15.448773Z 0 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
2016-12-06T17:49:15.453625Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2016-12-06T17:49:15.466242Z 0 [Note] InnoDB: Completed initialization of buffer pool
2016-12-06T17:49:15.468558Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2016-12-06T17:49:15.481743Z 0 [Note] InnoDB: Crash recovery did not find the parallel doublewrite buffer at /var/lib/mysql/xb_doublewrite
2016-12-06T17:49:15.486318Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
2016-12-06T17:49:15.522431Z 0 [Note] InnoDB: Created parallel doublewrite buffer at /var/lib/mysql/xb_doublewrite, size 3932160 bytes
2016-12-06T17:49:15.548331Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2016-12-06T17:49:15.548465Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2016-12-06T17:49:15.559240Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2016-12-06T17:49:15.562965Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2016-12-06T17:49:15.563078Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2016-12-06T17:49:15.564561Z 0 [Note] InnoDB: Waiting for purge to start
2016-12-06T17:49:15.615700Z 0 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.7.14-8 started; log sequence number 2491156
2016-12-06T17:49:15.619333Z 0 [Note] Plugin 'FEDERATED' is disabled.
2016-12-06T17:49:15.629815Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2016-12-06T17:49:15.635698Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
2016-12-06T17:49:15.636018Z 0 [Note] Skipping generation of SSL certificates as certificate files are present in data directory.
2016-12-06T17:49:15.642714Z 0 [Warning] CA certificate ca.pem is self signed.
2016-12-06T17:49:15.643054Z 0 [Note] Skipping generation of RSA key pair as key files are present in data directory.
2016-12-06T17:49:15.653055Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:15.653222Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:15.653494Z 0 [Note] InnoDB: Buffer pool(s) load completed at 161206 17:49:15
2016-12-06T17:49:15.653568Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:15.653838Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:15.666732Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:15.673941Z 0 [Note] Event Scheduler: Loaded 0 events
2016-12-06T17:49:15.678543Z 0 [Note] mysqld: ready for connections.
Version: '5.7.14-8-57'  socket: '/var/run/mysqld/mysqld.sock'  port: 0  Percona XtraDB Cluster (GPL), Release rel8, Revision a3f9d06, WSREP version 26.17, wsrep_26.17
2016-12-06T17:49:15.681560Z 1 [Note] WSREP: Initialized wsrep sidno 2
2016-12-06T17:49:15.681661Z 1 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2016-12-06T17:49:15.681852Z 1 [Note] WSREP: REPL Protocols: 7 (3, 2)
2016-12-06T17:49:15.681899Z 0 [Note] WSREP: Service thread queue flushed.
2016-12-06T17:49:15.682066Z 1 [Note] WSREP: Assign initial position for certification: 0, protocol version: 3
2016-12-06T17:49:15.682224Z 0 [Note] WSREP: Service thread queue flushed.
2016-12-06T17:49:15.682439Z 1 [Note] WSREP: Synchronized with group, ready for connections
2016-12-06T17:49:15.682481Z 1 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
Warning: Unable to load '/usr/share/zoneinfo/Factory' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/posix/Factory' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/right/Factory' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
2016-12-06T17:49:18.650211Z 7 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:18.650386Z 7 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:18.650614Z 7 [Warning] 'user' entry 'xtrabackup@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:18.650655Z 7 [Warning] 'user' entry 'monitor@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:18.650856Z 7 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:18.650899Z 7 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:18.651181Z 7 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
2016-12-06T17:49:18.653727Z 0 [Note] WSREP: Stop replication
2016-12-06T17:49:18.654028Z 0 [Note] WSREP: Closing send monitor...
2016-12-06T17:49:18.654495Z 0 [Note] WSREP: Closed send monitor.
2016-12-06T17:49:18.654616Z 0 [Note] WSREP: gcomm: terminating thread
2016-12-06T17:49:18.655589Z 0 [Note] WSREP: gcomm: joining thread
2016-12-06T17:49:18.656926Z 0 [Note] WSREP: gcomm: closing backend
2016-12-06T17:49:18.658959Z 0 [Note] WSREP: view((empty))
2016-12-06T17:49:18.660103Z 0 [Note] WSREP: gcomm: closed
2016-12-06T17:49:18.661882Z 0 [Note] WSREP: Received self-leave message.
2016-12-06T17:49:18.662356Z 0 [Note] WSREP: Flow-control interval: [0, 0]
2016-12-06T17:49:18.662411Z 0 [Note] WSREP: Received SELF-LEAVE. Closing connection.
2016-12-06T17:49:18.662868Z 0 [Note] WSREP: Shifting SYNCED -> CLOSED (TO: 13)
2016-12-06T17:49:18.662966Z 0 [Note] WSREP: RECV thread exiting 0: Success
2016-12-06T17:49:18.663075Z 4 [Note] WSREP: New cluster view: global state: 4dd38c07-bbdc-11e6-9e05-ff24cdfdc605:13, view# -1: non-Primary, number of nodes: 0, my index: -1, protocol version 3
2016-12-06T17:49:18.663752Z 4 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2016-12-06T17:49:18.663852Z 4 [Note] WSREP: applier thread exiting (code:0)
2016-12-06T17:49:18.665487Z 0 [Note] WSREP: recv_thread() joined.
2016-12-06T17:49:18.666011Z 0 [Note] WSREP: Closing replication queue.
2016-12-06T17:49:18.666660Z 0 [Note] WSREP: Closing slave action queue.
2016-12-06T17:49:18.673299Z 1 [Note] WSREP: applier thread exiting (code:6)
2016-12-06T17:49:20.667240Z 0 [Note] Forcefully disconnecting 1 remaining clients
2016-12-06T17:49:20.667378Z 2 [Note] WSREP: rollbacker thread exiting
2016-12-06T17:49:20.667851Z 0 [Note] Giving 0 client threads a chance to die gracefully
2016-12-06T17:49:20.667902Z 0 [Note] Shutting down slave threads
2016-12-06T17:49:20.668072Z 0 [Note] Forcefully disconnecting 0 remaining clients
2016-12-06T17:49:20.668208Z 0 [Note] Event Scheduler: Purging the queue. 0 events
2016-12-06T17:49:20.668383Z 0 [Note] WSREP: dtor state: CLOSED
2016-12-06T17:49:20.674153Z 0 [Note] WSREP: mon: entered 13 oooe fraction 0 oool fraction 0
2016-12-06T17:49:20.688689Z 0 [Note] WSREP: mon: entered 13 oooe fraction 0 oool fraction 0
2016-12-06T17:49:20.705880Z 0 [Note] WSREP: mon: entered 17 oooe fraction 0 oool fraction 0
2016-12-06T17:49:20.706271Z 0 [Note] WSREP: cert index usage at exit 0
2016-12-06T17:49:20.707073Z 0 [Note] WSREP: cert trx map usage at exit 9
2016-12-06T17:49:20.707367Z 0 [Note] WSREP: deps set usage at exit 0
2016-12-06T17:49:20.708561Z 0 [Note] WSREP: avg deps dist 1
2016-12-06T17:49:20.708778Z 0 [Note] WSREP: avg cert interval 0
2016-12-06T17:49:20.709342Z 0 [Note] WSREP: cert index size 134659
2016-12-06T17:49:20.775344Z 0 [Note] WSREP: Service thread queue flushed.
2016-12-06T17:49:20.779232Z 0 [Note] WSREP: wsdb trx map usage 0 conn query map usage 0
2016-12-06T17:49:20.779340Z 0 [Note] WSREP: MemPool(LocalTrxHandle): hit ratio: 0.307692, misses: 9, in use: 0, in pool: 9
2016-12-06T17:49:20.779687Z 0 [Note] WSREP: MemPool(SlaveTrxHandle): hit ratio: 0, misses: 0, in use: 0, in pool: 0
2016-12-06T17:49:20.779956Z 0 [Note] WSREP: Shifting CLOSED -> DESTROYED (TO: 13)
2016-12-06T17:49:20.780299Z 0 [Note] WSREP: Flushing memory map to disk...
2016-12-06T17:49:20.793824Z 0 [Note] Binlog end
2016-12-06T17:49:20.794999Z 0 [Note] Shutting down plugin 'ngram'
2016-12-06T17:49:20.795209Z 0 [Note] Shutting down plugin 'ARCHIVE'
2016-12-06T17:49:20.795243Z 0 [Note] Shutting down plugin 'partition'
2016-12-06T17:49:20.795535Z 0 [Note] Shutting down plugin 'BLACKHOLE'
2016-12-06T17:49:20.795591Z 0 [Note] Shutting down plugin 'MEMORY'
2016-12-06T17:49:20.795742Z 0 [Note] Shutting down plugin 'MRG_MYISAM'
2016-12-06T17:49:20.795797Z 0 [Note] Shutting down plugin 'CSV'
2016-12-06T17:49:20.795981Z 0 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2016-12-06T17:49:20.796033Z 0 [Note] Shutting down plugin 'MyISAM'
2016-12-06T17:49:20.796208Z 0 [Note] Shutting down plugin 'INNODB_SYS_VIRTUAL'
2016-12-06T17:49:20.796242Z 0 [Note] Shutting down plugin 'INNODB_CHANGED_PAGES'
2016-12-06T17:49:20.796538Z 0 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2016-12-06T17:49:20.796608Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2016-12-06T17:49:20.796731Z 0 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2016-12-06T17:49:20.796761Z 0 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2016-12-06T17:49:20.796884Z 0 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2016-12-06T17:49:20.796911Z 0 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2016-12-06T17:49:20.797046Z 0 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2016-12-06T17:49:20.797074Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2016-12-06T17:49:20.797192Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2016-12-06T17:49:20.797226Z 0 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2016-12-06T17:49:20.797419Z 0 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2016-12-06T17:49:20.797463Z 0 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2016-12-06T17:49:20.797576Z 0 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2016-12-06T17:49:20.797617Z 0 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2016-12-06T17:49:20.797798Z 0 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2016-12-06T17:49:20.797839Z 0 [Note] Shutting down plugin 'INNODB_METRICS'
2016-12-06T17:49:20.798021Z 0 [Note] Shutting down plugin 'INNODB_TEMP_TABLE_INFO'
2016-12-06T17:49:20.798167Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2016-12-06T17:49:20.798457Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2016-12-06T17:49:20.798507Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2016-12-06T17:49:20.798706Z 0 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2016-12-06T17:49:20.798741Z 0 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2016-12-06T17:49:20.799066Z 0 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2016-12-06T17:49:20.799159Z 0 [Note] Shutting down plugin 'INNODB_CMPMEM'
2016-12-06T17:49:20.799598Z 0 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2016-12-06T17:49:20.799646Z 0 [Note] Shutting down plugin 'INNODB_CMP'
2016-12-06T17:49:20.799775Z 0 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2016-12-06T17:49:20.799807Z 0 [Note] Shutting down plugin 'INNODB_LOCKS'
2016-12-06T17:49:20.800011Z 0 [Note] Shutting down plugin 'INNODB_TRX'
2016-12-06T17:49:20.800194Z 0 [Note] Shutting down plugin 'XTRADB_RSEG'
2016-12-06T17:49:20.800339Z 0 [Note] Shutting down plugin 'XTRADB_INTERNAL_HASH_TABLES'
2016-12-06T17:49:20.800380Z 0 [Note] Shutting down plugin 'XTRADB_READ_VIEW'
2016-12-06T17:49:20.800553Z 0 [Note] Shutting down plugin 'InnoDB'
2016-12-06T17:49:20.800932Z 0 [Note] InnoDB: FTS optimize thread exiting.
2016-12-06T17:49:20.801288Z 0 [Note] InnoDB: Starting shutdown...
2016-12-06T17:49:20.902647Z 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
2016-12-06T17:49:20.902941Z 0 [Note] InnoDB: Buffer pool(s) dump completed at 161206 17:49:20
2016-12-06T17:49:22.634067Z 0 [Note] InnoDB: Shutdown completed; log sequence number 12099523
2016-12-06T17:49:22.634992Z 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
2016-12-06T17:49:22.636093Z 0 [Note] Shutting down plugin 'sha256_password'
2016-12-06T17:49:22.636314Z 0 [Note] Shutting down plugin 'mysql_native_password'
2016-12-06T17:49:22.637397Z 0 [Note] Shutting down plugin 'wsrep'
2016-12-06T17:49:22.638613Z 0 [Note] Shutting down plugin 'binlog'
2016-12-06T17:49:22.646108Z 0 [Note] mysqld: Shutdown complete


MySQL init process done. Ready for start up.

2016-12-06T17:49:22.908420Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-12-06T17:49:22.909260Z 0 [Note] mysqld (mysqld 5.7.14-8-57) starting as process 1 ...
2016-12-06T17:49:22.911520Z 0 [Note] WSREP: Read nil XID from storage engines, skipping position init
2016-12-06T17:49:22.911724Z 0 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/galera3/libgalera_smm.so'
2016-12-06T17:49:22.915717Z 0 [Note] WSREP: wsrep_load(): Galera 3.17(r447d194) by Codership Oy <info@codership.com> loaded successfully.
2016-12-06T17:49:22.916056Z 0 [Note] WSREP: CRC-32C: using hardware acceleration.
2016-12-06T17:49:22.916756Z 0 [Note] WSREP: Found saved state: 4dd38c07-bbdc-11e6-9e05-ff24cdfdc605:13
2016-12-06T17:49:22.917543Z 0 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = 172.17.0.2; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_count = 0; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false
2016-12-06T17:49:22.937056Z 0 [Note] WSREP: Service thread queue flushed.
2016-12-06T17:49:22.937196Z 0 [Note] WSREP: Assign initial position for certification: 13, protocol version: -1
2016-12-06T17:49:22.937480Z 0 [Note] WSREP: wsrep_sst_grab()
2016-12-06T17:49:22.937525Z 0 [Note] WSREP: Start replication
2016-12-06T17:49:22.937745Z 0 [Note] WSREP: Setting initial position to 4dd38c07-bbdc-11e6-9e05-ff24cdfdc605:13
2016-12-06T17:49:22.937863Z 0 [Note] WSREP: protonet asio version 0
2016-12-06T17:49:22.938502Z 0 [Note] WSREP: Using CRC-32C for message checksums.
2016-12-06T17:49:22.938655Z 0 [Note] WSREP: backend: asio
2016-12-06T17:49:22.939359Z 0 [Note] WSREP: gcomm thread scheduling priority set to other:0
2016-12-06T17:49:22.941844Z 0 [Warning] WSREP: access file(/var/lib/mysql//gvwstate.dat) failed(No such file or directory)
2016-12-06T17:49:22.942719Z 0 [Note] WSREP: restore pc from disk failed
2016-12-06T17:49:22.946124Z 0 [Note] WSREP: GMCast version 0
2016-12-06T17:49:22.946553Z 0 [Note] WSREP: (524e2801, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
2016-12-06T17:49:22.946787Z 0 [Note] WSREP: (524e2801, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
2016-12-06T17:49:22.948351Z 0 [Note] WSREP: EVS version 0
2016-12-06T17:49:22.948530Z 0 [Note] WSREP: gcomm: connecting to group 'cluster', peer ''
2016-12-06T17:49:22.948926Z 0 [Note] WSREP: start_prim is enabled, turn off pc_recovery
2016-12-06T17:49:22.949137Z 0 [Note] WSREP: Node 524e2801 state prim
2016-12-06T17:49:22.949772Z 0 [Note] WSREP: view(view_id(PRIM,524e2801,1) memb {
        524e2801,0
} joined {
} left {
} partitioned {
})
2016-12-06T17:49:22.949844Z 0 [Note] WSREP: save pc into disk
2016-12-06T17:49:22.950334Z 0 [Note] WSREP: gcomm: connected
2016-12-06T17:49:22.950414Z 0 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
2016-12-06T17:49:22.951083Z 0 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
2016-12-06T17:49:22.951181Z 0 [Note] WSREP: Opened channel 'cluster'
2016-12-06T17:49:22.951614Z 0 [Note] WSREP: Waiting for SST to complete.
2016-12-06T17:49:22.951824Z 0 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 1
2016-12-06T17:49:22.953494Z 0 [Note] WSREP: STATE_EXCHANGE: sent state UUID: 524f9e4e-bbdc-11e6-9190-b66393d75772
2016-12-06T17:49:22.953585Z 0 [Note] WSREP: STATE EXCHANGE: sent state msg: 524f9e4e-bbdc-11e6-9190-b66393d75772
2016-12-06T17:49:22.953825Z 0 [Note] WSREP: STATE EXCHANGE: got state msg: 524f9e4e-bbdc-11e6-9190-b66393d75772 from 0 (6814cd5862dd)
2016-12-06T17:49:22.953904Z 0 [Note] WSREP: Quorum results:
        version    = 4,
        component  = PRIMARY,
        conf_id    = 0,
        members    = 1/1 (joined/total),
        act_id     = 13,
        last_appl. = -1,
        protocols  = 0/7/3 (gcs/repl/appl),
        group UUID = 4dd38c07-bbdc-11e6-9e05-ff24cdfdc605
2016-12-06T17:49:22.954201Z 0 [Note] WSREP: Flow-control interval: [16, 16]
2016-12-06T17:49:22.954388Z 0 [Note] WSREP: Restored state OPEN -> JOINED (13)
2016-12-06T17:49:22.954772Z 1 [Note] WSREP: New cluster view: global state: 4dd38c07-bbdc-11e6-9e05-ff24cdfdc605:13, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 3
2016-12-06T17:49:22.954848Z 0 [Note] WSREP: SST complete, seqno: 13
2016-12-06T17:49:22.959718Z 0 [Note] InnoDB: PUNCH HOLE support available
2016-12-06T17:49:22.959984Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2016-12-06T17:49:22.960376Z 0 [Note] InnoDB: Uses event mutexes
2016-12-06T17:49:22.960571Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2016-12-06T17:49:22.959959Z 0 [Note] WSREP: Member 0.0 (6814cd5862dd) synced with group.
2016-12-06T17:49:22.961183Z 0 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 13)
2016-12-06T17:49:22.961467Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.8
2016-12-06T17:49:22.961518Z 0 [Note] InnoDB: Using Linux native AIO
2016-12-06T17:49:22.962451Z 0 [Note] InnoDB: Number of pools: 1
2016-12-06T17:49:22.963873Z 0 [Note] InnoDB: Using CPU crc32 instructions
2016-12-06T17:49:22.970069Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2016-12-06T17:49:22.980110Z 0 [Note] InnoDB: Completed initialization of buffer pool
2016-12-06T17:49:22.985756Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().

<part of output is truncated>
  • docker ps -a

     CONTAINER ID   IMAGE                            COMMAND             CREATED          STATUS          PORTS                     NAMES
     6814cd5862dd   percona/percona-xtradb-cluster   "/entrypoint.sh "   10 minutes ago   Up 10 minutes   3306/tcp, 4567-4568/tcp   xtradb

The questions are:

  • how can I access the database in the cluster?
  • how can I add slave nodes to the cluster?
  • how can I get the cluster node list?

If you need a multi-host environment, you can check https://www.percona.com/blog/2016/06/10/percona-xtradb-cluster-in-a-multi-host-docker-network/

If not, I've been working on some projects on this, so feel free to check https://github.com/guriandoro/docker/tree/master/pxc, and in particular https://github.com/guriandoro/docker/tree/master/pxc/N-node-pxc.

In the readme, you can find how to connect to them, and how to get the list of running nodes. What do you mean by slave nodes, though? Regular async replication? Or to add more nodes to the cluster?

Let me know if it helps, or if you have any feedback on it. It's a work in progress, but you should be able to quickly have some nodes running on one host with minimal tweaking.
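For the single container from the question (named xtradb, with root created with an empty password by --initialize-insecure), a quick way to connect and to list cluster members is sketched below; wsrep_cluster_size and wsrep_incoming_addresses are standard Galera status variables.

# open a mysql shell inside the running container (empty root password per the log)
docker exec -it xtradb mysql -uroot

# or run the checks directly from the host
docker exec -it xtradb mysql -uroot -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
docker exec -it xtradb mysql -uroot -e "SHOW GLOBAL STATUS LIKE 'wsrep_incoming_addresses';"

To reach the database from outside Docker, publish port 3306 when starting the container (docker run -p 3306:3306 ...) and connect with any MySQL client.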

Building on other snippets, let's create a Percona XtraDB Cluster 8.0 with Consul in a Docker swarm cluster.

Create a Percona XtraDB cluster in a Docker swarm cluster.

REQUIREMENTS

  • 3 swarm worker nodes (node1, node2, node3)
  • 1 swarm manager node (node4)

Node IPv4 addresses

node1 - swarm worker - 192.168.1.108
node2 - swarm worker - 192.168.1.109
node3 - swarm worker - 192.168.1.110
node4 - swarm manager - 192.168.1.111
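
These steps assume the four nodes already form a swarm. If not, a minimal sketch of the swarm setup looks like this (the worker token is a placeholder printed by docker swarm init):

# on node4 (manager)
docker swarm init --advertise-addr 192.168.1.111

# on node1, node2 and node3 (workers), using the token printed above
docker swarm join --token <worker-token> 192.168.1.111:2377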

PREPARE

Add labels to the swarm nodes

docker node update --label-add pxc=true node1
docker node update --label-add pxc=true node2
docker node update --label-add pxc=true node3
docker node update --label-add consul=true node4
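
To confirm the labels were applied, inspect the nodes from the manager, for example:

docker node ls
docker node inspect node1 --format '{{ json .Spec.Labels }}'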

Set heartbeat period

docker swarm update --dispatcher-heartbeat 20s

Make directories

mkdir -p /docker-compose/SWARM/pxc8/configs/consul

Create the Consul server config file

vim /docker-compose/SWARM/pxc8/configs/consul/config.json

{ 
  "advertise_addr" : "{{ GetInterfaceIP \"eth0\" }}",
  "bind_addr": "{{ GetInterfaceIP \"eth0\" }}",
  "addresses" : {
    "http" : "0.0.0.0"
  },
  "ports" : {
    "server": 8300,
    "http": 8500,
    "dns": 8600
  },
  "skip_leave_on_interrupt": true,
  "server_name" : "pxc.service.consul",
  "primary_datacenter":"dc1",
  "acl_default_policy":"allow",
  "acl_down_policy":"extend-cache",
  "datacenter":"dc1",
  "data_dir":"/consul/data",
  "bootstrap": true,
  "server":true,
  "ui" : true

}
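
If a consul binary is available on the host, the file can be sanity-checked before deploying (a minimal check, assuming the path above):

consul validate /docker-compose/SWARM/pxc8/configs/consul/config.json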

Create the docker-compose file

vim /docker-compose/SWARM/pxc8/docker-compose.yml

version: '3.6'

services:
  
  consul:
    image: "devsadds/consul:1.8.3"
    hostname: consul
    volumes:
      - "/docker-compose/SWARM/pxc8/configs/consul:/consul/config"
    ports:
      - target: 8500
        published: 8500
        protocol: tcp
        mode: host
    networks:
      pxc8-net:
        aliases:
          - pxc.service.consul
    command: "consul agent -config-file /consul/config/config.json"
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 15s
        max_attempts: 13
        window: 18s
      update_config:
        parallelism: 1
        delay: 20s
        failure_action: continue
        monitor: 60s
        max_failure_ratio: 0.3
      placement:
        constraints: [ node.labels.consul  == true ]

  pxc:
    image: "devsadds/pxc:8.0.19-10.1-consul-1.8.3-focal-v1.1.0"
    environment:
      CLUSTER_NAME: "percona"
      MYSQL_ROOT_PASSWORD: "root32456"
      MYSQL_PROXY_USER: "mysqlproxyuser"
      MYSQL_PROXY_PASSWORD: "mysqlproxy32456"
      PXC_SERVICE: "pxc.service.consul"
      DISCOVERY_SERVICE: "consul"
      DATADIR: "/var/lib/mysql"
      MONITOR_PASSWORD: "mys3232323323"
      XTRABACKUP_PASSWORD: "mys3232323323"
    volumes:
      - "pxc_8_0:/var/lib/mysql"
    networks:
      pxc8-net:
        aliases:
          - mysql
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 15s
        max_attempts: 23
        window: 180s
      update_config:
        parallelism: 1
        delay: 20s
        failure_action: continue
        monitor: 60s
        max_failure_ratio: 0.3
      placement:
        constraints: [ node.labels.pxc  == true ]


volumes:
  pxc_8_0:

networks:
  pxc8-net:
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: 10.23.0.0/24

Deploy stack

cd /docker-compose/SWARM/pxc8
docker stack deploy -c docker-compose.yml pxc8 --resolve-image always --prune --with-registry-auth 
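
The stack and its tasks can then be watched from the manager with standard swarm commands, for example:

docker stack services pxc8
docker service ps pxc8_pxc --no-trunc
docker service logs -f pxc8_pxc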

Go to the web UI (unsecured)

http://192.168.1.111:8500/ui/dc1/services/pxc8/instances

Wait until the first node in the cluster is OK.
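
One way to tell the first node is ready, besides the Consul UI, is to watch the service logs or query Consul's HTTP API; the service name pxc is an assumption based on PXC_SERVICE above (the UI URL suggests it may show up as pxc8 instead):

docker service logs pxc8_pxc 2>&1 | grep -i "ready for connections"
curl -s http://192.168.1.111:8500/v1/health/service/pxc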

Then scale to 3 nodes

docker service scale pxc8_pxc=3 -d 

Wait until the cluster has scaled.
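
To confirm that all three members joined, check wsrep_cluster_size from any PXC container (the root password is taken from the compose file above):

docker exec -it $(docker ps -q -f name=pxc8_pxc) \
  mysql -uroot -proot32456 -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
# expected value: 3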

Scale the cluster to one node

docker service scale pxc8_pxc=1 -d

The cluster with only one node becomes non-Primary and is not ready for operations:

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';

+----------------------+-------------+
| Variable_name        | Value       |
+----------------------+-------------+
| wsrep_cluster_status | non-Primary |
+----------------------+-------------+

After scaling down to one node, execute this command on the last remaining PXC node (the most advanced node) to bring the cluster from the non-Primary state back to Primary:

SET GLOBAL wsrep_provider_options='pc.bootstrap=YES';

The node now operates as the starting node in a new Primary Component.

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';

+----------------------+---------+
| Variable_name        | Value   |
+----------------------+---------+
| wsrep_cluster_status | Primary |
+----------------------+---------+

Now we can scale our cluster to 3 or more nodes

docker service scale pxc8_pxc=3 -d 

Fix the cluster after all nodes have crashed

On the node with the latest data, edit the state files and set safe_to_bootstrap to 1. First stop the stack:

cd /docker-compose/SWARM/pxc8
docker stack rm pxc8

Edit file /var/lib/docker/volumes/pxc8_pxc_8_0/_data/gvwstate.dat

nano /var/lib/docker/volumes/pxc8_pxc_8_0/_data/gvwstate.dat 
my_uuid: 505b00f5-f33a-11ea-9ee4-abd76ca92272
#vwbeg
view_id: 3 505b00f5-f33a-11ea-9ee4-abd76ca92272 52
bootstrap: 0
member: 505b00f5-f33a-11ea-9ee4-abd76ca92272 0
member: 88f6da78-f1fc-11ea-81cc-e386dd9bf4d3 0
member: 91aaf3ec-f33a-11ea-88f8-93c04e238f30 0
#vwend

And change it so that only the local node remains as a member:

my_uuid: 505b00f5-f33a-11ea-9ee4-abd76ca92272
#vwbeg
view_id: 3 505b00f5-f33a-11ea-9ee4-abd76ca92272 52
bootstrap: 0
member: 505b00f5-f33a-11ea-9ee4-abd76ca92272 0
#vwend

Edit file /var/lib/docker/volumes/pxc8_pxc_8_0/_data/grastate.dat

nano /var/lib/docker/volumes/pxc8_pxc_8_0/_data/grastate.dat
safe_to_bootstrap: 1
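
For reference, an edited grastate.dat typically looks like the following; uuid and seqno are placeholders (leave whatever values are already in the file), only the safe_to_bootstrap line is changed by hand:

# GALERA saved state
version: 2.1
uuid:    <cluster state uuid, keep existing value>
seqno:   <keep existing value>
safe_to_bootstrap: 1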

and start the cluster with one node again

cd /docker-compose/SWARM/pxc8
docker stack deploy -c docker-compose.yml pxc8 --resolve-image always --prune --with-registry-auth 

Reconnect PXC nodes to the Consul container if the Consul container was restarted

Go into one of the PXC containers
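
For example, on the worker node that runs the task (the container name is generated by swarm, so it is filtered by service name here):

docker exec -it $(docker ps -q -f name=pxc8_pxc) bash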

ps aux | grep consul

Kill the consul agent (13 is the PID from the ps output above):

kill -9 13

and run the agent again with its original command line, appending & at the end of the line:

 /bin/consul agent -retry-join consul -client 0.0.0.0 -bind 10.22.0.17 -node -99f341353c95 -data-dir /tmp -config-file /tmp/pxc.json &
