
Reusing EBS snapshots on a different Percona XtraDB cluster node

I'm evaluating a three-node Percona XtraDB Cluster 5.6 in an AWS environment. I'm using ec2-consistent-snapshot with the --mysql option to make an EBS snapshot of the data. However, when a snapshot is made on node 1 and node 2 is then relaunched from that snapshot, the cluster breaks.

Through trial and error I've found that this is caused by reusing the auto.cnf and gvwstate.dat files in the MySQL datadir. These files contain the identifiers of node 1, and the problems were apparently caused by a node trying to join with the ID of another node already in the cluster. Removing those files appears to have fixed the issue, and nodes now go up and down as expected.
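The cleanup described above can be sketched as follows. This is a minimal example, assuming the default datadir of /var/lib/mysql — adjust the path to match the datadir setting in your my.cnf:

```shell
# Hypothetical datadir path; adjust to your my.cnf datadir setting.
DATADIR=/var/lib/mysql

# Remove the node-identity files before starting mysqld on the restored
# node, so it does not try to join with the snapshot donor's identity.
rm -f "$DATADIR/auto.cnf"      # holds the donor node's server UUID
rm -f "$DATADIR/gvwstate.dat"  # holds the donor's Galera view state

# Both files are regenerated on the next mysqld start.
```

Running this before the first start of MySQL on the relaunched node is enough; both files are recreated automatically with fresh identifiers.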

My question is: did I do the right thing? Do I need to remove auto.cnf and gvwstate.dat before reusing another server's datadir? Do I need to do anything else? What's the standard practice for this sort of thing?

What you did was correct. However, be sure to check your gcache size to avoid an SST. It is quite possible that you could take the EBS snapshot now, go to lunch, come back and create node 3 from that snapshot, start MySQL, and an SST would happen anyway.
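The gcache size is set through the Galera provider options in my.cnf. A sketch, with a hypothetical 2G size — tune it so the donor's gcache can cover all writes that happen between the snapshot and the new node's first start:

```ini
# my.cnf on each node. gcache.size is hypothetical here; size it to
# hold the write volume expected while a node is down, so the joiner
# can catch up with an IST instead of a full SST.
[mysqld]
wsrep_provider_options="gcache.size=2G"
```

If the write set the joiner is missing is no longer in the donor's gcache, Galera falls back to a full SST regardless of how the datadir was seeded.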

I would check the logs on the new node to confirm that an SST did NOT happen.
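One way to do that check, assuming the error log lives at /var/log/mysqld.log (the exact path comes from the log_error setting in your my.cnf, and the exact message wording varies by version):

```shell
# Hypothetical error-log path; check log_error in your my.cnf.
LOG=/var/log/mysqld.log

# A full SST appears in the log as a state transfer via the SST method;
# an IST (incremental transfer served from the donor's gcache) is what
# you want to see after seeding the node from a snapshot.
if [ -r "$LOG" ]; then
    grep -Ei 'SST|IST|state transfer' "$LOG"
else
    echo "log not found: $LOG"
fi
```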

