
How does Kinesis distribute shards among workers?

Is there any attempt to keep adjacent shards together when spreading them out over multiple workers? In the documentation example it started with 1 worker/instance and 4 shards. Then auto-scaling occurred and a 2nd worker/instance was started up. The KCL auto-magically moved 2 shards over to worker 2. Is there any attempt at keeping adjacent shards together with a worker when autoscaling? What about when splitting shards?
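For context, a "worker"/instance here means one running instance of the KCL consumer application. A minimal KCL 1.x setup, with placeholder application, stream, and worker names, looks roughly like this:

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;
import com.amazonaws.services.kinesis.clientlibrary.types.InitializationInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ProcessRecordsInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownInput;

public class MyConsumer {
    public static void main(String[] args) {
        // One "worker" = one running instance of this application; the KCL
        // hands each worker leases for a subset of the stream's shards.
        KinesisClientLibConfiguration config = new KinesisClientLibConfiguration(
                "my-consumer-app",                       // application / lease table name
                "my-stream",                             // stream to consume
                new DefaultAWSCredentialsProviderChain(),
                "worker-1");                             // unique id for this worker

        IRecordProcessorFactory factory = () -> new IRecordProcessor() {
            public void initialize(InitializationInput input) { }
            public void processRecords(ProcessRecordsInput input) {
                // one record processor instance per shard this worker holds a lease on
                System.out.println("Got " + input.getRecords().size() + " records");
            }
            public void shutdown(ShutdownInput input) { }
        };

        new Worker.Builder().recordProcessorFactory(factory).config(config).build().run();
    }
}

With 4 shards and one instance, that instance leases all 4 shards; when auto-scaling starts a 2nd instance, the KCL rebalances to 2 leases each.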

Thanks

Random.

If by "worker" you mean a Kinesis consumer application, then the consumer application with the most shards loses 1 shard to another application that has fewer shards.

"Lease" is the correct term here, it describes a consumer application & shard association. “租约”在这里是正确的术语,它描述了消费者应用程序与分片的关联。 And there is not adjacency check for taking leases, it is pure random. 而且没有进行租赁的邻接检查,它是纯随机的。

See source code, chooseLeaseToSteal method: https://github.com/awslabs/amazon-kinesis-client/blob/c6e393c13ec348f77b8b08082ba56823776ee48a/src/main/java/com/amazonaws/services/kinesis/leases/impl/LeaseTaker.java#L414
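The gist of that method, stripped of the lease-count bookkeeping, is sketched below. This is a hand-written illustration rather than the actual KCL code: the worker holding the most leases gives one up, chosen at random, and shard adjacency never enters the decision.

import java.util.List;
import java.util.Map;
import java.util.Random;

// Simplified sketch of KCL lease stealing (not the real implementation):
// pick the worker with the most leases and steal one of its leases at random.
class LeaseStealSketch {
    private static final Random RANDOM = new Random();

    // leasesByWorker: workerId -> shardIds (leases) currently held by that worker
    static String chooseLeaseToSteal(Map<String, List<String>> leasesByWorker) {
        String busiestWorker = null;
        int maxLeases = 0;
        for (Map.Entry<String, List<String>> e : leasesByWorker.entrySet()) {
            if (e.getValue().size() > maxLeases) {
                maxLeases = e.getValue().size();
                busiestWorker = e.getKey();
            }
        }
        if (busiestWorker == null || maxLeases <= 1) {
            return null; // nothing worth stealing
        }
        List<String> candidates = leasesByWorker.get(busiestWorker);
        // purely random pick -- whether the shards are adjacent is never considered
        return candidates.get(RANDOM.nextInt(candidates.size()));
    }
}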

Is there any attempt to keep adjacent shards together when spreading them out over multiple workers?

I doubt that's the case. My understanding is that order is maintained only within the boundary of a single key, and the boundary of a single key falls within a single shard.

Imagine I have 2 keys, key-a and key-b, and the following events occurred:

["event-1-key-a", "event-2-key-b", "event-3-key-a"]

Now we have 2 events for key-a: ["event-1-key-a", "event-3-key-a"]

and 1 event for key-b: ["event-2-key-b"]

Note that sharding happens exactly like the above -- the 2 events for key-a will always end up in the same shard. With that being the guarantee, maintaining the order among shards is not necessary.
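To make that concrete, here is a rough sketch of how Kinesis picks a shard for a record: the partition key is MD5-hashed into a 128-bit unsigned integer and matched against each shard's hash key range (the two-shard split below is invented for illustration). Since MD5 of key-a is always the same value, both of its events necessarily land in the same shard.

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of Kinesis shard routing: MD5(partitionKey) is interpreted as a
// 128-bit unsigned integer and matched against each shard's hash key range.
// The two shard ranges below are made up for a hypothetical 2-shard stream.
class ShardRoutingSketch {
    static BigInteger hashKey(String partitionKey) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, digest); // interpret the 16 bytes as unsigned
    }

    public static void main(String[] args) throws Exception {
        // shard-0 owns [0, 2^127 - 1], shard-1 owns [2^127, 2^128 - 1]
        BigInteger shard0End = BigInteger.valueOf(2).pow(127).subtract(BigInteger.ONE);

        for (String key : new String[] {"key-a", "key-b", "key-a"}) {
            String shard = hashKey(key).compareTo(shard0End) <= 0 ? "shard-0" : "shard-1";
            System.out.println(key + " -> " + shard); // same key, same shard, every time
        }
    }
}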
