
Cephadm orch daemon add osd Hangs

On both v15 and v16 of Cephadm I am able to successfully bootstrap a cluster with 3 nodes. What I have found is that adding more than 26 OSDs on a single host causes cephadm orch daemon add osd to hang forever, with no crash. Each of my nodes has 60 disks, which lsblk reports as /dev/sda through /dev/sdbh. The disk ID /dev/sdXX does not appear to be the problem, but rather the quantity of disks. I was able to rebuild and add the /dev/sdXX disks first, and again, once I hit the 27th OSD, it hangs indefinitely. Resources do not appear to be an issue, unless this is an Ubuntu Docker limitation on the number of containers? This is easily reproduced in a lab, as I have created several with identical results.
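
For context, this is roughly how the OSDs are being added, one device at a time — a minimal sketch of my workflow, where ceph-node1 is a placeholder for one of my lab hosts and the device range matches the 60 disks described above:

    # confirm which devices the orchestrator can see on the host
    ceph orch device ls ceph-node1

    # add each disk as an OSD, one at a time; this is the step that
    # hangs indefinitely once the 27th device on the host is reached
    for dev in /dev/sd{a..z} /dev/sda{a..z} /dev/sdb{a..h}; do
        ceph orch daemon add osd ceph-node1:"$dev"
    done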

It's been a while since I posted this; I came upon the fix late in the fall of 2021 by using an HWE kernel from Ubuntu. Once the HWE kernel was installed (at the time), this got me over the hump.
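
For anyone else hitting this, installing the HWE kernel looks roughly like the following — a minimal sketch assuming Ubuntu 20.04; adjust the package name to match your release:

    # install the hardware-enablement (HWE) kernel and reboot into it
    sudo apt update
    sudo apt install --install-recommends linux-generic-hwe-20.04
    sudo reboot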
