CEPH raw space usage

I can't understand where my Ceph raw space has gone.

cluster 90dc9682-8f2c-4c8e-a589-13898965b974
     health HEALTH_WARN 72 pgs backfill; 26 pgs backfill_toofull; 51 pgs backfilling; 141 pgs stuck unclean; 5 requests are blocked > 32 sec; recovery 450170/8427917 objects degraded (5.341%); 5 near full osd(s)
     monmap e17: 3 mons at {enc18=192.168.100.40:6789/0,enc24=192.168.100.43:6789/0,enc26=192.168.100.44:6789/0}, election epoch 734, quorum 0,1,2 enc18,enc24,enc26
     osdmap e3326: 14 osds: 14 up, 14 in
      pgmap v5461448: 1152 pgs, 3 pools, 15252 GB data, 3831 kobjects
            31109 GB used, 7974 GB / 39084 GB avail
            450170/8427917 objects degraded (5.341%)
                  18 active+remapped+backfill_toofull
                1011 active+clean
                  64 active+remapped+wait_backfill
                   8 active+remapped+wait_backfill+backfill_toofull
                  51 active+remapped+backfilling
recovery io 58806 kB/s, 14 objects/s
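
Side note on the HEALTH_WARN line above: the near full OSDs are what trigger the backfill_toofull states. To see exactly which OSDs are near full (standard ceph CLI; the output wording varies between releases):

ceph health detail     # prints one line per near-full OSD with its fill percentage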

OSD tree (each host has 2 OSDs):

# id    weight  type name       up/down reweight
-1      36.45   root default
-2      5.44            host enc26
0       2.72                    osd.0   up      1
1       2.72                    osd.1   up      0.8227
-3      3.71            host enc24
2       0.99                    osd.2   up      1
3       2.72                    osd.3   up      1
-4      5.46            host enc22
4       2.73                    osd.4   up      0.8
5       2.73                    osd.5   up      1
-5      5.46            host enc18
6       2.73                    osd.6   up      1
7       2.73                    osd.7   up      1
-6      5.46            host enc20
9       2.73                    osd.9   up      0.8
8       2.73                    osd.8   up      1
-7      0               host enc28
-8      5.46            host archives
12      2.73                    osd.12  up      1
13      2.73                    osd.13  up      1
-9      5.46            host enc27
10      2.73                    osd.10  up      1
11      2.73                    osd.11  up      1
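
To see how full each of those OSDs actually is, there are two options (a sketch: ceph osd df only exists from Hammer onward, and /var/lib/ceph/osd/ceph-* is the default data path, so adjust for your layout):

ceph osd df                        # per-OSD size, used space and utilisation, if your release has it
df -h /var/lib/ceph/osd/ceph-*     # fallback: check the OSD mount points on each host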

Real usage:

/dev/rbd0        14T  7.9T  5.5T  59% /mnt/ceph

Pool size:

osd pool default size = 2

Pools: ceph osd lspools

0 data,1 metadata,2 rbd,
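
Note that "osd pool default size = 2" is only the default applied to newly created pools; the replication size actually in effect on each pool can be checked directly (standard ceph CLI, pool name rbd taken from the lspools output above; the exact wording of the osd dump lines varies by release):

ceph osd dump | grep 'replicated size'     # size and min_size of every pool
ceph osd pool get rbd size                 # replica count of one specific pool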

rados df

pool name       category                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
data            -                          0            0            0            0           0            0            0            0            0
metadata        -                          0            0            0            0           0            0            0            0            0
rbd             -                15993591918      3923880            0       444545           0        82936      1373339      2711424    849398218
  total used     32631712348      3923880
  total avail     8351008324
  total space    40982720672

Raw usage is 4x the real usage. As I understand it, it should be 2x?
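
For reference, a quick check against the numbers already posted shows that at the RADOS level the usage really is about 2x the stored data; the 4x only appears when the raw figure is compared with df inside the mapped filesystem:

15252 GB data x 2 replicas = 30504 GB             (ceph -s reports 31109 GB used)
15993591918 KB in pool rbd x 2 = 31987183836 KB   (rados df reports 32631712348 KB total used)

In other words, the rbd pool holds roughly 15 TB of data, close to the full size of the 14T image, while the filesystem on /dev/rbd0 currently reports only 7.9T in use.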

Yes, it should be 2x. I'm not really sure that the real raw usage is 7.9T, though. Why do you check this value on the mapped disk?

These are my pools:


pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
admin-pack           7689982         1955            0            0            0       693841      3231750     40068930    353462603
public-cloud       105432663        26561            0            0            0     13001298    638035025    222540884   3740413431
rbdkvm_sata      32624026697      7968550        31783            0            0   4950258575 232374308589  12772302818 278106113879
  total used     98289353680      7997066
  total avail    34474223648
  total space   132763577328

You can see that the total amount of used space is roughly 3 times the used space in the rbdkvm_sata pool.

ceph -s shows the same result too:


pgmap v11303091: 5376 pgs, 3 pools, 31220 GB data, 7809 kobjects
            93736 GB used, 32876 GB / 123 TB avail
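
Making that comparison explicit (numbers from the output above):

31220 GB data x 3 replicas = 93660 GB    (93736 GB reported as used)

So the "used" figure in ceph -s is essentially data multiplied by the replica count (apparently 3 on this cluster) plus a small overhead; it is unrelated to what a filesystem on a mapped image reports.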

I don't think you have just one rbd image. The result of "ceph osd lspools" indicates that you have 3 pools, and one of them is named "metadata" (maybe you were using CephFS). /dev/rbd0 appeared because you mapped that image, but you could have other images as well. To list the images you can use "rbd list -p <pool-name>", and you can see an image's info with "rbd info -p <pool-name> <image-name>".
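
For example, to enumerate the images in a pool and see their provisioned sizes, a minimal sketch using the standard rbd CLI (the pool name rbd is taken from the lspools output above; the sizes printed are provisioned sizes, not actual usage):

rbd list -p rbd                           # all images in the pool
for img in $(rbd list -p rbd); do
    echo -n "$img: "
    rbd info -p rbd "$img" | grep size    # prints a line like "size ... in ... objects"
done

If there really is only the one image, another thing worth checking (not something confirmed in this thread) is whether space freed inside the filesystem was ever handed back to RBD via discard/fstrim; deleting files in the mounted filesystem does not, by itself, shrink the RADOS-level usage.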
