
How to explain the fact that the "du -sh /" result on the root folder is less than the "df -h" result? And how to fix the gap?

I have a few Docker containers. I'm facing storage issues.

When I run the following command on the host (NOT in a Docker container) in order to measure the size of all files:

du -sh /

I can see that the total size is 50% of the total storage.

And when I do this:

df -h 

I can see that I have 20% of the space free and 80% used.

I used the lsof command in order to find deleted open files, and it lists many files from Docker containers:

lsof -nP | grep '(deleted)'
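
A rough way to quantify how much space these deleted-but-open files account for is to sum the sizes lsof reports (a hedged sketch: it assumes the 7th lsof column is the file size in bytes, which should be verified against the header on your system, and it over-counts files that are open more than once):

lsof -nP | grep '(deleted)' | awk '{sum += $7} END {printf "%.1f GiB\n", sum / 1024 / 1024 / 1024}'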

How can I explain the difference? Am I missing something in my Docker configuration?

The question is a very general one in Unix environments.

On Unix systems you can remove a file from the file system, but if a process still holds an open file handle on it, the file's data is kept on the disk and remains accessible only through that file handle. As soon as the last file handle is dropped (closed, or the process terminates), the kernel takes care that the remaining data is removed from the disk and the disk space is freed.

Such deleted-but-open files influence the output of df, but they do not appear in the output of du, which only scans directories.

This feature is often used for a temp file which shall automatically be removed at process termination: the process creates a file (by opening it for writing) and keeps the file handle open, but removes (unlink(2)) the file itself, i.e. removes the directory entry. The process can then still write to this file and read from it via the file handle, and it doesn't have to clean up after itself after termination.
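
The effect is easy to reproduce from a shell (a minimal sketch; the path and size are arbitrary, and any directory on the affected file system works):

dd if=/dev/zero of=/tmp/demo.bin bs=1M count=1024   # create a 1 GiB file
exec 3<>/tmp/demo.bin    # keep file descriptor 3 open on it
rm /tmp/demo.bin         # remove the directory entry
df -h /tmp               # the space is still reported as used
du -sh /tmp              # the file no longer shows up here
exec 3>&-                # close the descriptor; the kernel frees the space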

Docker stuff often seems to have things like this.

Your solution is to close all these open file handles.

This can be achieved by (from shotgun to scalpel):

  1. Rebooting. This is a clear cut and often a good solution.
  2. Killing all processes which hold the open file handles (see the sketch after this list). This avoids the reboot and thus leaves all processes that have nothing to do with the situation unharmed.
  3. Closing the temp-file descriptors in all processes which hold them open. This might be an option, depending greatly on your processes, of course.
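
For option 2, a hedged sketch (on Linux, assuming the offending processes belong to Docker containers; <container> and <PID> are placeholders):

lsof -nP +L1 | awk 'NR > 1 {print $2}' | sort -u   # PIDs holding deleted-but-open files
docker ps                                          # map PIDs to containers, e.g. with "docker top <container>"
docker restart <container>                         # restarting the container closes its handles
kill <PID>                                         # or kill a non-container process (use with care)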

To see the file handles a process has open, you can probably (depending on your system) have a look at /proc/<PID>/fd/. Such a directory exists for each process and represents its file handles.
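
For example (a hedged sketch; 1234 and 5 stand in for a real PID and file descriptor):

ls -l /proc/1234/fd/ | grep '(deleted)'   # deleted files show up as "(deleted)" link targets
: > /proc/1234/fd/5                       # variant of option 3: truncate the file through /proc,
                                          # freeing its space without killing the process

Truncating a file that a process is still writing to can confuse that process, so treat this as a last resort.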
