
git pull: Unable to create ORIG_HEAD.lock No space left on device

I'm running into an issue where attempting a "git pull" produces the following error message:

Unable to create '/path/.git/ORIG_HEAD.lock': No space left on device

The thing that's puzzling me is that I definitely have quite a bit of space left on the device:

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  2.6G  5.2G  33% /
devtmpfs        7.4G   16K  7.4G   1% /dev
tmpfs           7.4G     0  7.4G   0% /dev/shm
/dev/xvdf       250G  8.5G  242G   4% /path

I can also see logs still being written to the same device that is supposedly full.

The only thing that comes to mind is that this volume was recently resized from 8GB to 250GB on AWS. Could git somehow believe it's still an 8GB drive?

Output of fdisk -l:

Disk /dev/xvda1: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/xvdf: 268.4 GB, 268435456000 bytes, 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

df -i also shows only 1% of inodes used on that volume.

Does this make any sense? Thanks for any tips and comments you can provide.

Linux file systems have two limited resources: blocks, which store file data, and inodes, which store file metadata. "No space left on device" means one of the two is exhausted. If you can still write to existing files, you are more likely out of inodes.
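To tell which resource is exhausted, compare block usage and inode usage for the affected mount. A quick check (using "/" as a stand-in path; on the asker's system it would be /path):

```shell
# Compare block usage vs. inode usage for a mount point.
# "/" is used here as an example; substitute the affected mount.
df -h /     # blocks: Size / Used / Avail / Use%
df -i /     # inodes: Inodes / IUsed / IFree / IUse%
```

If df -h shows plenty of space but df -i shows IUse% near 100%, the filesystem has run out of inodes rather than blocks.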

There are several questions about this general problem across the Stack Exchange network, some of them here:

https://unix.stackexchange.com/questions/26598/how-can-i-increase-the-number-of-inodes-in-an-ext4-filesystem

https://serverfault.com/questions/396768/ext4-file-system-max-inode-limit-can-anyone-please-explain

https://superuser.com/questions/585641/changing-max-inode-count-number-in-ext3-filesystem-in-cent-os
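If inodes do turn out to be the culprit, the usual next step is to find which directory holds the bulk of the files. A minimal sketch (count_inodes is a hypothetical helper name, not a standard tool):

```shell
# List immediate subdirectories of a path, busiest (most inodes) first.
# Each file, directory, and symlink consumes one inode.
count_inodes() {
    for d in "$1"/*/; do
        printf '%s %s\n' "$(find "$d" | wc -l)" "$d"
    done | sort -rn
}
```

Running count_inodes /path | head would then surface the top offenders, which are often cache or session directories full of tiny files.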

For anyone encountering the same issue: the way I solved it was to unmount the drive, run xfs_repair on it, remount it, and restart the EC2 instance.

Not very elegant, but it saved me the headache.

Hope it helps someone else.
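For reference, the steps above can be sketched as a dry run. The device and mount point come from the question; the echo prefixes print each command instead of executing it, since xfs_repair must only be run on an unmounted filesystem:

```shell
# Dry-run sketch of the recovery steps; remove "echo" to actually run them.
# /dev/xvdf and /path match the question; substitute your own device/mount.
recover() {
    dev=$1; mnt=$2
    echo umount "$mnt"        # 1. unmount the volume
    echo xfs_repair "$dev"    # 2. repair the XFS filesystem
    echo mount "$dev" "$mnt"  # 3. remount it
    echo reboot               # 4. restart the instance
}
recover /dev/xvdf /path
```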
