Is it possible to instruct git to only write packfiles?

I sometimes store git repositories on Flash media, which is slow at creating many individual files but fairly quick at writing a single large file. The repositories are bare repositories, so it's basically just the contents of the .git folder.

Whenever I push to these repositories, I notice that git copies a pack file, but then unpacks it. I really don't want git to unpack objects; rather, I want it to keep the objects compressed.

Is there a way to instruct git to only write packfiles and not unpack them?

Reviewing the standard config options that git supports, I happened to read the following description of receive.unpackLimit:

If the number of objects received in a push is below this limit then the objects will be unpacked into loose object files. However if the number of received objects equals or exceeds this limit then the received pack will be stored as a pack, after adding any missing delta bases. Storing the pack from a push can make the push operation complete faster, especially on slow filesystems. If not set, the value of transfer.unpackLimit is used instead.

I configured one of the bare repositories on the Flash drive with transfer.unpackLimit set to 0, and a subsequent push of 7 objects did not result in unpacking.
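
For reference, a minimal sketch of applying that setting (the repository path here is just an example):

    # inside the bare repository on the Flash drive (example path)
    cd /mnt/flash/myrepo.git

    # never unpack received packs into loose objects
    git config transfer.unpackLimit 0

    # or, to affect only pushes into this repository:
    git config receive.unpackLimit 0

With this in place, packs arriving via git push stay on disk as single .pack/.idx pairs instead of being exploded into one file per object.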

I'm not an expert on git repacking, but I would try looking at git-repack.

This script is used to combine all objects that do not currently reside in a "pack", into a pack. It can also be used to re-organize existing packs into a single, more efficient pack.

A pack is a collection of objects, individually compressed, with delta compression applied, stored in a single file, with an associated index file.

Packs are used to reduce the load on mirror systems, backup engines, disk storage, etc.

Edit 1: maybe the -a option is what you're looking for:

-a

Instead of incrementally packing the unpacked objects, pack everything referenced into a single pack. Especially useful when packing a repository that is used for private development and there is no need to worry about people fetching via dumb protocols from it. Use with -d. This will clean up the objects that git prune leaves behind, but git fsck --full shows as dangling.
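
As a sketch of how the two flags combine on a bare repository (the path is an example):

    cd /mnt/flash/myrepo.git

    # rewrite all reachable objects into a single pack (-a) and
    # delete the now-redundant old packs and loose objects (-d)
    git repack -a -d

This consolidates any loose objects that accumulated from earlier pushes into one pack, which suits media that handle one large file better than many small ones.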

Edit 2: As I stated in the comment, this is not a real answer, but rather a possible pointer to try.

A nice solution for what you'd like to do might be bup. The idea of bup is to write git pack files directly (and independently of git), so that you can use the efficient pack file format for backups without git's performance problems when repacking huge repositories.

An example of how you might create pack files in a remote git repository is given in the "Make a backup on a remote server" example in bup's README.
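
Roughly, following that README example (SERVER and the /etc path are placeholders from bup's documentation, not something I have verified for this use case):

    bup init                  # create the local bup repository (~/.bup)
    bup index /etc            # index the files to back up

    # write the resulting pack files into a repository on the remote host
    bup save -r SERVER: -n local-etc /etc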

As a disclaimer, I haven't tried this myself, so maybe there's some fundamental problem with using bup for your use case, but it seems like a nice idea to me.
