Can no longer git clone large repos via HTTPS since installing Big Sur

Whenever I try to clone large repos, this is what happens:

$ git clone https://github.com/yarnpkg/berry.git
Cloning into 'berry'...
remote: Enumerating objects: 60762, done.
remote: Counting objects: 100% (1155/1155), done.
remote: Compressing objects: 100% (588/588), done.
Receiving objects:   5% (3454/60762), 13.86 MiB | 4.60 MiB/ (etc etc)
fatal: fetch-pack: invalid index-pack output

I have no anti-virus installed, and I'm not on a VPN. I tried connecting to another network and that didn't solve it, so there must be something with Big Sur causing this. I'm not sure what else to try.

macOS 11.4, APFS+ filesystem, git 2.31.1

I've already tried changing compression settings and messing with pack size settings, and this post did not help either. I'm posting this because I've tried everything else I've seen on the Internet so far and nothing has worked.

This should be a comment (as it's not an answer) but I need some formatting and a lot of space here. The short version is that you need to find out why git index-pack is misbehaving or failing. (Fetching over a smart protocol normally retrieves a so-called thin pack, which git fetch needs to "fatten" using git index-pack --fix-thin.)

The "invalid index-pack output" error occurs if the output from git index-pack does not match what git fetch-pack expects.如果git index-pack中的 output 与git fetch-pack期望的不匹配,则会出现“无效的 index-pack 输出”错误。 Here's the code involved : 这是涉及的代码

char *index_pack_lockfile(int ip_out, int *is_well_formed)
{
    char packname[GIT_MAX_HEXSZ + 6];
    const int len = the_hash_algo->hexsz + 6;

    /*
     * The first thing we expect from index-pack's output
     * is "pack\t%40s\n" or "keep\t%40s\n" (46 bytes) where
     * %40s is the newly created pack SHA1 name.  In the "keep"
     * case, we need it to remove the corresponding .keep file
     * later on.  If we don't get that then tough luck with it.
     */
    if (read_in_full(ip_out, packname, len) == len && packname[len-1] == '\n') {
        const char *name;

        if (is_well_formed)
            *is_well_formed = 1;
        packname[len-1] = 0;
        if (skip_prefix(packname, "keep\t", &name))
            return xstrfmt("%s/pack/pack-%s.keep",
                       get_object_directory(), name);
        return NULL;
    }
    if (is_well_formed)
        *is_well_formed = 0;
    return NULL;
}

This is run from fetch-pack.c's get_pack function, which runs git index-pack with arguments based on a lot of variables. If you run your git clone with the environment variable GIT_TRACE set to 1, you can observe Git running git index-pack. The call to index_pack_lockfile here only happens if do_keep is set, which is based on args->keep_pack initially, but can become set if the pack header's hdr_entries value equals or exceeds unpack_limit (see around line 859).
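For example, a run along these lines should surface the exact index-pack invocation during the failing clone (the grep is just a convenience filter; the precise trace format varies by Git version, and GIT_TRACE writes to stderr, hence the redirection):

$ GIT_TRACE=1 git clone https://github.com/yarnpkg/berry.git 2>&1 | grep index-pack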

You can control the unpack_limit value using fetch.unpackLimit and/or transfer.unpackLimit. The default value is 100. You might be able to use these to work around some problem with index-pack, maybe, but index-pack should not be failing in whatever way it is failing. Note that if you want to force git fetch to use git unpack-objects instead, you must also disable object checking (fsck_objects).
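As a hedged sketch of that workaround (the 100000 value is illustrative, chosen only to exceed this repository's ~60762 objects; fetch.fsckObjects and transfer.fsckObjects are the config keys behind the fsck_objects check):

$ # push unpack_limit above the repo's object count so unpack-objects is used
$ git config --global fetch.unpackLimit 100000
$ # disable object checking, which would otherwise force index-pack
$ git config --global transfer.fsckObjects false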

It could be interesting to run git index-pack directly, too, on the data retrieved by git fetch. (Consider installing a shell script in place of the normal git index-pack, where the script prints its arguments, then uses kill -STOP on its own process group, so that you can inspect the temporary files.)
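A minimal sketch of such a wrapper, assuming the subprocess resolves git through $PATH (Git may prefer its own exec-path, in which case the wrapper would have to go there instead); /usr/local/bin/git is a placeholder for wherever the real binary lives:

#!/bin/sh
# Hypothetical "git" wrapper, placed earlier on $PATH than the real binary.
if [ "$1" = "index-pack" ]; then
    echo "index-pack args: $*" >&2  # show exactly how index-pack was invoked
    kill -STOP 0                    # freeze our process group for inspection
fi
exec /usr/local/bin/git "$@"        # delegate everything to the real git

From another terminal you can then poke at the temporary pack files and resume the clone with kill -CONT <pid>.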

Solved!

TL;DR: ulimit was limiting max file size on my filesystem. Why this was never an issue in Catalina, who knows.

I had this in my shell startup script:

ulimit -n 65536 65536

That second parameter was the bad one. It was limiting max file size to 32MB (65536 blocks of 512 bytes is exactly 32 MiB). Strangely, I have this exact same setting on Catalina and it imposes no limitation whatsoever. Someone must've provided this setting and I just copied it without understanding the implications.

I ran ulimit -n unlimited unlimited and now all is well!
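If you want to check whether the same limit is biting you, ulimit -f reports the file size cap directly; in bash and zsh on macOS the unit is 512-byte blocks, so 65536 is exactly the 32 MiB ceiling described above:

$ ulimit -f            # current file size limit, in 512-byte blocks
65536
$ ulimit -f unlimited  # lift it for the current shell (if the hard limit allows)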

Interesting notes on ulimit changes from Catalina to Big Sur:

I ran this on my Catalina install:

$ ulimit -a                                                   
-t: cpu time (seconds)              unlimited
-f: file size (blocks)              65536
-d: data seg size (kbytes)          unlimited
-s: stack size (kbytes)             8192
-c: core file size (blocks)         0
-v: address space (kbytes)          unlimited
-l: locked-in-memory size (kbytes)  unlimited
-u: processes                       11136
-n: file descriptors                65536
$ mkfile 500m whoa                                             
$ 

Look at that, no problem creating a 500MB file, clearly ignoring ulimit's 65536-block file size limit.

Yet on Big Sur, with the same ulimit settings:

$ mkfile 500m whoa                                                                                                                               
[1]    22267 file size limit exceeded  mkfile 500m whoa
