
libcurl download file size exceeds buffer size

I have a question regarding this code at https://curl.haxx.se/libcurl/c/ftpget.html

In the callback function:

static size_t my_fwrite(void *buffer, size_t size, size_t nmemb, void *stream)
{
  struct FtpFile *out=(struct FtpFile *)stream;
  if(out && !out->stream) {
    /* open file for writing */ 
    out->stream=fopen(out->filename, "wb");
    if(!out->stream)
      return -1; /* failure, can't open file to write */ 
  }
  return fwrite(buffer, size, nmemb, out->stream);
}

What if the file size exceeds the buffer size? I think the function will not be called iteratively, since it overwrites the file every time. Is there a workaround for this? Thanks!

From the curl documentation:

The callback function will be passed as much data as possible in all invokes, but you must not make any assumptions. It may be one byte, it may be thousands. The maximum amount of body data that will be passed to the write callback is defined in the curl.h header file: CURL_MAX_WRITE_SIZE (the usual default is 16K). If CURLOPT_HEADER is enabled, which makes header data get passed to the write callback, you can get up to CURL_MAX_HTTP_HEADER bytes of header data passed into it. This usually means 100K.
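
In other words, the callback is invoked once per received chunk (each chunk at most CURL_MAX_WRITE_SIZE bytes), not once per file. Because the example only calls fopen() when out->stream is still NULL, every later invocation writes to the already-open stream, and fwrite() advances the file position, so earlier data is never overwritten. Below is a minimal sketch of how the callback is typically wired up, modelled on the linked ftpget.c example; the URL and output file name are placeholders and most error handling is omitted:

#include <stdio.h>
#include <curl/curl.h>

struct FtpFile {
  const char *filename;
  FILE *stream;
};

static size_t my_fwrite(void *buffer, size_t size, size_t nmemb, void *stream)
{
  struct FtpFile *out = (struct FtpFile *)stream;
  if(out && !out->stream) {
    /* first chunk: open the output file exactly once */
    out->stream = fopen(out->filename, "wb");
    if(!out->stream)
      return 0; /* returning less than size*nmemb aborts the transfer */
  }
  /* every later chunk is written at the current file position */
  return fwrite(buffer, size, nmemb, out->stream);
}

int main(void)
{
  struct FtpFile ftpfile = { "downloaded.bin", NULL }; /* placeholder file name */
  CURL *curl;
  CURLcode res;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  curl = curl_easy_init();
  if(curl) {
    /* placeholder URL */
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.bin");
    /* register the write callback and the object passed to it */
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, my_fwrite);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &ftpfile);

    /* libcurl calls my_fwrite repeatedly until the whole file has arrived */
    res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "curl failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
  }
  if(ftpfile.stream)
    fclose(ftpfile.stream);
  curl_global_cleanup();
  return 0;
}

So no workaround is needed: a file larger than the 16K write buffer simply results in the callback being called many times in sequence on the same open stream.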
