
An exception: gzip finished without exhausting source, about OkHttp, okio

I encountered this error while using OkHttp. Please help me analyze the cause of the error and suggest a solution.

  @Override public long read(Buffer sink, long byteCount) throws IOException {
    if (byteCount < 0) throw new IllegalArgumentException("byteCount < 0: " + byteCount);
    if (byteCount == 0) return 0;

    // If we haven't consumed the header, we must consume it before anything else.
    if (section == SECTION_HEADER) {
      consumeHeader();
      section = SECTION_BODY;
    }

    // Attempt to read at least a byte of the body. If we do, we're done.
    if (section == SECTION_BODY) {
      long offset = sink.size;
      long result = inflaterSource.read(sink, byteCount);
      if (result != -1) {
        updateCrc(sink, offset, result);
        return result;
      }
      section = SECTION_TRAILER;
    }

    // The body is exhausted; time to read the trailer. We always consume the
    // trailer before returning a -1 exhausted result; that way if you read to
    // the end of a GzipSource you guarantee that the CRC has been checked.
    if (section == SECTION_TRAILER) {
      consumeTrailer();
      section = SECTION_DONE;

      // Gzip streams self-terminate: they return -1 before their underlying
      // source returns -1. Here we attempt to force the underlying stream to
      // return -1 which may trigger it to release its resources. If it doesn't
      // return -1, then our Gzip data finished prematurely!
      if (!source.exhausted()) {
        throw new IOException("gzip finished without exhausting source");
      }
    }

    return -1;
  }
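The key point in the snippet above is the last check: once the gzip trailer has been consumed, okio's GzipSource insists that the underlying source is empty, and throws if any bytes remain. The same strict check can be reproduced with only the JDK's java.util.zip.Inflater — this is a stdlib sketch that mimics okio's behavior, not okio itself, and the class and method names (StrictInflate, strictInflate) are my own:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class StrictInflate {
    /** Compress input with raw deflate, the way a server might. */
    static byte[] deflate(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        while (!deflater.finished()) {
            int n = deflater.deflate(buf);
            out.write(buf, 0, n);
        }
        deflater.end();
        return out.toByteArray();
    }

    /** Inflate, but refuse to ignore bytes left over after the stream ends. */
    static byte[] strictInflate(byte[] data) throws IOException, DataFormatException {
        Inflater inflater = new Inflater();
        inflater.setInput(data);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        while (!inflater.finished()) {
            int n = inflater.inflate(buf);
            out.write(buf, 0, n);
        }
        // Mirror of okio's "!source.exhausted()" check: the compressed
        // stream self-terminated, but unread bytes remain after it.
        if (inflater.getRemaining() > 0) {
            throw new IOException("gzip finished without exhausting source");
        }
        inflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] compressed = deflate("hello".getBytes("UTF-8"));
        // A clean stream inflates without complaint.
        System.out.println(new String(strictInflate(compressed), "UTF-8"));
        // Trailing junk after the stream end trips the strict check.
        byte[] withJunk = new byte[compressed.length + 4];
        System.arraycopy(compressed, 0, withJunk, 0, compressed.length);
        try {
            strictInflate(withJunk);
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

So the exception is not a decompression failure: the body decoded fine, but the HTTP framing handed GzipSource more bytes than the gzip stream actually contained.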

[screenshot: CH.png]

throw new IOException("gzip finished without exhausting source");

JakeWharton, BillBosiolis

OK. This one can be closed. It has nothing to do with Retrofit/OkHttp.

In fact, it seems that the problem was that the server code (not Apache) was always sending a Content-Length header back, even in cases where chunked encoding was being used.
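That server behavior violates HTTP framing rules: per RFC 7230 §3.3.3, when Transfer-Encoding is present, Content-Length must be ignored, and a message carrying both may be treated as malformed. The conflicting headers lead the two sides to disagree on where the body ends, leaving stray bytes after the gzip stream. A minimal sketch of the check a strict client could apply — FramingCheck and framing are hypothetical names, not OkHttp API:

```java
import java.util.Map;
import java.util.TreeMap;

public class FramingCheck {
    /** Decide how a response body is framed from its headers. */
    static String framing(Map<String, String> headers) {
        // Header field names are case-insensitive (RFC 7230).
        Map<String, String> h = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        h.putAll(headers);
        boolean chunked = "chunked".equalsIgnoreCase(h.get("Transfer-Encoding"));
        boolean hasLength = h.containsKey("Content-Length");
        if (chunked && hasLength) {
            // RFC 7230 §3.3.3: Transfer-Encoding overrides Content-Length;
            // sending both is the server bug described in this answer.
            return "malformed: both Transfer-Encoding and Content-Length";
        }
        if (chunked) return "chunked";
        if (hasLength) return "content-length";
        return "read-until-close";
    }

    public static void main(String[] args) {
        Map<String, String> bad = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        bad.put("Transfer-Encoding", "chunked");
        bad.put("Content-Length", "123");
        System.out.println(framing(bad));
    }
}
```

The fix belongs on the server: drop the Content-Length header whenever the response is sent with chunked encoding.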

https://github.com/square/retrofit/issues/1170
