

await fetch(file_url) sometimes doesn't return the full file contents

I have the following JavaScript code to fetch and process the contents of a .csv file:

async function fetchCsv() {
    const response = await fetch("levels.csv");
    const reader = response.body.getReader();
    const result = await reader.read();
    const decoder = new TextDecoder("utf-8");
    const csv = await decoder.decode(result.value);
    return csv;
}

    useEffect(() => {
        fetchCsv().then((csv) => {
            // process csv
                (...)

When running this code, 99% of the time the csv variable contains the correct contents of the file, but in rare cases it contains only a truncated part of the actual file.

What could be the reason, and how can the code be improved to handle it?

It's in a React app, if that's relevant.

Extra info:

  • I have verified that when the problem occurs, the network response for the levels.csv file is a proper response (200, and the full 38kb are returned)

What you get when calling response.body.getReader() is a ReadableStreamDefaultReader object.

Calling its .read() method will return a Promise that resolves either with the full content of the response body, in case the request was honored fast enough and the body size isn't too big (apparently 256MB in Firefox), or with just one chunk of the response body.
This allows you to handle the response as a stream, before it's entirely fetched.
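A minimal sketch of that behavior, using a hand-built ReadableStream in place of a real network response (the chunk contents are invented for illustration):

```javascript
// A single reader.read() resolves with only the first queued chunk,
// which is why the original fetchCsv() can return truncated text.
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("first chunk,"));
    controller.enqueue(new TextEncoder().encode("second chunk"));
    controller.close();
  },
});
const { value, done } = await stream.getReader().read();
console.log(done); // false: the stream isn't exhausted yet
console.log(new TextDecoder().decode(value)); // "first chunk,"
```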

If you wish to process this stream as text, you could either use a TextDecoderStream, which finally got support in all major browsers:

const response = await fetch("levels.csv");
const textStream = response.body.pipeThrough(new TextDecoderStream());
// now you can handle each chunk as text from textStream.getReader();
// or pipe it in yet another TransformStream
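For instance, here is a sketch of consuming such a text stream chunk by chunk, with a hand-built byte stream standing in for response.body (the CSV content is made up):

```javascript
// Pipe a byte stream through TextDecoderStream and accumulate the
// decoded text as it arrives.
const bytes = new TextEncoder().encode("id,level\n1,easy\n2,hard\n");
const byteStream = new ReadableStream({
  start(controller) {
    // Deliver the body in two chunks, as a network response might.
    controller.enqueue(bytes.slice(0, 10));
    controller.enqueue(bytes.slice(10));
    controller.close();
  },
});
const reader = byteStream.pipeThrough(new TextDecoderStream()).getReader();
let csv = "";
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  csv += value; // value is already a decoded string here
}
console.log(csv); // "id,level\n1,easy\n2,hard\n"
```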

or, in more old-school style, you could use the { stream: true } option of the TextDecoder#decode() method and handle each chunk one by one in there:

const response = await fetch("levels.csv");
const decoder = new TextDecoder();
const reader = response.body.getReader();
const csv_chunks = []; // accumulate the decoded text chunks
while (true) {
  const {value, done} = await reader.read();
  if (value) {
    csv_chunks.push(decoder.decode(value, {stream: true}));
    // do something with all the chunks we have so far
  }
  if (done) {
    // flush any bytes still buffered in the decoder
    csv_chunks.push(decoder.decode());
    break;
  }
}

But maybe you don't want to handle this response as a stream at all. In that case, it might very well be enough to ask the browser to fetch the whole response body before it decodes it as text. For this, if you need to decode the text as UTF-8, you'd use the Response#text() method:

const response = await fetch("levels.csv");
if (!response.ok) { // don't forget to handle possible network errors
  throw new Error("NetworkError");
}
return response.text();
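Applied to the fetchCsv from the question, the whole function then collapses to this (a sketch; the error message wording is my own):

```javascript
// Corrected fetchCsv: let the browser buffer and decode the whole body
// via Response#text() instead of reading a single stream chunk.
async function fetchCsv() {
  const response = await fetch("levels.csv");
  if (!response.ok) {
    throw new Error(`Failed to fetch levels.csv: ${response.status}`);
  }
  return response.text();
}
```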

And if you need to handle another encoding, then first consume the response as an ArrayBuffer, then decode it to text:

const response = await fetch("levels.csv");
if (!response.ok) { // don't forget to handle possible network errors
  throw new Error("NetworkError");
}
const buf = await response.arrayBuffer();
const decoder = new TextDecoder(encoding); // e.g. "windows-1252"
return decoder.decode(buf);
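For example, with a windows-1252 encoded body (the encoding label here is an assumption for illustration; use whatever encoding the server actually declares):

```javascript
// 0xE9 is "é" in windows-1252, but an invalid standalone byte in UTF-8,
// so passing the right label to TextDecoder matters.
const buf = Uint8Array.from([0x63, 0x61, 0x66, 0xe9]).buffer;
const decoder = new TextDecoder("windows-1252");
console.log(decoder.decode(buf)); // "café"
```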
