
Question regarding FileReader.readAsArrayBuffer()

If I have a big file (e.g. hundreds of MB), does FileReader.readAsArrayBuffer() read the entire file into an ArrayBuffer?
According to [the MDN documentation](https://developer.mozilla.org/en-US/docs/Web/API/FileReader/readAsArrayBuffer), this is what happens.

I have a big .zip file of multiple GB.
I'm concerned about consuming all of the RAM when working on, e.g., a mobile device.
Is there an alternative approach where a file handle is returned and a portion of the file is read as needed?

You can use Blob.stream() for this, which returns a ReadableStream over the Blob's contents:

const bigBlob = new Blob(["somedata".repeat(10e5)]);
console.log("Will read a %s bytes Blob", bigBlob.size);
const reader = bigBlob.stream().getReader();
let chunks = 0;
reader.read().then(function processChunk({ done, value }) {
  if (done) {
    console.log("Stream complete. Read in %s chunks", chunks);
    return;
  }
  // do whatever with the chunk
  // 'value' is a Uint8Array here, not an ArrayBuffer
  chunks++;
  return reader.read().then(processChunk);
}).catch(console.error);
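If you need random access rather than sequential streaming (for example, to parse a zip's central directory, which sits at the end of the file), Blob.slice() is a better fit. Slicing is cheap: it returns a new Blob that merely references a byte range, and memory is only consumed when you materialize that range. A minimal sketch, where readRange is a hypothetical helper, not a built-in API:

// Read only the bytes [start, start + length) of a File/Blob.
// slice() itself reads nothing; it just creates a view on the range.
async function readRange(blob, start, length) {
  const slice = blob.slice(start, start + length);
  // Only this slice is loaded into memory as an ArrayBuffer.
  return slice.arrayBuffer();
}

// Example: the zip End of Central Directory record is the last 22 bytes
// of the archive, assuming it has no trailing comment.
// const eocd = await readRange(file, file.size - 22, 22);

For a multi-GB zip this lets you locate the central directory first and then read only the entries you need, instead of buffering the whole archive.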
