
What's the proper way to handle back-pressure in a node.js Transform stream?

## Intro

These are my first adventures in writing the node.js server side. It's been fun so far, but I'm having some difficulty understanding the proper way to implement something related to node.js streams.

### Problem

For testing and learning purposes I'm working with large files whose content is zlib compressed. The compressed content is binary data where each packet is 38 bytes long. I'm trying to create a resulting file that looks almost identical to the original file, except that there is an uncompressed 31-byte header for every 1024 38-byte packets.

### Original file content (decompressed)

+----------+----------+----------+----------+
| packet 1 | packet 2 |  ......  | packet N |
| 38 bytes | 38 bytes |  ......  | 38 bytes |
+----------+----------+----------+----------+

### Resulting file content

+----------+--------------------------------+----------+--------------------------------+
| header 1 |    1024 38 byte packets        | header 2 |    1024 38 byte packets        |
| 31 bytes |       zlib compressed          | 31 bytes |       zlib compressed          |
+----------+--------------------------------+----------+--------------------------------+

As you can see, this is somewhat of a translation problem. Meaning, I am taking some source stream as input and slightly transforming it into some output stream. Therefore, it felt natural to implement a Transform stream.

The class simply attempts to accomplish the following (a skeleton sketch follows the list):

  1. Take a stream as input.
  2. zlib inflate the chunks of data to count the number of packets, gather 1024 of them together, zlib deflate them, and prepend a header.
  3. Pass the new resulting chunk along through the pipe via this.push(chunk).
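For reference, here is a minimal skeleton of such a class (a sketch only, not the asker's actual code: it assumes the inflate happens upstream and uses a zero-filled placeholder for the 31-byte header):

const { Transform } = require('stream');
const zlib = require('zlib');

const PACKET_SIZE = 38;
const PACKETS_PER_BLOCK = 1024;
const BLOCK_SIZE = PACKET_SIZE * PACKETS_PER_BLOCK; // 38912 bytes

// Sketch: frame already-inflated input into header + deflated blocks.
class MyTranslate extends Transform {
    constructor(options) {
        super(options);
        this._pending = Buffer.alloc(0);
    }

    _transform(chunk, encoding, callback) {
        this._pending = Buffer.concat([this._pending, chunk]);

        while (this._pending.length >= BLOCK_SIZE) {
            const block = this._pending.slice(0, BLOCK_SIZE);
            this._pending = this._pending.slice(BLOCK_SIZE);

            this.push(Buffer.alloc(31));        // placeholder 31-byte header
            this.push(zlib.deflateSync(block)); // push may return false here
        }
        callback();
    }
}

module.exports.createMyTranslate = () => new MyTranslate();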

The use case would be something like:

var fs = require('fs');
var me = require('./me'); // Where my Transform stream code sits
var inp = fs.createReadStream('depth_1000000');
var out = fs.createWriteStream('depth_1000000.out');
inp.pipe(me.createMyTranslate()).pipe(out);

### Question(s)

Assuming Transform is a decent choice for this use case, I seem to be running into a possible back-pressure issue. My call to this.push(chunk) within _transform keeps returning false. Why would this be, and how does one handle such things?

This question from 2013 is all I was able to find on how to deal with "back pressure" when creating node Transform streams.

From the node 7.10.0 Transform stream and Readable stream documentation, what I gathered was that once push returns false, nothing else should be pushed until _read is called.

The Transform documentation doesn't mention _read except to say that the base Transform class implements it (and _write). I found the information about push returning false and _read being called in the Readable stream documentation.
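To illustrate that contract with a hypothetical Readable (the Counter class and its limit are illustrative, not from the docs): keep pushing until push() returns false, then do nothing until node calls _read() again:

const { Readable } = require('stream');

class Counter extends Readable {
    constructor(limit) {
        super();
        this._n = 0;
        this._limit = limit; // hypothetical example parameter
    }

    _read(size) {
        // Push until the internal buffer fills (push() returns false);
        // node will call _read() again once the buffer drains.
        while (this._n < this._limit) {
            if (!this.push(`${this._n++}\n`))
                return;
        }
        this.push(null); // end of stream
    }
}

new Counter(1000000).pipe(process.stdout);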

The only other authoritative comment I found on Transform back-pressure mentioned it only as an issue, in the comment at the top of the node source file _stream_transform.js.

Here's the part of that comment about back-pressure:

// This way, back-pressure is actually determined by the reading side,
// since _read has to be called to start processing a new chunk.  However,
// a pathological inflate type of transform can cause excessive buffering
// here.  For example, imagine a stream where every byte of input is
// interpreted as an integer from 0-255, and then results in that many
// bytes of output.  Writing the 4 bytes {ff,ff,ff,ff} would result in
// 1kb of data being output.  In this case, you could write a very small
// amount of input, and end up with a very large amount of output.  In
// such a pathological inflating mechanism, there'd be no way to tell
// the system to stop doing the transform.  A single 4MB write could
// cause the system to run out of memory.
//
// However, even in such a pathological case, only a single written chunk
// would be consumed, and then the rest would wait (un-transformed) until
// the results of the previous transformed chunk were consumed.
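For illustration, the "pathological inflate type of transform" that comment describes can be sketched like this (a hypothetical transform, not from the node source):

const { Transform } = require('stream');

// Each input byte n produces n bytes of output, so the 4-byte write
// {ff,ff,ff,ff} yields ~1KB, and a single 4MB write could exhaust
// memory if nothing respected back-pressure.
const pathological = new Transform({
    transform(chunk, encoding, callback) {
        for (const byte of chunk)
            this.push(Buffer.alloc(byte, 0x2a)); // n -> n asterisks
        callback();
    }
});

pathological.pipe(process.stdout);
pathological.end(Buffer.from([0xff, 0xff, 0xff, 0xff])); // ~1020 bytes out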

### Solution Example

Here is the solution I pieced together to handle the back-pressure in a Transform stream, which I'm fairly sure works. (I haven't written any real tests, which would require writing a Writable stream that controls the back-pressure.)

This is a rudimentary line transform which needs work as a line transform, but it does demonstrate handling the "back pressure".

const stream = require('stream');

class LineTransform extends stream.Transform
{
    constructor(options)
    {
        super(options);

        this._lastLine = "";
        this._continueTransform = null;
        this._transforming = false;
        this._debugTransformCallCount = 0;
    }

    _transform(chunk, encoding, callback)
    {
        if (encoding === "buffer")
            return callback(new Error("Buffer chunks not supported"));

        if (this._continueTransform !== null)
            return callback(new Error("_transform called before previous transform has completed."));

        // DEBUG: Uncomment for debugging help to see what's going on
        //console.error(`${++this._debugTransformCallCount} _transform called:`);

        // Guard (so we don't call _continueTransform from _read while it is being
        // invoked from _transform)
        this._transforming = true;

        // Do our transforming (in this case splitting the big chunk into lines)
        let lines = (this._lastLine + chunk).split(/\r\n|\n/);
        this._lastLine = lines.pop();

        // In order to respond to "back pressure" create a function
        // that will push all of the lines stopping when push returns false,
        // and then resume where it left off when called again, only calling
        // the "callback" once all lines from this transform have been pushed.
        // Resuming (until done) will be done by _read().
        let nextLine = 0;
        this._continueTransform = () =>
            {
                while (nextLine < lines.length)
                {
                    if (!this.push(lines[nextLine++] + "\n"))
                    {
                        // We've got more to push, but we got back pressure,
                        // so stop and wait for _read() to call us again.
                        return;
                    }
                }

                // DEBUG: Uncomment for debugging help to see what's going on
                //console.error(`_continueTransform ${this._debugTransformCallCount} finished\n`);

                // All lines are pushed, remove this function from the LineTransform instance
                this._continueTransform = null;
                return callback();
            };

        // Start pushing the lines
        this._continueTransform();

        // Turn off guard allowing _read to continue the transform pushes if needed.
        this._transforming = false;
    }

    _flush(callback)
    {
        if (this._lastLine.length > 0)
        {
            this.push(this._lastLine);
            this._lastLine = "";
        }

        return callback();
    }

    _read(size)
    {
        // DEBUG: Uncomment for debugging help to see what's going on
        //if (this._transforming)
        //    console.error(`_read called during _transform ${this._debugTransformCallCount}`);

        // If a transform has not pushed every line yet, continue that transform
        // otherwise just let the base class implementation do its thing.
        if (!this._transforming && this._continueTransform !== null)
            this._continueTransform();
        else
            super._read(size);
    }
}

I tested the above by running it, with the debug lines uncommented, on a ~10000-line, ~200KB file. Redirect stdout and/or stderr to files to separate the debug statements from the expected output: node test.js > out.log 2> err.log

const fs = require('fs');
let inStrm = fs.createReadStream("testdata/largefile.txt", { encoding: "utf8" });
let lineStrm = new LineTransform({ encoding: "utf8", decodeStrings: false });
inStrm.pipe(lineStrm).pipe(process.stdout);

### Helpful debugging hint

When I first wrote this, I didn't realize that _read could be called before _transform returned, so I hadn't implemented the this._transforming guard and was getting the following error:

Error: no writecb in Transform class
    at afterTransform (_stream_transform.js:71:33)
    at TransformState.afterTransform (_stream_transform.js:54:12)
    at LineTransform._continueTransform (/userdata/mjl/Projects/personal/srt-shift/dist/textfilelines.js:44:13)
    at LineTransform._transform (/userdata/mjl/Projects/personal/srt-shift/dist/textfilelines.js:46:21)
    at LineTransform.Transform._read (_stream_transform.js:167:10)
    at LineTransform._read (/userdata/mjl/Projects/personal/srt-shift/dist/textfilelines.js:56:15)
    at LineTransform.Transform._write (_stream_transform.js:155:12)
    at doWrite (_stream_writable.js:331:12)
    at writeOrBuffer (_stream_writable.js:317:5)
    at LineTransform.Writable.write (_stream_writable.js:243:11)

Looking at the node implementation, I realized that this error means the callback given to _transform was called more than once. I couldn't find much information about this error either, so I thought I'd include what I figured out here.
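For anyone hitting the same error, a minimal reproduction (hypothetical, for illustration only) is to invoke the _transform callback twice; on node versions of that era the second invocation emits "no writecb in Transform class" (newer versions report ERR_MULTIPLE_CALLBACK instead):

const { Transform } = require('stream');

const bad = new Transform({
    transform(chunk, encoding, callback) {
        callback(null, chunk);
        callback(null, chunk); // second call triggers the error above
    }
});

bad.on('error', err => console.error(err.message));
bad.write('hello');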

I think Transform is suitable for this, but I would perform the inflate as a separate step in the pipeline.

Here's a quick and largely untested example:

var zlib        = require('zlib');
var stream      = require('stream');
var transformer = new stream.Transform();

// Properties used to keep internal state of transformer.
transformer._buffers    = [];
transformer._inputSize  = 0;
transformer._targetSize = 1024 * 38;

// Dump one 'output packet'
transformer._dump       = function(done) {
  // concatenate buffered chunks
  var buffer = Buffer.concat(this._buffers);

  // Take the first 1024 packets.
  var packetBuffer = buffer.slice(0, this._targetSize);

  // Keep the rest and reset the counter.
  this._buffers   = [ buffer.slice(this._targetSize) ];
  this._inputSize = this._buffers[0].length;

  // output header
  this.push('HELLO WORLD');

  // output compressed packet buffer
  zlib.deflate(packetBuffer, function(err, compressed) {
    if (err) {
      return done(err);
    }
    this.push(compressed);
    done();
  }.bind(this));
};

// Main transformer logic: buffer chunks and dump them once the
// target size has been met.
transformer._transform  = function(chunk, encoding, done) {
  this._buffers.push(chunk);
  this._inputSize += chunk.length;

  if (this._inputSize >= this._targetSize) {
    this._dump(done);
  } else {
    done();
  }
};

// Flush any remaining buffers.
transformer._flush = function(done) {
  this._dump(done);
};

// Example:
var fs = require('fs');
fs.createReadStream('depth_1000000')
  .pipe(zlib.createInflate())
  .pipe(transformer)
  .pipe(fs.createWriteStream('depth_1000000.out'));

push will return false if the stream you're writing to (in this case, the file output stream) has too much data buffered. Since you're writing to disk, this makes sense: you are processing data faster than you can write it out.

out的緩沖區已滿時,您的轉換流將無法推送,並開始緩沖數據本身。 如果該緩沖區應填滿,則inp將開始填滿。 這就是事情應該如何運作。 管道流只會以鏈中最慢的鏈接可以處理的速度處理數據(一旦您的緩沖區已滿)。

Ran into a similar problem lately, needing to handle back-pressure in an inflating transform stream. The secret to handling push() returning false is to register for and handle the 'drain' event on the stream:

_transform(data, enc, callback) {
  const continueTransforming = () => {
    // ... do some work / parse the data, keep state of where we're at etc.
    if (!this.push(event)) {
      // will get called again when the reader can consume more data
      this._readableState.pipes.once('drain', continueTransforming);
      return;
    }
    if (allDone)
      callback();
  }
  continueTransforming()
}

Note this is a bit hacky, as we're reaching into the internals; pipes can even be an array of Readables, but it does work in the common case of ....pipe(transform).pipe(...
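If you do go this route, one small hedge against those internals (a sketch only; _readableState is undocumented and may change between node versions) is to normalize pipes before attaching listeners:

// pipes may be null (no destinations), a single stream, or an array of
// streams, depending on how many destinations have been piped.
function pipedDestinations(transform) {
    const pipes = transform._readableState.pipes;
    if (!pipes)
        return [];
    return Array.isArray(pipes) ? pipes : [pipes];
}

// e.g. inside _transform:
//   for (const dest of pipedDestinations(this))
//       dest.once('drain', continueTransforming);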

It would be great if someone from the Node community could suggest a "correct" way to handle .push() returning false.

I ended up following Ledion's example and created a utility Transform class which assists with back-pressure. The utility adds an async method named addData, which the implementing Transform can await.

'use strict';

const { Transform } = require('stream');

/**
 * The BackPressureTransform class adds a utility method addData which
 * allows for pushing data to the Readable, while honoring back-pressure.
 */
class BackPressureTransform extends Transform {
  constructor(...args) {
    super(...args);
  }

  /**
   * Asynchronously add a chunk of data to the output, honoring back-pressure.
   *
   * @param {String} data
   * The chunk of data to add to the output.
   *
   * @returns {Promise<void>}
   * A Promise resolving after the data has been added.
   */
  async addData(data) {
    // if .push() returns false, it means that the readable buffer is full
    // when this occurs, we must wait for the internal readable to emit
    // the 'drain' event, signalling the readable is ready for more data
    if (!this.push(data)) {
      await new Promise((resolve, reject) => {
        const errorHandler = error => {
          this.emit('error', error);
          reject(error);
        };

        // errorHandler is an arrow function, so no bind is needed
        this._readableState.pipes.on('error', errorHandler);
        this._readableState.pipes.once('drain', () => {
          this._readableState.pipes.removeListener('error', errorHandler);
          resolve();
        });
      });
    }
  }
}

module.exports = {
  BackPressureTransform
};

Using this utility class, my Transforms now look like this:

'use strict';

const { BackPressureTransform } = require('./back-pressure-transform');

/**
 * The Formatter class accepts the transformed row to be added to the output file.
 * The class provides generic support for formatting the result file.
 */
class Formatter extends BackPressureTransform {
  constructor() {
    super({
      encoding: 'utf8',
      readableObjectMode: false,
      writableObjectMode: true
    });

    this.anyObjectsWritten = false;
  }

  /**
   * Called when the data pipeline is complete.
   *
   * @param {Function} callback
   * The function which is called when final processing is complete.
   *
   * @returns {Promise<void>}
   * A Promise resolving after the flush completes.
   */
  async _flush(callback) {
    // if any object is added, close the surrounding array
    if (this.anyObjectsWritten) {
      await this.addData('\n]');
    }

    callback(null);
  }

  /**
   * Given the transformed row from the ETL, format it to the desired layout.
   *
   * @param {Object} sourceRow
   * The transformed row from the ETL.
   *
   * @param {String} encoding
   * Ignored in object mode.
   *
   * @param {Function} callback
   * The callback function which is called when the formatting is complete.
   *
   * @returns {Promise<void>}
   * A Promise resolving after the row is transformed.
   */
  async _transform(sourceRow, encoding, callback) {
    // before the first object is added, surround the data as an array
    // between each object, add a comma separator
    await this.addData(this.anyObjectsWritten ? ',\n' : '[\n');

    // update state
    this.anyObjectsWritten = true;

    // add the object to the output
    const parsed = JSON.stringify(sourceRow, null, 2).split('\n');
    for (const [index, row] of parsed.entries()) {
      // prepend the row with 2 additional spaces since we're inside a larger array
      await this.addData(`  ${row}`);

      // add line breaks except for the last row
      if (index < parsed.length - 1) {
        await this.addData('\n');
      }
    }

    callback(null);
  }
}

module.exports = {
  Formatter
};
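A quick usage sketch (the rows and the require path are hypothetical; Formatter is writable in object mode and readable as utf8 text):

const { Formatter } = require('./formatter'); // illustrative path

const fmt = new Formatter();
fmt.pipe(process.stdout);

// Object-mode writes; the output is a pretty-printed JSON array.
fmt.write({ id: 1, name: 'first' });
fmt.write({ id: 2, name: 'second' });
fmt.end();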

I think Mike Lippert's answer is closest to the truth. It seems that waiting for a new _read() call from the reading stream is the only way the Transform is actively notified that the reader is ready. I wanted to share a simple example of how I temporarily override _read().

_transform(buf, enc, callback) {

  // prepend any unused data from the prior chunk.
  if (this.prev) {
    buf = Buffer.concat([ this.prev, buf ]);
    this.prev = null;
  }

  // will keep transforming until buf runs low on data.
  if (buf.length < this.requiredData) {
    this.prev = buf;
    return callback();
  }

  var result = // do something with data...
  var nextbuf = buf.slice(this.requiredData);

  if (this.push(result)) {
    // Continue transforming this chunk
    this._transform(nextbuf, enc, callback);
  }
  else {
    // Node is warning us to slow down (applying "backpressure")
    // Temporarily override _read request to continue the transform
    this._read = function() {
        delete this._read;
        this._transform(nextbuf, enc, callback);
    };
  }
}

I tried to find the comment mentioned above in the transform source code, and since the reference link kept changing, I'll leave it here for reference:

// a transform stream is a readable/writable stream where you do
// something with the data.  Sometimes it's called a "filter",
// but that's not a great name for it, since that implies a thing where
// some bits pass through, and others are simply ignored.  (That would
// be a valid example of a transform, of course.)
//
// While the output is causally related to the input, it's not a
// necessarily symmetric or synchronous transformation.  For example,
// a zlib stream might take multiple plain-text writes(), and then
// emit a single compressed chunk some time in the future.
//
// Here's how this works:
//
// The Transform stream has all the aspects of the readable and writable
// stream classes.  When you write(chunk), that calls _write(chunk,cb)
// internally, and returns false if there's a lot of pending writes
// buffered up.  When you call read(), that calls _read(n) until
// there's enough pending readable data buffered up.
//
// In a transform stream, the written data is placed in a buffer.  When
// _read(n) is called, it transforms the queued up data, calling the
// buffered _write cb's as it consumes chunks.  If consuming a single
// written chunk would result in multiple output chunks, then the first
// outputted bit calls the readcb, and subsequent chunks just go into
// the read buffer, and will cause it to emit 'readable' if necessary.
//
// This way, back-pressure is actually determined by the reading side,
// since _read has to be called to start processing a new chunk.  However,
// a pathological inflate type of transform can cause excessive buffering
// here.  For example, imagine a stream where every byte of input is
// interpreted as an integer from 0-255, and then results in that many
// bytes of output.  Writing the 4 bytes {ff,ff,ff,ff} would result in
// 1kb of data being output.  In this case, you could write a very small
// amount of input, and end up with a very large amount of output.  In
// such a pathological inflating mechanism, there'd be no way to tell
// the system to stop doing the transform.  A single 4MB write could
// cause the system to run out of memory.
//
// However, even in such a pathological case, only a single written chunk
// would be consumed, and then the rest would wait (un-transformed) until
// the results of the previous transformed chunk were consumed.

I found a solution similar to Ledion's without needing to dive into the internals of the current stream pipeline. You can achieve this via:

_transform(data, enc, callback) {
  const continueTransforming = () => {
    // ... do some work / parse the data, keep state of where we're at etc.
    if (!this.push(event)) {
      // will get called again when the reader can consume more data
      this.once('data', continueTransforming);
      return;
    }
    if (allDone)
      callback();
  }
  continueTransforming()
}

This works because 'data' is only emitted when someone downstream is consuming the readable buffer that your Transform this.push()es into. So whenever downstream has the capacity to pull from that buffer, you should be able to begin writing back to it.

The pitfall of listening for 'drain' on the downstream stream (besides reaching into node's internals) is that you are also relying on your Transform's own buffer having been drained as well, and there is no guarantee that it has been at the moment the downstream emits 'drain'.
