
Is there any race condition below?

I have the below code in my Express server (cut down for brevity). I have a common object that I'm adding to, modifying, and reading in three different RESTful endpoints. Since all HTTP requests in Node.js are handled asynchronously, I could receive a PUT and a GET request at the same time. So even though the PUT happened, the status might not be updated yet, and my GET could return a slightly stale response?

From what I understand, and from my testing, there is no race condition here, because updating the result object is a synchronous operation and all async operations have to wait for it. Can somebody give a better explanation of whether this is correct or not?

    var obj = {};
    const exec = require('child_process').exec;
    app.post('/foo', (req, res) => {
         var result = {};
         result.id = generateSomeRandomId();
         result.failed = 0;
         result.status = 'running';
         //execute some command and update result
         const child = exec('some command');
         child.stdout.on('data',  (data) => {
             //some logic
         });
         child.stderr.on('data',  (data) => {
             result.failed = result.failed + 1;
          });
         child.on('close',  (code, signal) => {
              if (signal !== null && signal !== undefined) {
                    result.status = 'cancelled';
              } else {
                    result.status = 'completed';
                    result.runtime = calculateRunTime();
                }
         });
         result.pid = child.pid;
         obj[result.id] = result; 
         res.send(result); 
    });

    app.put('/foo/:id', (req, res) => {
         var result =  obj[req.params.id];
         if (result.status === 'running' && result.pid !== undefined) {
              kill(result.pid, 'SIGKILL');
              result.status = 'cancelled';
              result.runtime = calculateRunTime();
         }
         res.send(result);
    });
    app.get('/foo/:id', (req, res) => {
         var result =  obj[req.params.id];
         res.send(result);
    });

It's just an idea, but maybe a Promise could be helpful here:

    var obj = {};
    const exec = require('child_process').exec;
    app.post('/foo', (req, res) => {
         var result = {};
         result.id = generateSomeRandomId();
         result.failed = 0;
         result.status = 'running';
         const child = exec('some command');
         child.stdout.on('data', (data) => {
             //some logic
         });

         result.promise = new Promise(resolve => {
           child.stderr.on('data', (data) => {
               result.failed = result.failed + 1;
               resolve(false);
           });
           child.on('close', (code, signal) => {
               // ...
               resolve(true);
           });
         });

         result.pid = child.pid;
         obj[result.id] = result;
         res.send(result);
    });

    app.get('/foo/:id', (req, res) => {
         var result = obj[req.params.id];
         if (result.status === 'running') {
           result.promise.then(() => res.send(result));
         }
         else {
           res.send(result);
         }
    });

In this case the GET would respond only when the child is done, via stderr output or the 'close' event.

You don't have anything I would call a "race condition"; there is an element of uncertainty at play here, but in practice it's not likely to matter.

It looks like your post starts a process and returns the ID, your put cancels the process, and your get returns the current state of the process. From this I gather that you'll never be able to get until your post completes and provides the ID.

If you make a GET call that is received and returned before your exec's asynchronous listeners complete, you'll get whatever the last in-progress state is - I assume that's by design. So the only possible conflict here is if you've made a PUT call to halt your process.

Both your PUT and your GET are synchronous when it comes to interacting with your result object, so whichever one is received first is the one that will complete first. You can ignore the process-cancellation signal itself for our purposes, because the kill alone doesn't modify the result object. There's no guarantee that requests will be received in the same order they were sent by the client(s); that may or may not be a practical concern in your scenario.
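The point above can be demonstrated with a minimal standalone sketch (not the author's server; `putHandler` and `getHandler` are hypothetical stand-ins for the route handlers): callbacks queued on Node's event loop run one at a time, each synchronous block to completion, so the PUT's read-check-write on the shared object can never interleave with the GET's read.

```javascript
// Minimal standalone sketch: Node runs queued callbacks one at a time,
// so synchronous blocks touching a shared object never interleave.
const obj = { status: 'running', pid: 123 };
const order = [];

function putHandler() {
  // read-check-write happens atomically within this synchronous block
  if (obj.status === 'running') {
    obj.status = 'cancelled';
    order.push('put');
  }
}

function getHandler() {
  // sees either the fully-old or fully-new state, never a half-update
  order.push('get:' + obj.status);
}

// Queue both; they execute in FIFO order, each running to completion.
setImmediate(putHandler);
setImmediate(getHandler);
setImmediate(() => console.log(order.join(',')));  // prints "put,get:cancelled"
```

Swapping the two `setImmediate` lines flips the observed status, which is exactly the "whichever is received first completes first" behaviour described above.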

I believe (though my memory may be faulty here), that if you were using cluster to handle requests in different processes, you wouldn't be able to pass data around via a shared object anyway, so any complexity added by that possibility is already ruled out.

So, variances in network performance and reliability are your only real wildcard here. The server will handle requests in the order they come in and give you the expected results. If you only have a single client, you can wait for the response to your previous request before sending the next one; that could slow your performance unacceptably, but it makes things more or less bulletproof. Otherwise, just send off your requests and don't worry about it. Just make your app robust enough to recognize and handle a second cancellation request, even if you've already cancelled the process.
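One way to make the cancellation robust is to keep the PUT idempotent, sketched here as a plain function so it can be exercised without Express (the `cancel` helper and the injected `kill` are hypothetical, mirroring the shape of the question's handler):

```javascript
// Hedged sketch of an idempotent cancel: a second cancellation request
// for the same id is a harmless no-op that just reports the current state.
function cancel(obj, id, kill) {
  const result = obj[id];
  if (result === undefined) {
    return { code: 404, body: { error: 'unknown id' } };
  }
  if (result.status === 'running' && result.pid !== undefined) {
    kill(result.pid, 'SIGKILL');        // only kill a still-running process
    result.status = 'cancelled';
  }
  // already 'cancelled' or 'completed': fall through without killing again
  return { code: 200, body: result };
}

// Exercise it with a fake kill so nothing is actually signalled:
const jobs = { abc: { id: 'abc', status: 'running', pid: 4242 } };
const killed = [];
const fakeKill = (pid, sig) => killed.push(pid + ':' + sig);

const first = cancel(jobs, 'abc', fakeKill);
const second = cancel(jobs, 'abc', fakeKill);  // duplicate request
console.log(first.code, second.code, killed.length);  // 200 200 1
```

The duplicate request still returns 200 with the current state, and the process is only signalled once; an unknown id yields 404 instead of the `TypeError` the original handler would throw.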
