
Socket.io: How to limit the size of emitted data from client to the websocket server

I have a node.js server with socket.io. My clients use socket.io to connect to the node.js server.

Data is transmitted from clients to server in the following way:

On the client

var Data = {'data1':'somedata1', 'data2':'somedata2'};
socket.emit('SendToServer', Data);

On the server

socket.on('SendToServer', function(Data) {
    for (var key in Data) {
           // Do some work with Data[key]
    }
});

Suppose that somebody modifies his client and emits to the server a really big chunk of data. For example:

var Data = {'data1':'somedata1', 'data2':'somedata2', ... and so on until they reach, say, 'data100000':'somedata100000'};
socket.emit('SendToServer', Data);

Because of this loop on the server...

for (var key in Data) {
       // Do some work with Data[key]
}

... the server would take a very long time to loop through all this data.

So, what is the best solution to prevent such scenarios?

Thanks

EDIT:

I used this function to validate the object:

function ValidateObject(obj) {
    var i = 0;
    for (var key in obj) {
        i++;
        if (i > 10) { // object has too many keys
            return false;
        }
    }
    return true;
}

So the easiest thing to do is just check the size of the data before doing anything with it.

socket.on('someevent', function (data) {
    if (JSON.stringify(data).length > 10000) // roughly 10 KB
        return;

    console.log('valid data: ' + JSON.stringify(data));
});

To be honest, this is a little inefficient: your client sends the message, socket.io parses it into an object, and then you serialize it back into a string just to measure it.

If you want to be more efficient, you should also enforce a maximum message length on the client side.
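
For instance, a minimal client-side guard might look like the sketch below; safeEmit and the 10 KB threshold are illustrative, not part of any API:

// Client-side guard: refuse to emit payloads over a chosen threshold.
// The helper name, threshold, and event name are illustrative.
var MAX_MESSAGE_LENGTH = 10000; // roughly 10 KB of serialized JSON

function safeEmit(socket, event, data) {
    if (JSON.stringify(data).length > MAX_MESSAGE_LENGTH) {
        console.warn('Refusing to emit oversized payload for event: ' + event);
        return false;
    }
    socket.emit(event, data);
    return true;
}

safeEmit(socket, 'SendToServer', Data);

Keep in mind that a client-side check only helps well-behaved clients; a malicious user can simply remove it, so the server-side check is still required.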

For even more efficiency (and to protect against malicious users), you should discard packets as they come into Socket.io if their length gets too long. You'll either need to figure out a way to extend the prototypes to do what you want, or you'll need to pull the source and modify it yourself. Also, I haven't looked into the socket.io protocol in detail, but I'm sure you'll have to do more than just "discard" the packet; some packets are ack-backs and nack-backs, and you don't want to mess with those either.
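
That said, newer Socket.IO releases (1.7.2 and later) expose socket.use(), a per-packet middleware, so you may not need to patch the source for the per-event case. A rough sketch, with an arbitrary 10 KB limit:

// Server-side sketch, assuming io is your Socket.IO server and a version
// >= 1.7.2 where socket.use() provides per-packet middleware.
io.on('connection', function (socket) {
    socket.use(function (packet, next) {
        // packet is an array: [eventName, arg1, arg2, ...]
        if (JSON.stringify(packet).length > 10000) {
            return next(new Error('payload too large'));
        }
        next();
    });

    socket.on('SendToServer', function (data) {
        // Only reasonably sized payloads reach this point.
    });
});

Note that this still runs after the packet has been parsed; rejecting data before it is even buffered requires the transport-level limits discussed further down.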


Side note: If you ONLY care about the number of keys then you can use Object.keys(obj) which returns an array of keys:

if (Object.keys(obj).length > 10)
    return;

You may consider switching to socket.io-stream and handling the input stream directly.

This way you have to join the chunks and parse the JSON input manually, but you get the chance to close the connection as soon as the input length exceeds whatever threshold you decide on.

Otherwise (staying with the plain socket.io approach), your callback won't be called until the whole payload has been received. That doesn't block the JS main thread, but it wastes memory, CPU, and bandwidth.
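
A rough sketch of that idea, assuming the socket.io-stream package and a client that sends its payload through ss.createStream(); the event name and the 10 KB threshold are illustrative:

// Server-side sketch using socket.io-stream. Limits are illustrative.
var ss = require('socket.io-stream');

io.on('connection', function (socket) {
    ss(socket).on('SendToServer', function (stream) {
        var received = 0;
        var chunks = [];

        stream.on('data', function (chunk) {
            received += chunk.length;
            if (received > 10000) {
                // Threshold exceeded: drop the client before buffering more.
                socket.disconnect(true);
                return;
            }
            chunks.push(chunk);
        });

        stream.on('end', function () {
            var data = JSON.parse(Buffer.concat(chunks).toString());
            // Do some work with data
        });
    });
});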

On the other hand, if your only goal is to avoid overloading your processing algorithm, you can keep limiting it by counting the elements in the received object. For instance:

if (Object.keys(data).length > n) return; // Where n is your maximum acceptable number of elements.
// But, anyway, this doesn't control the actual size of each element.
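
If you want to bound the per-element size as well, a hypothetical helper (isAcceptable here, with made-up limits) could combine both checks:

// Hypothetical helper: reject objects with too many keys, or whose
// individual string values exceed a per-field limit. Not a library API.
function isAcceptable(data, maxKeys, maxValueLength) {
    var keys = Object.keys(data);
    if (keys.length > maxKeys) return false;
    return keys.every(function (key) {
        var value = data[key];
        return typeof value !== 'string' || value.length <= maxValueLength;
    });
}

socket.on('SendToServer', function (data) {
    if (!isAcceptable(data, 10, 1000)) return;
    // Do some work with data
});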

Well, I'll go with the JavaScript side of things... let's say you don't want to allow users to go over a certain amount of data; you can just:

var allowedSize = 10;

Object.keys(Data).forEach(function (key, idx) {
    if (idx >= allowedSize) return; // skip everything past the limit
    // Do some work with Data[key]
});

This not only lets you cycle through the elements of the object properly, it also makes the limit easy to apply. (Obviously, it can also truncate your own legitimate requests, so choose the limit carefully.)

EDIT: Because the question is about how to handle server overload, you should look into load balancing with nginx: http://nginx.com/blog/nginx-nodejs-websockets-socketio/ - you could run additional servers so that if one client creates a bottleneck, the others remain available. Even if you solve this particular problem, there are still others, such as a client sending lots of small packets, and so on.
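
A minimal nginx configuration along the lines of that article might look like the following; host names, ports, and limits are placeholders:

# Sketch of an nginx load-balancing config for Socket.IO back ends.
upstream socketio_nodes {
    ip_hash;  # sticky sessions, so a client keeps hitting the same node
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 80;

    location / {
        proxy_pass http://socketio_nodes;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # required for WebSocket upgrade
        proxy_set_header Connection "upgrade";
        client_max_body_size 100k;                # also caps oversized HTTP bodies
    }
}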

The Socket.io library seems to be a bit problematic here: rejecting over-sized messages is not available at the WebSockets layer. There was a pull request three years ago that gives an idea of how it might be solved:

https://github.com/Automattic/socket.io/issues/886

However, because the WebSocket protocol does have a finite packet size, it would allow you to stop processing packets once a certain size has been reached. The most effective place to do this is before the packet is transformed into an object on the JavaScript heap. This means you would have to handle the WebSocket transport manually; that is what socket.io does for you, but it does not take the packet size into account.

If you want to implement your own WebSocket layer, this WebSocket-Node implementation might be useful:

https://github.com/theturtle32/WebSocket-Node

If you do not need to support older browsers, a pure-WebSockets approach might be a suitable solution.
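
For what it's worth, WebSocket-Node lets you cap sizes at the transport level via its maxReceivedFrameSize and maxReceivedMessageSize options; a sketch with illustrative limits:

// Sketch using the 'websocket' package (WebSocket-Node). The limits and
// port are illustrative; exceeding a limit closes the offending connection.
var WebSocketServer = require('websocket').server;
var http = require('http');

var httpServer = http.createServer();
httpServer.listen(8080);

var wsServer = new WebSocketServer({
    httpServer: httpServer,
    maxReceivedFrameSize: 16 * 1024,    // 16 KiB per frame
    maxReceivedMessageSize: 64 * 1024   // 64 KiB per assembled message
});

wsServer.on('request', function (request) {
    var connection = request.accept(null, request.origin);
    connection.on('message', function (message) {
        if (message.type === 'utf8') {
            var data = JSON.parse(message.utf8Data);
            // Do some work with data
        }
    });
});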

Maybe destroy buffer size is what you need.

From the wiki:

  • destroy buffer size defaults to 10E7

Used by the HTTP transports. The Socket.IO server buffers HTTP request bodies up to this limit. This limit is not applied to websocket or flashsockets.
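
Assuming the pre-1.0 Socket.IO configuration API (io.set) that this wiki entry refers to, lowering the limit would look like this; in Socket.IO 1.x and later the closest equivalent is the maxHttpBufferSize server option:

// Pre-1.0 Socket.IO configuration style. The 10 KB value is illustrative.
io.set('destroy buffer size', 10000);

// In Socket.IO 1.x+ the comparable knob is passed to the server instead:
// var io = require('socket.io')(server, { maxHttpBufferSize: 10000 });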
