
Reusing one opened TCP connection between a TCP client and a TCP server

There is a third-party service that exposes a TCP server, to which my Node server (the TCP client) should establish a TCP connection using the Node tls module. Besides being a TCP client, the Node server is also an HTTP server, acting as a kind of proxy between customers coming from a web browser and the third-party TCP server. The common use case is that the browser sends an HTTP request to the Node server, which then communicates with the TCP server over TCP sockets to gather/construct a response and send it back to the browser.

The current solution I had was that for each customer / each HTTP request coming from the web browser, a new, separate TCP connection is established with the TCP server. This solution proved to be bad: it wastes time doing an SSL handshake every time, and the TCP server does not allow more than 50 concurrent connections from a single client. So with this solution it is not possible to have more than 50 customers communicating with the Node server at once.

What would be the standard approach to doing this with the tls module in Node?

What I'm aiming for is a single TCP connection that is always active, established when the Node app starts, and, most importantly, reused for many HTTP requests coming from the web browser.


The first concern I have is how to construct different HTTP responses based on the data coming from the TCP server over the raw TCP socket. The good thing is that I can send a unique token in the headers to the TCP server, describing which action should be taken on the TCP server side:

socket.write(JSON.stringify({
  header: { uniqueToken: '032424242', type: 'create something bla bla' },
  data: { ... }
}))

Having the unique token on the TCP server side guarantees that the JSON, when reassembled from the different chunks coming over the TCP socket and parsed, will contain this uniqueToken, which means I am able to map the JSON back to its HTTP request and return the HTTP response.
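The mapping described above could be sketched as follows. This is a hypothetical illustration (the class and method names are my own, not from the question): each HTTP handler registers its token and awaits a promise, and the shared socket's data handler settles it once a full message for that token has been parsed.

```javascript
// Hypothetical sketch: correlate responses arriving on the one shared
// TCP socket with the HTTP request that triggered them, keyed by the
// uniqueToken carried in the message header.
class PendingRequests {
  constructor() {
    // uniqueToken -> { resolve, reject } of the waiting HTTP handler
    this.pending = new Map();
  }

  // Called when an HTTP request writes a command to the shared socket;
  // the returned promise settles once the matching response arrives.
  register(uniqueToken) {
    return new Promise((resolve, reject) => {
      this.pending.set(uniqueToken, { resolve, reject });
    });
  }

  // Called by the socket's 'data' handler after a complete JSON message
  // has been reassembled and parsed; returns false for unknown tokens.
  settle(message) {
    const entry = this.pending.get(message.header.uniqueToken);
    if (!entry) return false;
    this.pending.delete(message.header.uniqueToken);
    entry.resolve(message.data);
    return true;
  }
}
```

An HTTP handler would then do `const data = await pendingRequests.register(token)` right after writing the command, and build its response from `data`.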

My question is: does the TCP protocol in general guarantee that, in this case, successive chunks will belong to the same response, so that when those chunks are combined and parsed (when '\n\n' occurs) they form one message? In other words, is there any guarantee that chunks will not be mixed? (I'm aware that a chunk containing '\n\n' can belong to two different responses, but I will be able to handle that.)

If that is not possible, then I don't see a way in which the first solution (one connection per response that needs to be created) can be improved. The only way would be to introduce some connection-pooling concept, which as far as I know the tls module does not provide out of the box.
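For completeness, a connection pool on top of the tls module is not much code. The sketch below is my own assumption, not an existing tls API: `connectFn` stands for something like `() => tls.connect(port, host, options)`, and the pool simply pre-opens a fixed number of connections and hands them out round-robin.

```javascript
// A minimal connection-pool sketch. Names (ConnectionPool, connectFn)
// are illustrative assumptions, not part of the tls module.
class ConnectionPool {
  constructor(connectFn, size) {
    // Open all connections up front so no request pays for a handshake.
    this.connections = Array.from({ length: size }, () => connectFn());
    this.next = 0;
  }

  // Hand out connections round-robin; a real pool would also reconnect
  // on 'close'/'error' and stay under the server's 50-connection limit.
  acquire() {
    const connection = this.connections[this.next];
    this.next = (this.next + 1) % this.connections.length;
    return connection;
  }
}
```

Each pooled connection would still need the per-token response matching described earlier, since several requests may share one socket.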

EDIT based on the comments below, a short version of the question: let's say the TCP server needs 2 seconds to send all chunks once it receives the command create something bla bla. If the TCP client sends the command create something bla bla and, 1 millisecond later, sends a second create something bla bla, is there any chance that the TCP server will write a chunk related to the second command before it has written all chunks related to the first command?

... is there any chance that the TCP server will write a chunk related to the second command before it writes all chunks related to the first command?

If I understand your question correctly, you are asking whether a write("AB") followed by a write("CD") on the same socket on the server side could result in the client reading ACDB from the server.

This is not the case if both writes are successful and have actually written all the data to the underlying socket buffer. But since TCP is a stream protocol with no implicit message boundaries, the read on the client side could be ABCD, or AB followed by CD, or A followed by BC followed by D, etc. Thus, to distinguish between the messages from the server, you have to add some application-level message detection, such as an end-of-message marker, a size prefix, or similar.
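With the '\n\n' marker mentioned in the question, the client-side reassembly could look like this sketch (the class name is mine): buffer incoming chunks and cut out a complete message whenever the delimiter occurs. It also covers the case the question raises, where one chunk ends one message and begins the next, because only the text after the last delimiter stays buffered.

```javascript
// Sketch of delimiter-based framing on the client side of the
// shared socket: accumulate chunks, emit complete messages.
class MessageSplitter {
  constructor(delimiter = '\n\n') {
    this.delimiter = delimiter;
    this.buffer = '';
  }

  // Feed a chunk (e.g. from socket.on('data', ...)); returns an array
  // of the complete messages it finished. Partial data stays buffered.
  push(chunk) {
    this.buffer += chunk;
    const parts = this.buffer.split(this.delimiter);
    this.buffer = parts.pop(); // last part may still be incomplete
    return parts;
  }
}
```

Each returned string can then be JSON.parse'd and dispatched by its uniqueToken.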

Also, I restricted the previous statement to the case where both writes are successful and have actually written all the data to the underlying socket buffer. This is not necessarily so. For example, you might use functions which do a buffered write, like (in C) fwrite instead of write. In this case you usually don't control which parts of the buffer are written at which time, so it might be that fwrite("AB") results in "A" written to the socket while "B" is kept in the buffer. If you then have another buffered writer which uses the same underlying file descriptor (i.e. socket) but not the same buffer, you could actually end up with something like ACDB sent to the underlying socket and thus to the client.

This case could even happen with an unbuffered write that was not fully successful, i.e. if a write("AB") has only written "A" and signals through its return value that "B" needs to be written later. If you then have a multi-threaded application with insufficient synchronization between threads, you could end up with the first thread sending "A" to the socket in its incomplete attempt to write "AB", followed by another thread sending "CD" successfully, and then the first thread completing the send by writing "B". In this case you also end up with "ACDB" on the socket.

In summary: the TCP layer guarantees that the receive order is the same as the send order, but user space (i.e. the application) needs to make sure that it really sends the data in the right order to the socket. Also, TCP has no message boundaries, so the distinction between messages inside the TCP stream needs to be implemented inside the application, using message boundaries, a length prefix, a fixed message size, or similar.
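As an alternative to a delimiter, the length-prefix approach mentioned above could be sketched like this (the function and class names are illustrative, not from any library): each message is preceded by a 4-byte big-endian length, so the receiver always knows exactly where one message ends and the next begins, regardless of how TCP splits the chunks.

```javascript
// Sketch of length-prefix framing: 4-byte big-endian length, then body.
function encodeFrame(payload) {
  const body = Buffer.from(payload, 'utf8');
  const header = Buffer.alloc(4);
  header.writeUInt32BE(body.length, 0);
  return Buffer.concat([header, body]);
}

class FrameDecoder {
  constructor() {
    this.buffer = Buffer.alloc(0);
  }

  // Feed raw socket chunks; returns the payloads of all frames that
  // are now complete. Incomplete frames remain buffered.
  push(chunk) {
    this.buffer = Buffer.concat([this.buffer, chunk]);
    const frames = [];
    while (this.buffer.length >= 4) {
      const length = this.buffer.readUInt32BE(0);
      if (this.buffer.length < 4 + length) break; // frame not complete yet
      frames.push(this.buffer.subarray(4, 4 + length).toString('utf8'));
      this.buffer = this.buffer.subarray(4 + length);
    }
    return frames;
  }
}
```

Unlike a delimiter, this scheme also works when the payload itself may contain '\n\n'.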
