
How to exchange data using C++ network programming?

------------------update 2015.12.13---------------

For now, I am developing a program with network functionality on Windows.

Assume there are four PCs. I choose one as the master node and the others as slave nodes. The slave nodes need to send XML files (only once) and some other real-time information (e.g. data every 5 seconds) to the master node.

I have no knowledge of network programming. Is there a framework that can handle this?

As for how to solve this problem, there are three main points on which I can't make a choice:

  1. Which library or framework is easy to use? Winsock? POCO? Qt's network module, or something else?

  2. Which level of the network stack should I work at? Raw sockets, or an already packaged HTTP request so that I don't have to care about how the sockets are implemented?

  3. Since some real-time information has to be passed, are there any important points to keep in mind when designing this network functionality?

One possibility would be to abstract the "network" part away entirely, and use one of the many RPC mechanisms, so the clients just pass parameters to a function, and shortly after that, a function is called on your server with the same parameters.

Just for one example, I've used Apache Thrift like this a number of times. Recently I've done a bit of work using Google RPC as well. Both seem to work, though Thrift is definitely the more mature of the two. Depending on your needs, there are quite a few more possibilities out there to choose from.
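To make that concrete, here is a rough conceptual sketch (not actual Thrift or gRPC code; the service name and methods are made up for this question) of the kind of interface you would describe in the framework's IDL. The framework then generates the client stub for the slaves and the server skeleton for the master, so sending data really is just a method call:

```cpp
#include <string>

// Hypothetical service interface. With Thrift or gRPC you describe something
// like this in an IDL (.thrift / .proto) file, and the toolchain generates a
// client stub for the slaves and a server skeleton for the master.
class MasterService {
public:
    virtual ~MasterService() = default;

    // Called once per slave: uploads the XML configuration file.
    virtual void registerSlave(const std::string& slaveId,
                               const std::string& xmlPayload) = 0;

    // Called every few seconds with the latest real-time data.
    virtual void reportStatus(const std::string& slaveId,
                              double value,
                              long long timestampMs) = 0;
};
```

The slaves end up calling `reportStatus(...)` on a generated client object, and the same call arrives at your implementation on the master; serialization and transport are handled by the framework.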

There are many ways to do this. The easiest is to not use network APIs per se, but the file system. Slaves can write to a share on the server, and the server picks up the files once a slave is done writing. You can depend on Windows to enforce the sharing flags (FILE_SHARE_READ / FILE_SHARE_WRITE): if the client creates the file with no sharing allowed, nobody else can open the file until the client closes it. This only fails if the client drops the network connection, crashes, etc.

A more robust way is for the client to create the file under a name like "\\server-name\sharename\CLIENT_X_TEMP.xml" and, when it is ready for the server to use it, rename it to "\\server-name\sharename\SERVER_PLEASE_README_NOW.xml". The server only looks for files with the final name. It seems kind of low-tech, but it is secure, portable, robust, and easy to get working. Since you don't need any special APIs, you can write the client and server in any language that has access to the file system.
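A minimal sketch of that write-then-rename pattern using the Win32 file APIs might look like the following (the UNC paths are the placeholders from above, and error handling is kept to the bare minimum):

```cpp
#include <windows.h>
#include <string>

// Write the XML to a temp name with no sharing allowed, then rename it to the
// name the server polls for. Paths are placeholders from the answer above.
bool PublishXml(const std::string& xml)
{
    const wchar_t* tempPath  = L"\\\\server-name\\sharename\\CLIENT_X_TEMP.xml";
    const wchar_t* finalPath = L"\\\\server-name\\sharename\\SERVER_PLEASE_README_NOW.xml";

    // Share mode 0: nobody else can open the file while we are writing it.
    HANDLE h = CreateFileW(tempPath, GENERIC_WRITE, 0 /* no sharing */, nullptr,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE)
        return false;

    DWORD written = 0;
    BOOL ok = WriteFile(h, xml.data(), static_cast<DWORD>(xml.size()), &written, nullptr);
    CloseHandle(h);
    if (!ok || written != xml.size())
        return false;

    // Hand the file over to the server by renaming it to the name it looks for.
    return MoveFileExW(tempPath, finalPath, MOVEFILE_REPLACE_EXISTING) != 0;
}
```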

If you want to use direct networking APIs, you might use DCOM. It allows you to specify a C/C++-like interface for the client to call and the server to implement, and it deals with all the complexity of serializing the data and transporting it to the other machine. This is really flexible: the client and server can be in the same process, in different processes on the same machine, or on separate machines. (Microsoft has called this technology by many names over the years; the "base" layer is "RPC", then COM, DCOM, ActiveX, and maybe some other names.)
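For a rough idea of what remote activation looks like, here is a heavily simplified sketch. The interface, CLSID, and IID are placeholders -- in a real project they come from an IDL file compiled with MIDL (which also generates the proxy/stub code) -- and the server machine name is an assumption:

```cpp
#include <windows.h>
#include <objbase.h>
#pragma comment(lib, "Ole32.lib")

// Placeholder interface: a real DCOM interface is defined in IDL and compiled
// with MIDL, which generates this declaration plus the marshalling code.
struct IReportSink : public IUnknown {
    virtual HRESULT STDMETHODCALLTYPE ReportStatus(double value) = 0;
};

// Placeholder GUIDs -- real ones come from the IDL and server registration.
static const CLSID CLSID_ReportSink = {};
static const IID   IID_IReportSink  = {};

HRESULT CallMaster(double value)
{
    HRESULT hr = CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    if (FAILED(hr)) return hr;

    // Ask COM to activate the object on the master machine.
    COSERVERINFO server = {};
    server.pwszName = const_cast<wchar_t*>(L"master-node");

    MULTI_QI qi = {};
    qi.pIID = &IID_IReportSink;

    hr = CoCreateInstanceEx(CLSID_ReportSink, nullptr, CLSCTX_REMOTE_SERVER,
                            &server, 1, &qi);
    if (SUCCEEDED(hr) && SUCCEEDED(qi.hr)) {
        IReportSink* sink = static_cast<IReportSink*>(qi.pItf);
        hr = sink->ReportStatus(value);   // looks like a local call; DCOM marshals it
        sink->Release();
    }

    CoUninitialize();
    return hr;
}
```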

Sockets are pretty easy to set up and are more easily portable than DCOM, but they are fairly "raw". You will need to invent your own protocol for sending the data, as sockets simply transport bunches of bytes.
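As a sketch of how "raw" that is, here is a minimal Winsock client that frames each message with a 4-byte length prefix (one simple home-grown protocol; the host name and port are placeholders):

```cpp
#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstdint>
#include <string>
#pragma comment(lib, "Ws2_32.lib")

// Connects to the master and sends one length-prefixed message.
bool SendToMaster(const std::string& payload)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return false;

    addrinfo hints = {};
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_protocol = IPPROTO_TCP;

    addrinfo* result = nullptr;
    if (getaddrinfo("master-node", "5000", &hints, &result) != 0) {
        WSACleanup();
        return false;
    }

    SOCKET s = socket(result->ai_family, result->ai_socktype, result->ai_protocol);
    bool ok = s != INVALID_SOCKET &&
              connect(s, result->ai_addr, static_cast<int>(result->ai_addrlen)) == 0;

    if (ok) {
        // Home-grown framing: 4-byte big-endian length, then the payload bytes.
        uint32_t len = htonl(static_cast<uint32_t>(payload.size()));
        ok = send(s, reinterpret_cast<const char*>(&len),
                  static_cast<int>(sizeof(len)), 0) == static_cast<int>(sizeof(len)) &&
             send(s, payload.data(), static_cast<int>(payload.size()), 0) ==
                 static_cast<int>(payload.size());
    }

    if (s != INVALID_SOCKET) closesocket(s);
    freeaddrinfo(result);
    WSACleanup();
    return ok;
}
```

The master side would listen with `bind`/`listen`/`accept`, read the 4-byte length, then read exactly that many bytes -- that read-until-complete loop is the part you have to write yourself when working at this level.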

HTTP is just a protocol that is typically transported over sockets. The advantage here is that there are a number of different HTTP libraries available, and a number of ready-made servers. IIS, for instance, can be used to handle the incoming requests and automatically hand them off to your code -- you can code your server in several ways, and Microsoft's Visual Studio has pretty good support for doing this.
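If you go the HTTP route on Windows without extra libraries, WinHTTP is one built-in option. A minimal sketch of a slave POSTing its data might look like this (host, port, and URL path are placeholders; the server side could be IIS or any HTTP server):

```cpp
#include <windows.h>
#include <winhttp.h>
#include <string>
#pragma comment(lib, "winhttp.lib")

// POSTs a payload (XML or status data) to http://master-node:8080/slave-report.
bool PostToMaster(const std::string& body)
{
    HINTERNET session = WinHttpOpen(L"SlaveAgent/1.0",
                                    WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
                                    WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
    if (!session) return false;

    HINTERNET connection = WinHttpConnect(session, L"master-node", 8080, 0);
    HINTERNET request = connection
        ? WinHttpOpenRequest(connection, L"POST", L"/slave-report",
                             nullptr, WINHTTP_NO_REFERER,
                             WINHTTP_DEFAULT_ACCEPT_TYPES, 0)
        : nullptr;

    // Send the body and wait for the server's response headers.
    bool ok = request &&
              WinHttpSendRequest(request, WINHTTP_NO_ADDITIONAL_HEADERS, 0,
                                 const_cast<char*>(body.data()),
                                 static_cast<DWORD>(body.size()),
                                 static_cast<DWORD>(body.size()), 0) &&
              WinHttpReceiveResponse(request, nullptr);

    if (request)    WinHttpCloseHandle(request);
    if (connection) WinHttpCloseHandle(connection);
    if (session)    WinHttpCloseHandle(session);
    return ok;
}
```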

I've not used POCO or Qt, so I can't comment on those.
