
How to create a large number of sockets for performance testing in Linux

I am developing a Linux-based test tool for a custom application. It will need to create roughly 200,000 sockets to an external system, generate traffic over those sockets, and collect some performance metrics.

What I am wondering is the best approach to do this in Linux. First, with 200,000 sockets we would surely hit the file descriptor limits. Can the FD limit be raised that high (on a very powerful machine)? And what sort of reasonable maximum can we expect per Linux instance?
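For concreteness, the per-process limit I mean is RLIMIT_NOFILE. A minimal sketch of checking and raising it from within the tool (the 300000 target is purely illustrative, and the hard limit must already be high enough via root, systemd, or /etc/security/limits.conf) would look like this:

/* Illustrative sketch: raise the per-process open file descriptor limit
 * with setrlimit(2). The soft limit can never exceed the hard limit,
 * and the kernel-wide fs.nr_open sysctl caps the hard limit itself. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* Ask for 300000 descriptors; the value is illustrative only. */
    rl.rlim_cur = 300000;
    if (rl.rlim_cur > rl.rlim_max)
        rl.rlim_cur = rl.rlim_max;   /* cannot exceed the hard limit */

    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}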

Also, the easiest first thought for implementing such a tool would be one thread per test client, each of which creates its connection, sends traffic, measures performance, and so on. What sort of maximum thread count can we get from the kernel? Or does having a few worker threads, each handling a subset of the endpoints, make more sense?

Is this possible on a single Linux instance, or is splitting the load across multiple servers the only option?

This problem is called "C10K", and the name has been extended (C100K, C1M, ...) as 10,000 connections stopped being a challenge. You can find lots of information on Google.

On a strong Linux machine (4 CPUs, 16 GB RAM) you should be able to reach 1M connections.

The easiest way to handle that many open file descriptors is to use poll. However, you will have to raise several limits on your host (a minimal sketch of the poll-based approach follows the list below):

  • ulimits (RLIMIT_NOFILE, the per-process open file limit, e.g. ulimit -n)
  • the kernel-wide number of file descriptors (the fs.file-max and fs.nr_open sysctls)
  • the socket buffer sizes (net.core.rmem_max / net.core.wmem_max), since 200,000 buffered sockets consume a lot of memory
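
Here is a minimal single-threaded sketch of that poll-based pattern (an illustration, not production code): it opens a batch of non-blocking connections to one target and drives them all from one poll loop. The address 192.0.2.1, port 9000, the "ping" payload, and the NCONN count are placeholder assumptions; measurement, reconnection, and most error handling are left out.

/* Sketch: open NCONN non-blocking TCP connections to one target and
 * drive them from a single thread with poll(2). Placeholders only. */
#include <errno.h>
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define NCONN 1000              /* illustrative; scale up once limits are raised */

int main(void)
{
    struct sockaddr_in dst;
    struct pollfd *pfds = calloc(NCONN, sizeof(*pfds));
    const char payload[] = "ping\n";
    char buf[4096];
    int i, n;

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9000);                      /* assumed test port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* placeholder address */

    /* Start all connections without blocking on the TCP handshake. */
    for (i = 0; i < NCONN; i++) {
        int fd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
        if (fd < 0) { perror("socket"); exit(1); }
        if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0 &&
            errno != EINPROGRESS) {
            perror("connect");
            exit(1);
        }
        pfds[i].fd = fd;
        pfds[i].events = POLLOUT;                    /* writable == connected */
    }

    /* Single-threaded event loop: send once the handshake finishes,
     * then read whatever the peer sends back. */
    for (;;) {
        if (poll(pfds, NCONN, 1000) < 0) { perror("poll"); exit(1); }
        for (i = 0; i < NCONN; i++) {
            if (pfds[i].revents & POLLOUT) {
                send(pfds[i].fd, payload, sizeof(payload) - 1, MSG_NOSIGNAL);
                pfds[i].events = POLLIN;             /* now wait for a reply */
            }
            if (pfds[i].revents & (POLLIN | POLLERR | POLLHUP)) {
                n = read(pfds[i].fd, buf, sizeof(buf));
                if (n <= 0) {                        /* peer closed or error */
                    close(pfds[i].fd);
                    pfds[i].fd = -1;                 /* poll() skips negative fds */
                }
            }
        }
    }
}

Once the limits above are raised, scaling this pattern to 200,000 sockets is mostly a memory question rather than a threading one; in practice you would likely also swap poll() for epoll(7), which scales better with very large descriptor counts.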

See the C500K problem documentation.
