
How many requests can a port handle at a time?

I am creating a web application with a login page, where many users may try to log in at the same time, so I need to handle a number of requests at once.

I know this is already implemented by a number of popular sites like G talk.

So I have some questions in mind.

"How many requests can a port handle at a time ?" “一个端口一次可以处理多少个请求?”

How many sockets can I (the server) create? Are there any limitations?

For example, as we know, when we implement client-server communication using socket programming (TCP), we pass a port number (an unreserved port number) to the server to create a socket.
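To make that concrete, here is a minimal sketch in plain Java of what this describes (the port number 8080 is just an example): the server binds one listening socket to the port, and every accept() call hands back a separate socket for one client connection, even though all clients connect to the same port.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class SingleListenerSketch {
    public static void main(String[] args) throws IOException {
        // One listening socket bound to one (unreserved) port.
        try (ServerSocket listener = new ServerSocket(8080)) {
            while (true) {
                // Each accept() returns a new Socket for one client,
                // even though every client connected to port 8080.
                Socket client = listener.accept();
                client.close(); // a real server would read the request and write a response here
            }
        }
    }
}
```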

So I mean to say, if 100,000 requests come in at a single time, what will be the port's approach to all these requests?

Does it maintain some queue for all these requests, or does it just accept a number of requests up to its limit? If yes, what is the request-handling limit of a port?

Summary: I want to know how a server serves multiple requests simultaneously? I don't know anything about it. I know we connect to a server via its IP address and port number, that's it. So I thought there is only one port, and many requests come to that one port from different clients, so how does the server manage all the requests?

This is all I want to know. If you explain this concept in detail it would be very helpful. Thanks anyway.

A port doesn't handle requests, it receives packets. Depending on the implementation of the server, these packets may be handled by one or more processes/threads, so theoretically this is unlimited. But you'll always be limited by bandwidth and processing performance.

If lots of packets arrive at one port and cannot be handled in a timely manner, they will be buffered (by the server, the operating system or hardware). If those buffers are full, the congestion may be handled by network components (routers, switches) and the protocols the network traffic is based on. TCP, for example, has some methods to avoid or control congestion: http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Congestion_control

This is typically configured in the application/web server you are using. You limit the number of concurrent requests by limiting the number of parallel worker threads you allow the server to spawn to serve requests. If more requests come in than there are threads available to handle them, they will start to queue up. The second thing you typically configure is the socket backlog size. When the backlog is full, the server will start responding with "connection refused" when new requests come in.
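As a rough illustration of those two knobs, here is a minimal sketch using the standard java.net and java.util.concurrent APIs; the port (8080), backlog (128) and pool size (200) are made-up values, not recommendations:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolServerSketch {
    public static void main(String[] args) throws IOException {
        // Cap on the number of parallel worker threads serving requests.
        ExecutorService workers = Executors.newFixedThreadPool(200);
        // Second constructor argument is the backlog: how many pending
        // connections the OS may queue before new ones are refused.
        try (ServerSocket listener = new ServerSocket(8080, 128)) {
            while (true) {
                Socket client = listener.accept();       // take one connection off the backlog
                workers.execute(() -> handle(client));   // serve it on a worker thread
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client) {
            // read the request and write a response here
        } catch (IOException ignored) {
            // a real server would log this
        }
    }
}
```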

Then you'll probably be restricted by the number of file descriptors your OS supports (in the case of *nix) or the number of simultaneous connections your web server supports. The OS maximum on my machine seems to be 75,000.

100,000 concurrent connections should be easily possible in Java if you use something like Netty; a minimal bootstrap sketch follows the list below.

You need to be able to:

  • Accept incoming connections fast enough. The NIO framework helps enormously here, which is what Netty uses internally. There is a smallish queue for incoming requests, so you need to be able to handle these faster than the queue can fill up.
  • Create connections for each client (this implies some memory overhead for things like connection info, buffers etc.) - you may need to tweak your VM settings to have enough free memory for all the connections.
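Here is what such a server might look like, as a minimal sketch assuming Netty 4; the port (8080), the backlog value (1024) and the trivial echo handler are placeholders, not part of the original answer:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyServerSketch {
    public static void main(String[] args) throws Exception {
        EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts incoming connections
        EventLoopGroup workers = new NioEventLoopGroup(); // handles I/O for accepted connections
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, workers)
             .channel(NioServerSocketChannel.class)
             .option(ChannelOption.SO_BACKLOG, 1024)      // queue for not-yet-accepted connections
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                         @Override
                         public void channelRead(ChannelHandlerContext ctx, Object msg) {
                             ctx.writeAndFlush(msg); // echo back whatever arrives
                         }
                     });
                 }
             });
            ChannelFuture f = b.bind(8080).sync();
            f.channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
```

Because the NIO event loops multiplex many connections over a small number of threads, the per-connection cost is mostly memory (buffers, connection state), which is why the heap/VM settings mentioned above matter.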

See this article from 2009 where they discuss achieving 100,000 concurrent connections with about 20% CPU usage on a quad-core server.
