
Why is there such a long delay between HTTP request and response?

I am using the HTTP POST method to send a request to an HTTP server URL.

The time difference between request and response is around 60 seconds, but according to the server team they send the response within 7 seconds of the request reaching their end.

I don't think the network is taking the remaining 53 seconds to deliver the packet to the server, so what could the issue be?

In this application we are using synchronous communication between client and server. Kindly provide me the following details as well.

  1. Is it due to the server sending requests at a higher speed than the server is able to handle? In this case, many times the client is getting a request at intervals of 3 seconds, whereas the server takes 7 seconds to handle it.
  2. What is a network buffer? Are there two buffers at the network level, one on the client side and one on the server side?
  3. If the server cannot handle requests at the speed the client is sending them, do all requests get buffered in the client's buffer, and what happens if more requests are pending than the maximum size of that buffer?
  4. What are the alternative ways to improve performance if we are at the client end and have no control over the server?

EDIT: When I used Wireshark on my network to capture the traffic, I found that the request appears in Wireshark 20 seconds after my application actually sends it to the server. What is the reason behind this delay? What could be the possible reason that a request appears on the network 20 seconds after it was actually sent?

In regards to your edit, to help you understand: networking follows a model called Open Systems Interconnection (OSI). This model is broken down into seven distinct layers, each with its own function.

Those layers are here:

(image: the seven layers of the OSI model)

Wireshark captures traffic at the point where your machine's network stack hands it to the Network Interface Card (NIC). The NIC takes the outgoing data and turns it into packets to send across the wire, and a router then forwards those packets toward their destination.

Wireshark won't detect the packet until your network stack has actually built it and handed it off.

Once the data is converted into an IPv4 packet, its header contains the following information:

  • 4 bits for the version (IPv4 or IPv6)
  • 4 bits for the Internet Header Length (IHL)
  • 8 bits for the Type of Service, i.e. quality of service and priority
  • 16 bits for the total length of the packet in bytes
  • 16 bits for the identification tag, used to reconstruct the packet from fragments
  • 3 flag bits: a reserved zero, followed by Don't Fragment and More Fragments
  • 13 bits for the fragment offset, identifying the fragment's position in the original packet
  • 8 bits for the Time To Live (TTL), decremented at each router hop
  • 8 bits for the protocol (TCP, UDP, ICMP, etc.)
  • 16 bits for the header checksum
  • 32 bits for the source IP address
  • 32 bits for the destination IP address

Those are the 160 bits (20 bytes) of a minimal IPv4 header.

What does this mean?

Well, you know that it takes twenty seconds for Wireshark to detect your packet. So right off the bat we know roughly twenty seconds passed between your application's send call and the packet actually reaching the network.

We also know the server will need to reassemble and process this packet before it can handle the data and send back a response.

And we know that the router acts like a traffic cop, directing your data across the internet or local network.

That adds up to quite a few places where the delay could hide. So where do you look?

You have a utility called: tracert

On average, the first hop (a few feet of cable to your router) takes one or two milliseconds. If the second hop triggers in twenty to thirty milliseconds, you can use a simple formula, hops × per-hop latency, for example:

6 * 20

Based on the hop times from your tracert you can estimate the total duration. This is a very generic approach, and tools exist for exact accuracy, but as a rule: the more hops, the more time it takes to reach the destination, and the more you'll multiply.
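As a sketch, the back-of-the-envelope estimate above looks like this (both numbers are illustrative assumptions, not measurements; real values come from your tracert output):

```python
# Rough one-way latency estimate from the formula above: hops × per-hop cost.
hops = 6              # routers between client and server (assumed)
hop_latency_ms = 20   # observed cost per hop, in milliseconds (assumed)
estimate_ms = hops * hop_latency_ms
print(f"~{estimate_ms} ms one-way across {hops} hops")
```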

What about that in between from Client to Server?

  • Local Area Network (LAN) : The internal efficiency of a network comes down to the optimization of each network protocol, the equipment, and the physical medium. A network administrator has to balance reliability with speed, as well as all the traffic generated on the network. So the equipment's throughput and the physical medium are important: you wouldn't want ten lanes of cars merging into a one-lane tunnel, as that creates a bottleneck, and the same applies to a network.

  • Wide Area Network (WAN) : This is essentially the connection to the internet, the cloud. Think of it like this: your computer is on a LAN, and the router connects to a WAN. Your ISP then more than likely has a LAN of its own, whose WAN opens up to a larger distribution facility, and it keeps working its way up until it reaches the internet.

What can I do though?

You know what is in between now, but what can you do?

Well, when you're building your service, you obviously want to ensure your code is lean and highly efficient, as efficiency is crucial for speed. Tuning buffer sizes, transmission rates, and so on can greatly improve your application.

Obviously good code practice will help.

My code is rock solid though?

If you believe that neither your code nor the method in which you host and create your service is the problem at this point, then these factors may be the cause:

  • Your local machine may be generating excessive chatter, so transmission takes quite a bit longer.
  • Your local network is generating excessive chatter or has inefficient / low throughput.
  • Your request is traveling a long distance, so the time is delayed.
  • Your Internet Service Provider may have hardware firewalls, proxies, and so on that scan these packets.
  • Your server may be under excessive request load, or its hosting method isn't efficient.

Those cover a large chunk of the variables. All you can try is to refactor the service and ensure your server hosts it in the most efficient way possible. Otherwise you'll want to get an Information Technology team involved, as it is critical.

But keep this in mind: your experience interfacing with this service may be better or worse than another client's.

I'm speaking under the assumption that you're deployed in one location and could be several states away from your server.

Tools:

Command Line:

  • Ping
  • Tracert

Network and Protocol Analyzers:

  • Fiddler (HTTP/HTTPS) : See if Fiddler displays any HTTP Status Codes for Troubleshooting.
  • Wireshark : Will analyze your network traffic which can help time durations.

There are other utilities available to actually mitigate and test Network Speeds even to other locations, just Google "Network Tools". Fluke has a few.

Hopefully that explains why it may take twenty seconds for Wireshark to even display the packet on the Network.

Hope that helps.

Use Wireshark to capture your request and response 60 seconds apart on the wire, and send it to the server team. They may respond with a capture showing the request and response closer to 7 seconds on their side. That's great! Send them both to the network team.

On the other hand, it's possible that the trace shows that the delay is in your code. There may be some kind of throttling or delay on your end that keeps the request from leaving your process for a significant amount of time. A Wireshark trace can tell you that, also.
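One way to see whether the delay is inside your own process is to time each phase of the request yourself. A minimal sketch: `timed_request` is a hypothetical helper, and the throwaway local server below only stands in for your real host and port so the example runs anywhere.

```python
import socket
import threading
import time

def timed_request(host, port, request_bytes):
    """Time each phase of a raw HTTP exchange: DNS, connect, send, first byte."""
    t0 = time.perf_counter()
    addr = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4]
    t1 = time.perf_counter()
    sock = socket.create_connection(addr[:2])
    t2 = time.perf_counter()
    sock.sendall(request_bytes)   # hands the bytes to the kernel's send buffer
    t3 = time.perf_counter()
    first = sock.recv(4096)       # blocks until the first response bytes arrive
    t4 = time.perf_counter()
    sock.close()
    return {"dns": t1 - t0, "connect": t2 - t1,
            "send": t3 - t2, "first_byte": t4 - t3}, first

# Throwaway local server standing in for the real one.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

def serve_once():
    conn, _ = listener.accept()
    conn.recv(4096)
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

phases, first = timed_request(
    "127.0.0.1", listener.getsockname()[1],
    b"GET / HTTP/1.1\r\nHost: example\r\nConnection: close\r\n\r\n")
print({k: f"{v * 1000:.2f} ms" for k, v in phases.items()})
listener.close()
```

If "send" completes in microseconds but Wireshark sees the packet 20 seconds later, the delay is between your send call and the wire, not in the network.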

TCP/IP flow control will handle most of the stuff you are worried about.

"Sending requests faster than the server can handle"

No. When using a TCP session, you will never send anything to the server faster than it can receive: the transmission is flow-controlled, making it impossible to send faster than the server can handle.

"What is a network buffer?"

There are two queue buffers involved in TCP/IP communication: a send buffer and a receive buffer.

When you make a send() call, it doesn't really send anything; it just queues the data in the send buffer and returns how many of the bytes you were trying to send were actually queued. If it returns less than you were trying to send, the buffer is full and you have to wait before sending the rest.

This is how you control the traffic; it is your cue that you are trying to send too fast. Try to keep the buffer full as long as you have data to send, but don't ignore the fact that not everything will fit and you will have to retry later.

recv() also has its own buffer. The call does not actually receive anything over the wire; the data has already been received, and recv() only reads it from the receive buffer. With a blocking socket (the default), recv() will hang if the buffer is empty, waiting for new data to arrive. recv() returns 0 only when the connection has been terminated; with a non-blocking socket and an empty buffer it fails with EWOULDBLOCK/EAGAIN instead.

If you forget to recv() and the receive buffer becomes full, that is okay too. The peer becomes unable to continue sending: their send buffer fills up, so their send() calls block or queue fewer bytes than requested, but as long as they pay attention to send()'s return value, no data is lost.

As far as I know, the default buffer size is on the order of 64 KB on most systems, for both sending and receiving. There are socket options (SO_SNDBUF / SO_RCVBUF) to adjust this size, but it is usually not worth messing with; if you need a bigger buffer, implement one in your application.
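The buffer behavior described above can be observed directly. A minimal sketch using a local socket pair, so no network is involved (the exact buffer sizes are platform-dependent defaults, not a universal 64 KB):

```python
import socket

# A connected pair of local sockets stands in for client and server.
a, b = socket.socketpair()

# Inspect the kernel's default send/receive buffer sizes; these are
# platform-dependent defaults, not a fixed 64 KB.
sndbuf = a.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
rcvbuf = b.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("send buffer:", sndbuf, "bytes; receive buffer:", rcvbuf, "bytes")

# With a non-blocking sender, send() queues what fits and reports how much.
# Once the buffers fill (nobody is reading from `b`), it raises
# BlockingIOError instead of silently dropping data.
a.setblocking(False)
total = 0
try:
    while True:
        total += a.send(b"x" * 4096)
except BlockingIOError:
    pass
print("queued", total, "bytes before the buffers filled")

a.close()
b.close()
```

This is the flow control in action: once the peer stops reading, the sender's buffer fills and the sender is forced to wait.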

"What are the alternative ways to improve performance if we are at the client end and have no control over the server?"

It's not exactly a valid question; there is little you can do purely from the client end. But I think you might be making a mistake with the HTTP requests, especially considering you are doing POST requests.

When doing a POST request, your HTTP headers will contain two fields that are usually only seen in server HTTP response headers: Content-Type and Content-Length. They are very important when doing a POST, and if either of them is missing or wrong, the HTTP session may hang for a couple of seconds or not succeed at all.
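As a sketch of a well-formed POST with both headers set explicitly; the echo server here is a local stand-in spun up only so the example is self-contained, not part of any real deployment:

```python
import http.client
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal local server so the example runs anywhere; in practice you
# would point the connection at your real HTTP server instead.
class EchoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])  # read exactly the body
        body = self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

payload = json.dumps({"hello": "world"}).encode()
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
# Set Content-Type and Content-Length explicitly; a missing or wrong
# Content-Length is a classic cause of a POST that hangs until timeout.
conn.request("POST", "/", body=payload, headers={
    "Content-Type": "application/json",
    "Content-Length": str(len(payload)),
})
resp = conn.getresponse()
echoed = resp.read()
print(resp.status, echoed)
conn.close()
server.shutdown()
```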

Another important aspect of HTTP is the most essential difference between HTTP 1.0 and HTTP 1.1: HTTP 1.0 does not support Keep-Alive. It is easier for a beginner to deal with HTTP 1.0, because you connect, send your request, the server answers, and the connection is terminated, which means you have finished downloading the server's response. With HTTP 1.1 that does not happen by default: the connection remains open until it times out. You have to pay attention to the Content-Length response header and count bytes to know when the server has finished sending what you requested.

It is also important to observe that not all servers support HTTP 1.0. You may make an HTTP 1.0 request, but the server may respond with 1.1 anyway, with keep-alive and other things you weren't expecting. In a perfect world this shouldn't happen, but it does.
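A minimal sketch of the byte counting described above, over a raw socket (the "server" is a local canned response, so the example runs offline): the client reads exactly Content-Length body bytes instead of waiting for the peer to close the connection, which is exactly the wait that turns into a timeout on a keep-alive connection.

```python
import socket

# Local listening socket standing in for an HTTP/1.1 keep-alive server.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

client = socket.create_connection(("127.0.0.1", listener.getsockname()[1]))
server, _ = listener.accept()
server.sendall(
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/plain\r\n"
    b"Content-Length: 5\r\n"
    b"Connection: keep-alive\r\n"   # the server will NOT close after the body
    b"\r\n"
    b"hello"
)

# Read headers up to the blank line, then exactly Content-Length body bytes.
buf = b""
while b"\r\n\r\n" not in buf:
    buf += client.recv(4096)
headers, _, body = buf.partition(b"\r\n\r\n")
length = next(int(line.split(b":")[1])
              for line in headers.split(b"\r\n")
              if line.lower().startswith(b"content-length"))
while len(body) < length:
    body += client.recv(length - len(body))
print(body)  # complete, without waiting for the peer to close

client.close()
server.close()
listener.close()
```

Had the client instead looped on recv() until it returned 0, it would have hung: with keep-alive the server holds the connection open, and recv() only returns 0 once the server finally times out and closes it.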

Your mistake might be somewhere in this area. Since you said your request takes exactly 60 seconds, I suspect it is timing out: you have probably already received everything you were going to, but the application is not handling that properly and then hangs until the connection is terminated by the server.

On point number 1:

In your point number 1, I believe you meant to say '1. Is it due to the CLIENT sending...' and 'In this case many times the client is getting a RESPONSE at an interval...'.

On point number 3:

  • The server machine is typically a more powerful computer than the client machine, so "if the server is unable to handle requests at the speed the client is sending them" is very rarely the case. If I remember right, an HTTP server goes by "first come, first served", and what you are calling a request buffer would be a queue on the server side, not the client. I have never heard of all requests being buffered in a client-side buffer.

  • The time the server takes to respond is usually short, but the time the result takes to come back to you, the client, has more to do with what the original request is doing: whether it is only retrieving a static HTML file, retrieving a big image or document, executing a heavy query on the DB server, etc.

  • Are you the only client submitting HTTP requests to the server? Probably not. Is that server machine housing only the HTTP server? Remember the server is, or will be, handling requests from multiple clients, and this can be another factor in occasionally delayed responses, even when the server can handle multiple requests simultaneously, depending on what those other requests are doing. The extreme example of an HTTP server unable to respond because of multiple requests is a server under a [distributed] denial-of-service attack.

  • Do you have any firewalls protecting the HTTP server or the network where it resides? Firewall rules or network intrusion detection systems can delay the request/response time.

  • Another possible factor behind a request appearing on the network 20 seconds after it was actually sent is network congestion, which typically happens at the router level.

On point number 4:

  • Once you discard or eliminate factors related to point 3, you can improve performance by:

  • First, all else not being a [major] factor of delay, figure out what your expected response time is.

  • Second, make sure you have a reasonably accurate measure of the round-trip time under various conditions/circumstances. I would assume the HTTP server is not to blame.

  • Third, if you are still experiencing delay, you (or someone else) will need to look at the router configuration, the firewall configuration, and your HTML page code or web application, if any.

Edit:

Try pinging your HTTP server to know the network roundtrip time.

Here's an example; I did submit this command:

ping www.yahoo.com

from an MS-DOS window on the client PC.

(screenshot: ping output showing round-trip times)

1) Notice how the round trip takes approximately 0.5 seconds or less (0.5 seconds = 500 ms). I am on the US East coast, and Yahoo.com is likely on the US West coast (I'm not sure).

2) Notice that the round trips are not all the same length, but the variation is small.

3) From my local PC to a world-class exposed server like Yahoo.com, there are several routers and firewalls in between, and the response time was not even 1 second. This means the network and server are rarely the cause of a delay if they are properly configured, so your page or application should be examined/reviewed thoroughly before blaming the HTTP server.

4) When I request http://www.yahoo.com from my browser, loading the page sure took a little more than 0.5 seconds, I think because of all the HTML elements and ads in the page, but the HTTP server's response itself was likely around 0.5 seconds.

On point # 4 : For large volumes of data, the amount of time spent on the wire can be significant. If your server supports it, you may want to compress the contents before you send them across.
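A minimal sketch of compressing a request body before sending it. The header names are standard HTTP, but whether the server accepts a gzipped request body is an assumption you must verify against your server:

```python
import gzip
import json

# A large-ish JSON payload; highly repetitive data compresses very well.
payload = json.dumps({"rows": list(range(10000))}).encode()
compressed = gzip.compress(payload)

# Headers you would send alongside the compressed body.
headers = {
    "Content-Type": "application/json",
    "Content-Encoding": "gzip",               # tells the server the body is gzipped
    "Content-Length": str(len(compressed)),   # length of the COMPRESSED body
}
print(f"{len(payload)} bytes -> {len(compressed)} bytes "
      f"({100 * len(compressed) / len(payload):.0f}% of original)")
```

Fewer bytes on the wire means less time in transit, which matters most on slow or long-distance links.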

Try checking your DNS configuration; I think it could be the problem. Before your browser or script makes a request, it must resolve the domain name to an IP address.

You can easily check this by adding an entry to your hosts file, which on Windows is located at:

C:\Windows\System32\drivers\etc\hosts

or on a Linux system:

/etc/hosts

To check information about a domain and its IP address, try nslookup (the same command on Windows and Linux).
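To see whether resolution itself is the slow step, you can also time it from code. A minimal sketch ("localhost" is used so the example runs offline; substitute your server's hostname):

```python
import socket
import time

# Time the DNS lookup separately from the rest of the request; a slow or
# misconfigured resolver adds its delay before a single byte is sent.
host = "localhost"  # placeholder; use your HTTP server's hostname
start = time.perf_counter()
infos = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)
elapsed = time.perf_counter() - start

addresses = sorted({info[4][0] for info in infos})
print(f"{host} resolved to {addresses} in {elapsed * 1000:.1f} ms")
```

If this step alone takes seconds, the delay is in name resolution, and a hosts-file entry (as above) or a resolver fix will help far more than any HTTP tuning.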
