
How to improve the performance of a client-server architecture application?

We have a product built on the Client-Server architecture. Some details about the technology stack used.

  • Client - Java Swing
  • Server - Java RMI
  • Database - Oracle

The clients are located in different parts of the world, but the Java server and the Oracle database sit on the same machine in Sweden. Because of this there is a lot of network latency, and the clients at distant locations have terrible performance. The application is used for processing files over 50 MB in size. Each operation generally requires over 1,000 network calls.

Based on your experience, how do you tackle this problem and improve the performance?

EDIT: To answer a few questions

  1. The files contain the actual business data, which needs to be processed and written to the database; they cannot be sent in parts.
  2. Some of the network calls could be batched, but that would require major refactoring of the code. This is a very old application, written back in 2001. It was designed so that the server holds all the services, which are reused across the code, while the business logic lives on the client side. That business logic calls the server numerous times, hence the astronomical figure.

-Snehal

Decrease your number of round trips

1,000 round trips for a single operation is an astronomical figure. No way you should be seeing those numbers.

You still have a problem, though, with the 50 MB files. You will either need to find a way to make the transfer more efficient (transfer only the deltas between two similar files?) or employ caching of some sort.
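One way to transfer only deltas is block-level checksumming, roughly in the spirit of rsync. A minimal sketch under stated assumptions (class and method names are illustrative, and a real system would want a stronger hash than CRC32 to avoid collisions):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.zip.CRC32;

// Illustrative sketch: compare two file versions block by block and
// report which blocks changed, so only those blocks need re-sending.
public class BlockDelta {

    // CRC32 checksum of one block (cheap, but collision-prone; a real
    // system might use SHA-256 here).
    static long checksum(byte[] data, int offset, int length) {
        CRC32 crc = new CRC32();
        crc.update(data, offset, length);
        return crc.getValue();
    }

    // Returns the indices of fixed-size blocks whose contents differ.
    public static List<Integer> changedBlocks(byte[] oldFile, byte[] newFile, int blockSize) {
        List<Integer> changed = new ArrayList<>();
        int blocks = (Math.max(oldFile.length, newFile.length) + blockSize - 1) / blockSize;
        for (int i = 0; i < blocks; i++) {
            int off = i * blockSize;
            int oldLen = Math.max(0, Math.min(blockSize, oldFile.length - off));
            int newLen = Math.max(0, Math.min(blockSize, newFile.length - off));
            // Short-circuit: only checksum when both blocks have equal length.
            if (oldLen != newLen
                    || checksum(oldFile, off, oldLen) != checksum(newFile, off, newLen)) {
                changed.add(i);
            }
        }
        return changed;
    }
}
```

With this, a client that edits a few records in a 50 MB file would ship only the changed blocks rather than the whole file.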

The WAN traffic is killing your app, and it sounds like you have major refactoring to do.

Sending big files and a lot of requests over the net costs a lot of time. Period. Even if you could upgrade to gigabit Ethernet, the protocol still demands that your client idle a few milliseconds between two consecutive network packets (so other hosts get a chance to talk, too).

But gigabit Ethernet is not feasible since the clients are far away (probably connected via the Internet).

So the only path that will work is to move the business code closer to the server. The simplest solution would be to install the clients on little boxes in the same LAN as the server and use VNC or a similar protocol to access them remotely.

The next level would be to cut the clients into a business layer and a display layer. Turn the business layer into a service and install the display layer on the clients. This way, data is only pushed around on the (fast) intranet. When the results are ready for display, the clients get only the results (little data).
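A rough sketch of what such a coarse-grained boundary could look like. The service and result type here are purely illustrative (not from the original application): the whole operation runs server-side, near the database, and only a small summary object crosses the WAN for display.

```java
import java.io.Serializable;

// Illustrative sketch: the display layer makes one coarse-grained call
// and receives only the small result it needs to render.
public class CoarseService {

    // Small result object: this is all that crosses the WAN.
    public static class ProcessingSummary implements Serializable {
        public final int recordsProcessed;
        public final int recordsRejected;
        public ProcessingSummary(int processed, int rejected) {
            this.recordsProcessed = processed;
            this.recordsRejected = rejected;
        }
    }

    // Runs the whole business operation server-side and returns just a
    // summary. The loop is a stand-in for the real validation logic.
    public static ProcessingSummary processFile(byte[] fileData) {
        int processed = 0, rejected = 0;
        for (byte b : fileData) {
            if (b >= 0) processed++; else rejected++;
        }
        return new ProcessingSummary(processed, rejected);
    }
}
```

The design point is the shape of the interface: one call in, one small value out, instead of the business logic driving a thousand fine-grained calls from the client.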

-Make the server stateless if it is not already

-Consider lighter remote protocols such as Hessian

-Latency is probably your bottleneck; consider caching on the clients and reading data in bigger chunks. 1,000 round trips is a huge load.

-Consider refactoring the client so it can work locally and synchronize in the background

-Use a profiler to see where the application spends most of its time and optimize that
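As a concrete illustration of the client-side caching point above, a minimal memoizing wrapper around a remote lookup. The names are hypothetical, and the lookup function stands in for the real RMI call:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative sketch: memoize remote lookups on the client so repeated
// reads of the same key cost one round trip instead of many.
public class ClientCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> remoteLookup; // stands in for the RMI call
    public int remoteCalls = 0;                // exposed only for demonstration

    public ClientCache(Function<K, V> remoteLookup) {
        this.remoteLookup = remoteLookup;
    }

    // First access per key goes remote; subsequent accesses are local.
    public V get(K key) {
        return cache.computeIfAbsent(key, k -> {
            remoteCalls++;
            return remoteLookup.apply(k);
        });
    }
}
```

A real cache would also need an invalidation or expiry policy; this sketch only shows the round-trip saving.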

Have you measured the relative time consumption of the different parts of the operation? I wouldn't touch anything until you've measured how long the individual processes take.

I suspect the latency issue is the key. But I would measure and determine this first, before looking at any solutions.

Perhaps the best thing to do would be to get a better understanding of how the infrastructure works:

  1. Why are the files so large?
  2. Must the entire file be sent, or could you get by with sending just the parts required for processing?
  3. What are these network calls? Are they all necessary? If so, can the calls be batched into one call?

I'm not sure, but it looks like you have some data in the 50 MB file that you want to validate/process and store in the database. Is that correct?

Why can't the client simply pass the file to the server, and let the server do the validation/processing and store the results in the database? That way there are no network calls except for passing the file data to the server.

Another possibility is to combine multiple operations into one call, i.e. the session facade pattern.
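A minimal sketch of a session facade, with illustrative names: the client makes one call, and the facade fans out to the fine-grained services locally, on the server side, where those calls are cheap.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the session facade pattern: one batched
// request replaces many fine-grained remote calls.
public class ProcessingFacade {
    public int fineGrainedCalls = 0; // exposed only for demonstration

    // Stand-ins for the existing fine-grained server services.
    private void validate(String record) { fineGrainedCalls++; }
    private void persist(String record)  { fineGrainedCalls++; }

    // One network round trip instead of 2 * records.size() of them.
    public List<String> processAll(List<String> records) {
        List<String> results = new ArrayList<>();
        for (String r : records) {
            validate(r);
            persist(r);
            results.add("OK:" + r);
        }
        return results;
    }
}
```

The fine-grained services stay reusable, exactly as in the existing design; only the remote boundary moves to the facade.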

"High Performance Web Sites: Essential Knowledge for Front-End Engineers" by Steve Souders is a really good book on this. See here .


I would suggest looking at making as much of this process asynchronous as possible. Do the clients need a real-time response to the processing? If not, you could move to a MOM (Message-Oriented Middleware) concept and place JMS queues/topics between the client and the server.

The client could post records to be processed onto a queue that the server is monitoring. Once processing is complete, the server would place the results on a reply-to queue that the client is listening on. This would force a refactor, but assuming your code is loosely coupled, it should not be terribly invasive.
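As a local stand-in for that request/reply-to pattern, here is a sketch using in-memory BlockingQueues instead of real JMS destinations, with the "server" step running in the same JVM purely for illustration:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative stand-in for the MOM pattern: with real JMS these queues
// would be broker-managed destinations and the server would run remotely.
public class AsyncProcessing {
    static final BlockingQueue<String> requests = new ArrayBlockingQueue<>(10);
    static final BlockingQueue<String> replies  = new ArrayBlockingQueue<>(10);

    // Server side: take a record off the request queue, process it,
    // and put the result on the reply-to queue.
    static void serverStep() throws InterruptedException {
        String record = requests.take();
        replies.put("processed:" + record);
    }

    // Client side: post the record, then pick up the reply later.
    public static String submitAndAwait(String record) {
        try {
            requests.put(record);
            serverStep();        // in reality this runs in the server JVM
            return replies.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }
}
```

The point is the decoupling: the client no longer blocks on a synchronous RMI call per record, so WAN latency stops dominating each operation.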

Either refactor the code in a major way or rewrite the entire app. If your manager says that will take too long, add a progress bar so the user thinks it goes faster.

An operation requiring 1,000 requests is a lot! That will kill any server. This is typically a problem that cannot be solved by adding more hardware or more bandwidth; it is a design problem. In any case, I would install a profiler to see the server's status (memory consumption, CPU, etc.). Take a look at Lambda Probe ( http://www.lambdaprobe.org ).

RMI is a very expensive protocol. I would look at replacing it.

If you can not change the protocol, at least change the payloads:

1) This sounds like a closed system. If so, there is no need to use the generalized Serializable protocol. Switch to Externalizable and write only the minimal data required to capture object state for wire transfer.
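A minimal Externalizable sketch, with an illustrative class and fields: writeExternal/readExternal emit exactly the two fields needed, rather than the per-field reflective metadata that default serialization writes.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectInputStream;
import java.io.ObjectOutput;
import java.io.ObjectOutputStream;

// Illustrative sketch: hand-written wire format for a closed system.
public class RecordState implements Externalizable {
    private long id;
    private String name;

    public RecordState() {}  // public no-arg constructor required by Externalizable
    public RecordState(long id, String name) { this.id = id; this.name = name; }

    public long getId() { return id; }
    public String getName() { return name; }

    @Override public void writeExternal(ObjectOutput out) throws IOException {
        out.writeLong(id);   // minimal wire format: just two fields
        out.writeUTF(name);
    }

    @Override public void readExternal(ObjectInput in) throws IOException {
        id = in.readLong();  // must read in the same order as written
        name = in.readUTF();
    }

    // Serialize-deserialize round trip, used here for demonstration.
    public static RecordState roundTrip(RecordState r) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(buf)) { oos.writeObject(r); }
            try (ObjectInputStream ois =
                    new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()))) {
                return (RecordState) ois.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```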

2) Compress the data that is sent from server to client. Beyond object state in (1), you should also reduce the data blobs ("files") that you are moving around the net.
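Compression can be sketched with the standard java.util.zip GZIP streams (the helper names here are illustrative). Text-like business data usually shrinks considerably; already-compressed data will not.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Illustrative sketch: gzip a payload before it crosses the WAN and
// inflate it on the other side.
public class WireCompression {

    public static byte[] compress(byte[] raw) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(buf)) { gz.write(raw); }
            return buf.toByteArray();
        } catch (IOException e) { throw new IllegalStateException(e); }
    }

    public static byte[] decompress(byte[] packed) {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(packed))) {
            return gz.readAllBytes();
        } catch (IOException e) { throw new IllegalStateException(e); }
    }
}
```

For the 50 MB files in question, compressing on the sending side trades a little CPU for a large reduction in WAN transfer time, assuming the file contents are compressible.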

Beyond that (and this obviously depends on the system), you should explore forward caching nodes. In many applications there is a pattern to domain entity access. If there are geographic access patterns that can be exploited, you should be able to trivially create new proxy nodes that are clients of the remote server and then act as an (RMI) server to nearby (RMI) clients.
