Safely allow multiple clients to share a single resource

I am creating a program where I need to talk to multiple devices (on the order of 10-20), over different physical connections (serial port, UDP). These devices only reply to requests I send to them, and each of them processes only one request at a time before accepting a new one. The application might request a value update from each of them every second.

As of now, I have an interface IRequestReplyDevice

public interface IRequestReplyDevice
{
    T SendMessage<T>(IMessage message) where T : IMessage;
}

Where SendMessage is a blocking call that returns the response received from the device. In each implementation of this interface, e.g. SerialPortDevice : IRequestReplyDevice, I have a lock in SendMessage that ensures no new message is sent until the response to the previous request has been received and returned to the caller.
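A minimal sketch of what such a lock-guarded implementation might look like. Everything beyond the interface itself is an assumption: the SerialPort field and the WriteMessage/ReadResponse helpers are hypothetical placeholders, not code from the question.

```csharp
using System.IO.Ports;

// Sketch only: WriteMessage and ReadResponse stand in for whatever
// protocol framing/parsing the real implementation uses.
public class SerialPortDevice : IRequestReplyDevice
{
    private readonly object _sync = new object();
    private readonly SerialPort _port;

    public SerialPortDevice(SerialPort port) => _port = port;

    public T SendMessage<T>(IMessage message) where T : IMessage
    {
        // Only one request/response exchange may be in flight at a time;
        // concurrent callers block here until the previous reply arrives.
        lock (_sync)
        {
            WriteMessage(message);
            return ReadResponse<T>();
        }
    }

    private void WriteMessage(IMessage message) { /* frame and write bytes */ }
    private T ReadResponse<T>() where T : IMessage { /* read and parse reply */ throw new NotImplementedException(); }
}
```

Note that the lock serializes callers per device instance, which matches the "one request at a time" constraint of the hardware.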

I want to build a Web API on top of this, which may lead to several clients requesting something from the same device at the same time.

Is this approach robust or even sane? Would you approach it differently?

Based on the above, my initial thought would be to remove the blocking calls and instead decouple the request and response chain with queues, if possible.

The flow would be similar to the below

Request -> RequestQueue -> RequestHandler -> ResponseQueue -> ResponseHandler

The rationale behind this recommendation is that blocking calls combined with the inherent concurrency of multiple users will lead to a lot of complex locking, create an inherent bottleneck at the lock, and may not scale well.
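One way to realize the queue-based flow above in C# is a per-device worker that drains a request queue and completes a TaskCompletionSource for each response, so Web API callers await a Task instead of blocking on a lock. This is an illustrative sketch under assumptions: the QueuedDevice and PendingRequest names and the single-worker design are mine, not from the question.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Sketch: a per-device queue plus a single worker thread serializes
// access to the device without callers holding a lock themselves.
public sealed class QueuedDevice
{
    private sealed record PendingRequest(
        IMessage Message,
        TaskCompletionSource<IMessage> Completion);

    private readonly BlockingCollection<PendingRequest> _queue = new();
    private readonly IRequestReplyDevice _device;

    public QueuedDevice(IRequestReplyDevice device)
    {
        _device = device;
        // Single background worker: requests are processed strictly one
        // at a time, in arrival order, matching the device's constraint.
        new Thread(ProcessQueue) { IsBackground = true }.Start();
    }

    public Task<IMessage> EnqueueAsync(IMessage message)
    {
        var pending = new PendingRequest(
            message,
            new TaskCompletionSource<IMessage>(TaskCreationOptions.RunContinuationsAsynchronously));
        _queue.Add(pending);
        return pending.Completion.Task; // caller awaits instead of blocking
    }

    private void ProcessQueue()
    {
        foreach (var pending in _queue.GetConsumingEnumerable())
        {
            try
            {
                pending.Completion.SetResult(
                    _device.SendMessage<IMessage>(pending.Message));
            }
            catch (Exception ex)
            {
                // Propagate device/transport failures to the awaiting caller.
                pending.Completion.SetException(ex);
            }
        }
    }
}
```

With this shape, the "ResponseQueue / ResponseHandler" stage collapses into the completed Task: each caller's continuation is its own response handler, which keeps the moving parts down compared with two explicit queues.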

The issue with this solution, however, is that it involves a good amount of extra work and more moving parts. That leads to the real question: what behaviour do you actually need from the system? Designing a system that requires high throughput (1 MB/s? 1 GB/s?) and low latency (under 100 ms? under 3 ms?) while handling concurrency well can get very complicated very quickly.

If a simple block/lock design can meet the system's latency, throughput, and scale requirements, then by all means use it. If you have load-tested a lock-based architecture and it doesn't perform adequately, or if you reasonably expect the system to grow to the point where it will fail to meet requirements in the near future, then I would definitely recommend looking at queues.
