Safely allow multiple clients to share a single resource
I am creating a program where I need to talk to multiple devices (on the order of 10 to 20 devices), over different physical connections (serial port, UDP). These devices only reply to requests I send them, and each device processes only one request at a time before accepting a new one. The application might request a value update from each of them every second.
As of now, I have an interface IRequestReplyDevice:
public interface IRequestReplyDevice
{
T SendMessage<T>(IMessage message) where T : IMessage;
}
Where SendMessage is a blocking call that returns the response received from the device. In each implementation of this interface, e.g. SerialPortDevice : IRequestReplyDevice, I have a lock in SendMessage that ensures no new message is sent until the response to the previous request has been received and returned to the caller.
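As a sketch of that lock-per-device pattern (the PingRequest/PingReply types and the in-memory stub transport are my own assumptions, standing in for the real serial I/O the question describes), the implementation might look something like:

```csharp
using System;

public interface IMessage { }

public interface IRequestReplyDevice
{
    T SendMessage<T>(IMessage message) where T : IMessage;
}

// Hypothetical request/reply pair, for illustration only.
public class PingRequest : IMessage { }
public class PingReply : IMessage { public int Sequence; }

// Sketch of the locking described above. A real class would write to a
// SerialPort and block reading the reply frame; here the transport is
// replaced by an in-memory stub so the pattern is runnable.
public class SerialPortDevice : IRequestReplyDevice
{
    private readonly object _gate = new object();
    private int _sequence;

    public T SendMessage<T>(IMessage message) where T : IMessage
    {
        lock (_gate) // at most one request/response exchange in flight
        {
            // Real code would write `message` to the port here, then
            // block until the matching reply has been read back.
            var reply = new PingReply { Sequence = ++_sequence };
            return (T)(object)reply; // throws if T is not PingReply
        }
    }
}
```

The lock makes concurrent callers queue up implicitly: each caller's thread is blocked for the full duration of the device round trip.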
I want to build a Web API on top of this, which may lead to several clients requesting something from the same device at the same time.
Is this approach robust, or even sane? Would you approach it differently?
Based on the above, my initial thought would be to remove the blocking calls and instead decouple the request and response chain with queues, if possible. The flow would be similar to the below:

Request -> RequestQueue -> RequestHandler -> ResponseQueue -> ResponseHandler
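The flow above could be sketched as follows, collapsed to a single worker per device. All names here are assumptions, and the string-based transport stands in for real IMessage traffic; a production version would also need correlation IDs, error propagation, and shutdown handling:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Sketch of a queue-decoupled device: callers enqueue work items, and a
// single worker drains the queue in order, so the device itself needs no
// lock -- serialization falls out of having exactly one consumer.
public class QueuedDevice
{
    private record WorkItem(string Request, TaskCompletionSource<string> Reply);

    private readonly BlockingCollection<WorkItem> _requests = new();

    public QueuedDevice(Func<string, string> transport)
    {
        // Single consumer per device; `transport` simulates the blocking
        // request/response exchange on the wire.
        Task.Run(() =>
        {
            foreach (var item in _requests.GetConsumingEnumerable())
                item.Reply.SetResult(transport(item.Request));
        });
    }

    public Task<string> SendAsync(string request)
    {
        var tcs = new TaskCompletionSource<string>(
            TaskCreationOptions.RunContinuationsAsynchronously);
        _requests.Add(new WorkItem(request, tcs));
        return tcs.Task;
    }
}
```

Note that callers are no longer blocked on a lock; they await a Task, which suits a Web API front end better than tying up request threads.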
The rationale behind this recommendation is that blocking calls combined with the inherent concurrency of multiple users will lead to a lot of complex locking, will create an inherent bottleneck at the lock, and may not scale well.
The issue with this solution, however, is that it involves a good amount of extra work and moving parts. This leads to the real question: what behaviour do you actually need from the system? Designing a system that requires high throughput (1 MB/s? 1 GB/s?) and low latency (under 100 ms? under 3 ms?) while handling concurrency well can get very complicated very quickly.
If a simple block/lock design can meet the system's latency, throughput, and scale requirements, then by all means use it. If you have tested the performance of a lock-based architecture under load and it doesn't perform adequately, or if you reasonably expect the system to grow to the point where it will fail to meet requirements in the near future, then I would definitely recommend looking at using queues.
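A minimal sketch of the kind of load test suggested above: hammer the lock-protected call from many concurrent clients and measure wall time. The 1 ms simulated device exchange is an assumption; in a real test you would call the actual SendMessage against a device or realistic stub:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

// Because the lock serializes all exchanges, total time grows roughly as
// clients * requestsPerClient * exchange time -- the bottleneck the answer
// warns about. Measuring it under a realistic load tells you whether the
// simple design is good enough.
public static class LockLoadTest
{
    private static readonly object Gate = new object();

    public static TimeSpan Run(int clients, int requestsPerClient)
    {
        var sw = Stopwatch.StartNew();
        var tasks = new Task[clients];
        for (int i = 0; i < clients; i++)
        {
            tasks[i] = Task.Run(() =>
            {
                for (int r = 0; r < requestsPerClient; r++)
                    lock (Gate)
                        Thread.Sleep(1); // simulated 1 ms device round trip
            });
        }
        Task.WaitAll(tasks);
        sw.Stop();
        return sw.Elapsed;
    }
}
```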