
Servers' architecture for multiple Mirth Connect clients

Question: What is the best server architecture for multiple Mirth Connect installations for different clients?

Detailed Problem: We have a client that is sending HL7 messages as well as other data in CSV files. We have used Mirth Connect to process these data into our systems (using around 7 channels in Mirth Connect). The Mirth Connect installation and its internal database are on the same server. However, in the near future we are adding many clients (around 10 this year), and we need to come up with a scalable solution that can handle the load. We are planning to use a single (powerful) central server for the internal database of all the Mirth Connect installations (a PostgreSQL DB with a different schema for each Mirth Connect instance), and one Mirth Connect instance per client, each on a separate (smaller) server connected to the central database server.
Is this a good approach?

Thanks in advance.

Certainly what you've described is a viable solution. If all servers connect to the same internal DB, then all channels will be deployed on all server instances. But if you keep the schemata (always feel weird using that word) separate for each instance, then you sacrifice maintainability, because now you have multiple MC instances to log in to and manage.
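For reference, the separate-schema setup described in the question is wired up in each instance's `conf/mirth.properties`. A minimal sketch for Client A's instance, assuming the shared database host, database name, schema, and credentials shown here are placeholders (the `currentSchema` parameter is standard PostgreSQL JDBC and pins the instance to its own schema):

```properties
# conf/mirth.properties for the Client A instance (sketch; all names are placeholders)
database = postgres
database.url = jdbc:postgresql://dbhost:5432/mirthdb?currentSchema=client_a
database.username = mirth_client_a
database.password = changeme
```

Each client's instance would get the same fragment with its own schema and credentials.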

It's still possible to do what you want on a single DB... for example, channels A1..An should only be deployed on the instance for Client A, channels B1..Bn should only be deployed on the instance for Client B, and so on. One thing you could do is have a global deploy script that checks your current server ID and looks up which channels are "allowed" for that server in your own custom lookup table. Then, if a channel isn't allowed, throw an exception so it won't be deployed.
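The deploy-script gate described above might look roughly like this. This is a standalone sketch: the custom lookup table is mocked as a plain object so the logic can be shown on its own, whereas in a real Mirth deploy script you would query your lookup table over a database connection and read the server ID from the server's configuration; none of the names below are Mirth APIs.

```javascript
// Sketch of the per-channel deploy gate. In Mirth, this logic would live in a
// channel's deploy script; here the "allowed channels" lookup table is a plain
// object standing in for your custom DB table.
function assertChannelAllowed(serverId, channelName, allowedChannels) {
    // allowedChannels maps a server ID to the list of channel names it may run
    var allowed = allowedChannels[serverId] || [];
    if (allowed.indexOf(channelName) === -1) {
        // Throwing makes the deploy fail, so the channel never starts here
        throw new Error('Channel "' + channelName +
            '" is not allowed on server ' + serverId);
    }
}

// Example lookup table: Client A's instance runs only the A* channels
var allowedChannels = {
    'server-client-a': ['A1', 'A2', 'A3'],
    'server-client-b': ['B1', 'B2', 'B3']
};

assertChannelAllowed('server-client-a', 'A1', allowedChannels); // deploy proceeds
try {
    assertChannelAllowed('server-client-a', 'B1', allowedChannels);
} catch (e) {
    // B1 is blocked on Client A's instance
}
```

The same function can be shared from a global script so every channel's deploy script is a one-liner calling it.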

There's also a hybrid approach. Still have a separate instance/DB per client, but also have multiple instances for each client. You can do this for primary/backup failover, or for active/active load balancing. That way you also get high availability at the MC application level. That's where the Advanced Clustering extension really shines... it's built specifically for horizontally scaling MC instances that share a single DB (which may itself also be horizontally scaled).

As a general note, whenever anyone has issues with performance/throughput, in the vast majority of cases the bottleneck is not MC per se, but rather disk I/O write times. So I'd definitely recommend using SSDs for your database storage layer, or at the very least an SSD fast-cache on top of spinning disks.
