
What steps can be taken to optimize tibco JMS to be more performant?

We are running a high throughput system that utilizes tibco-ems JMS to pass large numbers of messages between our main server and our client connections. We've done some statistics and have determined that JMS is causing a lot of latency. How can we make tibco JMS more performant? Are there any resources that give a good discussion on this topic?

Using non-persistent messages is one option if you don't need persistence. Note that even if you do need persistence, sometimes it's better to use non-persistent messages and, in case of a crash, perform a different recovery action (like resending all messages).

This is relevant if:

  • crashes are rare (as the recovery takes time)
  • you can easily detect a crash
  • you can handle duplicate messages (you may not know exactly which messages were delivered before the crash)
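
As a minimal sketch of the non-persistent option, assuming a plain JMS 1.1 client against EMS (the server URL, credentials and queue name below are placeholders, not anything from the original post):

    import javax.jms.*;
    import com.tibco.tibjms.TibjmsConnectionFactory;

    public class NonPersistentSender {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory = new TibjmsConnectionFactory("tcp://localhost:7222");
            Connection connection = factory.createConnection("user", "password");
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(session.createQueue("example.queue"));

                // Non-persistent messages skip the server's disk write: they are
                // lost if the broker crashes, but send latency drops considerably.
                producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

                producer.send(session.createTextMessage("payload"));
            } finally {
                connection.close();
            }
        }
    }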

EMS also provides some mechanisms that are persistent but less bulletproof than classic guaranteed delivery. These include:

  • instead of "exactly once" message delivery you can use "at least once" or "at most once" delivery.
  • you may use the pre-fetch mechanism, which causes the client to fetch messages into memory before your application requests them.
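
To make these two concrete, a hedged sketch: the acknowledgement constant below is the TIBCO client extension as I remember it, so double-check it against your EMS documentation.

    // Continuing from a connection like the one in the earlier sketch.
    // NO_ACKNOWLEDGE is a TIBCO extension that skips client acks entirely,
    // giving roughly "at most once" behaviour in exchange for throughput.
    Session fastSession = connection.createSession(false, com.tibco.tibjms.Tibjms.NO_ACKNOWLEDGE);

Prefetch, by contrast, is set on the destination itself on the server side (for example a prefetch=20 property on the queue's entry in queues.conf, if I recall the syntax correctly), so the client library pulls batches of messages ahead of your receive() calls.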

EMS should not be the bottleneck. I've done testing and we have gotten a shitload of throughput on our server.

You need to try to determine where the bottleneck is. Is the problem in the producer of the message or the consumer? Are messages piling up on the queue?
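
A quick way to see whether messages are piling up is the EMS admin tool; the commands below are from memory, so treat them as a sketch and verify against your EMS admin guide:

    tibemsadmin -server tcp://localhost:7222 -user admin
    show queues           (watch the pending message counts and sizes)
    show queue my.queue   (per-destination detail, including receiver count)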

What type of scenario are you running?

Pub/sub or request-reply? Are temporary queues piling up? Too many temporary queues can cause performance issues (mostly when they linger because you didn't close something properly).
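
For the request-reply case, a hedged sketch of keeping temporary queues under control (it reuses the session and producer names from the earlier sketch):

    // Create the temporary reply queue once, and clean it up explicitly;
    // temporary destinations that linger on the server hurt performance.
    TemporaryQueue replyQueue = session.createTemporaryQueue();
    MessageConsumer replyConsumer = session.createConsumer(replyQueue);
    try {
        TextMessage request = session.createTextMessage("request");
        request.setJMSReplyTo(replyQueue);
        producer.send(request);
        Message reply = replyConsumer.receive(5000);   // wait up to 5 s for a reply
    } finally {
        replyConsumer.close();
        replyQueue.delete();   // remove the temporary destination from the server
    }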

Are you publishing to a topic with durable subscribers? If so, try bridging the topic to a queue and reading from that. Durable subscribers can cause a little hiccup in performance too, since the server needs to track who has copies of all messages.
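
The bridge itself is server configuration rather than client code; something along these lines in the EMS bridges.conf (topic and queue names are placeholders, and the syntax is from memory, so check it against your EMS configuration guide):

    [topic:orders.topic]
      queue=orders.queue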

Ensure that your sending process has one session and makes multiple calls through that session. Don't open a complete session for each operation. Reuse where possible. Do the same for the consumer.
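
A sketch of the reuse pattern (the factory and payloads variables are assumed to exist; the names are illustrative):

    // One connection, one session, one producer, reused for every send.
    Connection connection = factory.createConnection("user", "password");
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    MessageProducer producer = session.createProducer(session.createQueue("example.queue"));
    try {
        for (String payload : payloads) {                // payloads: your outgoing messages
            producer.send(session.createTextMessage(payload));
        }
    } finally {
        connection.close();   // closing the connection also closes its sessions and producers
    }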

Make sure you CLOSE when you are done. EMS doesn't clean things up for you. So if you make a connection and just close your app, the connection is still there sucking up resources.
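
One way to make that close happen even on an abrupt shutdown (a sketch, assuming the connection variable from the examples above):

    // Release the connection even if the JVM is stopped abruptly; otherwise
    // EMS keeps the connection and its resources alive until it times out.
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        try {
            connection.close();
        } catch (JMSException ignored) {
            // nothing useful to do this late in shutdown
        }
    }));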

Review your tolerance for lost messages in the event of a crash. If you are doing client acknowledge and it doesn't matter if you crash while processing a message, then switch to auto. Also, I believe that if you are using TEMS (Tibco EMS for WCF) there's a problem with session acknowledge, where a message is only acknowledged once the whole message has been processed; we switched from client ACK to the dups-OK mode and it worked better.
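
For reference, the acknowledgement mode is picked when the session is created; a sketch of the standard JMS options (again assuming the connection from the earlier sketches):

    // CLIENT_ACKNOWLEDGE: you call message.acknowledge() yourself; safest, slowest.
    Session clientAck = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);

    // AUTO_ACKNOWLEDGE: acknowledged as soon as receive()/onMessage returns; fine
    // when losing the message being processed during a crash is acceptable.
    Session autoAck = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

    // DUPS_OK_ACKNOWLEDGE: lazy acknowledgements, so a message may be redelivered
    // after a failure; the cheapest option if your consumer handles duplicates.
    Session dupsOk = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);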
