
IBM MQSeries connection pooling with Tomcat

We are trying to set up a JMS connection from Tomcat to IBM MQSeries, and we want to use connection pooling.

We have followed the link below and applied the suggested solution:

WebSphere MQ connection pooling with Tomcat

I don't see how we can manage multiple JMS connections with the suggested approach. In our tests we noticed that CachingConnectionFactory manages multiple JMS sessions, not multiple JMS connections.

The link below explains that CachingConnectionFactory does not manage multiple JMS connections, only JMS sessions:

https://jira.spring.io/browse/SPR-13586
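
To illustrate, here is a minimal sketch of the behaviour we observe in our tests (the target connection factory lookup is just a placeholder stub for our JNDI resource):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Session;

import org.springframework.jms.connection.CachingConnectionFactory;

public class CachingDemo {

    public static void main(String[] args) throws Exception {
        // Placeholder: in our case the target factory is looked up from JNDI
        // (jms/AN8.NOTI.MOBILE.01); here it is just a stub.
        ConnectionFactory target = lookupTargetConnectionFactory();

        CachingConnectionFactory ccf = new CachingConnectionFactory(target);
        ccf.setSessionCacheSize(10);

        // Both calls return a proxy around the SAME shared underlying JMS connection...
        Connection c1 = ccf.createConnection();
        Connection c2 = ccf.createConnection();

        // ...while the sessions are what actually get cached and reused.
        Session s1 = c1.createSession(true, Session.AUTO_ACKNOWLEDGE);
        s1.close(); // returned to the session cache, not physically closed

        // Closing the connection proxies does not close the shared physical connection.
        c1.close();
        c2.close();
    }

    private static ConnectionFactory lookupTargetConnectionFactory() {
        // placeholder for the JNDI lookup
        throw new UnsupportedOperationException("stub");
    }
}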

I am also sharing both files: context.xml (the datasource/resource definitions) and services.xml (the Spring services file).

context.xml

<Resource name="jms/AN8.NOTI.MOBILE.01" auth="Container" type="org.springframework.jms.connection.CachingConnectionFactory" 
    factory="com.cl.fwk.jms.utilities.RSFCachingMQQueueConnectionFactoryFactory" 
    description="JMS Queue Connection Factory for sending messages" HOST="**********" 
    PORT="****" CHAN="******" TRAN="*" QMGR="***" />

<Resource name="jms/MQAN8.NOTI.MOBILE.01" auth="Container"
    type="com.ibm.mq.jms.MQQueue" factory="com.ibm.mq.jms.MQQueueFactory"
    description="JMS Queue for receiving messages from Dialog" QU="********" />

services.xml

<!-- JNDI resource for the MQSeries connection -->
<bean id="xxxx.jmsRefConnectionFactory.mqseries" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/jms/AN8.NOTI.MOBILE.01" />
    <property name="resourceRef" value="true" />
</bean>

<!-- JNDI resource for the MQSeries broker queue -->
<bean id="xxxx.jmsRefQueue.mqseries" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/jms/MQAN8.NOTI.MOBILE.01" />
    <property name="resourceRef" value="true" />
</bean>
<!-- A cached connection to wrap the MQSeries connection -->
<bean id="xxxx.jmsConnectionFactory.mqseries" class="org.springframework.jms.connection.CachingConnectionFactory">
    <!-- <constructor-arg ref="xxxx.jmsRefConnectionFactory.mqseries" /> -->
    <property name="targetConnectionFactory" ref="xxxx.jmsRefConnectionFactory.mqseries"/>
    <property name="sessionCacheSize" value="10" />
</bean>

<bean id="xxxx.jmsDestinationResolver.amq" class="org.springframework.jms.support.destination.DynamicDestinationResolver" />

<bean id="xxxx.jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
    <property name="connectionFactory" ref="xxxx.jmsConnectionFactory.mqseries" />
    <property name="defaultDestination" ref="xxxx.jmsRefQueue.mqseries" />
    <property name="destinationResolver" ref="xxxx.jmsDestinationResolver.amq" />
    <property name="sessionTransacted" value="true" />
    <property name="sessionAcknowledgeMode" value="#{T(javax.jms.Session).AUTO_ACKNOWLEDGE}" />
</bean>
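
For completeness, this is roughly how we send with the template above (a simplified sketch; the context file location and the payload are placeholders):

import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.jms.core.JmsTemplate;

public class SendExample {

    public static void main(String[] args) {
        // Load the Spring context containing services.xml (path is a placeholder).
        ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext("services.xml");
        try {
            JmsTemplate template = ctx.getBean("xxxx.jmsTemplate", JmsTemplate.class);

            // Sends to the defaultDestination (jms/MQAN8.NOTI.MOBILE.01) through the
            // CachingConnectionFactory wrapper configured above.
            template.convertAndSend("test message");
        } finally {
            ctx.close();
        }
    }
}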

Best regards.

Summary

You need to either upgrade to a later version of the MQ classes for JMS or have your MQ admin increase the MAXINST/MAXINSTC settings to allow more channel instances.

Note that the version you are using has been out of support since 2012, so I would recommend upgrading.

Product        Version  Release Date  End of Service
============   =======  ============  ==============
Websphere MQ   6.0      2005-06-24    2012-09-30

BACKGROUND INFO FROM COMMENTS

Based on what you provided in the comments, the following information is known about your current setup:

  • IBM MQ Server version: 8.0.0.? (specific maintenance level unknown)
  • IBM MQ jar names: mq-7.0.0.jar and mqjms-7.0.0.jar
  • IBM MQ jar version: 6.0.2.11
  • SVRCONN Channel settings: SHARECNV(10) MAXINST(9) MAXINSTC(9)

Note that even though the jar files have names that contain the string 7.0.0, they are actually from IBM MQ v6.0.2.11 (technically it was called Websphere MQ at that time).


The other StackOverflow question "WebSphere MQ connection pooling with Tomcat" that you point to references the fact that IBM MQ prior to v7.0 (for example v6.0) provided connection pooling, that this pooling was removed in MQ v7.0, and asks how to get similar functionality at v7.0 and later.


The v6 connection pooling was the default in MQ v6.0 JMS, as described on page 504 of the "WebSphere MQ Using Java Version 6.0" manual:

setUseConnectionPooling

public void setUseConnectionPooling(boolean usePooling);

Chooses whether to use connection pooling. If you set this to true, JMS enables connection pooling for the lifetime of any connections created through the ConnectionFactory. This also affects connections created with usePooling set to false; to disable connection pooling throughout a JVM, ensure that all ConnectionFactories used within the JVM have usePooling set to false.
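
As an illustration only, here is a minimal sketch of how that v6-era flag was set on a connection factory (the host, port, queue manager, and channel values are placeholders, and as covered next the call has no effect from v7 onwards):

import com.ibm.mq.jms.MQQueueConnectionFactory;

public class V6PoolingSketch {

    public static void main(String[] args) throws Exception {
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setHostName("mq.example.com");   // placeholder
        cf.setPort(1414);                   // placeholder
        cf.setQueueManager("QMGR");         // placeholder
        cf.setChannel("APP.SVRCONN");       // placeholder

        // MQ v6.0 classes for JMS: enables the built-in connection pooling
        // described in the manual quote above. From MQ v7.0 onwards the method
        // still exists but is a no-op (see the deprecation note below).
        cf.setUseConnectionPooling(true);
    }
}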


The fact that connection pooling was removed in MQ v7 is documented in the IBM MQ v8.0 Knowledge Center page Developing applications>Developing JMS and Java Platform, Enterprise Edition applications>Using IBM MQ classes for JMS>IBM MQ classes for JMS>Class MQConnectionFactory:

setUseConnectionPooling

public void setUseConnectionPooling(boolean usePooling)

Deprecated. JMS does not use connection pooling anymore. Any connection pooling should be done using the facilities provided by the App Server. Set the use of ConnectionPooling in earlier versions of the WebSphere MQ classes for JMS. This method is retained for compatibility with older MQJMS applications, but, because this Connection Pooling functionality has been removed from version 7, setting this property will have no effect.


To explain the behavior you see today, you also need to know about the MQ client shared conversations behavior that was added in MQ v7.0. You can read about this in the IBM MQ v8.0 Knowledge Center page Migrating and upgrading>Introduction to IBM MQ migration>Coexistence, compatibility, and interoperability>MQI client: Default behavior of client-connection and server-connection channels. Quoting a few specifics below:

In Version 7.0 the default settings for client and server connection channels changed to use shared conversations. This change affects the behavior of heartbeats and channel exits, and can have an impact on performance.

Before Version 7.0, each conversation was allocated to a different channel instance. From Version 7.0, the default for client and server connections is to share an MQI channel. You use the SHARECNV (sharing conversations) parameter to specify the maximum number of conversations that can be shared over a particular TCP/IP client channel instance. The possible values are as follows:

SHARECNV(0)

  • This value specifies no sharing of conversations over a TCP/IP socket. The channel instance behaves exactly as if it was a Version 6.0 server or client connection channel, and you do not get the extra features such as bi-directional heartbeats that are available when you set SHARECNV to 1 or greater. Only use a value of 0 if you have existing client applications that do not run correctly when you set SHARECNV to 1 or greater.


Putting this all together, you have a SVRCONN channel with the following settings:

  • SHARECNV(10)
  • MAXINST(9)
  • MAXINSTC(9)

These settings, when used with an MQ v7.0 or later client, would mean that you could have 9 channel instances (TCP connections) between the client and queue manager, and each of those could carry 10 shared conversations, for a maximum of 90 conversations.

Because you are using the MQ v6.0 classes for JMS, the channel is operating as if the settings were:

  • SHARECNV(0)
  • MAXINST(9)
  • MAXINSTC(9)

This means you can have 9 channel instances (TCP connections) between the client and the queue manager, and each of those supports only a single conversation.
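
To make the channel-capacity arithmetic explicit, here is a small sketch using the values from your channel settings:

public class ChannelCapacity {

    public static void main(String[] args) {
        int maxInst = 9;      // MAXINST: maximum channel instances (TCP connections)
        int shareCnvV7 = 10;  // conversations per instance with an MQ v7.0+ client (SHARECNV(10))
        int shareCnvV6 = 1;   // effective conversations per instance when the client
                              // behaves like v6 (SHARECNV(0) semantics)

        // MQ v7.0+ client: 9 channel instances x 10 shared conversations = 90
        System.out.println("v7+ client max conversations: " + (maxInst * shareCnvV7));

        // MQ v6.0 classes for JMS: 9 channel instances x 1 conversation each = 9
        System.out.println("v6 client max conversations:  " + (maxInst * shareCnvV6));
    }
}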

With the MQ v6.0 classes for JMS, each underlying JMS connection, and every JMS session created on top of that connection, allocates its own channel instance to the queue manager.


To understand more about how connections and sessions interact with each other and with the SHARECNV setting, see the IBM MQ v8.0 Knowledge Center page Developing applications>Developing JMS and Java Platform, Enterprise Edition applications>Using IBM MQ classes for JMS>Writing IBM MQ classes for JMS applications>Accessing IBM MQ features>Sharing a TCP/IP connection in IBM MQ classes for JMS:

Every JMS connection and JMS session that is created by a JMS application creates its own conversation with the queue manager.

In your case, because you are using the MQ v6.0 classes for JMS, each "conversation" is an MQ channel instance (TCP connection) to the queue manager.
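
As a rough sketch of what that means in application code (the connection factory parameter stands in for your JNDI-looked-up MQ factory, and the counts assume your channel settings):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Session;

public class ConversationCount {

    // The factory passed in stands in for the MQ connection factory from JNDI.
    static void illustrate(ConnectionFactory cf) throws Exception {
        Connection connection = cf.createConnection();                          // 1 conversation
        Session s1 = connection.createSession(true, Session.AUTO_ACKNOWLEDGE);  // +1 conversation
        Session s2 = connection.createSession(true, Session.AUTO_ACKNOWLEDGE);  // +1 conversation
        // Total: 3 conversations.
        //
        // With the MQ v6.0 classes for JMS (effectively SHARECNV(0)), each of those
        // 3 conversations is its own channel instance (TCP connection), so this code
        // alone uses 3 of the 9 instances allowed by MAXINST(9)/MAXINSTC(9).
        //
        // With the MQ v7.0+ classes for JMS and SHARECNV(10), all 3 conversations
        // can share a single channel instance (one TCP connection).
        s2.close();
        s1.close();
        connection.close();
    }
}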


I would recommend you move to a current version of the IBM MQ classes for JMS; this would then allow you to have up to 90 shared conversations. If contention is an issue, you would need to have your MQ admin increase the MAXINST/MAXINSTC settings and decrease SHARECNV.

For the IBM MQ classes for JMS, you can find the list of required files on the IBM MQ v9 Knowledge Center page "What is installed for IBM MQ classes for JMS":

Relocatable JAR files
Within an enterprise, the following files can be moved to systems that need to run IBM MQ classes for JMS:

  • com.ibm.mq.allclient.jar
  • com.ibm.mq.traceControl.jar
  • jms.jar
  • fscontext.jar
  • providerutil.jar
  • The Bouncy Castle security provider and CMS support jars

The fscontext.jar and providerutil.jar files are required if your application performs JNDI lookups using a file system context.

The Bouncy Castle security provider and CMS support jar files are required. For more information, see Support for non-IBM JREs.

Note that only com.ibm.mq.allclient.jar, jms.jar, and the Bouncy Castle security provider and CMS support jars are included in the Redistributable client, but all of them are included in the Java All client. You are also running 9.0.0.0, and I would recommend you go to 9.0.0.5. You can find both the Redistributable and Java All clients on Fix Central.
