Subject: [37/51] [partial] qpid-site git commit: Update site for release 6.1.2 of Qpid for Java
From: orudyy@apache.org
To: commits@qpid.apache.org
Date: Wed, 22 Mar 2017 11:12:19 -0000

http://git-wip-us.apache.org/repos/asf/qpid-site/blob/87eb27cf/content/releases/qpid-java-6.1.2/java-broker/book/Java-Broker-Runtime-Flow-To-Disk.html

9.6. Flow to Disk

Flow to disk limits the amount of direct and heap memory that can be occupied by messages. Once this limit is reached, any new transient messages and all existing transient messages will be transferred to disk. Newly arriving transient messages will continue to go to disk until the cumulative size of all messages falls below the limit once again.

By default the Broker makes 40% of the maximum direct memory available for messages. This memory is divided between all the queues across all virtual hosts defined on the Broker, with each queue's share calculated in proportion to its current size. These calculations are refreshed periodically by the housekeeping cycle.

For example, suppose there are two queues containing 75MB and 100MB of messages respectively, and the Broker has 1GB of direct memory with the default of 40% available for messages. The first queue will have a target size of roughly 170MB and the second roughly 230MB. Once 400MB is taken by messages, messages will begin to flow to disk. Messages will cease to flow to disk when their cumulative size falls beneath 400MB again.
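The per-queue targets in the example above follow from a simple proportional split of the message memory allowance. The sketch below is purely illustrative (the class and method names are invented for this example) and simply restates that arithmetic; the Broker performs an equivalent calculation during its housekeeping cycle.

import java.util.LinkedHashMap;
import java.util.Map;

public class FlowToDiskTargets
{
    /** Split the message-memory allowance between queues in proportion to their current sizes. */
    static Map<String, Long> targetSizes(Map<String, Long> queueSizes, long maxDirectBytes, double fraction)
    {
        long allowance = (long) (maxDirectBytes * fraction);   // e.g. 40% of direct memory
        long total = 0;
        for (long size : queueSizes.values())
        {
            total += size;
        }
        Map<String, Long> targets = new LinkedHashMap<>();
        for (Map.Entry<String, Long> entry : queueSizes.entrySet())
        {
            long target = total == 0 ? allowance / queueSizes.size()
                                     : allowance * entry.getValue() / total;
            targets.put(entry.getKey(), target);
        }
        return targets;
    }

    public static void main(String[] args)
    {
        Map<String, Long> queues = new LinkedHashMap<>();
        queues.put("queue1", 75_000_000L);    // ~75MB of messages
        queues.put("queue2", 100_000_000L);   // ~100MB of messages
        // 1GB direct memory with 40% available: targets of ~170MB and ~230MB, as in the example above
        System.out.println(targetSizes(queues, 1_000_000_000L, 0.4));
    }
}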

Flow to disk is configured by the Broker context variable broker.flowToDiskThreshold. It is expressed as a size in bytes and defaults to 40% of the JVM maximum direct memory.

Log message BRK-1014 is written when the feature activates. Once the cumulative size of all messages decreases below the threshold, message BRK-1015 is written to show that the feature is no longer active.

http://git-wip-us.apache.org/repos/asf/qpid-site/blob/87eb27cf/content/releases/qpid-java-6.1.2/java-broker/book/Java-Broker-Runtime-Handling-Undeliverable-Messages.html

9.4. Handling Undeliverable Messages

9.4.1. Introduction

Messages that cannot be delivered successfully to a consumer (for instance, because the client is using a transacted session and rolls back the transaction) can be made available on the queue again and then subsequently be redelivered, depending on the precise session acknowledgement mode and messaging model used by the application. This is normally desirable behaviour that contributes to the ability of a system to withstand unexpected errors. However, it leaves open the possibility for a message to be repeatedly redelivered (potentially indefinitely), consuming system resources and preventing the delivery of other messages. Such undeliverable messages are sometimes known as poison messages.

For example, consider a stock ticker application that has been designed to consume prices contained within JMS TextMessages. What if a BytesMessage is inadvertently placed onto the queue? As the ticker application does not expect the BytesMessage, its processing might fail and cause it to roll back the transaction; however, the default behaviour of the Broker would mean that the BytesMessage would be delivered over and over again, preventing the delivery of other legitimate messages, until an operator intervenes and removes the erroneous message from the queue.

Qpid has maximum delivery count and dead-letter queue (DLQ) features which can be used in concert to construct a system that automatically handles such a condition. These features are described in the following sections.

9.4.2. Maximum Delivery Count

Maximum delivery count is a property of a queue. If a consumer application is unable to process a message more than the specified number of times, then the broker will either route the message to a dead-letter queue (if one has been defined) or discard the message.

In order for a maximum delivery count to be enforced, the consuming client must call Session#rollback() (or Session#recover() if the session is not transacted). It is during the Broker's processing of Session#rollback() (or Session#recover()) that, if a message has been seen at least the maximum number of times, the Broker will move the message to the DLQ or discard it.

If the consuming client fails in another manner, for instance by closing the connection, the message will not be re-routed and the consuming application will see the same poison message again once it reconnects.

If the consuming application is using the AMQP 0-9-1, 0-9, or 0-8 protocols, it is necessary to set the client system property qpid.reject.behaviour, or the connection or binding URL option rejectbehaviour, to the value server.
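As an illustration, the sketch below shows a transacted JMS consumer that rolls back when processing fails, allowing the Broker's delivery counting to take effect. It assumes the legacy Qpid AMQP 0-x JMS client (org.apache.qpid.client.AMQConnectionFactory); the connection URL, virtual host and queue name are placeholders for this example.

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.qpid.client.AMQConnectionFactory;

public class PoisonMessageAwareConsumer
{
    public static void main(String[] args) throws Exception
    {
        // Ask the Broker (AMQP 0-8/0-9/0-9-1) to requeue rejected messages so the
        // maximum delivery count can be enforced; alternatively use the
        // rejectbehaviour connection/binding URL option.
        System.setProperty("qpid.reject.behaviour", "server");

        // Hypothetical connection URL and queue name - adjust for your environment.
        Connection connection = new AMQConnectionFactory(
                "amqp://guest:guest@clientid/default?brokerlist='tcp://localhost:5672'").createConnection();
        connection.start();

        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = session.createConsumer(session.createQueue("myqueue"));

        Message message = consumer.receive(5000);
        try
        {
            // ... application-specific processing of the message ...
            session.commit();
        }
        catch (Exception processingFailure)
        {
            // Rolling back returns the message to the queue and increments its delivery count;
            // once the queue's maximum delivery count is exceeded the Broker moves it to the
            // DLQ (if configured) or discards it.
            session.rollback();
        }
        connection.close();
    }
}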

It is possible to determine the number of times a message has been sent to a consumer via the Management interfaces, but it is not possible to determine this information from a message client. Specifically, the optional JMS message header JMSXDeliveryCount is not supported.

Maximum Delivery Count can be specified when a new queue is created, or by using the queue declare property x-qpid-maximum-delivery-count.

9.4.3. Dead Letter Queues (DLQ)

A Dead Letter Queue (DLQ) acts as a destination for messages that have somehow exceeded the normal bounds of processing and is utilised to prevent disruption to the flow of other messages. When a DLQ is enabled for a given queue, if a consuming client indicates it no longer wishes to receive a message (typically by exceeding a Maximum Delivery Count) then the message is moved onto the DLQ and removed from the original queue.

The DLQ feature causes the generation of a Dead Letter Exchange and a Dead Letter Queue. These are named according to the convention QueueName_DLE and QueueName_DLQ respectively.

DLQs can be enabled when a new queue is created, or by using the queue declare property x-qpid-dlq-enabled.
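As a hedged sketch only, the address string below shows how the two declare arguments mentioned above might be supplied from the legacy 0-x JMS client using its ADDR addressing syntax; the exact syntax should be verified against the client documentation, and the same arguments can equally be set when creating the queue through the management interfaces.

import javax.jms.Queue;
import javax.jms.Session;

// Sketch only: assumes the legacy Qpid AMQP 0-x JMS client and its ADDR addressing syntax.
// The queue name is a placeholder; x-qpid-maximum-delivery-count and x-qpid-dlq-enabled are
// the queue declare properties described in this section.
public class DlqQueueDeclaration
{
    static Queue declareWithDlq(Session session) throws Exception
    {
        return session.createQueue(
                "ADDR:myqueue; {create: always, node: {x-declare: {arguments: "
                + "{'x-qpid-maximum-delivery-count': 5, 'x-qpid-dlq-enabled': true}}}}");
    }
}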

Avoid excessive queue depth

Applications making use of DLQs should make provision for the frequent examination of messages arriving on DLQs, so that corrective action can be taken to resolve the underlying cause and their timely removal from the DLQ can be arranged. Messages on DLQs consume system resources in the same manner as messages on normal queues, so excessive queue depths should not be permitted to develop.
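A minimal sketch of such an examination, assuming the naming convention above (here myqueue_DLQ) and an already configured JMS ConnectionFactory; a real deployment would typically alert an operator or archive the messages rather than merely log them.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

public class DlqMonitor
{
    /** Drains and logs whatever is currently on the DLQ so operators can investigate. */
    public static void inspect(ConnectionFactory factory) throws Exception
    {
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("myqueue_DLQ"));
        Message message;
        while ((message = consumer.receive(1000)) != null)
        {
            System.out.println("Dead-lettered message: " + message.getJMSMessageID());
        }
        connection.close();
    }
}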

http://git-wip-us.apache.org/repos/asf/qpid-site/blob/87eb27cf/content/releases/qpid-java-6.1.2/java-broker/book/Java-Broker-Runtime-Memory.html

9.11. Memory

9.11.1. Introduction

Understanding how the Qpid broker uses memory is essential to running a high-performing and reliable service. An incorrectly configured broker can exhibit poor performance or even crash with an OutOfMemoryError. Unfortunately, memory usage is not a simple topic and thus requires some in-depth explanation. This page should give the required background information to make informed decisions on how to configure your broker.

Section 9.11.2, “Types of Memory” explains the two different kinds of Java memory most relevant to the broker. Section 9.11.3, “Memory Usage in the Broker” goes on to explain which parts of the broker use what kind of memory. Section 9.11.4, “Low Memory Conditions” explains what happens when the system runs low on memory. Section 9.11.5, “Defaults” lays out the default settings of the Qpid broker. Finally, Section 9.11.6, “Memory Tuning the Broker” gives some advice on tuning your broker.

9.11.2. Types of Memory

While Java has a couple of different internal memory types, we will focus on the two types that are relevant to the Qpid broker. Both of these memory types are taken from the same physical memory (RAM).

9.11.2.1. Heap

Normally, all objects are allocated from Java's heap memory. Once nothing references an object, it is cleaned up by the Java Garbage Collector and its memory is returned to the heap. This works fine for most use cases. However, when interacting with other parts of the operating system, using Java's heap is not ideal. This is where so-called direct memory comes into play.

9.11.2.2. Direct

The world outside of the JVM, in particular the operating system (OS), does not know about Java heap memory and uses other structures like C arrays. In order to interact with these systems Java needs to copy data between its own heap memory and these native structures. This can become a bottleneck when there is a lot of exchange between Java and the OS, as in I/O (both disk and network) heavy applications. Java's solution to this is to allow programmers to request ByteBuffers from so-called direct memory. This is an opaque structure that might have an underlying implementation that makes it efficient to interact with the OS. Unfortunately, the GC is not good at tracking direct memory, and in general it is inadvisable to use direct memory for regular objects.

9.11.3. Memory Usage in the Broker

This section lists some noteworthy users of memory within the broker and where possible lists their usage of heap and direct memory. Note that to ensure smooth performance some heap memory should remain unused by the application and be reserved for the JVM to do housekeeping and garbage collection. Some guides advise reserving up to 30% of heap memory for the JVM.

9.11.3.1. Broker

The broker itself uses a moderate amount of heap memory (≈15 MB). However, each connection and session comes with a heap overhead of about 17 kB and 15 kB respectively. In addition, each connection reserves 512 kB of direct memory for network I/O.

9.11.3.2. Virtual Hosts

The amount of memory a Virtual Host uses depends on its type. For a JSON Virtual Host Node with a BDB Virtual Host the heap memory usage is approximately 2 MB. However, each BDB Virtual Host has a mandatory cache in heap memory which has an impact on performance. See below for more information.

9.11.3.3. Messages

Messages and their headers are kept in direct memory and have an additional overhead of approximately 1 kB of heap memory each. This means that most brokers will want to have more direct memory than heap memory. When many small messages accumulate on the broker the 1 kB heap memory overhead can become a limiting factor.

When the broker is running low on direct memory it will evict messages from memory and flow them to disk. For persistent messages this only means freeing the direct memory representation, because they always have an on-disk representation to guard against unexpected failure (e.g., a power cut). For transient messages this implies additional disk I/O. After being flowed to disk, messages need to be re-read from disk before delivery.

9.11.3.4. Message Store

Berkeley DB (BDB)

The broker can use Oracle's BDB JE (BDB) as a message store to persist messages by writing them to a database. BDB uses a mandatory cache for navigating and organising its database structure. Sizing and tuning this cache is a topic of its own and would go beyond the scope of this guide. Suffice it to say that by default Qpid uses 5% of heap memory for BDB caches (each Virtual Host uses a separate cache) or 10 MB per BDB store, whichever is greater. See the official BDB JE documentation for more information. For those interested, Qpid uses EVICT_LN as its default JE cacheMode.

Note that due to licensing concerns Qpid does not ship the BDB JE jar files.

Derby

TODO

9.11.3.5. HTTP Management

Qpid uses Jetty for the HTTP Management (both REST and Web Management Console). When the management plugin is loaded it allocates the memory it needs; it should not require more memory during operation and can thus be largely ignored.

9.11.4. Low Memory Conditions

9.11.4.1. Low on Heap Memory

When the broker runs low on heap memory, performance will degrade because the JVM will trigger full garbage collection (GC) events in a struggle to free memory. These full GC events are also called stop-the-world events as they completely halt the execution of the Java application. Stop-the-world events may take anywhere from a couple of milliseconds up to several minutes. Should the heap memory demands rise even further, the JVM will eventually throw an OutOfMemoryError which will cause the broker to shut down.

9.11.4.2. Low on Direct Memory

When the broker detects that it is using more than 40% of the available direct memory, it will start flowing incoming transient messages to disk and reading them back before delivery. This will prevent the broker from running out of direct memory but may degrade performance by requiring disk I/O.

9.11.5. Defaults

By default Qpid uses these settings:

  • 0.5 GB heap memory
  • 1.5 GB direct memory
  • 5% of heap reserved for the JE cache
  • Start flow to disk at 40% direct memory utilisation

As an example, this would accommodate a broker with 50 connections, each serving 5 sessions, and each session having 1000 messages of 1 kB on queues in the broker. This means a total of 250 concurrent sessions and a total of 250,000 messages without flowing messages to disk.

9.11.6. Memory Tuning the Broker

9.11.6.1. Java Tuning

Most of these options are implementation specific. It is assumed you are using Oracle Java 1.7 and Qpid v6.

The most relevant standard JVM options are -Xmx, which sets the maximum heap size, and -XX:MaxDirectMemorySize, which sets the maximum direct memory.

9.11.6.2. Qpid Tuning

  • The system property qpid.broker.bdbTotalCacheSize sets the total amount of heap memory (in bytes) allocated to BDB caches.
  • The system property broker.flowToDiskThreshold sets the threshold (in bytes) for flowing transient messages to disk. Should the broker use more direct memory than this threshold, it will flow incoming messages to disk. Should utilisation fall beneath the threshold, it will stop flowing messages to disk.

9.11.6.3. Formulas

We developed a simple formula which estimates the minimum memory usage of the broker under a certain usage pattern. These are rough estimates, so we strongly recommend testing your configuration extensively. Also, if your machine has more memory available, by all means use more memory, as it can only improve the performance and stability of your broker. However, remember that both heap and direct memory are served from your computer's physical memory, so their sum should never exceed the physically available RAM (minus what other processes use).


memory_heap = 15 MB + 15 kB * N_sessions + 1.5 kB * N_messages + 17 kB * N_connections


memory_direct = 2 MB + (200 B + 2 * averageSize_msg) * N_messages + 1 MB * N_connections


Where N denotes the total number of connections/sessions/messages on the broker. Furthermore, for direct memory only the messages that have not been flowed to disk are relevant.

Note

The formulae assume the worst case in terms of memory usage: persistent messages and TLS connections. Transient messages consume less heap memory than persistent ones, and plain connections consume less direct memory than TLS connections.
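As a rough worked example, the sketch below applies the two formulas to the configuration from Section 9.11.5, “Defaults” (50 connections, 250 sessions, 250,000 messages of 1 kB). The constants simply restate the formulas above; the resulting figures (roughly 395 MB of heap and 600 MB of direct memory) are worst-case estimates, not measurements.

public class BrokerMemoryEstimate
{
    /** Worst-case heap estimate from the formula above (persistent messages). */
    static long heapBytes(long connections, long sessions, long messages)
    {
        return 15_000_000L + 15_000L * sessions + 1_500L * messages + 17_000L * connections;
    }

    /** Worst-case direct memory estimate from the formula above (TLS connections). */
    static long directBytes(long connections, long messages, long averageMessageSize)
    {
        return 2_000_000L + (200L + 2 * averageMessageSize) * messages + 1_000_000L * connections;
    }

    public static void main(String[] args)
    {
        long connections = 50, sessions = 250, messages = 250_000, averageSize = 1_000;
        System.out.printf("heap   ~ %d MB%n", heapBytes(connections, sessions, messages) / 1_000_000);
        System.out.printf("direct ~ %d MB%n", directBytes(connections, messages, averageSize) / 1_000_000);
    }
}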

9.11.6.4. Things to Consider

Performance

Choosing a smaller direct memory size will lower the threshold for flowing transient messages to disk when messages accumulate on a queue. This can have an impact on performance in the transient case, where otherwise no disk I/O would be involved.

Having too little heap memory will result in poor performance due to frequent garbage collection events. See Section 9.11.4, “Low Memory Conditions” for more details.

OutOfMemoryError

Choosing too little heap memory can cause an OutOfMemoryError, which will force the broker to shut down. In this sense the available heap memory puts a hard limit on the number of messages you can have in the broker at the same time.

If the JVM runs out of direct memory it also throws an OutOfMemoryError, resulting in a broker shutdown. Under normal circumstances this should not happen, but it needs to be considered when deviating from the default configuration, especially when changing the flowToDiskThreshold.

If you are sending very large messages you should allow for this by making sure you have enough direct memory.

http://git-wip-us.apache.org/repos/asf/qpid-site/blob/87eb27cf/content/releases/qpid-java-6.1.2/java-broker/book/Java-Broker-Runtime-Message-Compression.html

9.9. Message Compression

The Apache Qpid Broker for Java supports[13] message compression. This feature works in co-operation with Qpid Clients implementing the same feature.

Once the feature is enabled (using the Broker context variable broker.messageCompressionEnabled), the Broker will advertise support for the message compression feature to the client at connection time. This allows clients to opt to turn on message compression, allowing message payload sizes to be reduced.

If the Broker has connections from clients that have message compression enabled and others that do not, it will internally, on the fly, decompress compressed messages when sending to clients without support and, conversely, compress uncompressed messages when sending to clients that do support it.

The Broker has a threshold below which it will not consider compressing a message. This is controlled by the Broker context variable connection.messageCompressionThresholdSize, which is expressed as a size in bytes.

This feature may have a beneficial effect on performance by:

  • Reducing the number of bytes transmitted over the wire, both between Client and Broker, and, in the HA case, Broker to Broker for replication purposes.

  • Reducing storage space when data is at rest within the Broker, both on disk and in memory.

Of course, compression and decompression are computationally expensive. Turning on the feature may have a negative impact on CPU utilisation on the Broker and/or Client. Also, for small message payloads, message compression may increase the message size. It is recommended to test the feature with representative data.
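One way to test with representative data is to compress a sample payload off-line and compare sizes. The sketch below uses the JDK's GZIP implementation purely as an approximation; it is not the Broker's internal code path, and the actual compression decision is governed by connection.messageCompressionThresholdSize.

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class CompressionRatioCheck
{
    /** Returns the GZIP-compressed size of a sample payload. */
    static int compressedSize(byte[] payload) throws Exception
    {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(out);
        gzip.write(payload);
        gzip.close();
        return out.size();
    }

    public static void main(String[] args) throws Exception
    {
        byte[] small = "price=42".getBytes(StandardCharsets.UTF_8);
        byte[] large = new String(new char[100_000]).replace('\0', 'x').getBytes(StandardCharsets.UTF_8);

        // Small payloads can grow once compression overhead is added; large repetitive
        // payloads typically shrink dramatically.
        System.out.println("small: " + small.length + " -> " + compressedSize(small) + " bytes");
        System.out.println("large: " + large.length + " -> " + compressedSize(large) + " bytes");
    }
}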



[13] Message compression is not yet supported for the 1.0 protocol.

http://git-wip-us.apache.org/repos/asf/qpid-site/blob/87eb27cf/content/releases/qpid-java-6.1.2/java-broker/book/Java-Broker-Runtime-Producer-Transaction-Timeout.html

9.3. Producer Transaction Timeout

9.3.1. General Information

The transaction timeout mechanism is used to control broker resources when clients producing messages using transactional sessions hang or otherwise become unresponsive, or simply begin a transaction and keep using it without ever calling Session#commit().

Users can choose to configure an idleWarn or openWarn threshold, after which the identified transaction should be logged as a WARN level alert, as well as (more importantly) an idleClose or openClose threshold, after which the transaction and the connection it applies to will be closed.

This feature is particularly useful in environments where the owner of the broker does not have full control over the implementation of clients, such as in a shared services deployment.

The following sections provide more details on this feature and its use.

9.3.2. Purpose

This feature has been introduced to address the scenario where an open transaction on the broker holds an open transaction on the persistent store. This can have undesirable consequences if the store does not time out or close long-running transactions, such as with BDB. It can result in a rapid increase in disk usage, bounded only by available space, due to growth of the transaction log.

9.3.3. Scope

Note that only MessageProducer clients will be affected by a transaction timeout, since store transaction lifespan on a consumer only spans the execution of the call to Session#commit() and there is no scope for a long-lived transaction to arise.

It is also important to note that the transaction timeout mechanism is purely a JMS transaction timeout; it is unrelated to any other timeouts in the Qpid client library and will have no impact on any RDBMS your application may utilise.

9.3.4. Effect

Full details of the configuration options are provided in the sections that follow. This section gives a brief overview of what the Transaction Timeout feature can do.

9.3.4.1. Broker Logging and Connection Close

When the openWarn or idleWarn specified threshold is exceeded, the broker will log a WARN level alert with details of the connection and channel on which the threshold has been exceeded, along with the age of the transaction.

When the openClose or idleClose specified threshold value is exceeded, the broker will throw an exception back to the client connection via the ExceptionListener, log the action and then close the connection.

The example broker log output shown below is from a case where the idleWarn threshold specified is lower than the idleClose threshold; the broker therefore logs the idle transaction 3 times before the close threshold is triggered and the connection is closed.

CHN-1008 : Idle Transaction : 13,116 ms
CHN-1008 : Idle Transaction : 14,116 ms
CHN-1008 : Idle Transaction : 15,118 ms
CHN-1003 : Close

The second example broker log output shown below illustrates the same mechanism operating on an open transaction.

CHN-1007 : Open Transaction : 12,406 ms
CHN-1007 : Open Transaction : 13,406 ms
CHN-1007 : Open Transaction : 14,406 ms
CHN-1003 : Close

9.3.4.2. Client Side Effect

After a Close threshold has been exceeded, the triggering client will receive this exception on its exception listener, prior to being disconnected:

org.apache.qpid.AMQConnectionClosedException: Error: Idle transaction timed out [error code 506: resource error]

Any later attempt to use the connection will result in this exception being thrown:

Producer: Caught an Exception: javax.jms.IllegalStateException: Object org.apache.qpid.client.AMQSession_0_8@129b0e1 has been closed
    javax.jms.IllegalStateException: Object org.apache.qpid.client.AMQSession_0_8@129b0e1 has been closed
    at org.apache.qpid.client.Closeable.checkNotClosed(Closeable.java:70)
    at org.apache.qpid.client.AMQSession.checkNotClosed(AMQSession.java:555)
    at org.apache.qpid.client.AMQSession.createBytesMessage(AMQSession.java:573)

Thus clients must be able to handle this case successfully, reconnecting where required and registering an exception listener on all connections. This is critical, and must be communicated to client applications by any broker owner switching on transaction timeouts.
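A minimal sketch of registering such an exception listener is shown below; the reconnect() method is a placeholder for whatever reconnection logic the application uses.

import javax.jms.Connection;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;

public class TimeoutAwareClient
{
    static void register(final Connection connection) throws JMSException
    {
        connection.setExceptionListener(new ExceptionListener()
        {
            @Override
            public void onException(JMSException exception)
            {
                // e.g. org.apache.qpid.AMQConnectionClosedException: Idle transaction timed out
                System.err.println("Connection failed: " + exception.getMessage());
                reconnect();   // placeholder: re-establish the connection and its sessions
            }
        });
    }

    private static void reconnect()
    {
        // application-specific reconnection logic
    }
}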

9.3.5. Configuration

9.3.5.1. Configuration

The transaction timeouts can be specified when a new virtualhost is created or an existing virtualhost is edited.

We would recommend that only warnings are configured at first, which should allow broker administrators to obtain an idea of the distribution of transaction lengths on their systems, and then configure production settings appropriately for both warning and closure. Ideally, establishing thresholds should be achieved in a representative UAT environment, with clients and broker running, prior to any production deployment.

It is impossible to give suggested values, due to the large variation in usage depending on the applications using a broker. However, clearly transactions should not span the expected lifetime of any client application, as this would indicate a hung client.

When configuring warning and closure timeouts, it should be noted that these only apply to message producers that are connected to the broker, but that a timeout will cause the connection to be closed, thus disconnecting all producers and consumers created on that connection.

This should not be an issue for environments using Mule or Spring, where connection factories can be configured appropriately to manage a single MessageProducer object per JMS Session and Connection. Clients that use the JMS API directly should be aware that sessions managing both consumers and producers, or multiple producers, will be affected by a single producer hanging or leaving a transaction idle or open (and the connection being closed as a result), and must take appropriate action to handle that scenario.
