activemq-users mailing list archives

From Tim Bain <tb...@alumni.duke.edu>
Subject Re: DLQ, cause:null
Date Tue, 28 Apr 2015 13:09:06 GMT
On Apr 28, 2015 3:21 AM, "James Green" <james.mk.green@gmail.com> wrote:
>
> On 27 April 2015 at 17:32, Tim Bain <tbain@alumni.duke.edu> wrote:
>
> > On Mon, Apr 27, 2015 at 7:30 AM, James Green <james.mk.green@gmail.com>
> > wrote:
> >
> > >
> > > Surely if a client takes more than the receive time-out to process a
> > > message, a re-delivery will occur? If not, what does happen?
> >
> >
> > I don't believe the receive timeout relates to processing a message at
> > all.  The receive timeout is the amount of time you'll wait to see if a
> > message is available to be processed before returning control; it ends
> > when message processing begins, whereas your description indicates you're
> > expecting that it starts when message processing begins.
> >
> > When a client takes more than the receive timeout to process a message,
> > the client will continue processing the message, and continue processing
> > the message, and continue processing the message, until eventually it
> > finishes.  The only way I know of to time out a client is by using the
> > AbortSlowAckConsumerStrategy (and even then, I'm not sure the consumer
> > will actually stop processing the current message, it just won't get to
> > process any more after the current one), but that's a completely
> > different path (and entirely optional, and not enabled by default).  By
> > default, message processing just takes as long as it takes, which is why
> > we have strategies available to react to slow consumers.
> >
>
> So to re-work my understanding, the pre-fetch buffer in the connected
> client is filled with messages and the broker's queue counters react as if
> this was not the case (else we'd see 1,000 drop from the pending queue
> extremely quickly, and slowly get dequeued which we do not see).

Pretty much right, though I wouldn't say the broker's counters react as if
this was not the case; rather, the broker's dispatch counter increases
immediately but the dequeue counter won't increase until the broker removes
the message, and that won't happen until the consumer acks it.  Until that
happens, the message exists in both places and the counters reflect that.
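To make the counter behavior concrete, here is a hedged illustration (a toy model, not ActiveMQ code): the dispatch counter increments when the broker hands a message to a consumer's prefetch buffer, while the dequeue counter increments only on acknowledgement, so an in-flight message is counted in both places at once.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the broker-side counters described above. All names here are
// illustrative; they are not ActiveMQ classes.
public class CounterModel {
    long dispatched = 0;
    long dequeued = 0;
    final Deque<String> pending = new ArrayDeque<>();
    final Deque<String> inFlight = new ArrayDeque<>();

    void enqueue(String msg) { pending.add(msg); }

    // Broker pushes a message into the consumer's prefetch buffer:
    // dispatch count goes up immediately.
    String dispatch() {
        String msg = pending.poll();
        if (msg != null) { inFlight.add(msg); dispatched++; }
        return msg;
    }

    // Consumer acks after processing; only now is the message removed
    // from the broker and the dequeue count incremented.
    void ack() {
        if (inFlight.poll() != null) dequeued++;
    }

    public static void main(String[] args) {
        CounterModel broker = new CounterModel();
        for (int i = 0; i < 3; i++) broker.enqueue("m" + i);
        broker.dispatch();  // dispatched=1, dequeued=0: message exists in both places
        broker.dispatch();  // dispatched=2, dequeued=0
        broker.ack();       // dispatched=2, dequeued=1
        System.out.println(broker.dispatched + " dispatched, " + broker.dequeued + " dequeued");
    }
}
```

The gap between the two counters is exactly the set of prefetched-but-unacked messages, which is why the pending count doesn't drop by 1,000 the instant the prefetch buffer fills.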

It sounds like you're observing the broker through the web console; there's
WAY more information available through the JMX beans and you'll understand
this better by watching them instead of the web console.  So I highly
recommend firing up JConsole and looking at the JMX beans.
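Besides JConsole, the same per-destination statistics can be read programmatically. The sketch below is a hedged example: the JMX service URL (default RMI port 1099), broker name ("localhost"), and queue name (TEST.QUEUE) are assumptions based on a default broker configuration, and the attribute names follow the ActiveMQ 5.8+ destination MBeans.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueStats {
    public static void main(String[] args) throws Exception {
        // Assumed default JMX endpoint; adjust host/port for your broker.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // ActiveMQ 5.8+ object-name convention; brokerName and
            // destinationName are assumptions for this example.
            ObjectName queue = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=TEST.QUEUE");
            System.out.println("Dispatched: " + mbs.getAttribute(queue, "DispatchCount"));
            System.out.println("Dequeued:   " + mbs.getAttribute(queue, "DequeueCount"));
            System.out.println("In-flight:  " + mbs.getAttribute(queue, "InFlightCount"));
        }
    }
}
```

Watching DispatchCount, DequeueCount, and InFlightCount side by side makes the dispatch/dequeue distinction above directly visible.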

> The client (we're using standard Camel JMS consumer routing) has this
> buffer and the session polls this buffer for the next message, sending it
> to the receive() method of the consumer (which is a Spring object by the
> looks of things). This polling is subject to a receiveTimeout time-out.

That matches my understanding of how it works.
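The polling behavior described above can be sketched with a plain BlockingQueue standing in for the prefetch buffer (an illustration only, not the Camel/Spring internals): poll(timeout) mirrors receive(receiveTimeout) in that it waits up to the timeout for a message, returns null if none arrives, and has no bearing on how long processing takes afterwards.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class PollLoop {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the consumer's prefetch buffer.
        BlockingQueue<String> prefetch = new ArrayBlockingQueue<>(10);
        prefetch.add("msg-1");

        for (int i = 0; i < 3; i++) {
            // Wait up to 100 ms for the next message, like receive(receiveTimeout).
            String msg = prefetch.poll(100, TimeUnit.MILLISECONDS);
            if (msg == null) {
                // Timeout expired with nothing to process: not an error,
                // the loop simply polls again.
                System.out.println("no message, polling again");
            } else {
                // Processing may take arbitrarily long; the receive timeout
                // stopped mattering the moment the message was handed over.
                System.out.println("processing " + msg);
            }
        }
    }
}
```

The key point is that a timeout expiry just yields null and another poll, which is why hitting the timeout should not, by itself, produce a failed delivery.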

> All accurate or almost so far?
>
> What I'm not "getting" is why we we no longer getting the DLQ entries,
> having only increased the receiveTimeout. It is as if the client process
is
> so busy the pre-fetch buffer cannot react fast enough.
>
> James

I agree, I don't understand that, particularly because even if the broker
was so loaded down that you were hitting that timeout, I don't see how that
would result in a failed delivery attempt.   Your receive() call would just
return null and Camel would just call receive() again and everything would
be fine.  (This is exactly what happens when there aren't any messages on
the queue, and nothing bad happens then.)  So my gut reaction is that the
timeout is a red herring and something else is going on.  Have you switched
that setting between the two values while playing identical messages
(either generated or recorded) to be sure that that setting really is the
cause of this behavior?

Also, when messages are failing, do all of them fail?  If it's only some of
them, what's the common thread?

Tim
