activemq-dev mailing list archives

From "Colin MacNaughton" <>
Subject RE: ActiveMQ 6.0 Broker Core Prototype -- Flow Control / Memory Management
Date Tue, 16 Jun 2009 00:46:35 GMT
Hi Gary, 

Putting maximums on these blocks may not be possible in cases where we are
not willing to discard messages on a queue. Ultimately the publisher rate is
tied to the disk sync rate. I think the best we can do in this case is to
try to smooth out the publisher's profile. The point of the store queue is
to allow the store writing thread to batch up several writes into a single
FileDescriptor.sync(), which increases throughput. Choosing too small a store
queue size will result in low throughput, though a potentially smoother
publish rate. Choosing too large a queue can make the publisher bursty, but
will likely yield closer-to-optimal throughput (e.g. accept a large burst of
messages at the cost of long disk sync times). 
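
To make the store-queue trade-off concrete, here is a minimal sketch (hypothetical names, not the prototype's actual API) of a bounded store queue whose writer thread drains whatever has accumulated and covers the whole batch with a single sync:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch, not the actual broker API: a bounded store queue
// whose writer drains a batch and amortizes one FileDescriptor.sync()
// over all writes in that batch.
public class StoreWriterSketch {
    // Capacity is the tuning knob discussed above: small = smoother
    // publishers but lower throughput; large = burstier publishers but
    // near-optimal throughput.
    private final BlockingQueue<String> storeQueue;

    public StoreWriterSketch(int capacity) {
        storeQueue = new LinkedBlockingQueue<>(capacity);
    }

    // Publishers call this; false means the store queue is full and the
    // publisher must block (or retry) until the writer catches up.
    public boolean enqueue(String msg) {
        return storeQueue.offer(msg);
    }

    // One iteration of the writer loop: drain everything queued so far
    // and return the batch that a single sync() call would then cover.
    public List<String> takeBatch() {
        List<String> batch = new ArrayList<>();
        storeQueue.drainTo(batch);
        // write(batch); fileDescriptor.sync();  // one sync per batch
        return batch;
    }
}
```

With a capacity of 2, a third publish is refused (the publisher would block) until the writer flushes the pending batch under one sync.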

In the end I lean towards a larger store queue, since ultimately a publish
rate higher than the consumer rate isn't sustainable anyway, and we should
really be optimizing for the bursty-publisher case with the highest possible
disk throughput. In cases where smoothing out the publisher rate is desired
(perhaps for prolonged but finite bursts), we can introduce rate-based
limiters for the queues/publishers in question. 
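
A rate-based limiter of the kind mentioned could be as simple as a token bucket; this is an illustrative sketch only, not part of the prototype, with time injected so behavior is deterministic:

```java
// Hypothetical rate limiter: a token bucket that smooths a publisher to
// a configured sustained rate while still permitting finite bursts up
// to the bucket capacity. Names are illustrative.
public class TokenBucketSketch {
    private final double ratePerSec;   // sustained publish rate
    private final double capacity;     // maximum burst size
    private double tokens;
    private long lastNanos;

    public TokenBucketSketch(double ratePerSec, double capacity, long nowNanos) {
        this.ratePerSec = ratePerSec;
        this.capacity = capacity;
        this.tokens = capacity;        // start full: allow an initial burst
        this.lastNanos = nowNanos;
    }

    // Returns true if a publish may proceed at time nowNanos; refills
    // tokens based on elapsed time, capped at the burst capacity.
    public boolean tryAcquire(long nowNanos) {
        double elapsedSec = (nowNanos - lastNanos) / 1e9;
        tokens = Math.min(capacity, tokens + elapsedSec * ratePerSec);
        lastNanos = nowNanos;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```

At 10 msgs/sec with a burst capacity of 2, two publishes pass immediately, the third is refused, and one token refills after 0.1s.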

-----Original Message-----
From: Gary Tully [] 
Sent: Monday, June 15, 2009 5:22 AM
Subject: Re: ActiveMQ 6.0 Broker Core Prototype -- Flow Control / Memory

On the 'temporary blocking', we need to think of ways to place maximums on
these blocks.

A 5.x trait is that flushing buffers to disk on reaching a memory limit is a
single op, so a producer or consumer could be blocked for N writes (where N
can be quite large).
The flow model can be more linear in this regard.
The difficulty may be in combining flow limiters that work with single-entry
queues on one side and limiters that want to batch writes on the other.
When blocking is needed, it may be sufficient to have the enqueue delay for
only half of a batch write, i.e. a maximum of 0.5 of a batch.
The same would hold for paging out: a flow could resume when some portion of
the limit has been paged out on demand.
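
The half-batch blocking above is essentially high/low watermark flow control; a minimal hypothetical sketch (illustrative names, not either broker's API):

```java
// Hypothetical sketch: block producers at the limit, and resume once
// half of the limit has been flushed (or paged out), rather than
// waiting for the entire batch write to complete.
public class WatermarkLimiterSketch {
    private final int limit;      // block producers at this depth
    private final int resumeAt;   // resume once drained to limit/2
    private int depth;
    private boolean blocked;

    public WatermarkLimiterSketch(int limit) {
        this.limit = limit;
        this.resumeAt = limit / 2;
    }

    // Called on enqueue; returns true if the producer must now block.
    public boolean add() {
        depth++;
        if (depth >= limit) blocked = true;
        return blocked;
    }

    // Called as the store flushes (or pages out) entries; producers
    // unblock only after half the limit has drained, avoiding rapid
    // block/unblock oscillation at the boundary.
    public void flushed(int n) {
        depth = Math.max(0, depth - n);
        if (blocked && depth <= resumeAt) blocked = false;
    }

    public boolean isBlocked() { return blocked; }
}
```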

2009/6/11 Hiram Chirino <>

> So Colin was focusing on just the flow controller package.  Most of
> the logic of moving messages 'offline' is in the activemq-queue
> module.
> That package itself needs a good write-up.  But in general the goal
> is to have queues which support the following options, which can be
> enabled/disabled:
> * Paging: will the queue even attempt to spool to disk?  In some
> high-performance scenarios you may never want to spool messages to disk
> and instead prefer the producers to block when memory limits are reached.
> * Page Out Place Holders: If you don't page the placeholders out, the
> queue will keep an in-memory list of pointers to each message's location
> on disk at all times.  Keeping this list in memory speeds up access to
> paged-out messages, since their location on disk is already known.  The
> option should be enabled for very large queues so that even the message
> order and locations are determined by cursoring the persistence store.
> * Throttle Sources To Memory Limit: When disabled, the queue will
> behave very much like 5.x with flow control disabled.  A fast producer's
> messages will be spooled to disk to avoid blocking the source.
>  BTW the above is implemented in the CursoredQueue class if anyone is
> interested.
> We will need to review how best to implement that 'combined policy',
> but in the new architecture, I do think you're going to see more
> 'temporary' producer blocking.  For example, even with the 'Throttle
> Sources To Memory Limit' option disabled, the producer's flow
> controller may block while the queue is trying to persist messages to
> the message store.  This is because, unlike 5.x, the message store is
> also a flow-controlled resource that is accessed asynchronously.
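
The three CursoredQueue toggles described above could be summarized as a config object; this is a hypothetical recap with illustrative field names, not the actual options in the activemq-queue module:

```java
// Hypothetical summary of the three queue toggles; the real options
// live in the activemq-queue module's CursoredQueue class.
public class QueuePolicySketch {
    // Paging: may the queue spool messages to disk at all? If false,
    // producers block when memory limits are reached.
    public boolean pagingEnabled = true;

    // Page Out Place Holders: when false, an in-memory pointer to each
    // paged-out message's disk location is retained, speeding access;
    // enable for very large queues so order and location come from
    // cursoring the persistence store instead.
    public boolean pageOutPlaceHolders = false;

    // Throttle Sources To Memory Limit: when disabled, behaves like
    // 5.x with flow control off - fast producers' messages spool to
    // disk rather than blocking the source.
    public boolean throttleSourcesToMemoryLimit = true;
}
```
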
> On Thu, Jun 11, 2009 at 4:11 AM, Rob Davies<> wrote:
> > Hi Colin,
> >
> > In 5.x flow control behaves as if it's binary - off or on. When it's off,
> > messages can be offlined (for non-persistent messages this means being
> > dumped to temporary storage) - but when it's on, the producers slow and
> > stop.
> > Also - there can be cases when you get a temporarily slow consumer (the
> > consuming app may be doing a big gc) - which means that with flow control
> > off, messages get dumped to disk - and then the producers may never slow
> > down enough again for the consumer to catch up. Flow control is difficult
> > to implement for all cases - but we should allow for configuration of the
> > following:
> >
> > * maximum overall broker memory
> > * maximum memory allocation per destination
> > * maximum storage allocation
> > * maximum storage allocation per destination
> > * maximum temporary storage allocation
> > * maximum temporary storage allocation per destination
> >
> > when we start to hit a resource limit - we should aggressively gc
> > messages that have expired, then either offline (and flow control when
> > that limit is hit) or flow control.
> > It would be great to have a combined policy where we can block a producer
> > for a short time (seconds) and then offline.
> > For non-persistent messages - we still need a policy where we can remove
> > messages based on a selector (which would be in addition to expiring
> > messages).
> >
> > cheers,
> >
> > Rob
> >
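
The per-destination and broker-wide limits Rob lists could be checked hierarchically; here is a hypothetical sketch (names and numbers are illustrative, not the 5.x or 6.0 API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the limit hierarchy: a reservation check that
// consults the per-destination cap first, then the broker-wide cap.
// The same pattern would apply to storage and temporary-storage caps.
public class ResourceLimitsSketch {
    private final long maxBrokerMemory;
    private final long maxDestinationMemory;
    private long brokerUsage;
    private final Map<String, Long> destUsage = new HashMap<>();

    public ResourceLimitsSketch(long maxBrokerMemory, long maxDestinationMemory) {
        this.maxBrokerMemory = maxBrokerMemory;
        this.maxDestinationMemory = maxDestinationMemory;
    }

    // Returns true if the message fits under both caps; false is the
    // signal to expire/offline messages or flow-control the producer.
    public boolean tryReserve(String destination, long bytes) {
        long d = destUsage.getOrDefault(destination, 0L);
        if (d + bytes > maxDestinationMemory) return false; // per-destination cap
        if (brokerUsage + bytes > maxBrokerMemory) return false; // broker-wide cap
        destUsage.put(destination, d + bytes);
        brokerUsage += bytes;
        return true;
    }
}
```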


Open Source Integration
