activemq-dev mailing list archives

From Martyn Taylor <mtay...@redhat.com>
Subject Re: [DISCUSS] Use pooled buffers on message body
Date Fri, 26 May 2017 15:36:42 GMT
@Clebert It's been added as a configuration option on the InVM
 acceptor/connector.  Take a look at those classes.
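
Conceptually the toggle is a choice between Netty's pooled and unpooled
allocators. A minimal sketch of that choice (the "bufferPooling" flag name
is an assumption for illustration, not necessarily the actual InVM
parameter):

    import io.netty.buffer.ByteBufAllocator;
    import io.netty.buffer.PooledByteBufAllocator;
    import io.netty.buffer.UnpooledByteBufAllocator;

    public final class AllocatorChoice {
        // "bufferPooling" is an assumed flag name for illustration.
        // Picks the arena-based pooled allocator, or the plain unpooled
        // one, based on an acceptor/connector configuration flag.
        public static ByteBufAllocator fromConfig(boolean bufferPooling) {
            return bufferPooling
                    ? PooledByteBufAllocator.DEFAULT
                    : UnpooledByteBufAllocator.DEFAULT;
        }
    }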

On Fri, May 26, 2017 at 4:26 PM, Clebert Suconic <clebert.suconic@gmail.com>
wrote:

> @Martyn: you recently added some configuration on InVM to make it pooled
> or not.. where is that? Where is the pool right now after your
> changes?
>
> I can read the code, but it's easier to ask... :) Perhaps we should
> make a class with a PoolServer for such things?
>
>
> Like, I'm looking into perhaps adding a ClientTransaction retry, and I
> would use the pool there as well. It would be best to have such a class
> somewhere.
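> 
> Just to sketch what I mean (all names here are hypothetical, nothing
> like this exists in the code yet):
> 
>     import io.netty.buffer.ByteBufAllocator;
>     import io.netty.buffer.PooledByteBufAllocator;
> 
>     // Hypothetical central holder, roughly the "PoolServer" idea above,
>     // so the InVM transport, the broker and a future ClientTransaction
>     // retry could all share the same allocator.
>     public final class PoolProvider {
>         private static volatile ByteBufAllocator pool =
>                 PooledByteBufAllocator.DEFAULT;
> 
>         public static ByteBufAllocator getPool() { return pool; }
> 
>         // Embedders could swap in an unpooled allocator here.
>         public static void setPool(ByteBufAllocator allocator) {
>             pool = allocator;
>         }
>     }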
>
> On Fri, May 26, 2017 at 11:22 AM, Matt Pavlovich <mattrpav@gmail.com>
> wrote:
> > +1 having the memory pool/allocator be a configurable strategy or
> > policy-type deal would be bonus level 12. Especially for embedded /
> > kiosk / Raspberry Pi and Linux host container scenarios, as Martyn
> > mentioned.
> >
> >> On May 26, 2017, at 9:50 AM, Clebert Suconic <clebert.suconic@gmail.com> wrote:
> >>
> >> Perhaps we need a place to set the allocator.. Pooled versus Unpooled..
> >>
> >>
> >> PooledRepository.getPool()...
> >>
> >>
> >>
> >> Regarding the ref counts.. we will need to add a new reference
> >> counting.. the current one is a bit complex to use because of
> >> delivering.. DLQs.. etc... it's a big challenge for sure!
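> >>
> >> Roughly what I have in mind, as a sketch only (the class name and the
> >> hooks are hypothetical, not the broker's existing refcounting):
> >>
> >>     import io.netty.buffer.ByteBuf;
> >>     import java.util.concurrent.atomic.AtomicInteger;
> >>
> >>     // Counts the message's outstanding uses (deliveries, DLQ moves,
> >>     // paging reads) and returns the pooled body to the pool when the
> >>     // last reference is released.
> >>     public final class RefCountedBody {
> >>         private final ByteBuf body;
> >>         private final AtomicInteger refs = new AtomicInteger(1);
> >>
> >>         public RefCountedBody(ByteBuf body) { this.body = body; }
> >>
> >>         public void retain() { refs.incrementAndGet(); }
> >>
> >>         public void release() {
> >>             if (refs.decrementAndGet() == 0) {
> >>                 body.release(); // hands the buffer back to the pool
> >>             }
> >>         }
> >>     }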
> >>
> >> On Fri, May 26, 2017 at 4:04 AM, Martyn Taylor <mtaylor@redhat.com> wrote:
> >>> Using buffer pools throughout has been on the backlog for a long
> >>> time, so +1 on this.  The only thing I'd say here is that
> >>> retrofitting the reference counting (i.e. releasing the buffers)
> >>> can sometimes lead to leaks if we don't catch all cases, so we just
> >>> need to be careful here.
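> >>>
> >>> For catching the cases we miss, Netty's own leak detector is worth
> >>> running in the test suite (this is standard Netty, shown only as a
> >>> suggestion):
> >>>
> >>>     import io.netty.util.ResourceLeakDetector;
> >>>
> >>>     // PARANOID tracks every allocation and reports buffers that are
> >>>     // garbage-collected without release(); too slow for production,
> >>>     // but useful in CI runs.
> >>>     ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);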
> >>>
> >>> One other thing to consider: we do have users that run Artemis in
> >>> constrained environments, where memory is limited.  Allocating a
> >>> chunk of memory upfront for the buffers may not be ideal for that
> >>> use case.
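> >>>
> >>> If we do pool, the upfront cost can at least be kept small. A sketch
> >>> with Netty's pooled allocator (the arena and page values below are
> >>> examples only, not a recommendation):
> >>>
> >>>     import io.netty.buffer.ByteBufAllocator;
> >>>     import io.netty.buffer.PooledByteBufAllocator;
> >>>
> >>>     public final class SmallPool {
> >>>         // One heap arena, no direct arenas, 8KiB pages, maxOrder 7
> >>>         // => 8192 << 7 = 1MiB chunks, so little memory held upfront.
> >>>         public static final ByteBufAllocator INSTANCE =
> >>>                 new PooledByteBufAllocator(false, 1, 0, 8192, 7);
> >>>     }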
> >>>
> >>> Cheers
> >>>
> >>> On Thu, May 25, 2017 at 5:53 PM, Matt Pavlovich <mattrpav@gmail.com>
> wrote:
> >>>
> >>>> +1 this all sounds great
> >>>>
> >>>>> On May 12, 2017, at 12:02 PM, Michael André Pearce <michael.andre.pearce@me.com> wrote:
> >>>>>
> >>>>> I agree iterative targeted steps are best.
> >>>>>
> >>>>> So even if we just pool messages and keep the copying of the buffer
> >>>>> as it is today, it's a step in the right direction.
> >>>>>
> >>>>>
> >>>>> Sent from my iPhone
> >>>>>
> >>>>>> On 12 May 2017, at 15:52, Clebert Suconic <clebert.suconic@gmail.com> wrote:
> >>>>>>
> >>>>>> I'm not sure we can keep the message body as a native buffer...
> >>>>>>
> >>>>>> I have seen it being expensive, especially when dealing with
> >>>>>> clustering and paging.. a lot of times I have seen memory
> >>>>>> exhaustion...
> >>>>>>
> >>>>>> For AMQP, on Qpid Proton though.. that would require a lot more
> >>>>>> changes.. it's not even possible to think about it now unless we
> >>>>>> make substantial changes to Proton.. Proton likes to keep its own
> >>>>>> internal pool and make a lot of copies.. so we cannot do this yet
> >>>>>> on AMQP. (I would like to though.)
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> But I'm always an advocate of tackling one thing at a time...
> >>>>>> the first thing is to have some reference counting in place to
> >>>>>> tell us when to deallocate the memory used by the message, in
> >>>>>> such a way that it works with both paging and non-paging...
> >>>>>> anything else then will be "relatively" easier.
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On Fri, May 12, 2017 at 2:56 AM, Michael André Pearce
> >>>>>> <michael.andre.pearce@me.com> wrote:
> >>>>>>>
> >>>>>>> Hi Clebert.
> >>>>>>>
> >>>>>>> +1 from me definitely.
> >>>>>>>
> >>>>>>> Agreed, this definitely should target the server, not the clients.
> >>>>>>>
> >>>>>>> Having the message / buffer used by a message pooled would be
> >>>>>>> great, as it will reduce GC pressure.
> >>>>>>>
> >>>>>>> I would like to take that one step further and question whether
> >>>>>>> we could actually avoid copying the buffer contents at all on
> >>>>>>> passing from/to Netty. The zero-copy nirvana.
> >>>>>>>
> >>>>>>> I know you propose having separate buffer pools. But if we
> >>>>>>> could use the same memory address we can avoid the copy,
> >>>>>>> reducing latency also. This could be done by sharing the buffer
> >>>>>>> and the pool, or by using slice/duplicate retained.
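> >>>>>>>
> >>>>>>> Something like this with Netty's retained views (a sketch of the
> >>>>>>> idea only, not code from the broker):
> >>>>>>>
> >>>>>>>     import io.netty.buffer.ByteBuf;
> >>>>>>>
> >>>>>>>     // Instead of copying the body out of the incoming frame, keep
> >>>>>>>     // a retained view over the same memory. retainedSlice() bumps
> >>>>>>>     // the refcount, so the pooled memory stays valid until the
> >>>>>>>     // message releases it.
> >>>>>>>     ByteBuf body(ByteBuf frame, int offset, int length) {
> >>>>>>>         return frame.retainedSlice(offset, length);
> >>>>>>>     }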
> >>>>>>>
> >>>>>>> Cheers
> >>>>>>> Mike
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>> On 11 May 2017, at 23:13, Clebert Suconic <clebert.suconic@gmail.com> wrote:
> >>>>>>>>
> >>>>>>>> One thing I couldn't do before without some proper thinking
> >>>>>>>> was to use a Pooled Buffer on the message bodies.
> >>>>>>>>
> >>>>>>>> It would actually rock out the perf numbers if that could be
> >>>>>>>> achieved...
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> I'm thinking this should be done on the server only. Doing it
> >>>>>>>> on the client would mean giving users some API to tell when
> >>>>>>>> the message is gone and no longer needed.. I don't think we
> >>>>>>>> can do this with JMS core, or any of the Qpid clients...
> >>>>>>>> although we could think about an API in the future for such a
> >>>>>>>> thing.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> For the server: I would need to capture when the message is
> >>>>>>>> released.. the only pitfall for this would be paging, as the
> >>>>>>>> Page read may come and go... So, this will involve some work
> >>>>>>>> on making sure we make the calls in the proper places.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> We would still need to copy from the Netty buffer into another
> >>>>>>>> pooled buffer, as the Netty buffer would need to be a native
> >>>>>>>> buffer while the message uses a regular (non-native) buffer.
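> >>>>>>>>
> >>>>>>>> A sketch of that one remaining copy (assuming a pooled heap
> >>>>>>>> target; the method name is made up):
> >>>>>>>>
> >>>>>>>>     import io.netty.buffer.ByteBuf;
> >>>>>>>>     import io.netty.buffer.PooledByteBufAllocator;
> >>>>>>>>
> >>>>>>>>     // The wire buffer is direct (native); the message body lives
> >>>>>>>>     // in a pooled heap buffer, so a single copy remains.
> >>>>>>>>     ByteBuf toBody(ByteBuf wire) {
> >>>>>>>>         ByteBuf body = PooledByteBufAllocator.DEFAULT
> >>>>>>>>                 .heapBuffer(wire.readableBytes());
> >>>>>>>>         body.writeBytes(wire);
> >>>>>>>>         return body;
> >>>>>>>>     }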
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> I am thinking of investing my time in this (even my spare
> >>>>>>>> time if need be) after ApacheCon next week.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> This will certainly attract Francesco and Michael Pearce's
> >>>>>>>> attention.. but this would be a pretty good improvement
> >>>>>>>> towards even less GC pressure.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> --
> >>>>>>>> Clebert Suconic
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> --
> >>>>>> Clebert Suconic
> >>>>
> >>>>
> >>
> >>
> >>
> >> --
> >> Clebert Suconic
> >
>
>
>
> --
> Clebert Suconic
>
