activemq-dev mailing list archives

From Clebert Suconic <clebert.suco...@gmail.com>
Subject Re: [DISCUSS] Artemis IOPS Limiter strategy
Date Sat, 13 May 2017 12:58:53 GMT
This option is so close to the code that it feels like changing code, not
actually configuring it.

An option this close to the code is better handled by versioning the code.

You understand the config, but it's not user friendly.

If the buffer is no good, we revert it and only apply it again once we
think it's safe, IMHO.
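
To make that concrete, such a toggle would end up looking roughly like the
sketch below (the property name and classes are made up, purely to
illustrate; this is not a real Artemis setting): the config value maps
one-to-one onto an implementation class, so "configuring" it is really
picking which code runs.

    import java.util.Properties;

    // Sketch only: "journal-buffer-impl" and the classes here are made up
    // to illustrate the point; they are not real Artemis settings/classes.
    public final class TimedBufferToggle {

        interface TimedBuffer { void flush(); }

        static final class LegacyTimedBuffer implements TimedBuffer {
            public void flush() { /* old flush strategy */ }
        }

        static final class NewTimedBuffer implements TimedBuffer {
            public void flush() { /* reworked flush strategy */ }
        }

        static TimedBuffer create(Properties config) {
            // The value selects an implementation class directly.
            String impl = config.getProperty("journal-buffer-impl", "new");
            switch (impl) {
                case "legacy": return new LegacyTimedBuffer();
                case "new":    return new NewTimedBuffer();
                default: throw new IllegalArgumentException("unknown impl: " + impl);
            }
        }
    }

That is versioning by another name, which is why I'd rather keep it in the
code.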


On Sat, May 13, 2017 at 5:56 AM Michael André Pearce <
michael.andre.pearce@me.com> wrote:

> So as a dev and a guy who wants performance, I'm very happy with the
> results; I'm definitely 100% for this. And don't get me wrong, I'm fully
> backing this.
>
> The reason I'm flagging that it should be configurable is really me
> putting my managerial hat on about managing roll-out risk, as different
> hardware disks can all perform very differently.
>
> Obviously we can remove the option later if maintaining the toggles
> becomes a pain, once a good base of production deployments are happy.
>
> Sent from my iPhone
>
> > On 13 May 2017, at 09:38, nigro_franz <nigro.fra@gmail.com> wrote:
> >
> > I'm happy that no evident regressions due to the new implementation were
> > found: IMHO only feedback and proper tests will give us a baseline to
> > reason on and improve the options for end users.
> >
> > I have a couple of results from an all-out throughput test anyway, but
> > it is not representative of any production use case; it is simply a
> > crash test. No real system AFAIK (maybe trading ticks) sends infinite
> > bursts of durable data at all-out throughput AND expects exceptional
> > latencies too: see "How NOT to Measure Latency" by Gil Tene - Safely
> > sustainable throughput example <https://youtu.be/lJ8ydIuPFeU?t=1686>.
> > Instead, what I'm expecting is measurement of the level of service
> > quality (e.g. end-to-end responsiveness) under different production
> > load profiles (e.g. using a target throughput, or with message
> > scheduling reflecting real production use).
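> >
> > To make the target-throughput idea concrete, here is a rough sketch
> > (just an illustration: send() is a stand-in for a real durable send,
> > not any Artemis API). The key point from Gil Tene's talk is that
> > latency is taken from the *intended* send time, so a slow send cannot
> > hide the backlog that builds up behind it:
> >
> >     import java.util.concurrent.TimeUnit;
> >
> >     // Fixed-rate load generator sketch: measure from the scheduled
> >     // send time so coordinated omission does not hide queueing delay.
> >     public final class TargetRateBench {
> >
> >         static void send() { /* stand-in for a durable message send */ }
> >
> >         public static void main(String[] args) {
> >             final long targetRate = 10_000; // messages per second
> >             final long periodNanos = TimeUnit.SECONDS.toNanos(1) / targetRate;
> >             long intended = System.nanoTime();
> >
> >             for (int i = 0; i < 1_000_000; i++) {
> >                 intended += periodNanos;
> >                 // wait for the scheduled time (no-op if we are behind)
> >                 while (System.nanoTime() < intended) { Thread.yield(); }
> >                 send();
> >                 // latency relative to the schedule, backlog included
> >                 long latencyNanos = System.nanoTime() - intended;
> >                 // record latencyNanos into a histogram, e.g. HdrHistogram
> >             }
> >         }
> >     }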
> >
> > Anyway, the results are not bad at all: the new TimedBuffer behaves
> > pretty well (with and without the IO limiter) even when a disk is being
> > maxed out. Just as a comparison, it provides better throughput (>30%
> > with ASYNCIO) with a latency profile similar to the original one, as
> > stated in the first posts:
> >
> > <http://activemq.2283324.n4.nabble.com/file/n4726119/image_%282%29.png>
> >
> > <http://activemq.2283324.n4.nabble.com/file/n4726119/image_%283%29.png>
> >
> > My only concern is that I was expecting somewhat worse results from it,
> > and that's why I discussed using an IO limiter on it (adapting the
> > existing TokenXXX would be cool; kudos to Clebert for it)...
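> >
> > For reference, the shape I have in mind for the limiter is a plain
> > token bucket, sketched below (nothing Artemis-specific; the class and
> > names are illustrative): each write takes one token, and tokens refill
> > continuously at the configured IOPS rate.
> >
> >     // Plain token-bucket sketch of an IOPS limiter: each write must
> >     // take a token; tokens refill at the configured rate.
> >     public final class IopsLimiter {
> >
> >         private final long maxIops;
> >         private double tokens;
> >         private long lastRefillNanos;
> >
> >         public IopsLimiter(long maxIops) {
> >             this.maxIops = maxIops;
> >             this.tokens = maxIops;
> >             this.lastRefillNanos = System.nanoTime();
> >         }
> >
> >         // Busy-waits (kept simple for the sketch) until a token is free.
> >         public synchronized void acquire() {
> >             refill();
> >             while (tokens < 1) {
> >                 Thread.yield();
> >                 refill();
> >             }
> >             tokens -= 1;
> >         }
> >
> >         private void refill() {
> >             long now = System.nanoTime();
> >             double elapsedSec = (now - lastRefillNanos) / 1_000_000_000.0;
> >             tokens = Math.min(maxIops, tokens + elapsedSec * maxIops);
> >             lastRefillNanos = now;
> >         }
> >     }
> >
> > A journal write path would then call acquire() before each physical
> > write to cap the IO rate.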
-- 
Clebert Suconic
