incubator-s4-dev mailing list archives

From "Matthieu Morel (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (S4-95) Add reproducible performance benchmarks
Date Sat, 08 Dec 2012 14:47:21 GMT

    [ https://issues.apache.org/jira/browse/S4-95?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13527154#comment-13527154 ]

Matthieu Morel commented on S4-95:
----------------------------------

Just uploaded some more improvements in commit 6fe7ea8.

By default, PEs are executed using a load-shedding stream executor (i.e. events are dropped
when the work queue is full) and a blocking sender.
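
To make the dropping semantics concrete, here is a minimal sketch of a load-shedding executor
built only from standard java.util.concurrent classes (a bounded work queue plus a discard
policy). The class name and parameters are made up for illustration; this is not the actual
executor introduced in commit 6fe7ea8.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class LoadSheddingExecutorSketch {

    public static ThreadPoolExecutor create(int workers, int queueCapacity) {
        // Bounded work queue: once it fills up, newly submitted events are shed.
        ArrayBlockingQueue<Runnable> workQueue = new ArrayBlockingQueue<>(queueCapacity);

        // DiscardPolicy silently drops a rejected task, i.e. the event is lost
        // instead of blocking the submitter.
        return new ThreadPoolExecutor(workers, workers, 0L, TimeUnit.MILLISECONDS,
                workQueue, new ThreadPoolExecutor.DiscardPolicy());
    }

    public static void main(String[] args) {
        ThreadPoolExecutor executor = create(1, 1000);
        for (int i = 0; i < 10000; i++) {
            final int eventId = i;
            executor.execute(() -> process(eventId)); // shed once the queue is full
        }
        executor.shutdown();
    }

    private static void process(int eventId) {
        // stand-in for PE event processing
    }
}

With a discard policy, submission never blocks, so bursts beyond the queue capacity are simply dropped.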

Other executors are also provided and can be used as replacements, for instance a memory-aware
executor (from Netty) and a throttling executor, which limits the maximum rate of task submission.
The throttling executor can be useful for pacing event sources, for example.
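
For the throttling case, here is a rough sketch of the idea: pace submissions so that at most a
given number of tasks per second reach the delegate executor. Again the names and parameters are
illustrative assumptions, not the implementation in the commit.

import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Paces submissions so that at most maxRatePerSecond tasks per second reach
// the delegate executor; the submitting thread waits until the next slot.
public class ThrottlingExecutorSketch implements Executor {

    private final Executor delegate;
    private final long minIntervalNanos;
    private long nextSlotNanos = System.nanoTime();

    public ThrottlingExecutorSketch(Executor delegate, int maxRatePerSecond) {
        this.delegate = delegate;
        this.minIntervalNanos = 1_000_000_000L / maxRatePerSecond;
    }

    @Override
    public synchronized void execute(Runnable task) {
        long now = System.nanoTime();
        long waitNanos = nextSlotNanos - now;
        if (waitNanos > 0) {
            try {
                // Throttle the event source by making it wait for the next slot.
                Thread.sleep(waitNanos / 1_000_000L, (int) (waitNanos % 1_000_000L));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        nextSlotNanos = Math.max(nextSlotNanos, now) + minIntervalNanos;
        delegate.execute(task);
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Executor throttled = new ThrottlingExecutorSketch(pool, 1000); // ~1000 events/s
        for (int i = 0; i < 5000; i++) {
            throttled.execute(() -> { /* inject one event */ });
        }
        pool.shutdown();
    }
}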

It is also possible to use blocking stream executors in order to prevent event loss, though,
depending on the design of the app, combining a blocking sender with a blocking stream executor
may lead to deadlocks. When using a load-shedding executor, event loss during bursts can be
minimized by using large work queues.
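
A common way to get the blocking behavior with a plain ThreadPoolExecutor is a rejection handler
that does a blocking put on the work queue. The sketch below is just that generic pattern, not
the S4 implementation, and it comes with the caveat mentioned above: blocking submitters is
exactly where a deadlock can appear if the sender also blocks.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BlockingExecutorSketch {

    public static ThreadPoolExecutor create(int workers, int queueCapacity) {
        return new ThreadPoolExecutor(workers, workers, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCapacity),
                // Instead of dropping, block the submitting thread until the work
                // queue has room again, so no event is lost.
                (task, executor) -> {
                    try {
                        executor.getQueue().put(task);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        throw new RejectedExecutionException("interrupted while waiting for queue space", e);
                    }
                });
    }
}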
                
> Add reproducible performance benchmarks
> ---------------------------------------
>
>                 Key: S4-95
>                 URL: https://issues.apache.org/jira/browse/S4-95
>             Project: Apache S4
>          Issue Type: Test
>    Affects Versions: 0.6
>            Reporter: Matthieu Morel
>            Assignee: Matthieu Morel
>
> In order to track performance improvements, we need some reproducible performance benchmarks.
> Here are some ideas of what we'd need:
> - use PEs that do nothing but create a new message and forward it; this allows us to focus
> on the overhead of the platform
> - what is the maximum throughput without dropping messages on a given host (in a setup
> with 1 adapter node and 1 or 2 app nodes)
> - what is the latency for end-to-end processing (avg, median, etc.)
> - using a very simple app, with only 1 PE prototype
> - varying the number of keys
> - using a slightly more complex app (at least 2 communicating prototypes), in order to
> take into account inter-PE communications and related optimizations
> - start measurements after a warmup phase
> Some tests could be part of the test suite (by specifying a given option for those performance-related
> tests). That would allow some tracking of the performance over time.
> We could also add a simple injection mechanism that would work out of the box with the
> example bundled with new S4 apps (through the "s4 newApp" command).
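
To illustrate the measurement side of the ideas quoted above (warmup phase, then latency avg/median
and throughput), here is a toy harness. The processOneEvent call is a hypothetical stand-in; a real
benchmark would send events through the S4 communication layer and the do-nothing forwarding PEs.

import java.util.Arrays;

public class LatencyHarnessSketch {

    public static void main(String[] args) {
        int warmupEvents = 100_000;
        int measuredEvents = 1_000_000;

        // Warmup phase: results are discarded so JIT compilation and buffer
        // allocation do not skew the measurements.
        for (int i = 0; i < warmupEvents; i++) {
            processOneEvent();
        }

        long[] latenciesNanos = new long[measuredEvents];
        long startNanos = System.nanoTime();
        for (int i = 0; i < measuredEvents; i++) {
            long t0 = System.nanoTime();
            processOneEvent();
            latenciesNanos[i] = System.nanoTime() - t0;
        }
        long elapsedNanos = System.nanoTime() - startNanos;

        Arrays.sort(latenciesNanos);
        double avgMicros = Arrays.stream(latenciesNanos).average().orElse(0) / 1_000.0;
        double medianMicros = latenciesNanos[measuredEvents / 2] / 1_000.0;
        double throughput = measuredEvents / (elapsedNanos / 1_000_000_000.0);

        System.out.printf("throughput: %.0f events/s, latency avg: %.1f us, median: %.1f us%n",
                throughput, avgMicros, medianMicros);
    }

    // Stand-in for sending one event through a do-nothing PE and getting it back.
    private static void processOneEvent() {
        // no-op
    }
}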

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
