incubator-s4-dev mailing list archives

From "Matthieu Morel (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (S4-95) Add reproducible performance benchmarks
Date Thu, 20 Sep 2012 16:25:07 GMT

    [ https://issues.apache.org/jira/browse/S4-95?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13459711#comment-13459711 ]

Matthieu Morel commented on S4-95:
----------------------------------

I slightly updated the benchmarking framework and added several optimizations. They are included in this branch so that current performance work is centralized here.

Current optimizations mainly focus on messaging:
- message serialization now uses Kryo 2 and, in particular, avoids the previously identified deserialization bottleneck (a rough sketch of the pattern is included below)
- serialized messages are passed around as byte buffers in order to limit copies (and there is room for further improvement here)
- intermediate EventMessage instances have been removed

My throughput measurements (messages/s), reproducible with the benchmarking framework, show improvements of up to 300% with respect to the previous commit.
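
For reference, the byte-buffer approach follows the usual Kryo 2 pattern: serialize the event once into a reusable Output, wrap the resulting bytes in a ByteBuffer, and hand that buffer to the messaging layer without additional copies. The sketch below is illustrative only; the Event class, the registration, and the buffer sizes are placeholders, not the actual classes used in the branch.

    import java.nio.ByteBuffer;

    import com.esotericsoftware.kryo.Kryo;
    import com.esotericsoftware.kryo.io.Input;
    import com.esotericsoftware.kryo.io.Output;

    public class KryoByteBufferSketch {

        // Placeholder event type; the branch uses S4's own event classes
        public static class Event {
            String key;
            long value;
        }

        public static void main(String[] args) {
            Kryo kryo = new Kryo();
            kryo.register(Event.class);

            Event event = new Event();
            event.key = "sensor-1";
            event.value = 42L;

            // Serialize once into a bounded, reusable buffer (sizes are arbitrary here)
            Output output = new Output(4096, 65536);
            kryo.writeObject(output, event);

            // Wrap the serialized form in a ByteBuffer so downstream stages can
            // pass it around without re-serializing or copying the payload
            ByteBuffer buffer = ByteBuffer.wrap(output.toBytes());

            // Receiving side: deserialize directly from the buffer's backing array
            Input input = new Input(buffer.array());
            Event received = kryo.readObject(input, Event.class);
            System.out.println(received.key + " -> " + received.value);
        }
    }

Deserialization reads straight from the buffer's backing array, so the receiving side avoids an extra copy as well.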
                
> Add reproducible performance benchmarks
> ---------------------------------------
>
>                 Key: S4-95
>                 URL: https://issues.apache.org/jira/browse/S4-95
>             Project: Apache S4
>          Issue Type: Test
>    Affects Versions: 0.6
>            Reporter: Matthieu Morel
>            Assignee: Matthieu Morel
>
> In order to track performance improvements, we need some reproducible performance benchmarks. Here are some ideas of what we'd need:
> - use PEs that do nothing but create a new message and forward it; this lets us focus on the overhead of the platform
> - what is the maximum throughput without dropping messages on a given host (in a setup with 1 adapter node and 1 or 2 app nodes)
> - what is the latency for end-to-end processing (avg, median, etc.)
> - use a very simple app, with only 1 PE prototype
> - vary the number of keys
> - use a slightly more complex app (at least 2 communicating prototypes), in order to take inter-PE communications and related optimizations into account
> - start measurements after a warmup phase
> Some tests could be part of the test suite (by specifying a given option for those performance-related tests). That would allow some tracking of the performance.
> We could also add a simple injection mechanism that works out of the box with the example bundled with new S4 apps (through the "s4 newApp" command).
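
As a rough, framework-agnostic illustration of the warmup-then-measure idea listed above, the sketch below drives a stand-in no-op handler for a warmup period, then counts messages and samples per-message latencies over a fixed measurement window. The handler, the durations, and the message shape are placeholder assumptions; the actual benchmarks drive real S4 adapter and app nodes instead.

    import java.util.Arrays;

    public class ThroughputSketch {

        // Stand-in for the pass-through work a no-op PE would do
        static volatile long sink;

        static void handle(long payload) {
            sink = payload;
        }

        public static void main(String[] args) {
            final long warmupNanos = 5_000_000_000L;    // 5 s warmup, arbitrary
            final long measureNanos = 10_000_000_000L;  // 10 s measurement window, arbitrary
            final int maxSamples = 1_000_000;

            long[] latencies = new long[maxSamples];
            int samples = 0;
            long count = 0;

            // Warmup phase: exercise the code path but discard all measurements
            long start = System.nanoTime();
            while (System.nanoTime() - start < warmupNanos) {
                handle(count++);
            }

            // Measurement phase: count messages and sample per-message latency
            count = 0;
            start = System.nanoTime();
            while (System.nanoTime() - start < measureNanos) {
                long t0 = System.nanoTime();
                handle(count);
                long t1 = System.nanoTime();
                if (samples < maxSamples) {
                    latencies[samples++] = t1 - t0;
                }
                count++;
            }
            long elapsed = System.nanoTime() - start;

            double throughput = count / (elapsed / 1e9);
            long[] sorted = Arrays.copyOf(latencies, samples);
            Arrays.sort(sorted);
            long median = sorted[samples / 2];

            System.out.printf("throughput: %.0f msg/s, median latency: %d ns%n", throughput, median);
        }
    }

Reporting the median alongside the average, as suggested above, helps when the latency distribution has a long tail.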

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
