directory-dev mailing list archives

From Alex Karasulu <>
Subject Re: [ApacheDS] Performance testing
Date Thu, 29 Jun 2006 20:39:11 GMT


More inline ...

Emmanuel Lecharny wrote:
> On 6/28/06, *Alex Karasulu* <> wrote:
>     Need Benchmarking/Profiling Perf Suite (BPPS) for ApacheDS
>     ==========================================================
> <snip/>
>     I did the first thing anyone would do: I tapped Emmanuel on the
>     shoulder and asked for the materials from his AC EU presentation.  I
>     did not want to repeat the work he had already done.
> Sorry about being in a rush - I provided very little information, I'm
> afraid :(

No problem.

>     Please, Emmanuel, take no offense, but I found the setup and repeated
>     work to be a bit of a hassle.
> I find it a PITA :)
>     I'm sure you were bothered by doing things
>     manually yourself.  
> Sometimes lack of time drives you to make the most common mistake: not
> spending a bit more time doing things correctly right away, and instead
> postponing the task forever...

Oh so true.

>     Plus I wanted to profile these tests inside
>     Eclipse using YourKit.  Anyway, I came to a final conclusion:
>     *Conclusion*: We need a repeatable benchmarking/profiling performance
>     test suite for ApacheDS that can be run easily.
> +1
>     Requirements for BPPS
>     =====================
>     Here's what I started asking myself internally.  Please add to this
>     list if you can think of other requirements.
>     (1a) Need repeatable performance tests with setup prep and tear down
>     (1b) Tests should be able to load an initial data set (LDIF) into server
>     (2) I should be able to use Maven or Ant to kick off these tests
>     (3) Tests should produce some kind of report
>     (4) Tests should easily be pointed to benchmark other servers
>     (5) Make it easy to create a new performance test.
>     (6) I want a summary of the conditions in the test report which include
>     the setup parameters for:
>             o operations performed
>             o capacity
>             o concurrency
>             o hardware
>             o operating system
> We also need different kinds of tests.  Many parts of the server can be
> tested separately, so micro-benchmarks may be added.  A sub-project is
> needed for this!
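+1 on the sub-project.  For requirements (1a), (1b) and (3) I'm thinking
of something along the lines of this rough sketch (all class and method
names here are made up for illustration):

import junit.framework.TestCase;

/**
 * Sketch of a base class for repeatable perf tests.  Subclasses
 * implement prepare()/cleanup() for requirement (1a) and report how
 * many operations ran, so tearDown() can emit a report line (3).
 */
public abstract class AbstractPerformanceTestCase extends TestCase
{
    private long startTime;

    protected void setUp() throws Exception
    {
        super.setUp();
        prepare();  // e.g. start embedded ApacheDS, load the initial LDIF (1b)
        startTime = System.currentTimeMillis();
    }

    protected void tearDown() throws Exception
    {
        long elapsed = System.currentTimeMillis() - startTime;

        // Requirement (3): a crude report line with the test conditions.
        System.out.println( getName() + ": " + getIterations() + " ops in "
            + elapsed + " ms ("
            + ( getIterations() * 1000L / Math.max( 1, elapsed ) ) + " ops/s)" );

        cleanup();
        super.tearDown();
    }

    /** Prepares the server and the initial data set (1a), (1b). */
    protected abstract void prepare() throws Exception;

    /** Tears the server or connection back down (1a). */
    protected abstract void cleanup() throws Exception;

    /** Number of operations the concrete test performed, for the report. */
    protected abstract int getIterations();
}

Subclasses for the embedded and networked configurations would just
implement prepare() differently.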
>     Existing work and potential approaches
>     ======================================
>     I figured using JUnit was the best way to test ApacheDS or any other
>     server.  Plus I could setUp and tearDown test cases.  The only thing I
>     needed to do was make a base test case or two for the various ApacheDS
>     configurations (embedded testing versus full networked testing).
>     The first base test case, for embedded testing, was setup here:
>     <>
>     Yeah it's weak and I'll try to add to it.  What I would like to do is
>     invite people to work with me on setting up this
>     benchmarking/profiling/perf testing framework.
>     Comments? Thoughts?
> Well, as we are in the process of writing JUnit test cases for the
> Import command with my intern, we may use those tests to feed the server
> with various LDIF files.  As we can add, modify, or delete entries, this
> seems to be a good approach.
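That covers (1b) nicely.  Until the Import command tests land, even a
tiny hand-rolled loader over JNDI would do for test setup.  A rough
sketch - it only handles well-formed "attr: value" records separated by
blank lines (no changetypes, continuations, or base64), and it assumes
the default uid=admin,ou=system account:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Hashtable;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

/** Minimal LDIF feeder for perf test setup.  Illustration only. */
public class SimpleLdifLoader
{
    public static void load( String ldifFile, String url ) throws Exception
    {
        Hashtable env = new Hashtable();
        env.put( "java.naming.factory.initial",
            "com.sun.jndi.ldap.LdapCtxFactory" );
        env.put( "java.naming.provider.url", url );
        env.put( "java.naming.security.authentication", "simple" );
        env.put( "java.naming.security.principal", "uid=admin,ou=system" );
        env.put( "java.naming.security.credentials", "secret" );
        DirContext ctx = new InitialDirContext( env );

        BufferedReader in = new BufferedReader( new FileReader( ldifFile ) );
        String dn = null;
        BasicAttributes attrs = new BasicAttributes( true );
        String line;

        while ( ( line = in.readLine() ) != null )
        {
            if ( line.trim().length() == 0 )
            {
                // A blank line ends a record: add the entry and reset.
                if ( dn != null )
                {
                    ctx.createSubcontext( dn, attrs );
                }

                dn = null;
                attrs = new BasicAttributes( true );
                continue;
            }

            int colon = line.indexOf( ':' );
            String id = line.substring( 0, colon );
            String value = line.substring( colon + 1 ).trim();

            if ( id.equalsIgnoreCase( "dn" ) )
            {
                dn = value;
            }
            else
            {
                BasicAttribute attr = ( BasicAttribute ) attrs.get( id );

                if ( attr == null )
                {
                    attr = new BasicAttribute( id );
                    attrs.put( attr );
                }

                attr.add( value );
            }
        }

        if ( dn != null )
        {
            ctx.createSubcontext( dn, attrs );  // last record in the file
        }

        in.close();
        ctx.close();
    }
}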

This is what I wanted to hear especially below:

> At this point, I really want to stress that we have three different kinds
> of "benchmarks":
> 1) general benchmarks, used to compare two products, for instance.  The
> target is to see how a server behaves when used in a standard way (I
> mean, like in a company).  SLAMD could be the perfect tool to do that.
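Agreed, SLAMD is a good fit for 1).  And even without SLAMD, the core of
a category 1) measurement is small enough to sketch - N threads
hammering the server with subtree searches over JNDI, which also covers
requirement (4) since any LDAP URL works (names below are made up):

import java.util.Hashtable;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;

/** Crude search throughput measurement against any LDAP server. */
public class SearchThroughput
{
    public static void main( String[] args ) throws Exception
    {
        final String url = args[0];  // e.g. ldap://localhost:10389/ou=system
        final int opsPerThread = Integer.parseInt( args[2] );
        int threads = Integer.parseInt( args[1] );

        Thread[] workers = new Thread[threads];
        long start = System.currentTimeMillis();

        for ( int i = 0; i < threads; i++ )
        {
            workers[i] = new Thread()
            {
                public void run()
                {
                    try
                    {
                        Hashtable env = new Hashtable();
                        env.put( "java.naming.factory.initial",
                            "com.sun.jndi.ldap.LdapCtxFactory" );
                        env.put( "java.naming.provider.url", url );
                        DirContext ctx = new InitialDirContext( env );
                        SearchControls ctls = new SearchControls();
                        ctls.setSearchScope( SearchControls.SUBTREE_SCOPE );

                        for ( int j = 0; j < opsPerThread; j++ )
                        {
                            NamingEnumeration results =
                                ctx.search( "", "(objectClass=*)", ctls );

                            while ( results.hasMore() )
                            {
                                results.next();  // drain all entries
                            }
                        }

                        ctx.close();
                    }
                    catch ( Exception e )
                    {
                        e.printStackTrace();
                    }
                }
            };
            workers[i].start();
        }

        for ( int i = 0; i < threads; i++ )
        {
            workers[i].join();
        }

        long elapsed = System.currentTimeMillis() - start;
        long total = ( long ) threads * opsPerThread;
        System.out.println( total + " searches with " + threads + " threads in "
            + elapsed + " ms -> " + ( total * 1000 / Math.max( 1, elapsed ) )
            + " ops/s" );
    }
}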


> 2) "profiling" benchmarks : this is quite a different beast. We do
> profiling benchmarks to compare two versions of the same product to
> check that a modifcation done on the server improve its averall
> performance - or not - :) To do that, we need specialized tests, may be
> hand-coded, but definitively tests that can be reproduced with a tool
> (maven, ant, whatever )
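Right - and 2) is exactly what the base test case sketched above is
aimed at.  A concrete profiling test would just extend it; for instance
(hypothetical names again, using the default admin account and port):

import java.util.Hashtable;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

/** Times a batch of adds over the network; illustration only. */
public class AddPerformanceTest extends AbstractPerformanceTestCase
{
    private static final int COUNT = 1000;
    private DirContext ctx;

    protected void prepare() throws Exception
    {
        Hashtable env = new Hashtable();
        env.put( "java.naming.factory.initial",
            "com.sun.jndi.ldap.LdapCtxFactory" );
        env.put( "java.naming.provider.url",
            "ldap://localhost:10389/ou=system" );
        env.put( "java.naming.security.authentication", "simple" );
        env.put( "java.naming.security.principal", "uid=admin,ou=system" );
        env.put( "java.naming.security.credentials", "secret" );
        ctx = new InitialDirContext( env );
    }

    public void testAddEntries() throws Exception
    {
        for ( int i = 0; i < COUNT; i++ )
        {
            BasicAttributes attrs = new BasicAttributes( true );
            BasicAttribute oc = new BasicAttribute( "objectClass" );
            oc.add( "top" );
            oc.add( "organizationalUnit" );
            attrs.put( oc );
            attrs.put( "ou", "perf" + i );
            ctx.createSubcontext( "ou=perf" + i, attrs );
        }
    }

    protected void cleanup() throws Exception
    {
        for ( int i = 0; i < COUNT; i++ )
        {
            try
            {
                ctx.destroySubcontext( "ou=perf" + i );
            }
            catch ( Exception e )
            {
                // Best-effort cleanup: entry may be missing if the test failed.
            }
        }

        ctx.close();
    }

    protected int getIterations()
    {
        return COUNT;
    }
}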


> 3) Micro-benchmarks: people don't like them... but they are useful in a
> certain way.  Sometimes one may want to check that a piece of code has
> been improved (for instance, I wrote such benchmarks to compare the
> performance of the StringTools.trim() function against the JDK one, and
> it helped me find that the version in StringTools was incredibly stupid
> and slow... thanks to Stéphane Bailliez, who pointed it out at first
> glance :) ).  So, yes, they are useful.

+1, but unfortunately we cannot build a framework for this.  It's
something we must do by hand to isolate poorly performing code units.  It
will always be a hand-woven task.  We can use simple JUnit tests for this
and perhaps mix in some of the YourKit APIs.
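For instance, a hand-woven microbenchmark like the trim() comparison
above boils down to a warmed-up timing loop.  A sketch - customTrim()
below is just a stand-in for whatever is under test, e.g.
StringTools.trim() from shared-ldap:

/** Hand-woven microbenchmark comparing two trim implementations. */
public class TrimMicroBenchmark
{
    private static final int ITERATIONS = 1000000;
    private static final String SAMPLE = "   some attribute value   ";

    public static void main( String[] args )
    {
        int sink = 0;

        // Warm-up pass so the JIT has compiled both code paths.
        for ( int i = 0; i < ITERATIONS; i++ )
        {
            sink += SAMPLE.trim().length();
            sink += customTrim( SAMPLE ).length();
        }

        long start = System.currentTimeMillis();

        for ( int i = 0; i < ITERATIONS; i++ )
        {
            // Use the result so the loop cannot be optimized away.
            sink += SAMPLE.trim().length();
        }

        long jdk = System.currentTimeMillis() - start;
        start = System.currentTimeMillis();

        for ( int i = 0; i < ITERATIONS; i++ )
        {
            sink += customTrim( SAMPLE ).length();
        }

        long custom = System.currentTimeMillis() - start;

        System.out.println( "String.trim(): " + jdk + " ms, custom trim: "
            + custom + " ms over " + ITERATIONS + " calls (sink=" + sink + ")" );
    }

    /** Stand-in for the method under test. */
    private static String customTrim( String s )
    {
        int start = 0;
        int end = s.length();

        while ( ( start < end ) && ( s.charAt( start ) == ' ' ) )
        {
            start++;
        }

        while ( ( end > start ) && ( s.charAt( end - 1 ) == ' ' ) )
        {
            end--;
        }

        return s.substring( start, end );
    }
}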

> At this point, we definitely need a sub-project, with a tool to select
> the benchmarks we may want to run.  It's nonsense to launch all the
> benchmarks for a four-hour session if we need only one single result.
> This summer we will have a little bit of time to spend on these subjects,
> and it's obviously a DEF-CON 1 task - along with bug fixes and doco.
> Someone may want to dedicate some time to this subject, too, because it
> will be a full-time job for two months ...
> OK, those were my comments.  I hope it all makes sense...

Yes it did, thanks :).
