directory-dev mailing list archives

From "Alex Karasulu" <>
Subject Re: [Profiling] What should we test ?
Date Tue, 08 May 2007 13:45:32 GMT

I have so many ideas on this topic.  I'll try to outline and organize them
here in a coherent fashion.  Thanks, Emmanuel, for taking the initiative to
kick off this thread.

Performance Testing Environment

Testing Tool: SLAMD

We need to set up a SLAMD environment first, where the load generators run
on separate machines so that context switching does not interfere with the
measurements.

We need to either reuse existing SLAMD tests or devise our own.  Regardless,
we need some kind of test panel that exercises various operations, and
perhaps combinations of them, with various configurations.  I'm thinking of
having situations where all entries are in the cache/memory (preloaded)
versus ones where the cache is disabled (set to 1).

Tested Servers

I would like to run the test panel against several OSS servers on the market
to get some comparative figures.

ApacheDS 1.0
ApacheDS 1.5
ApacheDS x.y
OpenLDAP (current)
Fedora DS (current)
OpenDS (current)

It's very important here for us to establish a baseline for ADS and to do
benchmarks on the same hardware and operating environment.

Different Kinds of Benchmarks

Macro Performance Tests

We obviously want to test ADS as a standalone server to collect basic
metrics.  We will no doubt test the individual operations with specific
controls (controls in the experimental sense, not LDAP protocol controls).


Some of the tests that come out of the box with SLAMD are usable, but one
must understand that they perform more than one kind of operation at a time,
which results in cross reactivity while trying to measure the characteristics
of a single operation.  So while we can use these tests, we must also create
our own tests that isolate a specific operation.
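
For example, a test that does nothing but bind/unbind could be as simple as
the sketch below (plain JNDI rather than a SLAMD job; the URL and credentials
are just placeholders):

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.InitialDirContext;

public class BindUnbindProbe
{
    public static void main( String[] args ) throws Exception
    {
        int iterations = 5000;
        long start = System.currentTimeMillis();

        for ( int i = 0; i < iterations; i++ )
        {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put( Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory" );
            env.put( Context.PROVIDER_URL, "ldap://localhost:10389" );
            env.put( Context.SECURITY_AUTHENTICATION, "simple" );
            env.put( Context.SECURITY_PRINCIPAL, "uid=admin,ou=system" );
            env.put( Context.SECURITY_CREDENTIALS, "secret" );

            // creating the context performs the bind, closing it the unbind
            new InitialDirContext( env ).close();
        }

        long elapsed = System.currentTimeMillis() - start;
        System.out.println( ( iterations * 1000.0 / elapsed ) + " bind/unbind per second" );
    }
}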

While performing these tests we can extract more information by varying
parameters.  For example, we can vary the following:

op-specific parameters

For operation-specific parameters, search probably allows for the most
variables.  We need to test the server when different scopes are used and
with varying result set sizes.
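
Something along these lines could exercise the different scopes (again just
a JNDI sketch, not a SLAMD job; the base DN, filter and connection details
are made up):

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;

public class ScopeProbe
{
    private static final int[] SCOPES = new int[] {
        SearchControls.OBJECT_SCOPE,
        SearchControls.ONELEVEL_SCOPE,
        SearchControls.SUBTREE_SCOPE };

    public static void main( String[] args ) throws Exception
    {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put( Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory" );
        env.put( Context.PROVIDER_URL, "ldap://localhost:10389" );
        env.put( Context.SECURITY_AUTHENTICATION, "simple" );
        env.put( Context.SECURITY_PRINCIPAL, "uid=admin,ou=system" );
        env.put( Context.SECURITY_CREDENTIALS, "secret" );
        DirContext ctx = new InitialDirContext( env );

        for ( int scope : SCOPES )
        {
            SearchControls ctls = new SearchControls();
            ctls.setSearchScope( scope );

            long start = System.currentTimeMillis();
            int entries = 0;

            for ( int i = 0; i < 1000; i++ )
            {
                // drain the enumeration so the whole result set is actually read
                NamingEnumeration<?> results =
                    ctx.search( "ou=users,ou=system", "(objectClass=*)", ctls );
                while ( results.hasMore() )
                {
                    results.next();
                    entries++;
                }
            }

            System.out.println( "scope=" + scope + " entries=" + entries
                + " elapsed=" + ( System.currentTimeMillis() - start ) + "ms" );
        }

        ctx.close();
    }
}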

Micro Performance Tests

Besides testing the performance of the server with each operation, I would
like to configure ApacheDS with some modified versions of the operation
handlers (in the protocol layer), the interceptors and the default backend.
Basically these analogs will be instrumented versions of their standard
counterparts.  For example you might have a TestSearchHandler and a set of
MetricsInterceptors along with a TestJdbmBackend.  The idea is to start
collecting statistics on each operation at various levels in the server.
Then we can set up a server.xml file to use these TestXXX components instead
of the default components to collect micro metrics while saturating the
server.
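
Just to illustrate the idea, an instrumented analog would mostly be a thin
timing wrapper around the real work (this is only a sketch, not the actual
interceptor API; all the names here are hypothetical):

import java.util.concurrent.atomic.AtomicLong;

public class TimingWrapper
{
    private final AtomicLong count = new AtomicLong();
    private final AtomicLong totalNanos = new AtomicLong();

    /** Wraps one unit of work, e.g. one search handled by a TestSearchHandler. */
    public void time( Runnable operation )
    {
        long start = System.nanoTime();
        try
        {
            operation.run();
        }
        finally
        {
            count.incrementAndGet();
            totalNanos.addAndGet( System.nanoTime() - start );
        }
    }

    /** Average latency of the wrapped operation in microseconds. */
    public double averageMicros()
    {
        long n = count.get();
        return n == 0 ? 0.0 : ( totalNanos.get() / 1000.0 ) / n;
    }
}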

Another way in which we can enable these micro metrics is by designing them
into the components and switching the feature off when not testing.  A
configuration parameter can be used to dynamically enable/disable these
instrumentation features.
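
The switch itself could be as trivial as a flag that the instrumented
components check before recording anything (sketch only; the property name
is made up):

public class Instrumentation
{
    // read once at startup, can also be flipped at runtime (e.g. via JMX)
    private static volatile boolean enabled =
        Boolean.getBoolean( "apacheds.metrics.enabled" );

    public static boolean isEnabled()
    {
        return enabled;
    }

    public static void setEnabled( boolean on )
    {
        enabled = on;
    }
}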

Capacity Performance Tests

These tests will be critical for tuning the partition implementation and
developing better heuristics for it.  We can also build new partition
implementations and test them to compare their performance characteristics.

We also need to test various operations as we scale the size of a partition,
to identify and fix performance issues that arise with increased capacity.

We could have various snapshots of the server, each with a different
capacity.  When capacity tests are to be conducted we can clone the snapshot
and use it for the tests.
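
For building those snapshots, a small generator that writes LDIF files of
different sizes would do (a sketch; the attribute layout and naming are
arbitrary):

import java.io.FileWriter;
import java.io.PrintWriter;

public class LdifGenerator
{
    public static void main( String[] args ) throws Exception
    {
        // number of entries to generate, e.g. 10000, 100000, 1000000
        int count = Integer.parseInt( args[0] );
        PrintWriter out = new PrintWriter( new FileWriter( "users-" + count + ".ldif" ) );

        for ( int i = 0; i < count; i++ )
        {
            out.println( "dn: uid=user." + i + ",ou=users,ou=system" );
            out.println( "objectClass: inetOrgPerson" );
            out.println( "objectClass: organizationalPerson" );
            out.println( "objectClass: person" );
            out.println( "objectClass: top" );
            out.println( "uid: user." + i );
            out.println( "cn: User " + i );
            out.println( "sn: " + i );
            out.println( "userPassword: password" );
            out.println();
        }

        out.close();
    }
}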


On 5/7/07, Emmanuel Lecharny <> wrote:
> Hi,
> I start this thread so that we can discuss how we profile the server,
> and what we should profile. The idea is to build a correct base for
> profiling sessions, so that we can evaluate many parts of the server.
> In my mind, we should try to run these profiling sessions:
> - adding N entries in the server, with indexed attributes
> - adding N entries in the server, without any indexed attributes
> - deleting N entries in the server, with indexed attributes
> - deleting N entries in the server, without indexed attributes
> - searching N entries in the server, with indexed attributes, and using
> those indexed attributes. The entries will be picked randomly. The cache
> should be bigger than the number of entries, so that searches never hit
> the disk
> - searching N times for the same entry, using an indexed attribute
> - doing N bind and N unbind requests
> - we should also test the server without MINA. To test the server
> without MINA, it's enough to test an embedded server.
> - to test MINA alone, we should also write a no-op LdapSearchHandler that
> returns the same pre-built result, so that we don't pass through the whole
> interceptor chain.
> Any more suggestions?
> Emmanuel
