directory-dev mailing list archives

From Emmanuel Lecharny <elecha...@gmail.com>
Subject Re: Profiling DirServer with TPTP
Date Sun, 09 Apr 2006 08:07:15 GMT
Ole Ersoy wrote:

>Hey Guys,
Hi Ole,

><snip/>
>So now I'm trying to think what the right strategy is
>for testing the DS?
Well, you have many.

>So suppose I created a JNDI client that created say 10
>threads that all hammered DS with lookup requests,
>should that produce the concurrency issue?
I think that problem was solved by trustin last week. It was 
not really a concurrency problem, but pretty much a problem with thread 
pooling: I finished one test with more than 5000 threads...

><snip/>
>I don't think I would want to go through JNDI, since
>that means I'll be tunneling through a lot of JNDI
>method calls...assuming there's a more direct way of
>achieving the usage pattern...?
There is no problem with JNDI, assuming your client is *not* on the same 
computer as the server. From the server's point of view, JNDI is 
just a wrapping API, which costs virtually nothing.
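
For instance, a quick sketch of such a 10-thread lookup client could look 
like the code below. It assumes a default ApacheDS setup (LDAP on 
localhost:10389, binding as uid=admin,ou=system with password "secret"); 
adjust the URL, bind DN and lookup target to your own configuration:

import java.util.Hashtable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.naming.Context;
import javax.naming.directory.InitialDirContext;

public class LookupStress
{
    public static void main( String[] args ) throws Exception
    {
        // 10 client threads, each hammering the server with lookups
        ExecutorService pool = Executors.newFixedThreadPool( 10 );

        for ( int i = 0; i < 10; i++ )
        {
            pool.execute( new Runnable()
            {
                public void run()
                {
                    try
                    {
                        Hashtable<String, String> env = new Hashtable<String, String>();
                        env.put( Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory" );
                        env.put( Context.PROVIDER_URL, "ldap://localhost:10389" );
                        env.put( Context.SECURITY_PRINCIPAL, "uid=admin,ou=system" );
                        env.put( Context.SECURITY_CREDENTIALS, "secret" );

                        InitialDirContext ctx = new InitialDirContext( env );

                        // repeated lookups of the same entry
                        for ( int j = 0; j < 10000; j++ )
                        {
                            ctx.lookup( "uid=admin,ou=system" );
                        }

                        ctx.close();
                    }
                    catch ( Exception e )
                    {
                        e.printStackTrace();
                    }
                }
            } );
        }

        pool.shutdown();
    }
}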

>In that case how would I write that client so that I'm
>calling DS methods directly and which methods should
>be profiled first?
I don't think that is a good idea. You will spend a lot of time for very 
limited improvement. Unless you want to test specific parts of 
ApacheDS, like the backend, for instance, this is not really an approach 
that will help much in finding hot-spots.

><snip/>
>OK - Here's some key notes from the TPTP experience.
Thanks a lot for the TPTP feedback, it will be very useful!

FYI, we are using the YourKit profiling tool to do some profiling. It has 
given us some very valuable information about where the hotspots are. So 
far, we know that DN parsing is the major bottleneck. This is quite 
surprising, because it's a pure CPU operation, and one could think that 
backend operations would have been the main bottleneck. However, the 
tests were biased:
- first, we ran them with a very small amount of data in the backend, so 
the chances are that all of it was cached;
- second, the test always grabs the same data, so you can be sure 
that data is cached!
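
To give an idea of how to isolate that kind of hotspot, here is a minimal 
micro-benchmark sketch that times DN parsing alone. It uses the JDK's 
javax.naming.ldap.LdapName as a stand-in for the ApacheDS DN parser, just 
to measure the pure CPU cost of parsing:

import javax.naming.ldap.LdapName;

public class DnParseBench
{
    public static void main( String[] args ) throws Exception
    {
        String dn = "cn=John Doe,ou=people,dc=example,dc=com";

        long start = System.currentTimeMillis();

        for ( int i = 0; i < 1000000; i++ )
        {
            // parses the DN string into its RDN components on every call
            new LdapName( dn );
        }

        System.out.println( "1M parses in " + ( System.currentTimeMillis() - start ) + " ms" );
    }
}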

It would be very interesting to build a real sample, with thousands of 
entries, and real test scenarios that we can reproduce. This is something 
I have had in mind for months, and I'm trying to build it right now, but 
lack of time kills me. We also need the infrastructure to run those tests 
(at least three computers: a server and two clients).
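
As a starting point, generating the sample data is easy enough; a little 
sketch like the one below writes an LDIF file with 10000 inetOrgPerson 
entries (the dc=example,dc=com suffix and the attribute values are just 
placeholders):

import java.io.FileWriter;
import java.io.PrintWriter;

public class LdifGenerator
{
    public static void main( String[] args ) throws Exception
    {
        PrintWriter out = new PrintWriter( new FileWriter( "test-data.ldif" ) );

        for ( int i = 0; i < 10000; i++ )
        {
            // one LDIF record per entry, separated by a blank line
            out.println( "dn: uid=user." + i + ",ou=people,dc=example,dc=com" );
            out.println( "objectClass: inetOrgPerson" );
            out.println( "uid: user." + i );
            out.println( "cn: User " + i );
            out.println( "sn: Test" + i );
            out.println( "userPassword: password" + i );
            out.println();
        }

        out.close();
    }
}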

I would be very pleased if we could combine our efforts to build these 
test scenarios and do all the smoke tests needed to prove that ADS is 
solid, fast - and furious ;). Of course, we will then be able to work on 
the hot-spots to improve ADS.

Emmanuel Lécharny
