On Wed, May 2, 2012 at 1:54 PM, Emmanuel Lécharny <elecharny@gmail.com> wrote:
On 5/2/12 12:33 PM, Alex Karasulu wrote:
On Wed, May 2, 2012 at 12:49 PM, Emmanuel Lécharny <elecharny@gmail.com> wrote:

On 5/2/12 9:53 AM, Alex Karasulu wrote:

On Wed, May 2, 2012 at 2:43 AM, Emmanuel Lécharny <elecharny@gmail.com> wrote:

On 5/1/12 3:05 PM, Alex Karasulu wrote:
On Tue, May 1, 2012 at 4:08 AM, Emmanuel Lécharny <elecharny@gmail.com> wrote:

o Object scope search (lookup): 49 880 req/s, compared to 23 081 on the previous trunk
o One Level scope search (5 entries returned): 68 715 entries returned per second, compared to 33 120/s
o Sub Level scope search (10 entries returned): 70 830 entries returned per second, compared to 18 910/s


This is great work Emmanuel. Nicely done!

I have some even better results, as of today:
o Object scope search (lookup): 52 712 req/s, compared to 23 081 on the previous trunk
o One Level scope search (5 entries returned): 72 635 entries returned per second, compared to 33 120/s
o Sub Level scope search (10 entries returned): 75 100 entries returned per second, compared to 18 910/s


This is just sick, man! You've more than doubled the performance.

A new idea this morning:

Atm, we clone the entries we fetch from the server, then we filter the Attributes and the values, modifying the cloned entries. This leads to the useless creation of Attributes and Values that end up being removed. We suggested accumulating the modifications and applying them at the end, avoiding the cloning of Attributes which will not be returned.

First of all, we can avoid cloning the Values. The Value implementations are immutable classes. This saves around 7% of the time.
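
As a rough illustration of the point about immutable Values: copying an Attribute can simply reuse the existing Value references instead of cloning each one. The Attribute and Value classes below are simplified stand-ins for the sake of the example, not the actual ApacheDS entry model API:

    import java.util.ArrayList;
    import java.util.List;

    // Simplified stand-in for an immutable Value: its payload never changes
    // after construction, so sharing references between entries is safe.
    final class Value {
        private final String data;

        Value(String data) { this.data = data; }

        String getData() { return data; }
    }

    // Simplified stand-in for an Attribute holding a list of Values.
    final class Attribute {
        final String id;
        final List<Value> values = new ArrayList<Value>();

        Attribute(String id) { this.id = id; }

        // Copying an Attribute only copies the list, not the Values themselves:
        // the immutable Value instances are shared between source and copy.
        static Attribute copyOf(Attribute source) {
            Attribute copy = new Attribute(source.id);
            copy.values.addAll(source.values);
            return copy;
        }
    }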

But this is not all we can do: we can simply avoid the accumulation of modifications *and* avoid cloning the entry!

The idea is simple: when we get an entry from the cursor, we create a new empty entry, then we iterate over all of the original entry's attributes and values; for each of them, we check the filters, which simply tell us whether the Attribute/Value must be ditched or kept. This way, we don't do anything useless, like storing modifications or creating Attributes that will never be returned.
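
A minimal sketch of that loop, reusing the Attribute and Value stand-ins above and assuming a hypothetical SearchFilter interface with per-attribute and per-value accept methods (the real interceptor filters in ApacheDS have a different signature; this is only meant to show the shape of the idea):

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical filter contract: decides whether an Attribute is kept at all,
    // and whether each individual value of a kept Attribute is returned.
    interface SearchFilter {
        boolean acceptAttribute(Attribute attribute);
        boolean acceptValue(Attribute attribute, Value value);
    }

    // Simplified stand-in for an Entry: just a list of Attributes.
    final class Entry {
        final List<Attribute> attributes = new ArrayList<Attribute>();
    }

    final class EntryFilteringSketch {

        // Build the returned entry directly instead of cloning the original
        // and then removing what the filters reject.
        static Entry filter(Entry original, List<SearchFilter> filters) {
            Entry result = new Entry();

            for (Attribute attribute : original.attributes) {
                if (!attributeAccepted(filters, attribute)) {
                    continue; // the whole Attribute is ditched, nothing is cloned
                }

                Attribute kept = new Attribute(attribute.id);

                for (Value value : attribute.values) {
                    if (valueAccepted(filters, attribute, value)) {
                        kept.values.add(value); // reuse the immutable Value
                    }
                }

                if (!kept.values.isEmpty()) {
                    result.attributes.add(kept);
                }
            }

            return result;
        }

        private static boolean attributeAccepted(List<SearchFilter> filters, Attribute attribute) {
            for (SearchFilter filter : filters) {
                if (!filter.acceptAttribute(attribute)) {
                    return false;
                }
            }
            return true;
        }

        private static boolean valueAccepted(List<SearchFilter> filters, Attribute attribute, Value value) {
            for (SearchFilter filter : filters) {
                if (!filter.acceptValue(attribute, value)) {
                    return false;
                }
            }
            return true;
        }
    }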

This will work provided we deal with the collective attributes, which must be injected somewhere before we enter the loop (a collective attribute might well be removed by the ACI filter...). But we can also inject those added collective attributes into the loop of filters.
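
Under the same assumptions as the sketch above, injecting the collective attributes before the loop could look roughly like this, so the ACI (and any other) filters still get a chance to veto them:

    import java.util.List;

    final class CollectiveAttributeSketch {

        // Hypothetical: merge the collective attributes into the entry's attribute
        // list before filtering, so they pass through the same filters (e.g. ACI)
        // as the regular attributes and can still be removed.
        static Entry filterWithCollective(Entry original,
                                          List<Attribute> collectiveAttributes,
                                          List<SearchFilter> filters) {
            Entry augmented = new Entry();
            augmented.attributes.addAll(original.attributes);
            augmented.attributes.addAll(collectiveAttributes); // injected before the loop

            return EntryFilteringSketch.filter(augmented, filters);
        }
    }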

I may be missing something, but I do think that this solution is a clear winner, even in terms of implementation...

thoughts ?


We talked about using a wrapper around the entry to encapsulate these matters, making it happen automatically behind the scenes. This does not affect the surrounding code.

How is this proposal now different? Why would you not use a wrapper?
Because the wrapper is useless in this case!

The beauty of the solution is that we either create a new entry with all the requested Attributes and values, according to the filters (if the user embeds the server), or, if this is a network request, we directly generate the encoded message without having to create the intermediate entry at all!


I don't know how you'll pull that off, considering that the interceptors which cause side effects expect an entry to alter, or from which to read information, to do their thang.

This is why I'm a bit confused. Maybe it's a matter of description and language where I'm failing to understand.


--
Best Regards,
-- Alex