directory-dev mailing list archives

From Emmanuel Lécharny <elecha...@gmail.com>
Subject [Mavibot] BulkLoad
Date Fri, 20 Jun 2014 09:00:58 GMT
Hi guys,

Many thanks, Kiran, for the OOM fix !

That's one step toward a fast load of big databases.

The next steps are also critical. We are currently limited by the memory
size, as we store in memory the DNs we load. In order to go one step
further, we need to implement a system where we can process an LDIF file
with no limitation due to the available memory.

That supposes we process the LDIF file by chunks, and once the chunks
are sorted, we process them as a whole, pulling one element from each of
the sorted lists of DNs and picking the smallest to inject it into the
B-tree.
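
To make it concrete, here is a rough sketch of that sort-and-merge in
plain Java, using the array + Arrays.sort() approach discussed in the
hints below, plus a PriorityQueue to always pick the smallest DN among
the chunk heads. The names (ExternalDnMerger, ChunkCursor, inject()) are
placeholders, not existing Mavibot code, and the DNs are compared as
plain strings, which is a simplification of the real DN ordering :

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

/** Sketch : sort the DNs by chunks, then merge the sorted chunks. */
public class ExternalDnMerger
{
    /** Phase 1 : sort a chunk of DNs in place and dump it to a temp file. */
    public static Path sortChunk( String[] dns, int count ) throws IOException
    {
        Arrays.sort( dns, 0, count );

        Path chunk = Files.createTempFile( "dn-chunk-", ".txt" );
        Files.write( chunk, Arrays.asList( dns ).subList( 0, count ) );

        return chunk;
    }

    /** Phase 2 : k-way merge, always pulling the smallest DN among the chunk heads. */
    public static void merge( List<Path> chunks ) throws IOException
    {
        // One cursor per sorted chunk, ordered by their current (smallest) DN
        PriorityQueue<ChunkCursor> heads = new PriorityQueue<>();

        for ( Path chunk : chunks )
        {
            ChunkCursor cursor = new ChunkCursor( Files.newBufferedReader( chunk, StandardCharsets.UTF_8 ) );

            if ( cursor.advance() )
            {
                heads.add( cursor );
            }
        }

        while ( !heads.isEmpty() )
        {
            ChunkCursor smallest = heads.poll();
            inject( smallest.current );         // push the DN into the B-tree, in order

            if ( smallest.advance() )
            {
                heads.add( smallest );          // re-insert the cursor with its next DN
            }
        }
    }

    private static void inject( String dn )
    {
        // Placeholder for the real B-tree insertion
        System.out.println( dn );
    }

    /** Wraps a chunk reader together with its current DN. */
    private static class ChunkCursor implements Comparable<ChunkCursor>
    {
        private final BufferedReader reader;
        private String current;

        ChunkCursor( BufferedReader reader )
        {
            this.reader = reader;
        }

        boolean advance() throws IOException
        {
            current = reader.readLine();
            return current != null;
        }

        @Override
        public int compareTo( ChunkCursor other )
        {
            return current.compareTo( other.current );
        }
    }
}

With 500 000 entries and chunks of 100 000 DNs, the merge keeps 5
cursors open at the same time, so the memory cost stays bounded by one
chunk plus one line per open chunk.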

A few hints here :
- We are using a TreeSet to store the DNs. That's OK, but we would be
way more efficient if we were to use an array instead. Injecting
elements into a TreeSet is way more costly than storing the same
elements in an array and doing an Arrays.sort( array ) (around two
orders of magnitude slower).

- As soon as we decide to process the LDIF file by chunks, it's easy to
use an array. We can say that we will process 100 000 DNs at a time. If
we have 500 000 entries to inject, then we will have 5 chunks.

- The profiling I have done shows that we are reading the full LDIF
twice. That's all fine, thanks to Kiran's implementation : we don't
process the full LDIF in the first phase, we just process the DNs. That
makes me think we could spare the DN parsing in the second phase, as
it's already done (that would save a little bit of CPU).

- Another thing the profiler shows is that 80% of the time is spent in
the BufferedReader.readLine() method. For some reason, it boils down to
using a SocketInputStream behind the curtain. At this point, if we are
using the chunk technique, we could probably avoid parsing the LDIF file
twice (which would mean we store Entries, and not only DNs).

- Using an InputStream is expensive. We have to check what kind of
performance we would get by using a memory-mapped file instead (see the
MappedByteBuffer sketch after this list).

- I think we can slightly improve the algorithm by not keeping N pages
in memory for each level (N being the number of elements that a page can
contain). We just need a stack that holds M pages, M being the final
level (the depth) of the B-tree. Every time a page is completed at one
level, we just have to pick its leftmost element and store it into the
parent page (with a few corner cases, like the leftmost page of each
level, which will not push its leftmost element to the parent page). We
can also reuse the pages instead of creating new ones (a sketch follows
this list).
Those are minor optimizations though...
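
About the memory-mapped file idea above, a quick experiment could look
like the following, reading the LDIF through a MappedByteBuffer instead
of a BufferedReader. Mapping the whole file in one go only works for
files under 2 GB, a real test would map the file by windows ; the class
name and the DN counting are only there for the example :

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MappedLdifScan
{
    public static void main( String[] args ) throws IOException
    {
        Path ldif = Paths.get( args[0] );

        try ( FileChannel channel = FileChannel.open( ldif, StandardOpenOption.READ ) )
        {
            // Map the whole file (only valid below 2 GB per mapping, bigger
            // files would need several windows)
            MappedByteBuffer buffer = channel.map( FileChannel.MapMode.READ_ONLY, 0, channel.size() );

            int lineStart = 0;
            int dnCount = 0;

            for ( int pos = 0; pos < buffer.limit(); pos++ )
            {
                if ( buffer.get( pos ) == '\n' )
                {
                    byte[] raw = new byte[pos - lineStart];
                    buffer.position( lineStart );
                    buffer.get( raw );

                    String line = new String( raw, StandardCharsets.UTF_8 );

                    if ( line.startsWith( "dn:" ) )
                    {
                        dnCount++;
                    }

                    lineStart = pos + 1;
                }
            }

            // Note : a last line without a trailing '\n' is ignored in this sketch
            System.out.println( "DNs seen : " + dnCount );
        }
    }
}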
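
And a rough sketch of the 'one page per level' idea, just to make it
concrete : keys arrive in ascending order, each level keeps a single
page being filled, a completed page pushes its leftmost key to its
parent except for the leftmost page of that level, and the page is
cleared and reused instead of reallocated. PAGE_SIZE, writePage() and
the List<String> pages are placeholders for the real Mavibot Page/BTree
structures (which also carry values and child references), and the
final flush of the partially filled pages at the end of the load is
left out :

import java.util.ArrayList;
import java.util.List;

public class PageStackBuilder
{
    private static final int PAGE_SIZE = 16;        // N keys per page (example value)

    // levels.get( 0 ) is the leaf level, higher indexes are closer to the root
    private final List<List<String>> levels = new ArrayList<>();

    // true once the leftmost page of a level has been completed
    private final List<Boolean> leftmostDone = new ArrayList<>();

    /** Keys must be added in ascending order. */
    public void add( String key )
    {
        addAt( 0, key );
    }

    private void addAt( int level, String key )
    {
        if ( levels.size() == level )
        {
            // Create the level lazily : the stack always holds one page per level
            levels.add( new ArrayList<>() );
            leftmostDone.add( Boolean.FALSE );
        }

        List<String> page = levels.get( level );
        page.add( key );

        if ( page.size() == PAGE_SIZE )
        {
            writePage( level, page );

            // Corner case : the leftmost page of a level keeps its leftmost key,
            // every following page pushes it into the parent page
            if ( leftmostDone.get( level ) )
            {
                addAt( level + 1, page.get( 0 ) );
            }
            else
            {
                leftmostDone.set( level, Boolean.TRUE );
            }

            // Reuse the page instead of allocating a new one
            page.clear();
        }
    }

    private void writePage( int level, List<String> page )
    {
        // Placeholder : serialize the completed page into the Mavibot file here
        System.out.println( "level " + level + " : " + page );
    }
}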

That's just a few ideas. Feel free to bring new ones to the table !
