directory-dev mailing list archives

From "Norval Hope" <nrh...@gmail.com>
Subject Re: Implementing the PagedSearchControl
Date Tue, 09 Dec 2008 05:17:30 GMT
Hi Guys,

> There are vicious issues, though. Some of them are related to the way we
> have designed the server. For instance, when comparing the previous
> searchRequest with the current one, you have to compare attributes, DN and
> filters. That's not complicated, except that those elements might not be
> equal, just because they have not yet been normalized at this point (in
> SearchHandler).

I'm missing something here - why is comparing search requests required?
I would have thought that all the server needs to store for a paged
search is a key that can be used to look up a live cursor, in which case
there would never be a need to compare requests at all: the handler just
calls Cursor.next() N times until a full page of results has been
collected, and when the next request arrives it uses the key stored in
the session to look up the cursor and repeats.
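To make the key-plus-cursor idea concrete, here's a rough sketch of the
sort of bookkeeping I have in mind. The Cursor interface, the class name
and the cookie scheme below are placeholders of my own, not the actual
ApacheDS API:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for the server's cursor abstraction (hypothetical, simplified).
interface Cursor<E> {
    boolean next() throws Exception;
    E get() throws Exception;
    void close() throws Exception;
}

// One instance of this could live in the LDAP session (or a global registry).
class PagedSearchState<E> {
    private final Map<String, Cursor<E>> openCursors =
            new ConcurrentHashMap<String, Cursor<E>>();

    /** Start a paged search: remember the live cursor, hand back a cookie/key. */
    String register(Cursor<E> cursor) {
        String cookie = UUID.randomUUID().toString();
        openCursors.put(cookie, cursor);
        return cookie;
    }

    /** Return up to pageSize entries for the cookie; close and forget the
        cursor once it is drained. */
    List<E> nextPage(String cookie, int pageSize) throws Exception {
        Cursor<E> cursor = openCursors.get(cookie);
        if (cursor == null) {
            throw new IllegalStateException("Unknown or expired cookie: " + cookie);
        }
        List<E> page = new ArrayList<E>(pageSize);
        while (page.size() < pageSize && cursor.next()) {
            page.add(cursor.get());
        }
        if (page.size() < pageSize) {
            // Cursor exhausted: clean up eagerly rather than waiting for a timeout.
            openCursors.remove(cookie);
            cursor.close();
        }
        return page;
    }
}

With something along these lines the cookie alone identifies the
outstanding search, so no comparison of the incoming searchRequest (and
hence no normalization at comparison time) is needed at all.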

>
> This is a big issue. At this point, we can manage to normalize the DN and
> attributes, but for the filter, this is another story. This makes me think
> that the Normalize interceptor is not necessary, and that it should be moved
> up in the stack (in the codec, in fact).

From my perspective (in part influenced by my VD work, but also driven
by encapsulation considerations), things are cleaner when the codec (or
at least some well-defined layer of the codec) does *only* the encoding
and decoding job, without any expectation of normalization or access to
schema information. IMO, for the VD use case the AD server shouldn't
expect schema information to be available, as the "real" validation is
going to be performed on the ultimate concrete endpoints and the AD
server is just acting as a transport / container.
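To make that layering concrete, here is a rough sketch of the separation
I'm arguing for; every interface name here is a placeholder of mine, not
an existing ApacheDS class:

import java.nio.ByteBuffer;

// Placeholder types standing in for the real message and schema objects.
interface SearchRequest { /* decoded DN, filter, attributes, controls */ }
interface SchemaManager { /* attribute types, matching rules, ... */ }

// Decode-only codec: turns bytes into a request object and nothing else,
// with no schema lookups and no normalization.
interface SearchRequestCodec {
    SearchRequest decode(ByteBuffer in) throws Exception;
    ByteBuffer encode(SearchRequest out) throws Exception;
}

// Normalization stays further up the stack, applied only where schema
// information is actually available (an interceptor or the handler itself).
interface RequestNormalizer {
    SearchRequest normalize(SearchRequest raw, SchemaManager schema) throws Exception;
}

A deployment like the VD case could then simply skip the normalization
step, rather than having to fake up a schema just to satisfy the codec.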

>
> Otherwise, the other problem we have is the Cursor closure. When we are done
> with them, we should close those guys. This is easy if the client behaves
> correctly (ie, sends a last request with 0 as the number of elements to
> return, or if we reach the end of the entries to return), but if the client
> doesn't do that, we will end up with potentially thousands of open cursors in
> memory.
>
> So we need to add a cleanup thread associated with each session, closing the
> cursor after a timeout has occurred.

I'd expect a single thread (or a singleton Executor) could do this job
for all sessions. Also, if the searches are being paged, I'd imagine
that retrieving a single page of results could be done in the MINA / AD
worker thread rather than requiring a separate thread, unless the Cursor
implementation itself mandates that a specific thread be kept around.
Hence the only resources needed for each search request would be an
entry in a <String, Cursor> map (presuming the stored key is a string)
and the Cursor itself (a sketch of the shared cleanup executor follows).
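As a sketch of that single shared cleanup thread - again with
hypothetical names and an example timeout, not anything from the real
server API:

import java.io.Closeable;
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// One reaper shared by all sessions; each outstanding paged search
// registers a single timestamped entry keyed by its cookie.
class CursorReaper {
    private static final long TIMEOUT_MS = 5 * 60 * 1000;   // example value only

    static final class TrackedCursor {
        final Closeable cursor;
        volatile long lastUsed = System.currentTimeMillis();
        TrackedCursor(Closeable cursor) { this.cursor = cursor; }
    }

    private final Map<String, TrackedCursor> cursors =
            new ConcurrentHashMap<String, TrackedCursor>();
    private final ScheduledExecutorService reaper =
            Executors.newSingleThreadScheduledExecutor();

    CursorReaper() {
        // One thread sweeps every session's cursors on a fixed schedule.
        reaper.scheduleWithFixedDelay(new Runnable() {
            public void run() { closeStaleCursors(); }
        }, 1, 1, TimeUnit.MINUTES);
    }

    void track(String cookie, Closeable cursor) {
        cursors.put(cookie, new TrackedCursor(cursor));
    }

    void touch(String cookie) {
        TrackedCursor tracked = cursors.get(cookie);
        if (tracked != null) {
            tracked.lastUsed = System.currentTimeMillis();
        }
    }

    private void closeStaleCursors() {
        long now = System.currentTimeMillis();
        for (Iterator<TrackedCursor> it = cursors.values().iterator(); it.hasNext(); ) {
            TrackedCursor tracked = it.next();
            if (now - tracked.lastUsed > TIMEOUT_MS) {
                it.remove();
                try { tracked.cursor.close(); } catch (IOException ignored) { }
            }
        }
    }
}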

Cheers,
Norval
