Yet another idea ...

Constructing a Cursor for a filter adds overhead to a search, introducing latency before the first entry is even returned.  When the same filter is used frequently with different assertion values in search operations that return a small set of entries, this latency can be a major factor in determining search rates.  What if we could parameterize filters, as suggested in the previous thread on filter indices:

(& (objectClass=person) (ou=?) (l=?))

Presume we implement a special search request control called "prepare filter".  A search carrying this control would return immediately without results.  Instead, the server would build a special kind of parameterized Cursor for the filter and keep it in a cache that associates the filter with the Cursor.  A search result done response with SUCCESS would indicate filter preparation succeeded, while a special result code for statement preparation failure could be used to denote errors.
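To make the idea concrete, here's a minimal sketch of what the server-side cache for prepared cursors might look like.  This is purely illustrative: PreparedFilterCache and PreparedCursor are hypothetical names, not existing API, and the real cached object would be the actual search-plan Cursor rather than a placeholder.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a cache mapping a parameterized filter template
// (e.g. "(&(objectClass=person)(ou=?)(l=?))") to a prepared cursor.
public class PreparedFilterCache {

    // Placeholder for the real parameterized Cursor / search plan.
    static class PreparedCursor {
        final String template;
        PreparedCursor(String template) { this.template = template; }
    }

    private final Map<String, PreparedCursor> cache = new ConcurrentHashMap<>();

    // Handles the hypothetical "prepare filter" control: build the
    // parameterized cursor once and cache it under the filter template.
    public void prepare(String template) {
        cache.computeIfAbsent(template, PreparedCursor::new);
    }

    // Consulted on a normal search request to find a prepared cursor.
    public PreparedCursor lookup(String template) {
        return cache.get(template);
    }

    public static void main(String[] args) {
        PreparedFilterCache c = new PreparedFilterCache();
        c.prepare("(&(objectClass=person)(ou=?)(l=?))");
        System.out.println(c.lookup("(&(objectClass=person)(ou=?)(l=?))") != null);
    }
}
```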

When a normal search request is received, the Cursor cache is consulted to quickly pull out and deep clone this Cursor.  The clone's parameter values are set for the ou and localityName attributes, and the Cursor is then used to conduct the search.  This ***could*** be cheaper than building the Cursor (the search plan) from scratch by analyzing the filter AST.
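The deep-clone-and-bind step could be sketched like this.  Again the names (ParameterizedCursor, bind, materialize) are made up for illustration; a real implementation would clone the internal cursor plan and set assertion values on it rather than doing string substitution on the filter text.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a parameterized cursor that is deep-cloned from
// the cache, has assertion values bound, and is then used for a search.
public class ParameterizedCursor {
    private final String template;          // e.g. "(&(objectClass=person)(ou=?)(l=?))"
    private final List<String> params = new ArrayList<>();

    public ParameterizedCursor(String template) { this.template = template; }

    // Deep copy: the cached master cursor is never mutated by a request.
    public ParameterizedCursor deepCopy() {
        ParameterizedCursor copy = new ParameterizedCursor(template);
        copy.params.addAll(params);
        return copy;
    }

    // Bind the next assertion value, left to right (ou first, then l).
    public void bind(String value) { params.add(value); }

    // Substitute bound values into the template, one '?' per value.
    public String materialize() {
        String result = template;
        for (String p : params) {
            result = result.replaceFirst("\\?", p);
        }
        return result;
    }

    public static void main(String[] args) {
        ParameterizedCursor master =
            new ParameterizedCursor("(&(objectClass=person)(ou=?)(l=?))");
        ParameterizedCursor c = master.deepCopy();
        c.bind("engineering");
        c.bind("Sunnyvale");
        System.out.println(c.materialize());
    }
}
```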

If this latency can be removed, search request rates could improve significantly, especially when the same search is performed repeatedly to return a small number of results.  I've seen this happen a lot in real-world scenarios: many frequent one-level scoped searches using the same filter with different values to return a small set of entries or just a single entry.

Thoughts?

Alex