Here is how it works with Oracle (dsorapart): the SchemaPartition is an OraclePartition that stores the schema data in the database. So to handle this the first time, what I have to do is:

1. extract the files into some directory
2. create the JdbmPartition from those files
3. copy the JdbmPartition into the OraclePartition
4. never use the extracted files again (I check whether I have already loaded schema data into the database)

Obviously the working directory for an OraclePartition is useless.
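That first-time guard can be sketched in isolation like this (simplified, hypothetical names; the real code below checks hasEntry on the schema context DN against the OraclePartition):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative stand-in for the database-backed partition: load the
// extracted schema entries exactly once, skip on every later startup.
class SchemaBootstrap
{
    private final Set<String> store = new HashSet<>();  // stands in for the OraclePartition

    boolean hasEntry( String dn ) { return store.contains( dn ); }
    void add( String dn ) { store.add( dn ); }

    // Returns the number of entries copied (0 if already loaded).
    int loadIfEmpty( List<String> extractedEntryDns )
    {
        if ( hasEntry( "ou=schema" ) )
        {
            return 0;  // a previous startup already did the work
        }
        for ( String dn : extractedEntryDns )
        {
            add( dn );
        }
        return extractedEntryDns.size();
    }
}
```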


The code of my modified DirectoryService.initialize():

    OraclePartition schemaPartition = new OraclePartition( registries, config );
    schemaPartition.setId( "schema" );
    schemaPartition.setCacheSize( 0 );
    LdapDN contextDn = new LdapDN( "ou=schema" );
    contextDn.normalize( registries.getAttributeTypeRegistry().getNormalizerMapping() );
    schemaPartition.setSuffix( contextDn.toString() );
    ServerEntry entry = new DefaultServerEntry( registries, contextDn );
    entry.put( SchemaConstants.OBJECT_CLASS_AT, SchemaConstants.ORGANIZATIONAL_UNIT_OC );
    entry.put( SchemaConstants.OU_AT, "schema" );
    schemaPartition.setContextEntry( entry );
    schemaPartition.init( this );

    // Only load the schema into the database the first time: if the context
    // entry is already there, a previous startup has done the work.
    boolean loadSchema = !schemaPartition.hasEntry( new EntryOperationContext( registries, entry.getDn() ) );

    if ( loadSchema )
    {
        File schemaDirectory = new File( workingDirectory, "schema" );

        if ( !schemaDirectory.exists() )
        {
            try
            {
                SchemaPartitionExtractor extractor = new SchemaPartitionExtractor( workingDirectory );
                extractor.extract();
            }
            catch ( IOException e )
            {
                NamingException ne = new NamingException( "Failed to extract pre-loaded schema partition." );
                ne.setRootCause( e );
                throw ne;
            }
        }

        // --------------------------------------------------------------------
        // Initialize the default (JDBM) schema partition from the extracted files
        // --------------------------------------------------------------------
        JdbmPartition defaultSchemaPartition = new JdbmPartition();
        defaultSchemaPartition.setId( "schema" );
        defaultSchemaPartition.setCacheSize( 1000 );

        DbFileListing listing;
        try
        {
            listing = new DbFileListing();
        }
        catch ( IOException e )
        {
            throw new LdapNamingException( "Got IOException while trying to read DbFileListing: " + e.getMessage(),
                ResultCodeEnum.OTHER );
        }

        Set<Index> indexedAttributes = new HashSet<Index>();
        for ( String attributeId : listing.getIndexedAttributes() )
        {
            indexedAttributes.add( new JdbmIndex( attributeId ) );
        }
        defaultSchemaPartition.setIndexedAttributes( indexedAttributes );
        defaultSchemaPartition.setSuffix( ServerDNConstants.OU_SCHEMA_DN );

        entry = new DefaultServerEntry( registries, new LdapDN( ServerDNConstants.OU_SCHEMA_DN ) );
        entry.put( SchemaConstants.OBJECT_CLASS_AT, SchemaConstants.ORGANIZATIONAL_UNIT_OC );
        entry.put( SchemaConstants.OU_AT, "schema" );
        defaultSchemaPartition.setContextEntry( entry );
        defaultSchemaPartition.init( this );

        // Copy every entry from the JDBM partition into the OraclePartition
        LdapDN base = new LdapDN( ServerDNConstants.OU_SCHEMA_DN );
        ExprNode filter = new PresenceNode( registries.getOidRegistry().getOid( SchemaConstants.OBJECT_CLASS_AT ) );
        SearchControls searchControls = new SearchControls();
        searchControls.setSearchScope( SearchControls.SUBTREE_SCOPE );
        searchControls.setReturningAttributes( new String[] { "*", "+" } );

        NamingEnumeration<ServerSearchResult> all = defaultSchemaPartition.search(
            new SearchOperationContext( registries, base, AliasDerefMode.DEREF_ALWAYS, filter, searchControls ) );

        while ( all.hasMore() )
        {
            ServerSearchResult ssr = all.next();
            schemaPartition.add( new AddOperationContext( registries, ssr.getServerEntry() ) );
        }
    }

2009/9/9 Alex Karasulu <>
On Wed, Sep 9, 2009 at 2:05 AM, Emmanuel Lecharny <> wrote:

I was working on the DirectoryService startup process yesterday. It is called by the DirectoryServiceFactory, and when you launch the integration tests, you get an NPE.

The reason we get it is because at some point, we don't have a working directory to look at for the schema.

In order to fix this, we just have to inject the SchemaPartition working directory, and we will be set. The question now is how to inject this property.

The SchemaPartition does not necessarily need a workingDirectory property. Just add the workingDirectory property to the LdifPartition, or to whatever is wrapped by the SchemaPartition.

The schema partition is initialized only once: when its working directory is empty. In this case, we extract all the existing schema files from a jar into a directory. Otherwise, we just read this directory and load the registries. The problem is that we must have a pointer to this working directory, and we don't have any atm.

Question is, do we make the SchemaPartition unzip the schema files (only done for LdifPartition at this point), or do we have the wrapped Partition handle this? The OraclePartition, if used as the wrapped Partition, will not need this I think.

Questions :
1) how do we set the working directory ?

Set it programmatically on the LdifPartition.
2) when do we set it, assuming that it may be a part of the configuration ?

After instantiation of the LdifPartition, then call SchemaPartition.setWrappedPartition( ldifPartition) so the SchemaPartition has the reference to the LdifPartition. 
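That wiring can be sketched with simplified stand-in types (these are not the real ApacheDS classes; the interface and setters here are illustrative only, to show the delegation pattern):

```java
import java.util.Objects;

// Simplified stand-ins for illustration; not the real ApacheDS types.
interface Partition
{
    String getId();
}

class LdifPartition implements Partition
{
    private String workingDirectory;

    // The wrapped partition owns the storage, so it owns the working directory.
    public void setWorkingDirectory( String dir ) { this.workingDirectory = dir; }
    public String getWorkingDirectory() { return workingDirectory; }
    public String getId() { return "schema"; }
}

class SchemaPartition implements Partition
{
    private Partition wrapped;

    // The SchemaPartition only delegates to whatever partition it wraps.
    public void setWrappedPartition( Partition wrapped )
    {
        this.wrapped = Objects.requireNonNull( wrapped );
    }
    public Partition getWrappedPartition() { return wrapped; }
    public String getId() { return wrapped.getId(); }
}

public class Wiring
{
    public static SchemaPartition wire( String workDir )
    {
        // 1. Instantiate and configure the wrapped partition programmatically...
        LdifPartition ldifPartition = new LdifPartition();
        ldifPartition.setWorkingDirectory( workDir );

        // 2. ...then hand it to the SchemaPartition.
        SchemaPartition schemaPartition = new SchemaPartition();
        schemaPartition.setWrappedPartition( ldifPartition );
        return schemaPartition;
    }

    public static void main( String[] args )
    {
        System.out.println( Wiring.wire( "server-work/schema" ).getId() );
    }
}
```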

3) shouldn't it be a static definition ?


Right now, all the partitions are stored in the DirectoryService working directory (defaulting to 'server-work'), under a subdirectory named after the partition id (i.e., "schema" here for the schema partition). That sounds good to me atm.

wdyt ?

That's because tests default to setting the workingDirectory property of the DS to this server-work directory.  When they do that the Partitions have a base to create their own work directories for their data.
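A minimal sketch of that layout rule, assuming each partition simply appends its id to the DS base directory (the names here are just the defaults mentioned above):

```java
import java.io.File;

public class PartitionDirs
{
    // Each partition gets <dsWorkingDirectory>/<partitionId>,
    // e.g. server-work/schema for the schema partition.
    static File partitionWorkDir( File dsWorkingDirectory, String partitionId )
    {
        return new File( dsWorkingDirectory, partitionId );
    }

    public static void main( String[] args )
    {
        File base = new File( "server-work" );
        System.out.println( partitionWorkDir( base, "schema" ) );
    }
}
```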

Alex Karasulu
My Blog ::
Apache Directory Server ::
Apache MINA ::