directory-dev mailing list archives

From Andrea Gariboldi <andrea.garibo...@gmail.com>
Subject Re: Directory startup : what needs to be done...
Date Wed, 09 Sep 2009 07:12:35 GMT
The way it works with Oracle (dsorapart) is: the SchemaPartition
is an OraclePartition that stores the schema data in the database. So
to handle this the first time, what I have to do is:

1. extract the files into some directory
2. create the JdbmPartition from those files
3. copy the JdbmPartition into the OraclePartition
4. never use the extracted files again (I check whether the schema data has
already been loaded into the database)

Obviously the working directory for an OraclePartition is useless.

cheers,
Andrea

The code of my modified DirectoryService.initialize():

    OraclePartition schemaPartition = new OraclePartition( registries, config );
    schemaPartition.setId( "schema" );
    schemaPartition.setCacheSize( 0 );
    LdapDN contextDn = new LdapDN( "ou=schema" );
    contextDn.normalize( registries.getAttributeTypeRegistry().getNormalizerMapping() );
    schemaPartition.setSuffix( contextDn.toString() );

    ServerEntry entry = new DefaultServerEntry( registries, contextDn );
    entry.put( SchemaConstants.OBJECT_CLASS_AT,
        SchemaConstants.TOP_OC,
        SchemaConstants.ORGANIZATIONAL_UNIT_OC );
    entry.put( SchemaConstants.OU_AT, "schema" );
    schemaPartition.setContextEntry( entry );
    schemaPartition.init( this );

    // Load the schema only once: skip if the database already holds it
    boolean loadSchema = !schemaPartition.hasEntry(
        new EntryOperationContext( registries, entry.getDn() ) );

    if ( loadSchema )
    {
        // 1. Extract the schema files if they are not on disk yet
        File schemaDirectory = new File( workingDirectory, "schema" );
        SchemaPartitionExtractor extractor;

        if ( !schemaDirectory.exists() )
        {
            try
            {
                extractor = new SchemaPartitionExtractor( workingDirectory );
                extractor.extract();
            }
            catch ( IOException e )
            {
                NamingException ne = new NamingException( "Failed to extract pre-loaded schema partition." );
                ne.setRootCause( e );
                throw ne;
            }
        }

        // 2. Initialize a JdbmPartition from the extracted files
        JdbmPartition defaultSchemaPartition = new JdbmPartition();
        defaultSchemaPartition.setId( "schema" );
        defaultSchemaPartition.setCacheSize( 1000 );
        DbFileListing listing;

        try
        {
            listing = new DbFileListing();
        }
        catch ( IOException e )
        {
            throw new LdapNamingException( "Got IOException while trying to read DbFileListing: " + e.getMessage(),
                ResultCodeEnum.OTHER );
        }

        Set<Index> indexedAttributes = new HashSet<Index>();

        for ( String attributeId : listing.getIndexedAttributes() )
        {
            indexedAttributes.add( new JdbmIndex( attributeId ) );
        }

        defaultSchemaPartition.setIndexedAttributes( indexedAttributes );
        defaultSchemaPartition.setSuffix( ServerDNConstants.OU_SCHEMA_DN );

        entry = new DefaultServerEntry( registries, new LdapDN( ServerDNConstants.OU_SCHEMA_DN ) );
        entry.put( SchemaConstants.OBJECT_CLASS_AT,
            SchemaConstants.TOP_OC,
            SchemaConstants.ORGANIZATIONAL_UNIT_OC );
        entry.put( SchemaConstants.OU_AT, "schema" );
        defaultSchemaPartition.setContextEntry( entry );
        defaultSchemaPartition.init( this );

        // 3. Copy every entry from the JdbmPartition into the OraclePartition
        LdapDN base = new LdapDN( ServerDNConstants.OU_SCHEMA_DN );
        base.normalize( registries.getAttributeTypeRegistry().getNormalizerMapping() );
        ExprNode filter = new PresenceNode( registries.getOidRegistry().getOid( SchemaConstants.OBJECT_CLASS_AT ) );
        SearchControls searchControls = new SearchControls();
        searchControls.setSearchScope( SearchControls.SUBTREE_SCOPE );
        searchControls.setReturningAttributes( new String[] { "*", "+" } );

        NamingEnumeration<ServerSearchResult> all = defaultSchemaPartition.search(
            new SearchOperationContext( registries, base, AliasDerefMode.DEREF_ALWAYS, filter, searchControls ) );

        while ( all.hasMore() )
        {
            ServerSearchResult ssr = all.next();
            schemaPartition.add( new AddOperationContext( registries, ssr.getServerEntry() ) );
        }

        // 4. The extracted files are never used again
        defaultSchemaPartition.destroy();
    }
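Distilled to its essentials, the code above is a guard plus a bulk copy: populate the database-backed partition from the file-backed one only when the schema context entry is not yet present. A self-contained model of that pattern, with partitions faked as DN-to-entry maps (names here are illustrative, not the real ApacheDS API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical model of the load-once schema bootstrap: the target
// ("database") partition is populated from the source ("extracted
// files") partition only when the schema suffix entry is missing.
class SchemaBootstrapModel {

    /** Stand-in for a partition: a DN-to-entry map. */
    static Map<String, String> filePartition(String... dns) {
        Map<String, String> p = new LinkedHashMap<>();
        for (String dn : dns) {
            p.put(dn, "entry:" + dn);
        }
        return p;
    }

    /**
     * Mirrors the guard in initialize(): copy everything from the
     * source partition into the target, but only if the schema
     * suffix entry is not already there. Returns true if it loaded.
     */
    static boolean bootstrap(Map<String, String> source,
                             Map<String, String> target,
                             String suffixDn) {
        boolean loadSchema = !target.containsKey(suffixDn); // hasEntry check
        if (loadSchema) {
            target.putAll(source); // the search-and-add copy loop
        }
        return loadSchema;
    }
}
```

On a second startup the guard sees the suffix entry in the target and the copy is skipped, which is why the extracted files are never needed again.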

2009/9/9 Alex Karasulu <akarasulu@gmail.com>

> On Wed, Sep 9, 2009 at 2:05 AM, Emmanuel Lecharny <elecharny@apache.org>wrote:
>
>> Hi,
>>
>> I was working on the DirectoryService startup process yesterday. It is
>> called by the DirectoryServiceFactory, and when you launch the integration
>> tests, you get an NPE.
>>
>> We get it because at some point we don't have a working directory to look
>> at for the schema.
>>
>> In order to fix this, we just have to inject the SchemaPartition working
>> directory, and we will be set. The question now is to determine how we
>> will inject this property.
>>
>>
> The SchemaPartition does not need a workingDirectory property necessarily.
> Just add the workingDirectory property to the LdifPartition or whatever is
> wrapped by the SchemaPartition.
>
>
>
>> The schema partition is initialized only once: if its working directory
>> is empty, we extract all the existing schema files from a jar into a
>> directory. Otherwise, we just read this directory and load the
>> registries. The problem is that we must have a pointer to this working
>> directory, and we don't have any atm.
>>
>>
> Question is do we make the SchemaPartition unzip the schema files (only
> done for LdifPartition at this point) or do we have the Partition which is
> wrapped handle this?  The OraclePartition if used as the wrapped Partition
> will not need this I think.
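The extract-once behaviour under discussion can be sketched independently of the jar machinery: extraction runs only when the working directory's schema subdirectory does not yet exist. In this self-contained model the jar contents are faked with a map, and the method name is illustrative, not the real SchemaPartitionExtractor API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

// Model of the "extract schema files only on first startup" step.
class ExtractOnceModel {

    /**
     * Writes the given files under workingDirectory/schema unless that
     * directory already exists; returns true if extraction actually ran.
     */
    static boolean extractIfMissing(Path workingDirectory,
                                    Map<String, String> jarContents) throws IOException {
        Path schemaDir = workingDirectory.resolve("schema");
        if (Files.exists(schemaDir)) {
            return false; // already extracted on a previous startup
        }
        Files.createDirectories(schemaDir);
        for (Map.Entry<String, String> e : jarContents.entrySet()) {
            Files.writeString(schemaDir.resolve(e.getKey()), e.getValue());
        }
        return true;
    }
}
```

Whether this check lives in the SchemaPartition itself or in the wrapped partition is exactly the open question in the thread; the model only shows the guard.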
>
>
>
>> Questions :
>> 1) how do we set the working directory ?
>>
>
> Set it programmatically on the LdifPartition.
>
>
>> 2) when do we set it, assuming that it may be a part of the configuration
>> ?
>>
>
> After instantiation of the LdifPartition, then call
> SchemaPartition.setWrappedPartition( ldifPartition) so the SchemaPartition
> has the reference to the LdifPartition.
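The wiring order described here (configure the concrete partition first, then hand it to the SchemaPartition wrapper) is essentially the decorator pattern, with the working directory owned by the wrapped partition alone. A minimal model, using hypothetical interfaces rather than the real ApacheDS classes:

```java
// Model of SchemaPartition-wraps-LdifPartition wiring: the working
// directory lives on the concrete partition; the wrapper delegates.
class WrappedPartitionModel {

    interface Partition {
        String workingDirectory();
    }

    /** Concrete partition: owns its working directory. */
    static class LdifStylePartition implements Partition {
        private final String workingDirectory;

        LdifStylePartition(String workingDirectory) {
            this.workingDirectory = workingDirectory;
        }

        public String workingDirectory() {
            return workingDirectory;
        }
    }

    /** Wrapper: no working directory of its own, pure delegation. */
    static class SchemaWrapper implements Partition {
        private Partition wrapped;

        void setWrappedPartition(Partition wrapped) {
            this.wrapped = wrapped;
        }

        public String workingDirectory() {
            return wrapped.workingDirectory();
        }
    }
}
```

With this shape an OraclePartition wrapped by the SchemaWrapper would simply report whatever is appropriate for it (or nothing), which matches the point that it has no use for a working directory.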
>
>> 3) shouldn't it be a static definition ?
>>
>
> NO STATICS ! ! ! BAD JUJU WITH STATICS.
>
>
>> Right now, all the partitions are stored into the DirectoryService working
>> directory (default to 'server-work'), under a subdirectory using the
>> partition Id (ie, "schema" here for the schema partition). That sounds good
>> to me atm.
>>
>> wdyt ?
>>
>>
> That's because tests default to setting the workingDirectory property of
> the DS to this server-work directory.  When they do that the Partitions have
> a base to create their own work directories for their data.
>
> --
> Alex Karasulu
> My Blog :: http://www.jroller.com/akarasulu/
> Apache Directory Server :: http://directory.apache.org
> Apache MINA :: http://mina.apache.org
>
>
