On Wed, Sep 9, 2009 at 10:16 AM, Emmanuel Lecharny <firstname.lastname@example.org> wrote:
See my answer in your next response.
Alex Karasulu wrote:
On Wed, Sep 9, 2009 at 2:05 AM, Emmanuel Lecharny <email@example.com> wrote:
The SchemaPartition does not necessarily need a workingDirectory property.
I was working on the DirectoryService startup process yesterday. It is
called by the DirectoryServiceFactory, and when you launch the integration
tests, you get an NPE.
We get it because, at some point, we don't have a working directory to
look at for the schema.
In order to fix this, we just have to inject the SchemaPartition working
directory, and we will be set. The question now is to determine how we
will inject this property.
Just add the workingDirectory property to the LdifPartition or whatever is
wrapped by the SchemaPartition.
I think that the SchemaPartition should be responsible for the initialization of the wrapped partition.
The schema partition is initialized only once: if its working directory is
empty. In this case, we extract all the existing schema files from a jar
into a directory. Otherwise, we just read this directory and load the
registries. The problem is that we must have a pointer to this working
directory, and we don't have any atm.
The question is: do we make the SchemaPartition unzip the schema files (only done
for LdifPartition at this point), or do we have the wrapped Partition
handle this? The OraclePartition, if used as the wrapped Partition,
will not need this, I think.
This is exactly the kind of thought process that is an impediment to establishing a solid, component-based model with a container driving initialization. The reason we refactored and decoupled things out of the core was to make sure the DefaultDirectoryService and its internals were not involved in configuring the components they contain.
We must let the container configure components, not other components that contain them. Doing otherwise goes in utterly the opposite direction.
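To make the container-driven style concrete, here is a hypothetical sketch: a plain main method stands in for the DirectoryServiceFactory, configures each component directly, and only then hands the fully configured LdifPartition over to the SchemaPartition. The class shapes mimic the real ones but are illustrative stubs, not the actual ApacheDS API.

```java
// Illustrative stubs only, not the real ApacheDS classes.
class LdifPartition {
    private String workingDirectory;

    public void setWorkingDirectory( String workingDirectory ) {
        this.workingDirectory = workingDirectory;
    }

    public String getWorkingDirectory() {
        return workingDirectory;
    }
}

class SchemaPartition {
    private LdifPartition wrappedPartition;

    public void setWrappedPartition( LdifPartition wrappedPartition ) {
        this.wrappedPartition = wrappedPartition;
    }

    public LdifPartition getWrappedPartition() {
        return wrappedPartition;
    }
}

public class ContainerWiring {
    public static void main( String[] args ) {
        // The container owns all configuration of the wrapped partition ...
        LdifPartition ldifPartition = new LdifPartition();
        ldifPartition.setWorkingDirectory( "/var/lib/apacheds/schema" );

        // ... and only then hands the fully configured component over.
        // SchemaPartition never sets properties on what it wraps.
        SchemaPartition schemaPartition = new SchemaPartition();
        schemaPartition.setWrappedPartition( ldifPartition );

        System.out.println( schemaPartition.getWrappedPartition().getWorkingDirectory() );
    }
}
```

The point of the sketch is that the dependency flows one way: the container pushes configuration into components, and no component reaches into another to configure it.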
If you start a brand new server, the schema partition does not yet exist (on disk, or in Oracle). You then have to extract the schema objects from the jar and inject them into the wrapped partition. If we are using an LdifPartition, it's easy, as we just have to extract the data on disk. For an OraclePartition, we have to inject the objects one by one to initialize the database. In the first case, if the extraction of the initial schemas is done by the SchemaPartition, then the SchemaPartition must know what the working directory is.
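A minimal sketch of the first-start logic described above: if the working directory is empty, the bootstrap schemas are extracted from the jar; otherwise the existing files are simply loaded. All names here (initializeSchema, extractSchemasFromJar, loadRegistries) are hypothetical placeholders, not the real ApacheDS API.

```java
import java.io.File;

// Hypothetical sketch of the first-start decision, not ApacheDS code.
public class SchemaBootstrapSketch {

    // Returns true when the extraction path was taken (brand new server).
    public static boolean initializeSchema( File workingDirectory ) {
        File[] contents = workingDirectory.listFiles();
        boolean firstStart = ( contents == null ) || ( contents.length == 0 );

        if ( firstStart ) {
            // Brand new server: the schema does not exist on disk yet,
            // so pull the initial schema entries out of the jar.
            extractSchemasFromJar( workingDirectory );
        }

        // In both cases the directory now holds the schema files,
        // and we can read them to load the registries.
        loadRegistries( workingDirectory );

        return firstStart;
    }

    private static void extractSchemasFromJar( File dir ) {
        // Placeholder for copying the LDIF schema entries shipped in the jar.
        System.out.println( "Extracting bootstrap schemas to " + dir );
    }

    private static void loadRegistries( File dir ) {
        System.out.println( "Loading registries from " + dir );
    }

    public static void main( String[] args ) throws Exception {
        File tmp = java.nio.file.Files.createTempDirectory( "schema" ).toFile();
        initializeSchema( tmp ); // empty directory, so the extraction path runs
    }
}
```

Either way, whatever object runs this check needs the working directory pointer, which is exactly the problem discussed here.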
On the other hand, if we delegate the extraction of the initial schemas to the LdifPartition, then we have to implement this function there, even though no other use of LdifPartition needs it.
So the SchemaPartition needs to know about the working directory when it does this initial extraction. Question: can we get this working directory from the wrapped LdifPartition? Something like:
LdifPartition ldifPartition = new LdifPartition();
ldifPartition.setWorkingDirectory( blah );

SchemaPartition schemaPartition = new SchemaPartition();
schemaPartition.setWrappedPartition( ldifPartition );

// Then, inside the SchemaPartition initialization:
String workingDirectory = wrappedPartition.getWorkingDirectory();

// Check if the schemas have already been extracted.
// If not, do the initial extraction using the jar;
// otherwise, just call wrappedPartition.initialize().
I think so, and will try to implement it this way.
Let's get a chance to talk before we go down this road.