jackrabbit-users mailing list archives

From "Colin Anderson" <colin.w.ander...@gmail.com>
Subject Jackrabbit Clustering
Date Fri, 30 May 2008 09:45:13 GMT
Hi,

I've read over the article on Jackrabbit clustering on the wiki, but I
still have a couple of questions and I'm hoping someone on the list
can answer them.

I'll describe our experiences so far...

Our current cluster has two Jackrabbit instances, nodeA and nodeB.
When we first configured the cluster we had both of them using the
same shared repository home (/DB_files/clustering, an NFS mount) but
their own local repository.xml file. This didn't work: nodeB
wouldn't start because it detected the lock file created by nodeA.
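
If I've read the wiki correctly, the layout should be the other way
round: each node keeps its own local repository home (and therefore
its own .lock file), and only the journal directory lives on the
shared mount, something along these lines (paths illustrative, and I
may well be wrong):

<Cluster id="nodeA">
  <Journal class="org.apache.jackrabbit.core.journal.FileJournal">
    <param name="revision" value="${rep.home}/revision.log"/>
    <param name="directory" value="/DB_files/clustering/cluster-journal/"/>
  </Journal>
</Cluster>

Is that the intended setup?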

Next, we hardcoded the paths to the repository and workspace
directories and set the repository home to a local directory. This
allowed both nodeA and nodeB to start, but Lucene would throw
exceptions relating to missing index files. I'm assuming this was due
to both nodes sharing the same index files and one removing them
whilst the other was trying to write to them. This is what the
repository.xml looked like; I've removed most of the search params and
the version/security sections for brevity:

<Repository>
  <Cluster id="nodeA">
    <Journal class="org.apache.jackrabbit.core.journal.FileJournal">
      <param name="revision" value="/DB_files/clustering/cluster-revision/revision.log"/>
      <param name="directory" value="/DB_files/clustering/cluster-journal/"/>
    </Journal>
  </Cluster>
  <DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
    <param name="path" value="/DB_files/clustering/repository/datastore"/>
    <param name="minRecordLength" value="100"/>
  </DataStore>
  <FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
    <param name="path" value="/DB_files/clustering/repository"/>
  </FileSystem>
  <Workspaces rootPath="/DB_files/clustering/workspaces" defaultWorkspace="default"/>
  <Workspace name="${wsp.name}">
    <FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
      <param name="path" value="${wsp.home}"/>
    </FileSystem>
    <PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.BundleFsPersistenceManager">
      <param name="bundlecacheSize" value="8"/>
      <param name="consistencyCheck" value="false"/>
      <param name="errorHandling" value=""/>
    </PersistenceManager>
    <ISMLocking class="org.apache.jackrabbit.core.state.FineGrainedISMLocking"/>
    <SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
      <param name="path" value="${wsp.home}/index"/>
    </SearchIndex>
  </Workspace>
  <SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
    <param name="path" value="/DB_files/clustering/repository/index"/>
  </SearchIndex>
</Repository>

So, as that hadn't worked, we then tried it with each node pointing to
its own index directory:

<SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
  <param name="path" value="${wsp.home}/index/nodeB"/>
</SearchIndex>

This no longer produced the Lucene exceptions, but now we are seeing
errors like this:

javax.jcr.InvalidItemStateException:
fa8ddc2d-db42-4b26-b2f2-7909344a243f: the item cannot be saved because
it has been modified externally.
        at org.apache.jackrabbit.core.ItemImpl.getTransientStates(ItemImpl.java:378)
        at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:1083)
        at org.apache.jackrabbit.core.SessionImpl.save(SessionImpl.java:897)
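
In the meantime we're considering catching this exception and retrying
the save after a session refresh. A rough sketch of the idea is below;
real code would call javax.jcr.Session.refresh(true) and
Session.save(), but here the session type is a stand-in interface so
the snippet compiles on its own:

```java
// Stand-in for the two javax.jcr.Session methods the pattern needs.
interface JcrSessionLike {
    void save() throws Exception;       // stand-in for javax.jcr.Session#save
    void refresh(boolean keepChanges);  // stand-in for javax.jcr.Session#refresh
}

public class SaveWithRetry {
    // Retry the save a few times, refreshing after each conflict while
    // keeping local (transient) changes, on the assumption that the
    // external modification doesn't actually clash with ours.
    static boolean saveWithRetry(JcrSessionLike session, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                session.save();
                return true;
            } catch (Exception e) { // would catch InvalidItemStateException
                session.refresh(true);
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Fake session that fails twice with a conflict, then succeeds.
        JcrSessionLike session = new JcrSessionLike() {
            int calls = 0;
            public void save() throws Exception {
                if (++calls < 3) throw new Exception("modified externally");
            }
            public void refresh(boolean keepChanges) { /* no-op in the stand-in */ }
        };
        System.out.println(saveWithRetry(session, 5)); // prints true
    }
}
```

No idea yet whether that's safe in a cluster, or whether it just papers
over the real problem.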

So, what exactly can and can't you share in a clustered repository environment?

TIA,
Colin.
