jackrabbit-users mailing list archives

From "Dominique Pfister" <dominique.pfis...@day.com>
Subject Re: Repository locking with clustering
Date Tue, 08 May 2007 06:39:47 GMT
Hi Jon,

On 5/8/07, Jon Walker <jwalker@shokker.net> wrote:
> I've been looking at the clustering feature that was recently added, but am
> having trouble with the locking.  I have two Model 1 deployments set up
> for clustering (each with a unique cluster ID).  Through shared drives, they
> are accessing the same repository location (each with its own cluster
> configuration), and I use SimpleDbPersistenceManager against SQL Server for
> persistence.

Sharing the repository location is not supported: every repository
must have its own, unique repository home. This is also the reason for
the exception you get.

To use clustering, it is not the repository homes but the persistence
managers of your 2 deployments that must point to the same location.
Furthermore, only database persistence managers currently provide the
ACID guarantees required for transactionally correct behaviour.
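
For illustration, both nodes could point their workspace persistence
manager at the same SQL Server database, roughly like the sketch below.
The driver class, URL, credentials and schema value are placeholders, so
please check them against your JDBC driver and Jackrabbit version:

  <PersistenceManager
      class="org.apache.jackrabbit.core.persistence.db.SimpleDbPersistenceManager">
    <!-- all cluster nodes use the same database, so the content is shared -->
    <param name="driver" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
    <param name="url" value="jdbc:sqlserver://dbhost:1433;databaseName=jackrabbit"/>
    <param name="user" value="jcr"/>
    <param name="password" value="secret"/>
    <param name="schema" value="mssql"/>
    <!-- per-workspace prefix keeps the tables of different workspaces apart -->
    <param name="schemaObjectPrefix" value="${wsp.name}_"/>
  </PersistenceManager>

The repository home itself (search index, .lock file, namespace registry)
stays local and unique to each node.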

I recently added a wiki page on setting up clustering that might be helpful:

http://wiki.apache.org/jackrabbit/Clustering
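
For reference, the cluster section of each node's repository.xml would
look roughly like the following sketch. Only the cluster id must differ
between the nodes; the journal settings point at the same, shared
database. The class and parameter names should be checked against that
wiki page and your Jackrabbit version, and the connection details are
again placeholders:

  <Cluster id="node1" syncDelay="2000">
    <!-- the second node would use a different id, e.g. id="node2" -->
    <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
      <!-- local revision counter, kept in this node's own repository home -->
      <param name="revision" value="${rep.home}/revision.log"/>
      <!-- shared journal database, identical on every node -->
      <param name="driver" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
      <param name="url" value="jdbc:sqlserver://dbhost:1433;databaseName=journal"/>
      <param name="user" value="jcr"/>
      <param name="password" value="secret"/>
    </Journal>
  </Cluster>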

If you have more questions or find that some important information is
missing, please do not hesitate to ask.

Kind regards
Dominique

>
> On first access to each instance, the webservices attempt to start the
> repository.  The first server to be hit will create a Repository instance
> and create the .lock file in the repository's home directory.  When the
> other server gets hit and attempts to start the repository, it throws an
> exception saying "The repository home at [path] appears to be in use since
> the file at [path]/.lock is locked by another process."
>
> Is what I'm trying to do possible?  Do I have something wrong in my
> configuration that I need to check?
>
> Thanks for your help,
>
> Jon
>
