curator-dev mailing list archives

From "3l3ph4n1n3 (JIRA)" <>
Subject [jira] [Commented] (CURATOR-28) Add Expiry Time To InterProcessLocks
Date Wed, 05 Jun 2013 01:49:19 GMT


3l3ph4n1n3 commented on CURATOR-28:

It's indeed cautious - but I really don't want my distributed locks to have a single point
of failure (as you can't guarantee clients don't have bugs).

Some more examples here -
- and I've seen a client throw NoClassDefFoundErrors at runtime as one of the jars on its
classpath was temporarily unreachable as it was on a networked file system!

If I submitted a patch that (optionally) made all the clients attempt to delete a lock node
after a fixed (but configurable) amount of time, would you consider including it? It'd remove
the single point of failure (as any client can release the lock after the timeout, not just
the one that took it), and this kind of functionality would ideally be part of the library
anyway (to prevent applications that need it from writing pretty much the same code every time).
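A minimal sketch of the decision logic the proposed patch implies: any client, not just the holder, may delete a lock node once its age exceeds a configured expiry. The names here (`LockExpiry`, `isExpired`) are illustrative only and are not part of Curator's API; in practice the creation time would come from the node's ZooKeeper `Stat.ctime`.

```java
// Hypothetical sketch: decide whether a lock node has outlived its
// configured expiry. Any client observing an expired node could then
// attempt to delete it, removing the single point of failure.
public class LockExpiry {
    private final long expiryMillis;

    public LockExpiry(long expiryMillis) {
        this.expiryMillis = expiryMillis;
    }

    // ctimeMillis would be read from the lock node's Stat; nowMillis
    // should ideally also be derived from the server to limit clock skew.
    public boolean isExpired(long ctimeMillis, long nowMillis) {
        return nowMillis - ctimeMillis >= expiryMillis;
    }
}
```

The actual delete would still need to guard against races (e.g. delete-by-version using the `Stat` read alongside the data), so two clients don't both "break" and re-take the lock.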
> Add Expiry Time To InterProcessLocks
> ------------------------------------
>                 Key: CURATOR-28
>                 URL:
>             Project: Apache Curator
>          Issue Type: New Feature
>          Components: Recipes
>    Affects Versions: 2.0.0-incubating
>            Reporter: 3l3ph4n1n3
>            Priority: Minor
> If a client takes a distributed lock and fails without breaking its zookeeper connection
(e.g. the main application thread deadlocks) then that lock will never be released (at least
without manual intervention, e.g. killing the process that has it). When a client's acquiring
a lock I'd like to be able to specify a time after which the lock is automatically released.
If the client currently holds the lock it should be able to extend this time period as many
times as it likes. A write-up for what I'm describing for redis is here:
> I can see a couple of ways of going about this - the lock lifetime could be stored in
the node's data (and so clients could check if the node had expired by adding the lifetime
to the node's ctime or mtime). However, comparing the client's current time with the expiry
time in the node is probably not the right thing to do as the client's clock may be out of
sync with the other clients (or the zookeeper nodes). It'd be nice if zookeeper could automatically
delete nodes (i.e. release the lock) after a certain amount of time - i.e. make it the zookeeper
cluster's decision when the lock is expired, not the client's decision. However, I'm not sure
exactly how to do this...
> Thanks,
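The "lifetime stored in the node's data" idea above can be sketched as follows. This is an illustrative encoding, not Curator code: the lifetime is serialized into the lock node, and expiry is computed against the node's server-assigned `mtime`, so the holder extends the lease simply by rewriting the node with `setData()` (which bumps `mtime` on the server, sidestepping the holder's own clock).

```java
import java.nio.charset.StandardCharsets;

// Hypothetical helpers for a lease-style lock node. All names are
// illustrative. Expiry is anchored to the server-assigned mtime from the
// node's Stat, not the client's clock.
public class LeaseData {
    public static byte[] encodeLifetime(long lifetimeMillis) {
        return Long.toString(lifetimeMillis).getBytes(StandardCharsets.UTF_8);
    }

    public static long decodeLifetime(byte[] nodeData) {
        return Long.parseLong(new String(nodeData, StandardCharsets.UTF_8));
    }

    // mtimeMillis comes from the node's Stat; any setData() by the holder
    // refreshes it, which is how the lock's lifetime would be extended.
    public static long expiryTime(long mtimeMillis, byte[] nodeData) {
        return mtimeMillis + decodeLifetime(nodeData);
    }
}
```

The remaining gap is the one the reporter identifies: a client comparing `expiryTime(...)` against its own clock reintroduces skew, which is why having the ZooKeeper cluster itself delete expired nodes would be the cleaner design.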

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see:
