curator-dev mailing list archives

From "3l3ph4n1n3 (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CURATOR-28) Add Expiry Time To InterProcessLocks
Date Wed, 29 May 2013 13:53:21 GMT

    [ https://issues.apache.org/jira/browse/CURATOR-28?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13669262#comment-13669262 ]

3l3ph4n1n3 commented on CURATOR-28:
-----------------------------------

The client is not necessarily burdened - the situation I described above (the lockTimer thread
had a bug in it and crashed, or deadlocked itself with another thread) could happen regardless
of the load on the client. The problem I'm trying to avoid is one where the heartbeat thread
is making progress but the unlocking thread isn't...
                
> Add Expiry Time To InterProcessLocks
> ------------------------------------
>
>                 Key: CURATOR-28
>                 URL: https://issues.apache.org/jira/browse/CURATOR-28
>             Project: Apache Curator
>          Issue Type: New Feature
>          Components: Recipes
>    Affects Versions: 2.0.0-incubating
>            Reporter: 3l3ph4n1n3
>            Assignee: Jordan Zimmerman
>            Priority: Minor
>
> If a client takes a distributed lock and fails without breaking its zookeeper connection
> (e.g. the main application thread deadlocks) then that lock will never be released (at least
> not without manual intervention, e.g. killing the process that holds it). When a client is
> acquiring a lock I'd like to be able to specify a time after which the lock is automatically
> released. If the client currently holds the lock it should be able to extend this time period
> as many times as it likes. A write-up of what I'm describing, for redis, is here:
> https://chris-lamb.co.uk/posts/distributing-locking-python-and-redis
> I can see a couple of ways of going about this - the lock lifetime could be stored in
> the node's data (and so clients could check whether the node had expired by adding the
> lifetime to the node's ctime or mtime). However, comparing the client's current time with
> the expiry time in the node is probably not the right thing to do, as the client's clock
> may be out of sync with the other clients' (or the zookeeper nodes'). It'd be nice if
> zookeeper could automatically delete nodes (i.e. release the lock) after a certain amount
> of time - i.e. make it the zookeeper cluster's decision when the lock has expired, not the
> client's decision. However, I'm not sure exactly how to do this...
> Thanks,
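The first approach quoted above (store the lock lifetime in the node's data; the lock is
expired once ctime + lifetime has passed) can be sketched as below. This is a minimal
illustration, not Curator API: the class and method names are invented, and in real use
ctimeMillis would come from the Stat returned by a ZooKeeper read (Stat.getCtime()), while
the lifetime would be parsed from the node's data. The clock-skew caveat from the issue
still applies, since nowMillis is the observing client's clock, not the server's.

```java
// Illustrative helper for the "lifetime in node data" scheme (names are hypothetical).
public class LockExpiry {
    // ctimeMillis:    node creation time as the ZK server reported it (Stat.getCtime())
    // lifetimeMillis: lock lifetime the acquiring client wrote into the node's data
    // nowMillis:      the observer's current clock - subject to skew across clients
    static boolean isExpired(long ctimeMillis, long lifetimeMillis, long nowMillis) {
        long expiryMillis = ctimeMillis + lifetimeMillis;
        return nowMillis > expiryMillis;
    }

    public static void main(String[] args) {
        long ctime = 1_000_000L;
        // 31s after creation with a 30s lifetime: expired
        System.out.println(isExpired(ctime, 30_000L, ctime + 31_000L)); // prints true
        // 10s after creation with a 30s lifetime: still held
        System.out.println(isExpired(ctime, 30_000L, ctime + 10_000L)); // prints false
    }
}
```

Extending the lease would then be a setData() on the lock node with a new lifetime (which
also bumps mtime), but the skew problem is why the issue leans toward server-side expiry.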

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
