cassandra-commits mailing list archives

From "Marcus Olsson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-11258) Repair scheduling - Resource locking API
Date Thu, 03 Mar 2016 09:15:18 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177525#comment-15177525
] 

Marcus Olsson commented on CASSANDRA-11258:
-------------------------------------------

bq. While I think we could add a new VERB (REMOTE_CAS) to the messaging service without a
protocol bump (by reusing the UNUSED_X verbs), I think we could do this in a separate ticket
to avoid losing focus here.
Great, I'll create a JIRA for it and link it to this one.

bq. So I propose we use a global CAS (SERIAL consistency) for each DC lock for the first version,
which should make multi-dc scheduled repairs work when there is no network partition, and improve
later when the REMOTE_CAS verb is in place. WDYT?
+1
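
To illustrate the per-DC lock semantics discussed above, here is a minimal sketch of how CAS-style lock acquisition and release would behave. This is an in-memory simulation, not actual Cassandra code; the `resource`/`holder` column names and the CQL statements in the comments are hypothetical, standing in for lightweight transactions (`IF NOT EXISTS` / `IF holder = ?`) at SERIAL consistency:

```python
class LockTable:
    """In-memory stand-in for a lock table updated via CAS operations."""

    def __init__(self):
        self._locks = {}

    def try_lock(self, resource, holder):
        # Mirrors: INSERT INTO lock (resource, holder) VALUES (?, ?) IF NOT EXISTS
        if resource in self._locks:
            return False  # CAS rejected: another node already holds the lock
        self._locks[resource] = holder
        return True

    def release(self, resource, holder):
        # Mirrors: DELETE FROM lock WHERE resource = ? IF holder = ?
        if self._locks.get(resource) == holder:
            del self._locks[resource]
            return True
        return False


locks = LockTable()
assert locks.try_lock("RepairResource-dc1", "node1")
assert not locks.try_lock("RepairResource-dc1", "node2")  # lock is held
assert locks.release("RepairResource-dc1", "node1")
assert locks.try_lock("RepairResource-dc1", "node2")      # lock is free again
```

In the real implementation the lock row would also carry a TTL so that a crashed holder cannot block repairs forever; that detail is omitted here.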

For this lock table to work correctly later on, it should be set up to have replicas in all
data centers, right? Should this be configured automatically, or should it be something the
user has to configure when adding/removing data centers? From a usability point of view I
think it would be great if this was handled automatically, and it would probably not be too
hard to create a replication strategy defined as "at most X replicas in each dc", but I'm
not sure whether this might cause problems if someone were to use it for other purposes.
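
The "at most X replicas in each dc" selection could be sketched roughly as follows. This is only an illustration of the selection rule, not a real Cassandra replication strategy (which would operate on the token ring via the strategy API); the `pick_replicas` function and the endpoint/DC pairs are hypothetical:

```python
from collections import defaultdict


def pick_replicas(endpoints_in_ring_order, max_per_dc):
    """Walk candidate endpoints in ring order and keep at most
    max_per_dc endpoints per data center.

    endpoints_in_ring_order: list of (endpoint, dc) tuples.
    """
    per_dc = defaultdict(int)
    chosen = []
    for endpoint, dc in endpoints_in_ring_order:
        if per_dc[dc] < max_per_dc:
            chosen.append(endpoint)
            per_dc[dc] += 1
    return chosen


ring = [("10.0.0.1", "dc1"), ("10.0.1.1", "dc2"), ("10.0.0.2", "dc1"),
        ("10.0.0.3", "dc1"), ("10.0.1.2", "dc2")]
print(pick_replicas(ring, 2))
# -> ['10.0.0.1', '10.0.1.1', '10.0.0.2', '10.0.1.2']
```

The open question above still applies: a strategy like this would pick up new data centers automatically, but using it for tables other than the lock table might give surprising replica counts.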

> Repair scheduling - Resource locking API
> ----------------------------------------
>
>                 Key: CASSANDRA-11258
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11258
>             Project: Cassandra
>          Issue Type: Sub-task
>            Reporter: Marcus Olsson
>            Assignee: Marcus Olsson
>            Priority: Minor
>
> Create a resource locking API & implementation that is able to lock a resource in
a specified data center. It should handle priorities to avoid node starvation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
