accumulo-notifications mailing list archives

From "Keith Turner (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ACCUMULO-1454) Need good way to perform a rolling restart of all tablet servers
Date Mon, 11 Aug 2014 16:24:13 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-1454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14092944#comment-14092944 ]

Keith Turner commented on ACCUMULO-1454:
----------------------------------------

I thought of another possible solution. Instead of killing the tserver process and restarting it,
start a second tserver instance on the same node while the old tserver instance is still running.
Then migrate tablets from the old instance to the new instance on that node. After everything
has migrated, kill the old tserver instance.
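A minimal sketch of the flow described above, as I read it. None of these classes or functions exist in Accumulo; they are hypothetical stand-ins to illustrate the drain-then-kill sequence on a single node.

```python
# Hypothetical model of the proposed per-node rolling restart:
# run old and new tserver instances side by side, migrate tablets
# one at a time, then stop the old instance once it hosts nothing.

class TServerInstance:
    def __init__(self, name):
        self.name = name
        self.tablets = set()
        self.running = True

    def stop(self):
        # Stopping with tablets still assigned would be unsafe.
        assert not self.tablets, "migrate all tablets before stopping"
        self.running = False

def migrate(tablet, src, dst):
    # Stand-in for a master-coordinated tablet migration.
    src.tablets.discard(tablet)
    dst.tablets.add(tablet)

def rolling_restart_node(old, new):
    """Drain `old` into `new` one tablet at a time, then kill `old`."""
    for tablet in list(old.tablets):
        migrate(tablet, old, new)
    old.stop()  # safe: the old instance now hosts no tablets
```

The point of the sketch is the ordering: the old instance is only stopped after every tablet has moved, so tablets never go unhosted the way they do when a tserver is simply killed.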

I really like this approach, but it has one small problem: there is a potential for memory
exhaustion, since the node would use 2x memory for buffering read and write data. To circumvent
this, we could make the decommissioned tserver flush its read cache, flush recently written data
from memory, and hold new writes. This approach may delay writes a bit, but it seems like it
would be good for reads.
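The memory discipline suggested above can be modeled in a few lines. Again, this is a hypothetical sketch, not Accumulo code: on decommission, the old instance drops its read cache, flushes its in-memory writes to disk, and rejects further writes so it never contributes to the 2x memory footprint.

```python
# Hypothetical model of a decommissioning tserver's memory discipline.

class DecommissioningTServer:
    def __init__(self, read_cache_mb, write_buffer_mb):
        self.read_cache_mb = read_cache_mb
        self.write_buffer_mb = write_buffer_mb
        self.decommissioning = False

    def begin_decommission(self):
        self.decommissioning = True
        self.read_cache_mb = 0    # drop cached blocks; reads fall back to disk
        self.write_buffer_mb = 0  # flush in-memory writes to disk

    def write(self, size_mb):
        if self.decommissioning:
            # Held writes would be retried against the new instance.
            raise RuntimeError("writes held during decommission")
        self.write_buffer_mb += size_mb

    def memory_mb(self):
        return self.read_cache_mb + self.write_buffer_mb
```

After `begin_decommission()`, the old instance's buffering drops to zero, so the new instance can claim the node's full memory budget; the cost is that writes arriving during the drain are delayed, which matches the trade-off noted above.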


> Need good way to perform a rolling restart of all tablet servers
> ----------------------------------------------------------------
>
>                 Key: ACCUMULO-1454
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-1454
>             Project: Accumulo
>          Issue Type: Improvement
>          Components: tserver
>    Affects Versions: 1.4.3, 1.5.0
>            Reporter: Mike Drob
>
> When needing to change a tserver parameter (e.g. java heap space) across the entire cluster,
> there is no graceful way to perform a rolling restart.
> The naive approach of just killing tservers one at a time causes a lot of churn on the
> cluster as tablets move around and zookeeper tries to maintain current state.
> Potential solutions might be via a fancy FATE operation, with coordination by the master.
> Ideally, the master would know which servers are 'safe' to restart and could minimize overall
> impact during the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
