hbase-issues mailing list archives

From "Enis Soztutar (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HBASE-13832) Procedure V2: master fail to start due to WALProcedureStore sync failures when HDFS data nodes count is low
Date Sat, 27 Jun 2015 00:53:04 GMT

    [ https://issues.apache.org/jira/browse/HBASE-13832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603860#comment-14603860 ]

Enis Soztutar edited comment on HBASE-13832 at 6/27/15 12:53 AM:
-----------------------------------------------------------------

Looks good overall. 

Why RTE, rather than rethrow? 
{code}
+      throw new RuntimeException(e);
{code}
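To make the trade-off concrete, here is a minimal, hypothetical sketch of the two options (only the {{RuntimeException}} wrapping of an IOException is from the patch; the class and method names below are invented for illustration):

```java
import java.io.IOException;

// Hypothetical sketch: two ways a sync failure could be surfaced to callers.
public class RethrowSketch {
  // Option 1: wrap in an unchecked RuntimeException -- callers are no longer
  // forced by the compiler to handle the failure.
  static void wrap() {
    try {
      sync();
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  // Option 2: rethrow -- the checked IOException stays on the method
  // signature, so every caller must handle or declare it.
  static void rethrow() throws IOException {
    sync();
  }

  // Stand-in for the real sync call, which may fail with an IOException.
  static void sync() throws IOException {
    throw new IOException("sync failed");
  }

  public static void main(String[] args) {
    try {
      wrap();
    } catch (RuntimeException e) {
      // The original IOException survives as the cause.
      System.out.println(e.getCause() instanceof IOException); // true
    }
  }
}
```

Wrapping keeps the method signature clean but lets the failure escape the compiler's checked-exception tracking; rethrowing forces each caller to deal with the IOException explicitly.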

I think we may have an issue with {{syncSlots()}} never throwing an exception. A procedure
executor thread calls {{pushData()}}, which internally waits on {{syncCond}}. If an exception
occurs in the syncer thread, {{syncSlots}} will start the master abort process, but returning
from master abort does not guarantee that the proc executor or the rest of the proc store is
stopped. Since {{syncSlots}} does not rethrow the exception, {{syncLoop}} will continue on and
call {{syncCond.signalAll()}}, which causes {{pushData}} to return and assume that the proc
state is persisted, while it is not. 

This new code seems to be a fix for it: 
{code}
+      if (!isRunning()) {
+        throw new RuntimeException("sync aborted");
+      }
{code}
but I think there is no guarantee that the WALProcedureStore has been stopped by the time
{{master.abort()}} returns, which means there is a time window in which the procedure can
execute assuming persisted state. I may be missing something in the above analysis though. 
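For illustration, the unsafe-wakeup pattern described above can be sketched with a plain lock/condition pair; everything here ({{SyncHandshake}}, the {{running}} flag) is a hypothetical stand-in for the WALProcedureStore internals, not the actual code:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the pushData/syncLoop handshake described above.
// A bare signalAll() tells the waiter nothing about whether the sync
// actually succeeded, so the waiter must recheck state after waking up.
public class SyncHandshake {
  private final ReentrantLock lock = new ReentrantLock();
  private final Condition syncCond = lock.newCondition();
  private boolean syncDone = false;
  private volatile boolean running = true;

  // Waiter side (stands in for pushData): waits until the syncer signals,
  // then verifies the store is still running before trusting the result.
  public void pushData() throws InterruptedException {
    lock.lock();
    try {
      while (!syncDone) {
        syncCond.await();
      }
      if (!running) {
        // Woken up by an aborting syncer: the state was NOT persisted.
        throw new RuntimeException("sync aborted");
      }
    } finally {
      lock.unlock();
    }
  }

  // Syncer side (stands in for syncLoop): on failure it must flip the
  // running flag BEFORE signalling, or the waiter will assume success.
  public void failSync() {
    lock.lock();
    try {
      running = false;
      syncDone = true;
      syncCond.signalAll();
    } finally {
      lock.unlock();
    }
  }
}
```

The key point is that {{signalAll()}} alone carries no success/failure information, so the waiter has to recheck shared state after waking; the {{isRunning()}} check in the new code appears to play that role, but it only closes the window if the flag is guaranteed to be flipped before the signal.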



> Procedure V2: master fail to start due to WALProcedureStore sync failures when HDFS data
nodes count is low
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-13832
>                 URL: https://issues.apache.org/jira/browse/HBASE-13832
>             Project: HBase
>          Issue Type: Sub-task
>          Components: master, proc-v2
>    Affects Versions: 2.0.0, 1.1.0, 1.2.0
>            Reporter: Stephen Yuan Jiang
>            Assignee: Matteo Bertozzi
>            Priority: Critical
>             Fix For: 2.0.0, 1.1.2, 1.3.0, 1.2.1
>
>         Attachments: HBASE-13832-v0.patch, HBASE-13832-v1.patch, HDFSPipeline.java
>
>
> When the data node count is below 3, we get a failure in WALProcedureStore#syncLoop() during
master start.  The failure prevents the master from starting.  
> {noformat}
> 2015-05-29 13:27:16,625 ERROR [WALProcedureStoreSyncThread] wal.WALProcedureStore: Sync
slot failed, abort.
> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to
no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.333.444.555:50010,DS-3c7777ed-93f4-47b6-9c23-1426f7a6acdc,DISK],
DatanodeInfoWithStorage[10.222.666.777:50010,DS-f9c983b4-1f10-4d5e-8983-490ece56c772,DISK]],
                    original=[DatanodeInfoWithStorage[10.333.444.555:50010,DS-3c7777ed-93f4-47b6-9c23-1426f7a6acdc,DISK],
DatanodeInfoWithStorage[10.222.666.777:50010,DS-f9c983b4-1f10-4d5e-8983-    490ece56c772,DISK]]).
The current failed datanode replacement policy is DEFAULT, and a client may configure this
via 'dfs.client.block.write.replace-datanode-on-failure.policy'  in its configuration.
>   at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:951)
> {noformat}
> One proposal is to implement logic similar to FSHLog's: if an IOException is thrown during
syncLoop in WALProcedureStore#start(), instead of aborting immediately we could try to roll the
log and see whether that resolves the issue; if the new log cannot be created, or rolling the
log throws further exceptions, we then abort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
