hbase-issues mailing list archives

From "Jerry He (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-13832) Procedure V2: master fail to start due to WALProcedureStore sync failures when HDFS data nodes count is low
Date Tue, 23 Jun 2015 23:58:43 GMT

    [ https://issues.apache.org/jira/browse/HBASE-13832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598604#comment-14598604 ]

Jerry He commented on HBASE-13832:
----------------------------------

I have recently seen similar failures on our HBase 1.1.0 clusters:
{noformat}
2015-06-22 18:20:20,240 INFO  [B.defaultRpcServer.handler=36,queue=0,port=60000] master.HMaster:
Client=bigsql/null create 'bigsql.smoke_1820163205', {TABLE_ATTRIBUTES => {METADATA =>
{'hbase.columns.mapping' => '[{"key":{"c":[{"n":"c1","t":"int"}]}},{"cmap":[{"c":[{"n":"c2","t":"int"},{"n":"c3","t":"int"}],"f":"cf1","q":"cq1"}]},{"def":{"enc":{"encName":"binary"},"sep":"\x5Cu0000"}},{"sqlc":"c1,c2,c3"}]'}},
{NAME => 'cf1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE
=> '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =>
'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false',
BLOCKCACHE => 'true'}
2015-06-22 18:20:20,341 ERROR [WALProcedureStoreSyncThread] wal.WALProcedureStore: sync slot
failed, abort.
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more
good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[9.30.255.244:50010,DS-5fd53475-f1d1-4141-b732-4df6998996ca,DISK],
DatanodeInfoWithStorage[9.30.255.245:50010,DS-9b9ed517-99e8-4d92-8d85-42650b6e97db,DISK]],
original=[DatanodeInfoWithStorage[9.30.255.244:50010,DS-5fd53475-f1d1-4141-b732-4df6998996ca,DISK],
DatanodeInfoWithStorage[9.30.255.245:50010,DS-9b9ed517-99e8-4d92-8d85-42650b6e97db,DISK]]).
The current failed datanode replacement policy is DEFAULT, and a client may configure this
via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:918)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:984)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1131)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:876)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:402)
2015-06-22 18:20:20,341 FATAL [WALProcedureStoreSyncThread] master.HMaster: Master server
abort: loaded coprocessors are: []
2015-06-22 18:20:20,341 INFO  [WALProcedureStoreSyncThread] regionserver.HRegionServer: STOPPED:
The Procedure Store lost the lease
2015-06-22 18:20:20,343 INFO  [master/bdls142.svl.ibm.com/9.30.255.242:60000] regionserver.HRegionServer:
Stopping infoServer
2015-06-22 18:20:20,350 INFO  [master/bdls142.svl.ibm.com/9.30.255.242:60000] mortbay.log:
Stopped SelectChannelConnector@0.0.0.0:60010
2015-06-22 18:20:20,351 INFO  [master/bdls142.svl.ibm.com/9.30.255.242:60000] procedure2.ProcedureExecutor:
Stopping the procedure executor
2015-06-22 18:20:20,352 INFO  [master/bdls142.svl.ibm.com/9.30.255.242:60000] wal.WALProcedureStore:
Stopping the WAL Procedure Store
2015-06-22 18:20:20,352 WARN  [master/bdls142.svl.ibm.com/9.30.255.242:60000] wal.WALProcedureStore:
Unable to write the trailer: Failed to replace a bad datanode on the existing pipeline due
to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[9.30.255.244:50010,DS-5fd53475-f1d1-4141-b732-4df6998996ca,DISK],
DatanodeInfoWithStorage[9.30.255.245:50010,DS-9b9ed517-99e8-4d92-8d85-42650b6e97db,DISK]],
original=[DatanodeInfoWithStorage[9.30.255.244:50010,DS-5fd53475-f1d1-4141-b732-4df6998996ca,DISK],
DatanodeInfoWithStorage[9.30.255.245:50010,DS-9b9ed517-99e8-4d92-8d85-42650b6e97db,DISK]]).
The current failed datanode replacement policy is DEFAULT, and a client may configure this
via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
2015-06-22 18:20:20,352 ERROR [master/bdls142.svl.ibm.com/9.30.255.242:60000] wal.WALProcedureStore:
Unable to close the stream
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more
good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[9.30.255.244:50010,DS-5fd53475-f1d1-4141-b732-4df6998996ca,DISK],
DatanodeInfoWithStorage[9.30.255.245:50010,DS-9b9ed517-99e8-4d92-8d85-42650b6e97db,DISK]],
original=[DatanodeInfoWithStorage[9.30.255.244:50010,DS-5fd53475-f1d1-4141-b732-4df6998996ca,DISK],
DatanodeInfoWithStorage[9.30.255.245:50010,DS-9b9ed517-99e8-4d92-8d85-42650b6e97db,DISK]]).
The current failed datanode replacement policy is DEFAULT, and a client may configure this
via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:918)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:984)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1131)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:876)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:402)
{noformat}
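
The policy named in the error ('dfs.client.block.write.replace-datanode-on-failure.policy') can be relaxed for small clusters through the HDFS client configuration. Below is a minimal sketch of the override, assuming the key is applied to the configuration the master's DFS client actually reads (it can equally be set in hdfs-site.xml or hbase-site.xml); only the property name and value come from the message above, the rest is illustrative:
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RelaxPipelineReplacement {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // With fewer than 3 live datanodes, the DEFAULT policy may find no
    // replacement node and fail the write; NEVER keeps writing on the
    // shrunken pipeline instead of failing it.
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
    System.out.println("replace-datanode-on-failure policy = "
        + conf.get("dfs.client.block.write.replace-datanode-on-failure.policy"));
  }
}
{noformat}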

> Procedure V2: master fail to start due to WALProcedureStore sync failures when HDFS data nodes count is low
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-13832
>                 URL: https://issues.apache.org/jira/browse/HBASE-13832
>             Project: HBase
>          Issue Type: Sub-task
>          Components: master, proc-v2
>    Affects Versions: 2.0.0, 1.1.0, 1.2.0
>            Reporter: Stephen Yuan Jiang
>            Assignee: Matteo Bertozzi
>         Attachments: HBASE-13832-v0.patch, HDFSPipeline.java
>
>
> When the data node count is < 3, we get a failure in WALProcedureStore#syncLoop() during master start. The failure prevents the master from starting.
> {noformat}
> 2015-05-29 13:27:16,625 ERROR [WALProcedureStoreSyncThread] wal.WALProcedureStore: Sync
slot failed, abort.
> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to
no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.333.444.555:50010,DS-3c7777ed-93f4-47b6-9c23-1426f7a6acdc,DISK],
DatanodeInfoWithStorage[10.222.666.777:50010,DS-f9c983b4-1f10-4d5e-8983-490ece56c772,DISK]],
                    original=[DatanodeInfoWithStorage[10.333.444.555:50010,DS-3c7777ed-93f4-47b6-9c23-1426f7a6acdc,DISK],
DatanodeInfoWithStorage[10.222.666.777:50010,DS-f9c983b4-1f10-4d5e-8983-    490ece56c772,DISK]]).
The current failed datanode replacement policy is DEFAULT, and a client may configure this
via 'dfs.client.block.write.replace-datanode-on-failure.policy'  in its configuration.
>   at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:951)
> {noformat}
> One proposal is to implement logic similar to FSHLog: if an IOException is thrown during syncLoop in WALProcedureStore#start(), instead of aborting immediately, we could try to roll the log and see whether that resolves the issue; if the new log cannot be created, or rolling the log throws further exceptions, we then abort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
