hadoop-hdfs-dev mailing list archives

From "Arpit Agarwal (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-12453) TestDataNodeHotSwapVolumes fails in trunk Jenkins runs
Date Thu, 14 Sep 2017 19:17:00 GMT
Arpit Agarwal created HDFS-12453:
------------------------------------

             Summary: TestDataNodeHotSwapVolumes fails in trunk Jenkins runs
                 Key: HDFS-12453
                 URL: https://issues.apache.org/jira/browse/HDFS-12453
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: test
            Reporter: Arpit Agarwal
            Priority: Critical


TestDataNodeHotSwapVolumes fails occasionally with the following error. I ran it ~10 times locally
and it passed every time.

{code}
Error Message

Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being
available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:46950,DS-e653e0ec-e490-47d3-9cca-cdd6dff3964a,DISK],
DatanodeInfoWithStorage[127.0.0.1:34486,DS-fd83f642-be1a-44ed-8ba5-6ad5a42e85fd,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:34486,DS-fd83f642-be1a-44ed-8ba5-6ad5a42e85fd,DISK],
DatanodeInfoWithStorage[127.0.0.1:46950,DS-e653e0ec-e490-47d3-9cca-cdd6dff3964a,DISK]]). The
current failed datanode replacement policy is DEFAULT, and a client may configure this via
'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stacktrace

java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more
good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:46950,DS-e653e0ec-e490-47d3-9cca-cdd6dff3964a,DISK],
DatanodeInfoWithStorage[127.0.0.1:34486,DS-fd83f642-be1a-44ed-8ba5-6ad5a42e85fd,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:34486,DS-fd83f642-be1a-44ed-8ba5-6ad5a42e85fd,DISK],
DatanodeInfoWithStorage[127.0.0.1:46950,DS-e653e0ec-e490-47d3-9cca-cdd6dff3964a,DISK]]). The
current failed datanode replacement policy is DEFAULT, and a client may configure this via
'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1321)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1387)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1586)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1487)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1469)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1273)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:684)

{code}
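
The error indicates the client-side DataNode replacement policy aborting the pipeline recovery. As a hedged sketch only (not the actual TestDataNodeHotSwapVolumes code, and not a proposed fix for this issue), a test could relax that policy via the standard HDFS client keys when building its mini cluster:

{code}
// Illustrative sketch: relax the DataNode replacement policy so a write
// pipeline that cannot find a replacement node does not fail outright.
// The key names are the standard HDFS client configuration keys; the
// surrounding setup is hypothetical.
Configuration conf = new HdfsConfiguration();
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
// Alternatively, keep the DEFAULT policy but tolerate a failed replacement:
// conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.best-effort", true);

MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(3)
    .build();
{code}

Whether that is appropriate here depends on what the test is exercising; if it relies on pipeline recovery behaviour, masking the replacement failure would hide the flakiness rather than explain it.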



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

