hadoop-hdfs-user mailing list archives

From sunww <spe...@outlook.com>
Subject write to most datanode fail quickly
Date Tue, 14 Oct 2014 09:31:58 GMT
Hi,

I'm using HBase with about 20 regionservers. One regionserver failed to write to most of the datanodes in quick succession, which eventually caused that regionserver to die, while the other regionservers are fine.

The logs look like this:

java.io.IOException: Bad response ERROR for block BP-165080589- from datanode
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:681)

2014-10-13 09:23:01,227 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-165080589- in pipeline,, bad datanode

09:23:32,021 WARN org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-165080589-
Bad response ERROR for block BP-165080589- from datanode
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:681)

Then several "firstBadLink" errors:

2014-10-13 09:23:33,390 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Bad connect ack with firstBadLink as
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1090)

Then several "Failed to add a datanode" errors:

2014-10-13 09:23:44,331 WARN org.apache.hadoop.hdfs.DFSClient: Error while syncing
java.io.IOException: Failed to add a datanode.  User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT.  (Nodes: current=[,],
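For reference, the last error points at the dfs.client.block.write.replace-datanode-on-failure.policy setting. A minimal sketch of how that policy could be relaxed on the client side in hdfs-site.xml (I have not changed it; the values shown are only illustrative, and changing the policy would mask the pipeline-recovery symptom rather than fix whatever is wrong with the datanodes):

    <!-- hdfs-site.xml, client side (e.g. on the regionserver) -->
    <property>
      <!-- companion switch for the replace-datanode-on-failure feature -->
      <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
      <value>true</value>
    </property>
    <property>
      <!-- DEFAULT is the current value per the error; NEVER would stop
           the client from trying to add a replacement datanode -->
      <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
      <value>NEVER</value>
    </property>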
The full log is at http://paste2.org/xfn16jm2. Any suggestions would be appreciated.