hadoop-common-dev mailing list archives

From "Koji Noguchi (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-990) Datanode doesn't retry when a write to one (full) drive fails
Date Wed, 07 Feb 2007 22:14:05 GMT
Datanode doesn't retry when a write to one (full) drive fails

                 Key: HADOOP-990
                 URL: https://issues.apache.org/jira/browse/HADOOP-990
             Project: Hadoop
          Issue Type: Bug
          Components: dfs
            Reporter: Koji Noguchi

When one drive is 99.9% full and the datanode chooses that drive to write to, the write fails with:

2007-02-07 18:16:56,574 WARN org.apache.hadoop.dfs.DataNode: DataXCeiver
org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: No space left on device
 at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:801)
 at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:563)
 at java.lang.Thread.run(Thread.java:595) 

Combined with HADOOP-940, this leaves the failed blocks under-replicated.
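A fix along these lines might make the datanode skip a nearly full volume and retry the next one, rather than aborting the block write. The sketch below is hypothetical and does not use Hadoop's actual classes; `Volume` and `chooseWithRetry` are illustrative names only:

```java
import java.util.List;

// Hypothetical sketch (not Hadoop's real API): pick a local drive for a
// block write, retrying the remaining drives when one lacks space instead
// of failing immediately with DiskOutOfSpaceException.
class VolumeChooser {
    static class Volume {
        final String path;
        final long freeBytes;
        Volume(String path, long freeBytes) {
            this.path = path;
            this.freeBytes = freeBytes;
        }
    }

    /** Return the first volume with room for blockSize bytes, or null if all are full. */
    static Volume chooseWithRetry(List<Volume> volumes, long blockSize) {
        for (Volume v : volumes) {
            if (v.freeBytes >= blockSize) {
                return v;      // enough space: write the block here
            }
            // this drive is (nearly) full: skip it and retry the next one
        }
        return null;           // only when every drive is full should the write fail
    }
}
```

With this behavior, a single 99.9%-full drive would no longer cause the DataXceiver warning above as long as another configured data directory still has space.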

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
