hadoop-hdfs-issues mailing list archives

From "Koji Noguchi (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HDFS-1158) HDFS-457 increases the chances of losing blocks
Date Tue, 01 Jun 2010 18:17:39 GMT

     [ https://issues.apache.org/jira/browse/HDFS-1158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel

Koji Noguchi updated HDFS-1158:

    Priority: Major  (was: Critical)

Lowering priority.  As long as HDFS-1161 makes it to 0.21,  this is not a huge issue for me.
We can keep this Jira open for further discussion or close it as duplicate of HDFS-1161.

In addition to the question of how we should handle /tmp, pid, and volume dir errors, 
maybe we could also add a feature for the datanode to decommission itself when it decides to kill itself?

>  HDFS-457 increases the chances of losing blocks
> ------------------------------------------------
>                 Key: HDFS-1158
>                 URL: https://issues.apache.org/jira/browse/HDFS-1158
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.21.0
>            Reporter: Koji Noguchi
> Whenever we restart a cluster, there's a chance of losing some blocks if more than three datanodes don't come up.
> HDFS-457 increases this chance by keeping the datanodes up even when 
>    # /tmp disk goes read-only
>    # /disk0 that is used for storing PID goes read-only 
> and probably more.
> In our environment, /tmp and /disk0 are from the same device.
> When trying to restart a datanode, it would fail with
> 1) 
> {noformat}
> 2010-05-15 05:45:45,575 WARN org.mortbay.log: tmpdir
> java.io.IOException: Read-only file system
>         at java.io.UnixFileSystem.createFileExclusively(Native Method)
>         at java.io.File.checkAndCreate(File.java:1704)
>         at java.io.File.createTempFile(File.java:1792)
>         at java.io.File.createTempFile(File.java:1828)
>         at org.mortbay.jetty.webapp.WebAppContext.getTempDirectory(WebAppContext.java:745)
> {noformat}
> or 
> 2) 
> {noformat}
> hadoop-daemon.sh: line 117: /disk/0/hadoop-datanode....com.out: Read-only file system
> hadoop-daemon.sh: line 118: /disk/0/hadoop-datanode.pid: Read-only file system
> {noformat}
> I can recover the missing blocks but it takes some time.
> Also, we lose track of block movements, since the log directory can also go read-only while the datanode continues running.
> For 0.21 release, can we revert HDFS-457 or make it configurable?
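The first failure mode quoted above comes down to {{File.createTempFile}} throwing an IOException once the backing filesystem goes read-only. As a rough sketch of the kind of check being discussed (this is not actual DataNode code; the class and method names here are illustrative only), a daemon could probe a directory for writability before depending on it:

```java
import java.io.File;
import java.io.IOException;

public class DirProbe {
    /**
     * Returns true if a temp file can be created (and removed) in dir.
     * Uses the same createTempFile call that fails in the Jetty stack
     * trace above, so a read-only filesystem is caught as IOException.
     */
    public static boolean isWritable(File dir) {
        try {
            File probe = File.createTempFile("probe", ".tmp", dir);
            probe.delete();
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        System.out.println(tmp + " writable: " + isWritable(tmp));
    }
}
```

A probe like this would let the datanode fail fast (or begin decommissioning) when /tmp or the pid directory goes read-only, instead of limping along as described in this Jira.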

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
