hadoop-hdfs-issues mailing list archives

From "Koji Noguchi (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HDFS-1158) HDFS-457 increases the chances of losing blocks
Date Tue, 31 Aug 2010 23:34:57 GMT

     [ https://issues.apache.org/jira/browse/HDFS-1158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Koji Noguchi updated HDFS-1158:
-------------------------------

    Fix Version/s:     (was: 0.22.0)
                       (was: 0.21.1)

bq. Should this be a blocker for 0.21.1 and 0.22 now that 457 is out in the wild with 0.21?

Allen, sorry for the confusing Jira.  After I created this Jira, Eli created HDFS-1161 to
take care of my main concern.  With HDFS-1161, which also went into 0.21, the behavior is
the same as before: the datanode kills itself on any single disk error.

Taking out the 0.21/0.22 target for now. 
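The "kill itself on any single disk error" behavior described above boils down to probing whether a storage directory is still writable. As a minimal, self-contained sketch (not Hadoop's actual DiskChecker code; the class and method names here are hypothetical), the probe can simply try to create and delete a file in the directory, which fails with the same "Read-only file system" IOException seen in the stack traces below:

```java
import java.io.File;
import java.io.IOException;

public class DiskProbe {
    // Hypothetical writability probe: attempt to create (and immediately
    // delete) a temp file in the directory. On a read-only filesystem,
    // File.createTempFile throws IOException ("Read-only file system"),
    // which is the condition the datanode would treat as a fatal disk error.
    static boolean isWritable(File dir) {
        try {
            File probe = File.createTempFile("probe", ".tmp", dir);
            probe.delete();
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        System.out.println("writable=" + isWritable(tmp));
    }
}
```

A datanode applying the pre-HDFS-457 policy would shut down as soon as such a probe fails for any configured storage directory, rather than continuing to run with the degraded volume.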


>  HDFS-457 increases the chances of losing blocks
> ------------------------------------------------
>
>                 Key: HDFS-1158
>                 URL: https://issues.apache.org/jira/browse/HDFS-1158
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.21.0
>            Reporter: Koji Noguchi
>         Attachments: rev-HDFS-457.patch
>
>
> Whenever we restart a cluster, there's a chance of losing some blocks if more than three
datanodes don't come up.
> HDFS-457 increases this chance by keeping the datanodes up even when 
>    # /tmp disk goes read-only
>    # /disk0 that is used for storing PID goes read-only 
> and probably more.
> In our environment, /tmp and /disk0 are from the same device.
> When trying to restart a datanode, it would fail with
> 1) 
> {noformat}
> 2010-05-15 05:45:45,575 WARN org.mortbay.log: tmpdir
> java.io.IOException: Read-only file system
>         at java.io.UnixFileSystem.createFileExclusively(Native Method)
>         at java.io.File.checkAndCreate(File.java:1704)
>         at java.io.File.createTempFile(File.java:1792)
>         at java.io.File.createTempFile(File.java:1828)
>         at org.mortbay.jetty.webapp.WebAppContext.getTempDirectory(WebAppContext.java:745)
> {noformat}
> or 
> 2) 
> {noformat}
> hadoop-daemon.sh: line 117: /disk/0/hadoop-datanode....com.out: Read-only file system
> hadoop-daemon.sh: line 118: /disk/0/hadoop-datanode.pid: Read-only file system
> {noformat}
> I can recover the missing blocks but it takes some time.
> Also, we lose track of block movements, since the log directory can also go read-only
while the datanode continues running.
> For 0.21 release, can we revert HDFS-457 or make it configurable?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

