hadoop-hdfs-issues mailing list archives

From "Eli Collins (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1509) Resync discarded directories in fs.name.dir during saveNamespace command
Date Fri, 19 Nov 2010 00:13:14 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12933621#action_12933621
] 

Eli Collins commented on HDFS-1509:
-----------------------------------

If a fs.name.dir is faulty (e.g., a failed local disk or a flaky NFS mount), won't this mean
we continually fail writing fsedits, unless you dynamically update the configuration to remove
the failed fs.name.dir?

The idea behind HADOOP-4885 was that a faulty dir gets black-listed and is re-instated on
the first checkpoint after it becomes valid again. Is this approach insufficient?


> Resync discarded directories in fs.name.dir during saveNamespace command
> ------------------------------------------------------------------------
>
>                 Key: HDFS-1509
>                 URL: https://issues.apache.org/jira/browse/HDFS-1509
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>
> In the current implementation, if the Namenode encounters an error while writing to a
> fs.name.dir directory, it stops writing new edits to that directory. My proposal is to make
> the namenode write the fsimage to all configured directories in fs.name.dir, and from then
> on, continue writing fsedits to all configured directories.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

