hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-3277) fail over to loading a different FSImage if the first one we try to load is corrupt
Date Tue, 08 May 2012 18:01:51 GMT

     [ https://issues.apache.org/jira/browse/HDFS-3277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HDFS-3277:
---------------------------------------

    Attachment: HDFS-3277.003.patch

* fix bug where we weren't always loading the newest image(s)

* rebase on trunk
                
> fail over to loading a different FSImage if the first one we try to load is corrupt
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-3277
>                 URL: https://issues.apache.org/jira/browse/HDFS-3277
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-3277.002.patch, HDFS-3277.003.patch
>
>
> Most users store multiple copies of the FSImage in order to prevent catastrophic data loss if a hard disk fails.  However, our image loading code is currently not set up to start reading another FSImage if loading the first one does not succeed.  We should add this capability.
> We should also be sure to remove the FSImage directory that failed from the list of FSImage directories to write to, the way we normally do when a write (as opposed to a read) fails.
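For readers of the archive, a minimal sketch of the failover loop described above, assuming hypothetical ImageCandidate, loadImage, and reportLoadFailure names; this is not the attached patch, just an illustration of the idea:

    import java.io.IOException;
    import java.util.List;

    /** Illustrative stand-in for one loadable fsimage file plus its storage directory. */
    interface ImageCandidate {
        void loadImage() throws IOException;   // parse the fsimage file; throws if corrupt
        void reportLoadFailure();              // drop this image's directory from the write list
    }

    class FSImageLoadFailover {
        /**
         * Try each candidate image in order (newest first).  If one fails to
         * load, fall back to the next and stop writing to the failed directory,
         * the same way a failed write would remove it.
         */
        ImageCandidate loadFirstValidImage(List<ImageCandidate> candidates)
                throws IOException {
            IOException lastFailure = null;
            for (ImageCandidate candidate : candidates) {
                try {
                    candidate.loadImage();   // may throw on a corrupt or truncated image
                    return candidate;        // success: no need to try the rest
                } catch (IOException e) {
                    lastFailure = e;
                    candidate.reportLoadFailure();
                }
            }
            throw new IOException("could not load any FSImage", lastFailure);
        }
    }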

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

