hadoop-hdfs-dev mailing list archives

From "Todd Lipcon (Created) (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-2602) Standby needs to maintain BlockInfo while following edits
Date Tue, 29 Nov 2011 21:27:40 GMT
Standby needs to maintain BlockInfo while following edits

                 Key: HDFS-2602
                 URL: https://issues.apache.org/jira/browse/HDFS-2602
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ha
    Affects Versions: HA branch (HDFS-1623)
            Reporter: Todd Lipcon
            Assignee: Aaron T. Myers
            Priority: Critical

As described in HDFS-1975:

When we close a file, or add another block to a file, we write OP_CLOSE or OP_ADD in the txn
log. FSEditLogLoader, when it sees these types of transactions, creates new BlockInfo objects
for all of the blocks listed in the transaction. These new BlockInfos have no block locations
associated with them. So, when we close a file, the standby NameNode (SBNN) loses its block
location info for that file and is no longer "hot".

I have an ugly hack which copies the old BlockInfos over from the existing INode, but I'm
not convinced it's the right approach. It might be cleaner to add a new opcode type like OP_ADD_ADDITIONAL_BLOCK,
and to treat OP_CLOSE as just a finalization of INodeFileUnderConstruction to INodeFile,
rather than replacing the block info at all.
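To make the failure mode concrete, here is a self-contained toy model of the replay problem and the copy-over hack. The class and method names (BlockInfo, INodeFile, replayCloseNaive, replayCloseCopying) are simplified stand-ins for illustration only, not the actual HDFS internals:

```java
import java.util.*;

public class Demo {
    // A block plus the datanode locations the namenode knows about.
    static class BlockInfo {
        final long blockId;
        final Set<String> locations = new HashSet<>();
        BlockInfo(long id) { this.blockId = id; }
    }

    static class INodeFile {
        List<BlockInfo> blocks = new ArrayList<>();
    }

    // Naive replay: OP_CLOSE carries block IDs but no locations, so
    // rebuilding BlockInfos from the txn log drops the location info.
    static void replayCloseNaive(INodeFile file, long[] blockIds) {
        List<BlockInfo> fresh = new ArrayList<>();
        for (long id : blockIds) fresh.add(new BlockInfo(id));
        file.blocks = fresh;
    }

    // The copy-over hack: carry locations forward from the existing
    // BlockInfos on the INode before replacing them.
    static void replayCloseCopying(INodeFile file, long[] blockIds) {
        Map<Long, BlockInfo> old = new HashMap<>();
        for (BlockInfo b : file.blocks) old.put(b.blockId, b);
        List<BlockInfo> fresh = new ArrayList<>();
        for (long id : blockIds) {
            BlockInfo b = new BlockInfo(id);
            BlockInfo prev = old.get(id);
            if (prev != null) b.locations.addAll(prev.locations);
            fresh.add(b);
        }
        file.blocks = fresh;
    }

    public static void main(String[] args) {
        INodeFile f = new INodeFile();
        BlockInfo b = new BlockInfo(1L);
        b.locations.add("dn1");
        f.blocks.add(b);
        replayCloseNaive(f, new long[]{1L});
        System.out.println("naive: " + f.blocks.get(0).locations.size());

        INodeFile g = new INodeFile();
        BlockInfo c = new BlockInfo(1L);
        c.locations.add("dn1");
        g.blocks.add(c);
        replayCloseCopying(g, new long[]{1L});
        System.out.println("copying: " + g.blocks.get(0).locations.size());
    }
}
```

With the naive replay the replacement BlockInfo comes back with zero locations; with the copy-over the "dn1" location survives, which is the difference between a cold and a hot standby.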

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

