hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-2602) NN should log newly-allocated blocks without losing BlockInfo
Date Thu, 15 Dec 2011 22:55:31 GMT

    [ https://issues.apache.org/jira/browse/HDFS-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13170555#comment-13170555 ]

Todd Lipcon commented on HDFS-2602:

Also TestEditLog fails, since it logs alternating OP_ADD and OP_CLOSE for the same file. I
don't know whether it's an unrealistic test or an actual issue -- but I think what's happening
is this:
- OP_ADD creates a new INodeFileUnderConstruction
- OP_CLOSE converts it to INodeFile
- OP_ADD sees an already-existing file, and just does updateBlocks without converting back
to INodeFileUnderConstruction
- OP_CLOSE fails because it's trying to close a non-underconstruction file

Doesn't this happen in the append() case?
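The four-step sequence above can be sketched as a toy model. All class and method names here are illustrative stand-ins, not the actual FSEditLogLoader API; the point is just the state-machine bug: a repeated OP_ADD on an existing file updates blocks without converting the file back to under-construction, so the following OP_CLOSE fails.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the edit-log replay sequence described above.
// Names are illustrative; the real logic lives in FSEditLogLoader.
public class ReplayModel {
    // value true = file is under construction, false = finalized
    static final Map<String, Boolean> namespace = new HashMap<>();

    // OP_ADD: create a new under-construction file, or -- if the path
    // already exists -- just do updateBlocks without converting it
    // back to under-construction (the suspected bug).
    static void applyOpAdd(String path) {
        if (!namespace.containsKey(path)) {
            namespace.put(path, true); // new INodeFileUnderConstruction
        }
        // existing file: blocks updated, UC flag left unchanged
    }

    // OP_CLOSE: finalize the file; fails if it is not under construction.
    static void applyOpClose(String path) {
        if (!namespace.get(path)) {
            throw new IllegalStateException("OP_CLOSE on non-UC file: " + path);
        }
        namespace.put(path, false); // converted to INodeFile
    }

    public static void main(String[] args) {
        applyOpAdd("/f");   // OP_ADD: creates UC file
        applyOpClose("/f"); // OP_CLOSE: converts to INodeFile
        applyOpAdd("/f");   // OP_ADD: updateBlocks only, file stays finalized
        boolean secondCloseFailed = false;
        try {
            applyOpClose("/f"); // OP_CLOSE: file is no longer UC
        } catch (IllegalStateException e) {
            secondCloseFailed = true;
        }
        System.out.println("second OP_CLOSE failed: " + secondCloseFailed);
    }
}
```

Running this prints `second OP_CLOSE failed: true`, which matches the TestEditLog failure mode described above.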
> NN should log newly-allocated blocks without losing BlockInfo
> -------------------------------------------------------------
>                 Key: HDFS-2602
>                 URL: https://issues.apache.org/jira/browse/HDFS-2602
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ha
>    Affects Versions: HA branch (HDFS-1623)
>            Reporter: Todd Lipcon
>            Assignee: Aaron T. Myers
>            Priority: Critical
>         Attachments: HDFS-2602.patch, HDFS-2602.patch, HDFS-2602.patch
> Without the patch in HDFS-1108, new block allocations aren't logged to the edits log.
For HA, we'll need that functionality and we'll need to make sure that block locations aren't
blown away in the Standby NN when tailing the edits log.
> As described in HDFS-1975:
> When we close a file, or add another block to a file, we write OP_CLOSE or OP_ADD in
the txn log. FSEditLogLoader, when it sees these types of transactions, creates new BlockInfo
objects for all of the blocks listed in the transaction. These new BlockInfos have no block
locations associated. So, when we close a file, the SBNN loses its block locations info for
that file and is no longer "hot".
> I have an ugly hack which copies over the old BlockInfos from the existing INode, but
I'm not convinced it's the right way. It might be cleaner to add new opcode types like OP_ADD_ADDITIONAL_BLOCK,
and actually treat OP_CLOSE as just a finalization of INodeFileUnderConstruction to INodeFile,
rather than replacing block info at all.
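The location-loss problem and the copy-over hack described in the quoted text can be sketched as follows. The class shapes here are simplified stand-ins for BlockInfo and the INode's block list, not the real HDFS types: replay builds fresh BlockInfo objects with empty location sets, and the hack copies locations over from the existing INode's blocks by block id.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class BlockReplaySketch {
    // Illustrative stand-in for BlockInfo: a block id plus the set of
    // datanodes known to hold a replica.
    static class BlockInfo {
        final long id;
        final Set<String> locations = new HashSet<>();
        BlockInfo(long id) { this.id = id; }
    }

    // What the loader does today on OP_CLOSE/OP_ADD: build brand-new
    // BlockInfo objects from the transaction. Their location sets start
    // out empty, so the standby NN forgets where the replicas are.
    static List<BlockInfo> replayClose(long[] blockIds) {
        List<BlockInfo> fresh = new ArrayList<>();
        for (long id : blockIds) fresh.add(new BlockInfo(id));
        return fresh;
    }

    // The "ugly hack": copy locations over from the existing INode's
    // blocks so replay doesn't blow them away.
    static void copyLocations(List<BlockInfo> oldBlocks, List<BlockInfo> newBlocks) {
        Map<Long, BlockInfo> byId = new HashMap<>();
        for (BlockInfo b : oldBlocks) byId.put(b.id, b);
        for (BlockInfo b : newBlocks) {
            BlockInfo old = byId.get(b.id);
            if (old != null) b.locations.addAll(old.locations);
        }
    }

    public static void main(String[] args) {
        BlockInfo existing = new BlockInfo(1L);
        existing.locations.add("dn1"); // SBNN already knows a replica location
        List<BlockInfo> fresh = replayClose(new long[]{1L});
        System.out.println("after replay: " + fresh.get(0).locations); // []
        copyLocations(Arrays.asList(existing), fresh);
        System.out.println("after copy:   " + fresh.get(0).locations); // [dn1]
    }
}
```

The alternative floated above -- an OP_ADD_ADDITIONAL_BLOCK opcode plus treating OP_CLOSE as a pure finalization -- would avoid the copy step entirely, since the existing BlockInfo objects would never be replaced during replay.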

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

