hadoop-common-dev mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2148) Inefficient FSDataset.getBlockFile()
Date Tue, 18 Mar 2008 23:16:24 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12580166#action_12580166 ]

Hairong Kuang commented on HADOOP-2148:
---------------------------------------

+1 This patch looks good. It removes a duplicate blockMap lookup.

> Inefficient FSDataset.getBlockFile()
> ------------------------------------
>
>                 Key: HADOOP-2148
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2148
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>    Affects Versions: 0.14.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>             Fix For: 0.17.0
>
>         Attachments: getBlockFile.patch, getBlockFile1.patch
>
>
> FSDataset.getBlockFile() first verifies that the block is valid and then returns the file name corresponding to the block.
> In doing so it performs the data-node blockMap lookup twice, while only one lookup is needed here.
> This is important since the data-node blockMap is big.
> Another observation is that data-nodes do not need the blockMap at all: file names can be derived from the block IDs, so there is no need to hold the Block-to-File mapping in memory.
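
For illustration, here is a minimal sketch of the single-lookup shape described above, plus the block-ID-to-file-name derivation mentioned as a further observation. The class, field, and method names and the simplified directory layout are assumptions made for the example, not the actual org.apache.hadoop.dfs.FSDataset source.

    import java.io.File;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch only; names and layout are assumptions,
    // not the real FSDataset implementation.
    class BlockFileLookupSketch {
      private final Map<Long, File> blockMap = new HashMap<Long, File>();

      // Single blockMap lookup: the validity check reuses the value already
      // fetched, instead of a separate isValidBlock() call that looks it up again.
      public synchronized File getBlockFile(long blockId) throws IOException {
        File f = blockMap.get(blockId);
        if (f == null || !f.exists()) {
          throw new IOException("Block " + blockId + " is not valid.");
        }
        return f;
      }

      // Second observation: the file name can be derived from the block ID alone,
      // so no in-memory Block-to-File map is strictly required. The "blk_<id>"
      // naming and flat directory layout here are simplified for illustration.
      public File deriveBlockFile(File dataDir, long blockId) {
        return new File(dataDir, "blk_" + blockId);
      }
    }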

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

