hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-2148) Inefficient FSDataset.getBlockFile()
Date Tue, 18 Mar 2008 01:30:25 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-2148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko updated HADOOP-2148:
----------------------------------------

    Attachment: getBlockFile1.patch

This fixes the findBugs warnings.
I could not reproduce the test timeout in TestDFSStorageStateRecovery.
This test has a lot of test cases; my suspicion is that if Hudson runs slowly, it could run out of time on this.

> Inefficient FSDataset.getBlockFile()
> ------------------------------------
>
>                 Key: HADOOP-2148
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2148
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>    Affects Versions: 0.14.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>             Fix For: 0.17.0
>
>         Attachments: getBlockFile.patch, getBlockFile1.patch
>
>
> FSDataset.getBlockFile() first verifies that the block is valid and then returns the file name corresponding to the block.
> In doing that it performs the data-node blockMap lookup twice, while only one lookup is needed here.
> This is important since the data-node blockMap is big.
> Another observation is that data-nodes do not need the blockMap at all: file names can be derived from the block IDs,
> there is no need to hold a Block-to-File mapping in memory.
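
The description above points at a simple pattern. Below is a minimal before/after sketch in plain Java; the class name, the Long-keyed map, and the method names are illustrative assumptions for this example, not the actual FSDataset code:

    import java.io.File;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    class BlockFileLookup {
        // Hypothetical stand-in for the data-node blockMap (block ID -> file).
        private final Map<Long, File> blockMap = new HashMap<Long, File>();

        // Before: two map lookups -- one to validate, one to fetch.
        File getBlockFileTwoLookups(long blockId) throws IOException {
            if (!blockMap.containsKey(blockId))      // lookup #1
                throw new IOException("Block " + blockId + " is not valid.");
            return blockMap.get(blockId);            // lookup #2
        }

        // After: a single lookup; a null result means the block is invalid.
        File getBlockFile(long blockId) throws IOException {
            File f = blockMap.get(blockId);          // the only lookup
            if (f == null)
                throw new IOException("Block " + blockId + " is not valid.");
            return f;
        }
    }

The second observation -- that the map could be dropped entirely -- rests on the block file name being a pure function of the block ID (HDFS names block files "blk_<id>"). A sketch, ignoring the subdirectory layout real data-nodes use to spread blocks across directories:

    // blockFileFor is a hypothetical helper, not an FSDataset method.
    File blockFileFor(File dataDir, long blockId) {
        return new File(dataDir, "blk_" + blockId);
    }
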

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

