hadoop-hdfs-issues mailing list archives

From "Vinayakumar B (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy
Date Wed, 14 Oct 2015 09:23:06 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14956537#comment-14956537 ]

Vinayakumar B commented on HDFS-8647:

Thanks [~brahmareddy] for the patch.
Thanks [~mingma] for the review. Those are good points.
bq. The existing blockHasEnoughRacksStriped compares getRealDataBlockNum (# of data blocks)
with the number of racks. But after the refactoring, it compares getRealTotalBlockNum (# of
total blocks) with the number of racks.
Yes, you are right. It should use getRealDataBlockNum as minRacks. I found this was discussed
in HDFS-7613.
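The fix being discussed can be sketched in a few lines. This is a simplified illustration (the class and method below are hypothetical stand-ins, not the actual BlockManager code; getRealDataBlockNum/getRealTotalBlockNum are the accessors named in the discussion): for a striped block, the rack count should be compared against the number of *data* blocks, not the total (data + parity) count.

```java
// Hypothetical sketch of the corrected blockHasEnoughRacksStriped logic.
class StripedRackCheck {
    /**
     * A striped block has enough racks when the number of distinct racks
     * holding its internal blocks reaches the number of DATA blocks
     * (getRealDataBlockNum), not the total block count (getRealTotalBlockNum).
     */
    static boolean hasEnoughRacks(int distinctRacks, int realDataBlockNum) {
        int minRacks = realDataBlockNum; // the fix: minRacks = # of data blocks
        return distinctRacks >= minRacks;
    }
}
```

For RS(6,3) (6 data + 3 parity blocks), comparing against the data-block count means 6 racks suffice, whereas comparing against the total would demand 9 racks.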

bq. The current patch doesn't apply to branch-2. If you agree with the above changes, could
you check whether it applies to branch-2? If it doesn't, you will need to provide a separate
patch for branch-2 later.
Yes, of course it will not apply. Instead of providing a separate branch-2 patch, which might
create more conflicts when merging EC to branch-2, how about waiting until EC is committed to
branch-2? After that, a simple cherry-pick might work.

bq. A general question about striped EC. It uses "# of racks >= # of data blocks" to check
if a given block has enough racks. But what if "# of racks for the whole cluster < # of
data blocks"? Say we use RS(6,3) and the cluster has 5 racks. The write operation will spread
the 9 blocks to 5 racks and succeed. But will it then fail the "enough racks" check later in
BM? That has nothing to do with the refactoring work here; I just want to bring it up in case
others can chime in.
You are right. The check will fail, which means the block will not be removed from
neededReplications. The "# of racks >= # of data blocks" requirement is there to ensure that
a rack-wide failure does not cause data loss for an EC'ed file.
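The durability argument above can be made concrete with a little arithmetic. This is a simplified sketch (assuming the placement policy spreads blocks as evenly as possible and exactly one rack fails entirely; the names are invented for illustration, not HDFS code): with RS(6,3), any 6 of the 9 internal blocks suffice to reconstruct the data, so the block survives a rack failure as long as no single rack holds more than 3 of its blocks.

```java
// Arithmetic sketch of EC durability under a whole-rack failure.
class EcRackLossSketch {
    /** Blocks lost when the fullest rack fails, assuming an even spread. */
    static int maxBlocksLostPerRack(int totalBlocks, int racks) {
        return (totalBlocks + racks - 1) / racks; // ceil(total / racks)
    }

    /** True if the striped block is still decodable after losing one rack. */
    static boolean survivesRackFailure(int dataBlocks, int parityBlocks, int racks) {
        int total = dataBlocks + parityBlocks;
        int survivors = total - maxBlocksLostPerRack(total, racks);
        return survivors >= dataBlocks; // RS can decode from any `dataBlocks` of `total`
    }
}
```

With RS(6,3) on 6 racks, at most ceil(9/6) = 2 blocks share a rack, so a rack failure leaves 7 >= 6 blocks and the data is recoverable.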

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -------------------------------------------------------------
>                 Key: HDFS-8647
>                 URL: https://issues.apache.org/jira/browse/HDFS-8647
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Ming Ma
>            Assignee: Brahma Reddy Battula
>         Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, HDFS-8647-003.patch, HDFS-8647-004.patch,
HDFS-8647-004.patch, HDFS-8647-005.patch
> Sometimes we want the namenode to use an alternative block placement policy, such as the
upgrade domains in HDFS-7541.
> BlockManager has built-in assumptions about rack policy in functions such as useDelHint and
blockHasEnoughRacks. That means that when we add a new block placement policy, we need to modify
BlockManager to account for it. Ideally, BlockManager should ask the BlockPlacementPolicy
object instead. That would allow us to provide a new BlockPlacementPolicy without changing BlockManager.
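The abstraction the description proposes could look roughly like this. All names below are hypothetical stand-ins for illustration (not the actual HDFS-8647 patch or the real BlockPlacementPolicy API): BlockManager holds a policy object and forwards rack-policy questions to it, so a new policy needs no BlockManager changes.

```java
// Hypothetical sketch of the proposed delegation, with invented names.
interface PlacementPolicy {
    /** Policy-specific answer to "does this block have enough racks?". */
    boolean hasEnoughRacks(int distinctRacks, int requiredRacks);
}

class RackAwarePolicy implements PlacementPolicy {
    public boolean hasEnoughRacks(int distinctRacks, int requiredRacks) {
        return distinctRacks >= requiredRacks;
    }
}

class BlockManagerSketch {
    private final PlacementPolicy policy;

    BlockManagerSketch(PlacementPolicy policy) {
        this.policy = policy;
    }

    // BlockManager no longer embeds rack logic; it asks the policy object,
    // so swapping in e.g. an upgrade-domain policy requires no change here.
    boolean blockHasEnoughRacks(int distinctRacks, int requiredRacks) {
        return policy.hasEnoughRacks(distinctRacks, requiredRacks);
    }
}
```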

This message was sent by Atlassian JIRA
