hadoop-hdfs-issues mailing list archives

From "Rakesh R (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11125) [SPS]: Use smaller batches of BlockMovingInfo into the block storage movement command
Date Fri, 23 Jun 2017 10:10:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16060681#comment-16060681 ]

Rakesh R commented on HDFS-11125:

[~ehiggs] Thanks for the heads up.

As mentioned in the jira description, the idea here is to create multiple batches of block
movement items for a file and send them over the C-DN's heartbeat responses one by one,
sequentially. This reduces the network overhead of transferring all of a file's block moving
items in a single heartbeat response. Sure, we will maintain the current semantics of {{trackID
Vs list of blocks on a file}}; the plan is not to combine blocks of different files under one {{trackID}}.

For example, if a file has 1000 blocks, we can split them into 5 batches of 200 block movements
each and then send the batches to the C-DN one by one, sequentially.
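The batching arithmetic above can be sketched as a small partition helper. This is only an illustrative sketch; {{BlockMovementBatcher}} and its method names are hypothetical, not actual SPS classes:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: split a file's block-moving items into fixed-size
// batches, to be sent over successive C-DN heartbeat responses.
public class BlockMovementBatcher {

    // Hypothetical default for the proposed configurable batch size,
    // dfs.storage.policy.satisfier.block.movements.batch.size.
    static final int DEFAULT_BATCH_SIZE = 200;

    // Partition the items list into batches of at most batchSize elements,
    // preserving order so batches can be sent sequentially.
    static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                items.subList(i, Math.min(i + batchSize, items.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        // 1000 blocks split into 5 batches of 200, as in the example above.
        List<Integer> blocks = new ArrayList<>();
        for (int b = 0; b < 1000; b++) {
            blocks.add(b);
        }
        List<List<Integer>> batches = partition(blocks, DEFAULT_BATCH_SIZE);
        System.out.println(batches.size() + " batches of "
            + batches.get(0).size()); // prints: 5 batches of 200
    }
}
```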

We have kept this as a low-priority task for now. I don't have concrete logic yet, but one
approach could be: make the batch size configurable via {{dfs.storage.policy.satisfier.block.movements.batch.size}}
and add attributes to the {{org.apache.hadoop.hdfs.server.namenode.BlockStorageMovementInfosBatch}}
object to define the batch sequence numbers, etc. On the other side, the C-DN will report the
status to the SPS once all the batches have finished moving the given set of blocks.
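One way the sequence-number attributes could look is sketched below. All field and class names here are hypothetical stand-ins, not the actual {{BlockStorageMovementInfosBatch}} API:

```java
// Hypothetical sketch of batch-sequencing attributes; names are
// illustrative, not the actual BlockStorageMovementInfosBatch fields.
public class BatchSequenceSketch {

    static class InfosBatch {
        final long trackId;      // one trackID per file, as today
        final int seqNo;         // position of this batch in the sequence
        final int totalBatches;  // batches expected for this trackID

        InfosBatch(long trackId, int seqNo, int totalBatches) {
            this.trackId = trackId;
            this.seqNo = seqNo;
            this.totalBatches = totalBatches;
        }

        // The C-DN would report status to the SPS only after the final
        // batch for the trackID has finished moving its blocks.
        boolean isLastBatch() {
            return seqNo == totalBatches - 1;
        }
    }

    public static void main(String[] args) {
        // 5 batches for trackID 42: only seqNo 4 is the last one.
        InfosBatch last = new InfosBatch(42L, 4, 5);
        System.out.println(last.isLastBatch()); // prints: true
    }
}
```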

> [SPS]: Use smaller batches of BlockMovingInfo into the block storage movement command
> -------------------------------------------------------------------------------------
>                 Key: HDFS-11125
>                 URL: https://issues.apache.org/jira/browse/HDFS-11125
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, namenode
>            Reporter: Rakesh R
>            Assignee: Rakesh R
> This is a follow-up task of HDFS-11068, which sends all the blocks under a trackID over
> a single heartbeat response (the DNA_BLOCK_STORAGE_MOVEMENT command). If there are many
> blocks under a given trackID (for example, a file contains many blocks), those requests
> travel across the network with a lot of overhead. In this jira, we will discuss and
> implement a mechanism to limit the list of items within a trackID to smaller batches.

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
