hadoop-hdfs-issues mailing list archives

From "Lei (Eddy) Xu (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-12044) Mismatch between BlockManager#maxReplicatioStreams and ErasureCodingWorker.stripedReconstructionPool pool size causes slow and burst recovery.
Date Tue, 27 Jun 2017 22:13:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-12044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Lei (Eddy) Xu updated HDFS-12044:
---------------------------------
    Attachment: HDFS-12044.00.patch

There are two potential approaches to this.

* Allow {{ErasureCodingWorker}} to accept an unbounded number of reconstruction tasks. The reconstruction
worker for regular replicated files already accepts an unbounded number of reconstruction tasks.

* Make {{DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY}} and {{DFS_DN_EC_RECONSTRUCTION_STRIPED_BLK_THREADS_KEY}}
the same, i.e., share one key and value.

Attaching a patch for the first approach, which is simpler.
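To illustrate the two behaviors, here is a minimal standalone sketch (not the actual {{ErasureCodingWorker}} code) using plain {{java.util.concurrent.ThreadPoolExecutor}} with the default sizes cited in this issue ({{corePoolSize=2}}, {{maxPoolSize=8}}, and an assumed queue depth of 8):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSketch {
    // A long-running task standing in for one striped reconstruction.
    static Runnable slowTask() {
        return () -> {
            try { Thread.sleep(5000); } catch (InterruptedException ignored) { }
        };
    }

    // Bounded pool mirroring the defaults cited in the issue:
    // corePoolSize=2, maxPoolSize=8, bounded work queue of 8.
    // Returns how many of n submissions were rejected.
    static int submitToBounded(int n) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(2, 8,
                60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(8));
        int rejected = 0;
        for (int i = 0; i < n; i++) {
            try { pool.execute(slowTask()); }
            catch (RejectedExecutionException e) { rejected++; }
        }
        pool.shutdownNow();
        return rejected;
    }

    // The proposed alternative: an unbounded queue never rejects, matching
    // the replicated-file reconstruction path. Returns the queue depth
    // after n submissions (core threads take the first two tasks).
    static int submitToUnbounded(int n) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(2, 2,
                60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        for (int i = 0; i < n; i++) pool.execute(slowTask());
        int queued = pool.getQueue().size();
        pool.shutdownNow();
        return queued;
    }

    public static void main(String[] args) {
        // 20 submissions ~ maxReplicationStream=20 from the description.
        System.out.println("bounded pool rejected: " + submitToBounded(20));   // 4
        System.out.println("unbounded pool queued: " + submitToUnbounded(20)); // 18
    }
}
```

With 20 submissions, the bounded pool runs 8, queues 8, and rejects the rest, while the unbounded variant keeps everything queued behind the 2 core threads, so nothing is thrown away between heartbeats.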

> Mismatch between BlockManager#maxReplicatioStreams and ErasureCodingWorker.stripedReconstructionPool
pool size causes slow and burst recovery. 
> -----------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-12044
>                 URL: https://issues.apache.org/jira/browse/HDFS-12044
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: erasure-coding
>    Affects Versions: 3.0.0-alpha3
>            Reporter: Lei (Eddy) Xu
>            Assignee: Lei (Eddy) Xu
>         Attachments: HDFS-12044.00.patch
>
>
> {{ErasureCodingWorker#stripedReconstructionPool}} defaults to {{corePoolSize=2}} and {{maxPoolSize=8}},
and it rejects further tasks once its queue is full.
> The problem arises when {{BlockManager#maxReplicationStream}} is larger than {{ErasureCodingWorker#stripedReconstructionPool}}'s
pool size, for example {{maxReplicationStream=20}} with {{corePoolSize=2, maxPoolSize=8}}. On each heartbeat,
the NN sends up to {{maxTransfer}} reconstruction tasks to the DN, calculated
in {{FSNamesystem}}:
> {code}
> final int maxTransfer = blockManager.getMaxReplicationStreams() - xmitsInProgress;
> {code}
> However, at any given time, {{ErasureCodingWorker#stripedReconstructionPool}} accounts for
only 2 {{xmitsInProgress}}. So on each 3s heartbeat, the NN sends about {{20 - 2 = 18}} reconstruction
tasks to the DN, and the DN throws most of them away if 8 tasks are already in the queue.
The NN then takes longer to re-detect these blocks as under-replicated and schedule new
tasks.
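To make the arithmetic in the description concrete, here is a small sketch. The {{maxTransfer}} method mirrors the {{FSNamesystem}} line quoted above; the DN capacity of 16 is an assumption for illustration (the 8-thread pool plus an 8-deep queue):

```java
public class MaxTransferSketch {
    // Mirrors the FSNamesystem calculation quoted in the description:
    // maxTransfer = maxReplicationStreams - xmitsInProgress
    static int maxTransfer(int maxReplicationStreams, int xmitsInProgress) {
        return maxReplicationStreams - xmitsInProgress;
    }

    public static void main(String[] args) {
        int sent = maxTransfer(20, 2);  // NN schedules 18 tasks per 3s heartbeat
        int dnCapacity = 8 + 8;         // assumed: maxPoolSize=8 plus a queue of 8
        int dropped = Math.max(0, sent - dnCapacity);
        System.out.println("sent=" + sent + ", dropped at DN >= " + dropped);
    }
}
```

Even under this generous capacity assumption, the DN drops tasks every heartbeat, and the gap widens with a smaller effective pool.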



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

