hadoop-hdfs-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12044) Mismatch between BlockManager#maxReplicationStreams and ErasureCodingWorker.stripedReconstructionPool pool size causes slow and bursty recovery
Date Fri, 28 Jul 2017 18:16:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16105422#comment-16105422
] 

Hudson commented on HDFS-12044:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12068 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12068/])
HDFS-12044. Mismatch between BlockManager.maxReplicationStreams and (lei: rev 77791e4c36ddc9305306c83806bf486d4d32575d)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedBlockReconstructor.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedReader.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedReconstructor.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReconstructStripedFile.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedReconstructionInfo.java


> Mismatch between BlockManager#maxReplicationStreams and ErasureCodingWorker.stripedReconstructionPool pool size causes slow and bursty recovery
> -----------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-12044
>                 URL: https://issues.apache.org/jira/browse/HDFS-12044
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: erasure-coding
>    Affects Versions: 3.0.0-alpha3
>            Reporter: Lei (Eddy) Xu
>            Assignee: Lei (Eddy) Xu
>              Labels: hdfs-ec-3.0-must-do
>             Fix For: 3.0.0-beta1
>
>         Attachments: HDFS-12044.00.patch, HDFS-12044.01.patch, HDFS-12044.02.patch, HDFS-12044.03.patch, HDFS-12044.04.patch, HDFS-12044.05.patch
>
>
> {{ErasureCodingWorker#stripedReconstructionPool}} defaults to {{corePoolSize=2}} and {{maxPoolSize=8}}, and it rejects additional tasks once its queue is full.
> A problem arises when {{BlockManager#maxReplicationStreams}} is larger than the pool size of {{ErasureCodingWorker#stripedReconstructionPool}}, for example {{maxReplicationStreams=20}} versus {{corePoolSize=2, maxPoolSize=8}}. On each heartbeat, the NN sends up to {{maxTransfer}} reconstruction tasks to the DN, where {{maxTransfer}} is calculated in {{FSNamesystem}}:
> {code}
> final int maxTransfer = blockManager.getMaxReplicationStreams() - xmitsInProgress;
> {code}
> However, at any given time, {{ErasureCodingWorker#stripedReconstructionPool}} counts for only 2 toward {{xmitsInProgress}}. So on each 3-second heartbeat the NN sends about {{20 - 2 = 18}} reconstruction tasks to the DN, and the DN throws most of them away whenever 8 tasks are already queued. The NN then needs extra time to re-detect that those blocks are under-replicated before it can schedule new tasks.
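The mismatch described above can be reproduced with a small, self-contained sketch. This is hypothetical illustration code, not HDFS code: the pool mirrors the reported defaults (core=2, max=8), but uses a {{SynchronousQueue}} as a simplification of the real bounded queue, so every task beyond the 8 pool threads is rejected immediately.

```java
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolMismatchSketch {
    // Submit `submitted` long-running tasks to a pool shaped like the
    // reported stripedReconstructionPool defaults (core=2, max=8) and
    // count how many get rejected -- i.e., how many the DN would drop.
    static int countRejected(int submitted) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 8, 60, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
        int rejected = 0;
        for (int i = 0; i < submitted; i++) {
            try {
                // Long-running task stands in for one striped reconstruction.
                pool.execute(() -> {
                    try { Thread.sleep(2000); } catch (InterruptedException ignored) {}
                });
            } catch (RejectedExecutionException e) {
                rejected++;  // the DN would throw this task away
            }
        }
        pool.shutdownNow();
        return rejected;
    }

    public static void main(String[] args) {
        int maxReplicationStreams = 20;  // NN-side limit from the report
        int xmitsInProgress = 2;         // what the saturated pool reports
        int maxTransfer = maxReplicationStreams - xmitsInProgress;
        System.out.println(maxTransfer + " tasks sent per heartbeat, "
                + countRejected(maxTransfer) + " rejected");
    }
}
```

With these numbers, 18 tasks arrive per heartbeat but only 8 can run, so 10 are dropped and must wait for the NN to reschedule them, which matches the slow, bursty recovery behavior reported.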



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

