hadoop-hdfs-issues mailing list archives

From "Hadoop QA (Jira)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-15443) Setting dfs.datanode.max.transfer.threads to a very small value can cause strange failure.
Date Sat, 25 Jul 2020 01:12:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17164707#comment-17164707
] 

Hadoop QA commented on HDFS-15443:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  0s{color} | {color:blue}
Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 26s{color} | {color:red}
Docker failed to build yetus/hadoop:cce5a6f6094. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-15443 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13008381/HDFS-15443.003.patch
|
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/29557/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |


This message was automatically generated.



> Setting dfs.datanode.max.transfer.threads to a very small value can cause strange failure.
> ------------------------------------------------------------------------------------------
>
>                 Key: HDFS-15443
>                 URL: https://issues.apache.org/jira/browse/HDFS-15443
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: AMC-team
>            Priority: Major
>         Attachments: HDFS-15443.000.patch, HDFS-15443.001.patch, HDFS-15443.002.patch,
HDFS-15443.003.patch
>
>
> Configuration parameter dfs.datanode.max.transfer.threads specifies the maximum
> number of threads to use for transferring data in and out of the DataNode. This is a
> vital parameter that needs to be tuned carefully.
> {code:java}
> // DataXceiverServer.java
> // Make sure the xceiver count is not exceeded
> int curXceiverCount = datanode.getXceiverCount();
> if (curXceiverCount > maxXceiverCount) {
>   throw new IOException("Xceiver count " + curXceiverCount
>       + " exceeds the limit of concurrent xceivers: "
>       + maxXceiverCount);
> }
> {code}
> Many issues have been caused by setting this parameter to an inappropriate value,
> yet there is no check code that restricts it. Although a hard-and-fast rule is
> difficult because it depends on the number of cores, the amount of main memory, etc.,
> *we can prevent users from accidentally setting this value to a clearly wrong one*
> (e.g. a negative value, which completely breaks the availability of the DataNode).
> *How to fix:*
> Add proper check code for the parameter.
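One possible shape for such a startup-time check, sketched here for illustration only: this is not the attached patch, and the class and helper names below (`TransferThreadsCheck`, `validateMaxTransferThreads`) are hypothetical, though the configuration key is the real one from the issue.

```java
// Hypothetical sketch of a sanity check for dfs.datanode.max.transfer.threads.
// Not the actual HDFS-15443 patch; illustrates the "fail fast on a clearly
// wrong value" idea from the issue description.
public class TransferThreadsCheck {
    static final String KEY = "dfs.datanode.max.transfer.threads";

    // Reject non-positive values at startup instead of letting the DataNode
    // come up and then refuse every xceiver with an IOException.
    static int validateMaxTransferThreads(int configured) {
        if (configured <= 0) {
            throw new IllegalArgumentException(
                "Invalid value " + configured + " for " + KEY
                + ": must be a positive integer");
        }
        return configured;
    }
}
```

A check like this could run where the DataNode reads its configuration, so a negative or zero value is reported once, with the offending key named, rather than surfacing later as per-transfer failures.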



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

