hadoop-hdfs-issues mailing list archives

From "Abhishek Sakhuja (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HDFS-13139) Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails
Date Mon, 19 Feb 2018 05:33:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368797#comment-16368797
] 

Abhishek Sakhuja edited comment on HDFS-13139 at 2/19/18 5:32 AM:
------------------------------------------------------------------

The HDP default configuration computed the non-HDFS reserved storage, "dfs.datanode.du.reserved"
(approx. 3.5 % of total disk), from the lowest-storage datanode among the compute config
groups, which had three drives, one of them in the TB range. Our default datanode data
directory, "dfs.datanode.data.dir", pointed to the drive with the lowest capacity (around
3 % of overall DN storage). Because 3 % < 3.5 %, the reported HDFS capacity became 0 %, and
the few KB of supporting directories and files already on the datanode pushed its reported
capacity negative. To fix the downscaling issue, we either need to lower the non-HDFS
reserved capacity (below 3 %) or point the datanode at a disk with higher capacity (above
3.5 %).
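The arithmetic behind the failure can be sketched as follows. This is a simplified model of how a DataNode derives its DFS capacity (capacity of the data-dir volume minus `dfs.datanode.du.reserved`, clamped at zero); the disk sizes are hypothetical and only mirror the 3 % vs. 3.5 % ratios described above.

```python
TB = 1024 ** 4

disk_total = 1 * TB                          # total disk on the node (one drive in TBs)
reserved = int(disk_total * 0.035)           # dfs.datanode.du.reserved ~ 3.5 % of total disk
data_dir_capacity = int(disk_total * 0.03)   # dfs.datanode.data.dir drive ~ 3 % of total

# Simplified capacity calculation: volume size minus the non-HDFS reserve,
# clamped to zero when the reserve exceeds the volume.
raw_capacity = data_dir_capacity - reserved
dfs_capacity = max(raw_capacity, 0)

print(raw_capacity)   # negative: the reserve is larger than the whole data-dir volume
print(dfs_capacity)   # 0 -> "Trying to move ... to nodes with '0' bytes of capacity"
```

With zero capacity, any existing usage (even a few KB of supporting files) shows up as usage against a 0-byte volume, which is how the node ends up reporting a negative or zero remaining capacity.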


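The two fixes above can be sketched as hdfs-site.xml overrides. Note that `dfs.datanode.du.reserved` takes an absolute byte count, not a percentage; the value and the path below are illustrative assumptions, not recommendations.

```xml
<!-- Option 1: lower the non-HDFS reserve below the data dir volume's size -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value> <!-- e.g. 10 GB instead of ~3.5 % of total disk -->
</property>

<!-- Option 2: point the DataNode at the larger drive (hypothetical mount path) -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/mnt/bigdisk/hdfs/data</value>
</property>
```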

> Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails
> ------------------------------------------------------------------------------------------
>
>                 Key: HDFS-13139
>                 URL: https://issues.apache.org/jira/browse/HDFS-13139
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, fs/azure, hdfs
>    Affects Versions: 2.7.3
>            Reporter: Abhishek Sakhuja
>            Priority: Major
>              Labels: azureblob, cloudbreak, datanode, decommission, hdfs
>
> We configured Azure WASB storage as the default HDFS filesystem, which means the local
Hadoop HDFS capacity is 0. Default replication is 1, but when I try to decommission a node,
the datanode attempts to rebalance some 28 KB of data to another available datanode. However,
our HDFS has 0 capacity, so decommissioning fails with the error below:
> {code:java}
> New node(s) could not be removed from the cluster. Reason Trying to move '28672' bytes worth of data to nodes with '0' bytes of capacity is not allowed{code}
> The cluster metrics show that the default local HDFS still holds a few KB that get
rebalanced, while the available capacity is 0:
> {code:java}
> "CapacityRemaining" : 0,
>  "CapacityTotal" : 0,
>  "CapacityUsed" : 131072,
>  "DeadNodes" : "{}",
>  "DecomNodes" : "{}",
>  "HeapMemoryMax" : 1060372480,
>  "HeapMemoryUsed" : 147668152,
>  "NonDfsUsedSpace" : 0,
>  "NonHeapMemoryMax" : -1,
>  "NonHeapMemoryUsed" : 75319744,
>  "PercentRemaining" : 0.0,
>  "PercentUsed" : 100.0,
>  "Safemode" : "",
>  "StartTime" : 1518241019502,
>  "TotalFiles" : 1,
>  "UpgradeFinalized" : true,{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

