hadoop-hdfs-issues mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-5926) documentation should clarify dfs.datanode.du.reserved wrt reserved disk capacity
Date Sun, 14 Sep 2014 18:15:34 GMT

     [ https://issues.apache.org/jira/browse/HDFS-5926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HDFS-5926:
    Labels: newbie  (was: )

> documentation should clarify dfs.datanode.du.reserved wrt reserved disk capacity
> ---------------------------------------------------------------------------------
>                 Key: HDFS-5926
>                 URL: https://issues.apache.org/jira/browse/HDFS-5926
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: documentation
>    Affects Versions: 0.20.2
>            Reporter: Alexander Fahlke
>            Priority: Minor
>              Labels: newbie
> I'm using hadoop-0.20.2 on Debian Squeeze and ran into the same confusion as many others
with the parameter dfs.datanode.du.reserved. One day some datanodes hit out-of-disk errors
although there was space left on the disks.
> The following values are rounded to make the problem clearer:
> - the disk for the DFS data has 1000GB and only one partition (ext3) for DFS data
> - you plan to set dfs.datanode.du.reserved to 20GB
> - the reserved-blocks-percentage set by tune2fs is 5% (the default)
> That gives all users except root 5% less usable capacity, even though df reports the full
1000GB as usable for everyone. The Hadoop daemons are not running as root.
> If I read the code right, Hadoop gets the free capacity via df.
> Starting in {{/src/hdfs/org/apache/hadoop/hdfs/server/datanode/FSDataset.java}} on line
350: {{return usage.getCapacity()-reserved;}}
> going to {{/src/core/org/apache/hadoop/fs/DF.java}} which says:
> {{"Filesystem disk space usage statistics. Uses the unix 'df' program"}}
> When you have 5% reserved by tune2fs (in our case 50GB) and give dfs.datanode.du.reserved
only 20GB, you can run into out-of-disk errors that Hadoop cannot handle.
> In this case you must add the capacity reserved by tune2fs to the planned 20GB, which results
in (at least) 70GB for dfs.datanode.du.reserved in my case.
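> As a worked example with the rounded numbers above (plain arithmetic, not actual HDFS code):
{code:java}
public class ReservedSpaceExample {
    public static void main(String[] args) {
        // All values in GB, using the rounded numbers from the description.
        long disk = 1000;                       // ext3 partition size reported by df
        long duReserved = 20;                   // dfs.datanode.du.reserved
        long tune2fsReserved = disk * 5 / 100;  // 5% reserved-blocks-percentage = 50

        long hdfsThinksUsable = disk - duReserved;       // 980: what the datanode believes it can use
        long nonRootCanWrite  = disk - tune2fsReserved;  // 950: what a non-root daemon can actually write

        System.out.println("shortfall: " + (hdfsThinksUsable - nonRootCanWrite) + " GB");               // 30
        System.out.println("reserve needed to be safe: " + (duReserved + tune2fsReserved) + " GB");     // 70
    }
}
{code}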
> Two ideas:
> # The documentation must be clear at this point to avoid this problem.
> # Hadoop could detect the space reserved by tune2fs (or other tools) and add that value
to the dfs.datanode.du.reserved parameter (see the sketch below).
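> One possible sketch for idea 2 that avoids calling tune2fs: {{java.io.File.getFreeSpace()}} counts all free bytes while {{getUsableSpace()}} counts only what the current (non-root) process may write, so their difference approximates the filesystem-level reservation. This is just an illustration, not existing Hadoop code:
{code:java}
import java.io.File;

public class EstimateFsReserve {
    public static void main(String[] args) {
        File volume = new File(args.length > 0 ? args[0] : "/data/dfs");

        long free   = volume.getFreeSpace();    // free bytes, including root-reserved blocks
        long usable = volume.getUsableSpace();  // free bytes this (non-root) JVM may actually use
        long fsReserved = free - usable;        // approximates the tune2fs reserved-blocks space

        long duReserved = 20L * 1024 * 1024 * 1024;       // configured dfs.datanode.du.reserved
        long effectiveReserve = duReserved + fsReserved;  // what the datanode would really need to subtract

        System.out.println("effective reserve: " + effectiveReserve + " bytes");
    }
}
{code}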
> This ticket is a follow up from the Mailinglist: https://mail-archives.apache.org/mod_mbox/hadoop-common-user/201312.mbox/%3CCAHodO=Kbv=13T=2Otz+S8nSOdbS1icNzqYXT_0WDfxy5gKSOSw@mail.gmail.com%3E

This message was sent by Atlassian JIRA
