hadoop-common-dev mailing list archives

From "Joydeep Sen Sarma (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2991) dfs.du.reserved not honored in 0.15/16 (regression from 0.14+patch for 2549)
Date Tue, 11 Mar 2008 16:18:46 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12577493#action_12577493 ]

Joydeep Sen Sarma commented on HADOOP-2991:

Raghu - the bug is that

DF.java: parseExecResult()
   this.capacity = Long.parseLong(tokens.nextToken()) * 1024;

is not correct. The code treats this value as 'usable' space when computing getAvailable(). But as
we are pointing out, this is *not* usable space; it is merely the raw capacity of the drive. Usable
space = this.available + this.used (both taken from the df line for the same filesystem).

(Note again: the notions of *usable* and *capacity* differ in most file systems.)
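
To illustrate the distinction, here is a minimal sketch (*not* the actual DF.java; it assumes the
usual 'df -k' column order of filesystem / 1K-blocks / used / available / use% / mount point):

   import java.util.StringTokenizer;

   // Hypothetical parser sketch: reads one data line of 'df -k' output.
   class DfLine {
     long capacity;   // raw size of the partition ('1K-blocks' * 1024)
     long used;       // bytes already in use
     long available;  // bytes a process can still write

     void parse(String line) {
       StringTokenizer tokens = new StringTokenizer(line, " \t\n\r\f%");
       tokens.nextToken();                                      // skip filesystem name
       this.capacity  = Long.parseLong(tokens.nextToken()) * 1024;
       this.used      = Long.parseLong(tokens.nextToken()) * 1024;
       this.available = Long.parseLong(tokens.nextToken()) * 1024;
     }

     // The point argued above: usable space is what df will actually let
     // you fill, not the raw capacity column.
     long usable() {
       return this.used + this.available;
     }
   }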


As a matter of philosophy (and *outside* of this bug), I completely and wholeheartedly disagree
with the notion that:

reserved space = the space reserved for non-DFS usage

and with the assumption that administrators can ever figure out non-DFS usage precisely. In any case, such usage
can differ from disk to disk (root partitions hold a lot more non-DFS stuff than other partitions),
and it would be a nightmare to start adding per-disk configuration to Hadoop. It is *completely*
counter-intuitive. It is *much* easier for admins to understand that:

reserved space = last N bytes that DFS will not use.

This is a uniform parameter that can be easily understood across all drives and lends itself to easy
planning. Normally, one images the system, installs Hadoop, and that's it. One wants to leave
some extra space for adding libraries and such, but this is typically a small amount of data.
It's very easy for an admin following this standard procedure to budget some reserve space
for this purpose that DFS will not touch.
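
In code, that rule is a one-liner. A hedged sketch, not the shipped implementation; 'dfAvailable'
here stands for the 'Available' column of df, and the method name is made up:

   // 'reserved' is simply the last N bytes that DFS will never touch,
   // no matter what non-DFS data lives on the partition.
   class ReservedSpaceSketch {
     // dfAvailable: the 'Available' column reported by df, in bytes
     // reserved:    dfs.du.reserved, in bytes
     static long availableForDfs(long dfAvailable, long reserved) {
       return Math.max(0L, dfAvailable - reserved);
     }
   }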

Please put yourself in an admin's shoes. Please!


Do you think we could take a poll on the dev/users lists on what controls admins want? 

> dfs.du.reserved not honored in 0.15/16 (regression from 0.14+patch for 2549)
> ----------------------------------------------------------------------------
>                 Key: HADOOP-2991
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2991
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.15.0, 0.15.1, 0.15.2, 0.15.3, 0.16.0
>            Reporter: Joydeep Sen Sarma
>            Priority: Critical
> Changes for https://issues.apache.org/jira/browse/HADOOP-1463
> have caused a regression. Earlier:
> - we could set dfs.du.reserved to 1G and be *sure* that 1G would not be used.
> Now this is no longer true. I am quoting Pete Wyckoff's example:
> <example>
> Let's look at an example: a 100 GB disk, with /usr using 45 GB and DFS using 50 GB.
> df -kh shows:
> Capacity = 100 GB
> Available = 1 GB (remember, ~4 GB is chopped out for metadata and such)
> Used = 95 GB
> remaining = 100 GB - 50 GB - 1 GB = 49 GB
> min(remaining, available) = 1 GB
> 98% of which is apparently usable for DFS.
> So we're at the limit, but are free to use 98% of the remaining 1 GB.
> </example>
> this is broken. Based on the discussion on HADOOP-1463, it seems the notion of 'capacity'
> as being the first field of 'df' is problematic. For example, here's what our df output looks like:
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda3             130G  123G   49M 100% /
> As you can see, 'Size' is a misnomer: that much space is not available. Rather, the actual
> usable space is 123G + 49M ~ 123G. (Not entirely sure what the discrepancy is due to, but I have
> heard this may be due to space reserved for file system metadata.) Because of this discrepancy,
> we end up in a situation where the file system is out of space.
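
To make the quoted arithmetic concrete, here is a small sketch plugging the example's numbers into
the computation as I read it from the report above (an illustration, not the literal 0.15/0.16
datanode code):

   // Values in GB for readability.
   long capacity  = 100;  // the first numeric ('Size') column of df
   long dfsUsed   = 50;   // blocks the datanode itself has written
   long available = 1;    // the 'Available' column of df
   long reserved  = 1;    // dfs.du.reserved

   long remaining = capacity - dfsUsed - reserved;   // 100 - 50 - 1 = 49 GB
   long reported  = Math.min(remaining, available);  // min(49, 1) = 1 GB
   // The partition is effectively full, yet DFS still believes it can write
   // about 1 GB more, so the configured 1 GB reserve is not honored.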

