hadoop-common-dev mailing list archives

From "Joydeep Sen Sarma (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2991) dfs.du.reserved not honored in 0.15/16 (regression from 0.14+patch for 2549)
Date Tue, 11 Mar 2008 00:26:46 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12577259#action_12577259
] 

Joydeep Sen Sarma commented on HADOOP-2991:
-------------------------------------------

Hairong - there are two issues here (please do not mix them up):

1. The Size column of the df output does not give usable space. This is true regardless of
1463 - but 1463 makes it worse, because earlier the 'capacity' field never really mattered
except in edge cases; now it becomes paramount.

2. The argument over what 'reserved' means. I didn't raise this point - but earlier 'reserved'
meant 'please don't touch the last N bytes'. The xml documentation still says so:

  <description>Reserved space in bytes. Always leave this much space free for non dfs
use</description>

Now we have no way of making sure that DFS does not use the last N bytes. As an administrator,
I hate this. Earlier I could sleep in peace knowing that DFS would never fill the file system.
Now I can't. It is _very_ hard for me to estimate all the non-DFS usage up front. It's
much easier for me to say 'please do not use the last N bytes'.
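The two semantics can be contrasted in a short sketch (a minimal illustration; the function
names and simplified formulas are mine, not Hadoop's actual code):

```python
# Sketch of the two 'reserved' semantics discussed above.
# Names and formulas are illustrative, not Hadoop's actual code.
GB = 2**30
RESERVED = 1 * GB  # dfs.du.reserved = 1 GB

def available_old(disk_free):
    """0.14 semantics: never touch the last RESERVED bytes on disk."""
    return max(disk_free - RESERVED, 0)

def available_new(capacity, dfs_used, disk_free):
    """Post-1463 semantics: reserve is subtracted from a 'capacity'
    that df reports but that is not all actually usable."""
    remaining = capacity - dfs_used - RESERVED
    return max(min(remaining, disk_free), 0)

# With heavy non-DFS usage (Pete Wyckoff's numbers below: 100 GB disk,
# 50 GB used by DFS, only 1 GB free on disk), the old formula stops DFS,
# while the new one still hands it space:
capacity, dfs_used, disk_free = 100 * GB, 50 * GB, 1 * GB
print(available_old(disk_free))                      # -> 0 (disk protected)
print(available_new(capacity, dfs_used, disk_free))  # -> 1073741824 (DFS keeps writing)
```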

---

This is another case where interface semantics were changed with:
a) no backwards compatibility with the old semantics (of 0.14)
b) no clear information to admins about changes to existing semantics (I went through the
change notes while I was struggling with the compression problems - and this never caught my
eye).

---

Please consider the two issues raised here separately. We have, of course, already patched this
in our environment. But the general user community will face this problem. There has already
been another reported instance, which you saw, where someone felt this was not working as expected.
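For issue 1, a conservative estimate of usable capacity is Used + Avail from df, not the
Size column (a sketch under that assumption; the numbers come from the df output quoted in
the issue description below):

```python
# Usable capacity as Used + Avail, ignoring df's Size column.
# This is a workaround sketch, not Hadoop's actual code.
GB = 2**30
MB = 2**20

def usable_capacity(size, used, avail):
    # 'size' is intentionally ignored: it can include space that
    # ordinary processes (like the DataNode) can never allocate.
    return used + avail

# Numbers from the df output quoted below: Size 130G, Used 123G, Avail 49M.
print(usable_capacity(130 * GB, 123 * GB, 49 * MB) / GB)  # -> 123.0478515625 (~123 GB, not 130)
```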

> dfs.du.reserved not honored in 0.15/16 (regression from 0.14+patch for 2549)
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-2991
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2991
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.15.0, 0.15.1, 0.15.2, 0.15.3, 0.16.0
>            Reporter: Joydeep Sen Sarma
>            Priority: Critical
>
> changes for https://issues.apache.org/jira/browse/HADOOP-1463
> have caused a regression. Earlier:
> - we could set dfs.du.reserved to 1G and be *sure* that 1G would not be used.
> Now this is no longer true. I am quoting Pete Wyckoff's example:
> <example>
> Let's look at an example: a 100 GB disk, with /usr using 45 GB and dfs using 50 GB. Now
> df -kh shows:
> Capacity = 100 GB
> Available = 1 GB (remember, ~4 GB chopped out for metadata and such)
> Used = 95 GB
> remaining = 100 GB - 50 GB - 1 GB = 49 GB
> min(remaining, available) = 1 GB
> 98% of which is apparently usable for DFS -
> So we're at the limit, but are free to use 98% of the remaining 1 GB.
> </example>
> This is broken. Based on the discussion on 1463, it seems the notion of 'capacity' as
> the first field of 'df' is problematic. For example, here's what our df output looks
> like:
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda3             130G  123G   49M 100% /
> As you can see, 'Size' is a misnomer - that much space is not available. Rather, the actual
> usable space is 123G + 49M ~ 123G. (I'm not entirely sure what the discrepancy is due to,
> but I have heard it may be space reserved for file system metadata.) Because of this
> discrepancy, we end up in a situation where the file system runs out of space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

