hadoop-common-issues mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-6319) Capacity reporting incorrect on Solaris
Date Mon, 19 Oct 2009 23:34:59 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12767614#action_12767614 ]

Allen Wittenauer commented on HADOOP-6319:
------------------------------------------

This is an edge case I discussed with Yahoo!'s HDFS team a long time back; it led me
to the conclusion that one is still better off specifying a max size rather than trying to
guess capacity and do negative math.  Needless to say, I lost.

In this particular edge case, I think the fix would work. I'd still rate it as risky, since
there are likely other filesystems (especially pool-based ones) with similar df output
where capacity is not used + available.

I'm curious about one thing, though.

Why not just create another filesystem in this ZFS pool rather than using the root filesystem?
A ZFS file system is significantly faster for Hadoop operations than UFS. [... yes,
I've tested it.] As an added bonus, you avoid this issue. :)
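That suggestion can be sketched as shell commands; the pool name `tank` and the mountpoint are hypothetical, so substitute your own:

```shell
# Create a dedicated ZFS filesystem for the Hadoop data directory instead
# of running on the pool's root filesystem ('tank' is a hypothetical pool).
zfs create -o mountpoint=/data/hadoop tank/hadoop

# df now reports a real capacity for the new filesystem, so DF.java sees
# a non-zero value in the 1024-blocks column.
df -k /data/hadoop
```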

> Capacity reporting incorrect on Solaris
> ---------------------------------------
>
>                 Key: HADOOP-6319
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6319
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 0.20.1
>            Reporter: Doug Judd
>         Attachments: solaris-hadoop.patch
>
>
> When trying to get Hadoop up and running on Solaris on a ZFS filesystem, I encountered
> a problem where the capacity reported was zero:
> Configured Capacity: 0 (0 KB)
> It looks like the problem is with the 'df' output:
> $ df -k /data/hadoop 
> Filesystem           1024-blocks        Used   Available Capacity  Mounted on
> /                              0     7186354    20490274    26%    /
> The following patch (applied to trunk) fixes the problem.  Though the real problem is
> with 'df', I suspect the patch is harmless enough to include?
> Index: src/java/org/apache/hadoop/fs/DF.java
> ===================================================================
> --- src/java/org/apache/hadoop/fs/DF.java	(revision 826471)
> +++ src/java/org/apache/hadoop/fs/DF.java	(working copy)
> @@ -181,7 +181,11 @@
>          this.percentUsed = Integer.parseInt(tokens.nextToken());
>          this.mount = tokens.nextToken();
>          break;
> -   }
> +    }
> +
> +    if (this.capacity == 0)
> +	this.capacity = this.used + this.available;
> +    
>    }
>  
>    public static void main(String[] args) throws Exception {
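The fallback in the quoted patch can be sketched outside of DF.java as follows. This is a minimal illustration, not the actual Hadoop code: the class and method names are hypothetical, and the df line is the one from the report above.

```java
// Hedged sketch of the patch's logic: parse one Solaris 'df -k' output
// line and, when the reported capacity (1024-blocks column) is 0, fall
// back to used + available, as the patch does.
public class DfCapacityFallback {

    // Returns capacity in KB; reconstructs it when df reports 0.
    static long capacity(long reported, long used, long available) {
        return reported == 0 ? used + available : reported;
    }

    public static void main(String[] args) {
        // df line from the report: "/  0  7186354  20490274  26%  /"
        String[] tokens = "/ 0 7186354 20490274 26% /".split("\\s+");
        long reported  = Long.parseLong(tokens[1]);
        long used      = Long.parseLong(tokens[2]);
        long available = Long.parseLong(tokens[3]);
        // 7186354 + 20490274 = 27676628 KB
        System.out.println(capacity(reported, used, available));
    }
}
```

Note that this only masks the underlying df quirk on the ZFS root filesystem; a filesystem that reports a genuine capacity is returned unchanged.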

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

