hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2845) dfsadmin disk utilization report on Solaris is wrong
Date Fri, 22 Feb 2008 20:41:19 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12571586#action_12571586 ]

Konstantin Shvachko commented on HADOOP-2845:
---------------------------------------------

# Do you really need to wait(5000)? Would it help if we flush() and then sync() rather than just sync()? (See the sketch after this list.)
# du -sk for a 1-byte file prints out 0 for an NFS mount on my Linux box, so you will be getting 0-size blocks in this case.
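
A minimal sketch of the flush-then-sync sequence from item 1, written against plain java.io rather than the actual test in the patch (the class name and path are hypothetical, and FileDescriptor.sync() stands in for whatever sync() the test calls):

    import java.io.File;
    import java.io.FileOutputStream;

    public class FlushSyncSketch {
      public static void main(String[] args) throws Exception {
        File f = new File("/tmp/du-test-block");   // hypothetical 1-byte file
        FileOutputStream out = new FileOutputStream(f);
        out.write(1);        // the 1-byte payload from item 2
        out.flush();         // drain any user-space buffering first
        out.getFD().sync();  // then force the bytes to disk before du runs
        out.close();
      }
    }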

> dfsadmin disk utilization report on Solaris is wrong
> ----------------------------------------------------
>
>                 Key: HADOOP-2845
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2845
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 0.16.0
>            Reporter: Martin Traverso
>            Assignee: Martin Traverso
>             Fix For: 0.17.0
>
>         Attachments: HADOOP-2845-1.patch, HADOOP-2845.patch
>
>
> dfsadmin reports 2x disk utilization on some platforms (Solaris, MacOS). The reason for this is that org.apache.hadoop.fs.DU relies on du's default block size when reporting sizes and assumes 1024-byte blocks. This works fine on Linux, but du on Solaris and MacOS uses 512-byte blocks to report disk usage.
> DU should use "du -sk" instead of "du -s" to force the command to report sizes in 1024-byte blocks.
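
For context, a standalone sketch of the "du -sk" approach described above (the class name and parsing are hypothetical, not the actual DU code; "-k" forces 1024-byte units on Linux, Solaris, and MacOS alike):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class DuSketch {
      public static void main(String[] args) throws Exception {
        String dir = args.length > 0 ? args[0] : ".";
        // "du -sk" reports the total in 1024-byte blocks regardless of
        // the platform default, which is 512 bytes on Solaris and MacOS.
        Process p = Runtime.getRuntime().exec(new String[] {"du", "-sk", dir});
        BufferedReader in =
            new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line = in.readLine();                     // e.g. "1234\t/path"
        long kbytes = Long.parseLong(line.split("\\s+")[0]);
        System.out.println(dir + " uses " + (kbytes * 1024L) + " bytes");
      }
    }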

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

