hadoop-common-dev mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2549) hdfs does not honor dfs.du.reserved setting
Date Tue, 08 Jan 2008 23:06:37 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12557063#action_12557063

Hairong Kuang commented on HADOOP-2549:

The cause of the block size being 0 is that the block size is not passed as a parameter in the block
transfer protocol. So when a Block object is initialized, its block size is set to zero, which leads
to a parameter of zero when getNextVolume is called. There are three options:
1. Change the DatanodeProtocol to pass the expected block size as well.
2. Do not pass the block size in the protocol, but use the default block size instead. The problem
with this approach is that the block size is a client-side configuration.
3. Use a big number like 128MB as the block size. This may not work for bigger block sizes but
should work most of the time.
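A minimal sketch of option 3 (hypothetical class and field names, not the actual FSDataset code): when the incoming block size is zero because the protocol did not carry it, substitute a conservative default before comparing against each volume's available space, so dfs.du.reserved is still honored:

```java
import java.util.List;

// Hypothetical sketch of option 3: fall back to a default estimate when
// the incoming block size is zero, so full volumes are skipped.
public class VolumeChooser {
    // 128 MB default estimate, used only when the real block size is unknown.
    static final long DEFAULT_BLOCK_SIZE = 128L * 1024 * 1024;

    static class Volume {
        final String path;
        final long capacity;  // total bytes on the volume
        final long used;      // bytes already in use
        final long reserved;  // dfs.du.reserved for this volume

        Volume(String path, long capacity, long used, long reserved) {
            this.path = path;
            this.capacity = capacity;
            this.used = used;
            this.reserved = reserved;
        }

        // Space usable for new blocks, after honoring the reservation.
        long available() {
            return Math.max(0, capacity - used - reserved);
        }
    }

    // Pick the first volume with room for the block. A blockSize of 0
    // (unknown) would let every volume qualify; replacing it with the
    // default keeps full disks from being chosen.
    static Volume getNextVolume(List<Volume> volumes, long blockSize) {
        long needed = (blockSize == 0) ? DEFAULT_BLOCK_SIZE : blockSize;
        for (Volume v : volumes) {
            if (v.available() >= needed) {
                return v;
            }
        }
        throw new RuntimeException(
            "Out of space: no volume can hold " + needed + " bytes");
    }
}
```

With blockSize passed as 0, the unpatched comparison `available >= 0` is true even for a volume with zero free bytes, which matches the trace below where the root disk is chosen with 0 available bytes.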

> hdfs does not honor dfs.du.reserved setting
> -------------------------------------------
>                 Key: HADOOP-2549
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2549
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.14.4
>         Environment: FC Linux.
>            Reporter: Joydeep Sen Sarma
>            Priority: Critical
> Running 0.14.4. One of our drives is smaller and is always getting disk full. I reset
the disk reservation to 1 GB - but it was filled up quickly again.
> I put in some tracing in getNextVolume. The blockSize argument is 0, so every volume
(regardless of available space) qualifies. Here's the trace:
> /* root disk chosen with 0 available bytes. format is <available>:<blocksize>*/
> 2008-01-08 08:08:51,918 WARN org.apache.hadoop.dfs.DataNode: Volume /var/hadoop/tmp/dfs/data/current:0:0
> /* some other disk chosen with 300G space. */
> 2008-01-08 08:09:21,974 WARN org.apache.hadoop.dfs.DataNode: Volume /mnt/d1/hdfs/current:304725631026:0
> I am going to default the block size to something reasonable when it's zero for now.
> This is driving us nuts, since our automounter starts failing when we run out of space,
so everything is broken.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
