Message-ID: <30805199.1199833597169.JavaMail.jira@brutus>
Date: Tue, 8 Jan 2008 15:06:37 -0800 (PST)
From: "Hairong Kuang (JIRA)"
To: hadoop-dev@lucene.apache.org
Subject: [jira] Commented: (HADOOP-2549) hdfs does not honor dfs.du.reserved setting
In-Reply-To: <906477.1199811994109.JavaMail.jira@brutus>

    [ https://issues.apache.org/jira/browse/HADOOP-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12557063#action_12557063 ]

Hairong Kuang commented on HADOOP-2549:
---------------------------------------

The cause of the block size being 0 is that the block size is not passed as a parameter in the block transfer protocol. So when a Block object is initialized, its block size is set to zero, which leads to a parameter of zero when getNextVolume is called.

There are three options:
1. Change the DatanodeProtocol to pass the expected block size as well.
2. Do not pass the block size in the protocol, but use the default block size. The problem with this approach is that the block size is a client-side configuration.
3. Use a big number like 128m as the block size. This may not work for larger block sizes, but should work most of the time.
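A minimal sketch, for illustration only, of the volume-selection check being described: the FSVolumeSketch and RoundRobinVolumeSetSketch names are hypothetical stand-ins, not the actual FSDataset classes. It only shows why a requested block size of zero lets every volume qualify, even one whose free space is already at or below dfs.du.reserved.

    import java.io.File;
    import java.io.IOException;

    // Hypothetical sketch, not the real org.apache.hadoop.dfs.FSDataset code.
    class FSVolumeSketch {
      private final File dir;
      private final long reserved;              // dfs.du.reserved, in bytes

      FSVolumeSketch(File dir, long reserved) {
        this.dir = dir;
        this.reserved = reserved;
      }

      // Usable space = free space on the partition minus the reservation.
      long getAvailable() {
        return Math.max(0L, dir.getUsableSpace() - reserved);
      }
    }

    class RoundRobinVolumeSetSketch {
      private final FSVolumeSketch[] volumes;
      private int curVolume = 0;

      RoundRobinVolumeSetSketch(FSVolumeSketch[] volumes) {
        this.volumes = volumes;
      }

      // A volume qualifies if it can hold the requested block. With
      // blockSize == 0 the test "available >= 0" succeeds for every volume,
      // including a nearly full disk like the one in the trace below.
      FSVolumeSketch getNextVolume(long blockSize) throws IOException {
        for (int i = 0; i < volumes.length; i++) {
          FSVolumeSketch v = volumes[curVolume];
          curVolume = (curVolume + 1) % volumes.length;
          if (v.getAvailable() >= blockSize) {
            return v;
          }
        }
        throw new IOException("Insufficient space for an additional block");
      }
    }

With blockSize == 0 the check degenerates to getAvailable() >= 0, which is always true; that matches the trace in the issue description below, where a volume reporting 0 available bytes is still chosen.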
> hdfs does not honor dfs.du.reserved setting
> -------------------------------------------
>
>                 Key: HADOOP-2549
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2549
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.14.4
>         Environment: FC Linux.
>            Reporter: Joydeep Sen Sarma
>            Priority: Critical
>
> Running 0.14.4. One of our drives is smaller and is always getting disk full. I reset the disk reservation to 1 Gig, but it was filled quickly again.
> I put in some tracing in getNextVolume. The blocksize argument is 0, so every volume (regardless of available space) qualifies. Here's the trace:
> /* root disk chosen with 0 available bytes. format is <volume>:<available bytes>:<blocksize> */
> 2008-01-08 08:08:51,918 WARN org.apache.hadoop.dfs.DataNode: Volume /var/hadoop/tmp/dfs/data/current:0:0
> /* some other disk chosen with 300G space. */
> 2008-01-08 08:09:21,974 WARN org.apache.hadoop.dfs.DataNode: Volume /mnt/d1/hdfs/current:304725631026:0
> I am going to default the blocksize to something reasonable when it's zero for now.
> This is driving us nuts since our automounter starts failing when we run out of space, so everything's broken.
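A rough illustration of option 3 from the comment above (and of the reporter's interim plan to default a zero blocksize to something reasonable): clamp the value before asking for a volume. This builds on the hypothetical sketch classes earlier in this message; the 128 MB constant is the "big number" suggested in the comment, not a value read from the datanode's configuration.

    import java.io.IOException;

    // Hypothetical sketch of option 3 / the interim workaround: substitute a
    // conservative value when the transfer protocol did not carry a block size.
    class BlockPlacementSketch {
      // Assumed fallback ("a big number like 128m"); the real block size is a
      // client-side setting, which is why the datanode cannot simply look it up.
      static final long FALLBACK_BLOCK_SIZE = 128L * 1024 * 1024;   // 128 MB

      static FSVolumeSketch chooseVolume(RoundRobinVolumeSetSketch volumes,
                                         long blockSize) throws IOException {
        // Treat 0 as "unknown" and assume a full-sized block, so nearly full
        // volumes (free space at or below dfs.du.reserved) are skipped.
        long expected = (blockSize <= 0) ? FALLBACK_BLOCK_SIZE : blockSize;
        return volumes.getNextVolume(expected);
      }
    }

As option 3 itself notes, this only guesses wrong for blocks larger than the fallback; option 1, carrying the expected block size in the transfer protocol, avoids the guess entirely.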