From: nigel@apache.org
To: core-commits@hadoop.apache.org
Subject: svn commit: r644603 - in /hadoop/core/trunk: CHANGES.txt src/test/org/apache/hadoop/fs/TestDU.java
Date: Fri, 04 Apr 2008 04:32:24 -0000
Message-Id: <20080404043226.081DB1A9832@eris.apache.org>

Author: nigel
Date: Thu Apr  3 21:32:11 2008
New Revision: 644603

URL: http://svn.apache.org/viewvc?rev=644603&view=rev
Log:
HADOOP-2927. Fix TestDU to accurately calculate the expected file size.
Contributed by shv.
Modified:
    hadoop/core/trunk/CHANGES.txt
    hadoop/core/trunk/src/test/org/apache/hadoop/fs/TestDU.java

Modified: hadoop/core/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/CHANGES.txt?rev=644603&r1=644602&r2=644603&view=diff
==============================================================================
--- hadoop/core/trunk/CHANGES.txt (original)
+++ hadoop/core/trunk/CHANGES.txt Thu Apr  3 21:32:11 2008
@@ -455,6 +455,9 @@
     HADOOP-3161. Fix FIleUtil.HardLink.getLinkCount on Mac OS.
     (nigel via omalley)
 
+    HADOOP-2927. Fix TestDU to accurately calculate the expected file size.
+    (shv via nigel)
+
 Release 0.16.2 - 2008-04-02
 
   BUG FIXES

Modified: hadoop/core/trunk/src/test/org/apache/hadoop/fs/TestDU.java
URL: http://svn.apache.org/viewvc/hadoop/core/trunk/src/test/org/apache/hadoop/fs/TestDU.java?rev=644603&r1=644602&r2=644603&view=diff
==============================================================================
--- hadoop/core/trunk/src/test/org/apache/hadoop/fs/TestDU.java (original)
+++ hadoop/core/trunk/src/test/org/apache/hadoop/fs/TestDU.java Thu Apr  3 21:32:11 2008
@@ -53,32 +53,27 @@
     file.getFD().sync();
     file.close();
   }
-
-  /*
-   * Find a number that is a multiple of the block size in this file system
-   */
-  private int getBlockSize() throws IOException, InterruptedException {
-    File file = new File(DU_DIR, "small");
-    createFile(file, 128); // this is an arbitrary number. It has to be big
-                           // enough for the file system to report any usage
-                           // at all. For instance, NFS reports 0 blocks if
-                           // the file is <= 64 bytes.
-
-    Thread.sleep(5000); // let the metadata updater catch up
-
-    DU du = new DU(file, 0);
-    return (int) du.getUsed();
-  }
 
+  /**
+   * Verify that du returns the expected used space for a file.
+   * We assume here that if a file system creates a file whose size
+   * is a multiple of the block size of this file system,
+   * then the used size for the file will be exactly that size.
+   * This is true for most file systems.
+   *
+   * @throws IOException
+   * @throws InterruptedException
+   */
   public void testDU() throws IOException, InterruptedException {
-    int blockSize = getBlockSize();
-
+    int writtenSize = 32*1024;   // writing 32K
     File file = new File(DU_DIR, "data");
-    createFile(file, 2 * blockSize);
+    createFile(file, writtenSize);
     Thread.sleep(5000); // let the metadata updater catch up
 
     DU du = new DU(file, 0);
-    long size = du.getUsed();
+    long duSize = du.getUsed();
 
-    assertEquals(2 * blockSize, size);
+    assertEquals(writtenSize, duSize);
   }
 }
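The patch replaces the flaky runtime probe of the block size with a fixed 32K write. The reasoning is that 32*1024 bytes is an exact multiple of every common file system block size, so rounding the file length up to whole blocks adds nothing and du should report exactly the written size. A minimal standalone sketch of that arithmetic (plain Java, outside the Hadoop test harness; the list of block sizes is an illustrative assumption, not exhaustive):

```java
public class BlockSizeSketch {
    public static void main(String[] args) {
        int writtenSize = 32 * 1024;  // 32K, same as the new testDU()

        // Common file system block sizes (illustrative; actual values vary
        // by file system and mount options).
        int[] commonBlockSizes = {512, 1024, 2048, 4096, 8192, 16384};

        for (int blockSize : commonBlockSizes) {
            // Disk usage is the file length rounded up to a whole number
            // of blocks; for a 32K file the rounding adds nothing.
            long usedBlocks = (writtenSize + blockSize - 1) / blockSize;
            long usedBytes = usedBlocks * blockSize;
            System.out.println(blockSize + "-byte blocks: used = " + usedBytes);
            assert usedBytes == writtenSize;  // exactly the written size
        }
    }
}
```

Note that this equality can still fail on file systems that compress or deduplicate data, or that report sparse files specially, which is why the javadoc hedges with "most file systems."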