Return-Path:
Delivered-To: apmail-lucene-hadoop-dev-archive@locus.apache.org
Received: (qmail 68168 invoked from network); 15 May 2006 19:42:34 -0000
Received: from hermes.apache.org (HELO mail.apache.org) (209.237.227.199) by minotaur.apache.org with SMTP; 15 May 2006 19:42:34 -0000
Received: (qmail 72281 invoked by uid 500); 15 May 2006 19:42:34 -0000
Delivered-To: apmail-lucene-hadoop-dev-archive@lucene.apache.org
Received: (qmail 72258 invoked by uid 500); 15 May 2006 19:42:33 -0000
Mailing-List: contact hadoop-dev-help@lucene.apache.org; run by ezmlm
Precedence: bulk
List-Help:
List-Unsubscribe:
List-Post:
List-Id:
Reply-To: hadoop-dev@lucene.apache.org
Delivered-To: mailing list hadoop-dev@lucene.apache.org
Received: (qmail 72248 invoked by uid 99); 15 May 2006 19:42:33 -0000
Received: from asf.osuosl.org (HELO asf.osuosl.org) (140.211.166.49) by apache.org (qpsmtpd/0.29) with ESMTP; Mon, 15 May 2006 12:42:33 -0700
X-ASF-Spam-Status: No, hits=0.0 required=10.0 tests=
X-Spam-Check-By: apache.org
Received: from [209.237.227.198] (HELO brutus.apache.org) (209.237.227.198) by apache.org (qpsmtpd/0.29) with ESMTP; Mon, 15 May 2006 12:42:33 -0700
Received: from brutus (localhost [127.0.0.1]) by brutus.apache.org (Postfix) with ESMTP id AC827410007 for ; Mon, 15 May 2006 19:42:06 +0000 (GMT)
Message-ID: <1519243.1147722126703.JavaMail.jira@brutus>
Date: Mon, 15 May 2006 19:42:06 +0000 (GMT+00:00)
From: "Doug Cutting (JIRA)"
To: hadoop-dev@lucene.apache.org
Subject: [jira] Updated: (HADOOP-212) allow changes to dfs block size
In-Reply-To: <28825893.1147448769761.JavaMail.root@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-Virus-Checked: Checked by ClamAV on apache.org
X-Spam-Rating: minotaur.apache.org 1.6.2 0/1000/N

     [ http://issues.apache.org/jira/browse/HADOOP-212?page=all ]

Doug Cutting updated HADOOP-212:
--------------------------------

    Attachment: TEST-org.apache.hadoop.fs.TestCopyFiles.txt

Overall, this
looks great and is much needed. Unfortunately I'm getting some null pointer exceptions running unit tests with this patch. I've not yet tried to debug these...

> allow changes to dfs block size
> -------------------------------
>
>          Key: HADOOP-212
>          URL: http://issues.apache.org/jira/browse/HADOOP-212
>      Project: Hadoop
>         Type: Improvement
>   Components: dfs
>     Versions: 0.2
>     Reporter: Owen O'Malley
>     Assignee: Owen O'Malley
>     Priority: Critical
>      Fix For: 0.3
>  Attachments: TEST-org.apache.hadoop.fs.TestCopyFiles.txt, dfs-blocksize.patch
>
> Trying to change the DFS block size led to the realization that the value 32,000,000 was hard-coded into the source code. I propose:
> 1. Change the default block size to 64 * 1024 * 1024.
> 2. Add the config variable dfs.block.size that sets the default block size.
> 3. Add a parameter to the FileSystem, DFSClient, and ClientProtocol create methods that lets the user control the block size.
> 4. Rename FileSystem.getBlockSize to getDefaultBlockSize.
> 5. Add a new FileSystem.getBlockSize method that takes a pathname.
> 6. Use long for the block size in the API, which is what was used before. However, the implementation will not work if the block size is set bigger than 2**31.
> 7. Have InputFormatBase use the block size of each file to determine the split size.
> Thoughts?

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira
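Items 1, 2, and 6 of the quoted proposal (a 64 MB default, a dfs.block.size override, and a long-typed API) can be sketched as below. This is a hypothetical, self-contained illustration of the configuration lookup only, not Hadoop's actual implementation: the class name BlockSizeSketch and the defaultBlockSize method are invented here, and a plain java.util.Properties stands in for Hadoop's configuration class.

```java
import java.util.Properties;

public class BlockSizeSketch {
    // Item 1: default of 64 * 1024 * 1024 bytes, declared as long per item 6.
    // (The 64L literal keeps the multiplication in long arithmetic.)
    static final long DEFAULT_BLOCK_SIZE = 64L * 1024 * 1024;

    // Item 2: a dfs.block.size config variable overrides the compiled-in default.
    static long defaultBlockSize(Properties conf) {
        String v = conf.getProperty("dfs.block.size");
        return (v == null) ? DEFAULT_BLOCK_SIZE : Long.parseLong(v);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // Unset: falls back to 64 MB (67108864 bytes).
        System.out.println(defaultBlockSize(conf));
        // Set: the configured value wins.
        conf.setProperty("dfs.block.size", "134217728");
        System.out.println(defaultBlockSize(conf));
    }
}
```

Per item 3, a per-file override would then be threaded through as an extra long parameter on the create methods, with callers passing defaultBlockSize(conf) when they have no preference.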