From: "Doug Cutting (JIRA)"
To: hadoop-dev@lucene.apache.org
Date: Mon, 15 May 2006 20:40:06 +0000 (GMT)
Subject: [jira] Commented: (HADOOP-212) allow changes to dfs block size
Message-ID: <21044356.1147725606573.JavaMail.jira@brutus>

    [ http://issues.apache.org/jira/browse/HADOOP-212?page=comments#action_12402416 ]

Doug Cutting commented on HADOOP-212:
-------------------------------------

Milind, I don't think these 'failed to create directory' messages are the
problem. That unit test succeeds w/o this patch and fails with it, and it
prints these messages in either case. I think the messages appear because
the directories already exist, so new attempts to create them fail, but I
have not yet looked closely at that.

> allow changes to dfs block size
> -------------------------------
>
>          Key: HADOOP-212
>          URL: http://issues.apache.org/jira/browse/HADOOP-212
>      Project: Hadoop
>         Type: Improvement
>   Components: dfs
>     Versions: 0.2
>     Reporter: Owen O'Malley
>     Assignee: Owen O'Malley
>     Priority: Critical
>      Fix For: 0.3
>  Attachments: TEST-org.apache.hadoop.fs.TestCopyFiles.txt, dfs-blocksize.patch
>
> Trying to change the DFS block size led to the realization that the value
> 32,000,000 was hard-coded into the source. I propose:
> 1. Change the default block size to 64 * 1024 * 1024.
> 2. Add a config variable, dfs.block.size, that sets the default block size.
> 3. Add a parameter to the FileSystem, DFSClient, and ClientProtocol create
>    methods that lets the user control the block size.
> 4. Rename FileSystem.getBlockSize to getDefaultBlockSize.
> 5. Add a new FileSystem.getBlockSize method that takes a pathname.
> 6. Use long for the block size in the API, which is what was used before.
>    However, the implementation will not work if the block size is set
>    larger than 2**31.
> 7. Have InputFormatBase use the block size of each file to determine the
>    split size.
> Thoughts?

--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
http://www.atlassian.com/software/jira
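[Editor's note] Items 1, 2, and 6 of the proposal above can be sketched in a few lines of Java. This is a minimal, self-contained illustration, not the attached dfs-blocksize.patch: java.util.Properties stands in for Hadoop's Configuration class, and the class and method names here are hypothetical. Only the key "dfs.block.size", the 64 * 1024 * 1024 default, and the 2**31 implementation limit come from the proposal itself.

```java
import java.util.Properties;

// Sketch of the proposed configurable block size (hypothetical class name).
public class BlockSizeSketch {
    // Item 1: proposed new default, 64 MB, replacing the hard-coded 32,000,000.
    static final long DEFAULT_BLOCK_SIZE = 64L * 1024 * 1024; // 67,108,864 bytes

    // Item 2: look up dfs.block.size, falling back to the default.
    static long getBlockSize(Properties conf) {
        String v = conf.getProperty("dfs.block.size");
        long size = (v == null) ? DEFAULT_BLOCK_SIZE : Long.parseLong(v);
        // Item 6: the API uses long, but the proposal notes the implementation
        // will not work for sizes larger than 2**31, so reject those up front.
        if (size > (1L << 31)) {
            throw new IllegalArgumentException(
                "block size larger than 2**31 is unsupported: " + size);
        }
        return size;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(getBlockSize(conf));          // prints 67108864
        conf.setProperty("dfs.block.size", "134217728"); // 128 MB, still < 2**31
        System.out.println(getBlockSize(conf));          // prints 134217728
    }
}
```

A per-file getBlockSize(path) (item 5) would layer on top of this: consult the file's metadata first and fall back to the configured default only for new files.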