From: "Raghu Angadi (JIRA)"
To: hadoop-dev@lucene.apache.org
Date: Thu, 31 May 2007 14:32:16 -0700 (PDT)
Subject: [jira] Commented: (HADOOP-1134) Block level CRCs in HDFS

    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500499 ]

Raghu Angadi commented on HADOOP-1134:
--------------------------------------

bq. The current checksum code catches a lot of such errors.

By the way, ChecksumDistributedFileSystem has these problems too: data is buffered before it reaches FSOutputSummer while writing, and after it leaves FSInputChecker while reading, so corruption in those buffers goes undetected. (A minimal sketch of the write-side window appears after the quoted description below.)

> Block level CRCs in HDFS
> ------------------------
>
>                 Key: HADOOP-1134
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1134
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>         Attachments: bc-no-upgrade-05302007.patch, DfsBlockCrcDesign-05305007.htm
>
>
> Currently CRCs are handled at the FileSystem level and are transparent to core HDFS. See the recent improvement HADOOP-928 (which can add checksums to a given filesystem) for more background. Though this has served us well, there are a few disadvantages:
> 1) It doubles the namespace in HDFS (or in other filesystem implementations), and in many cases it nearly doubles the number of blocks. Taking the namenode out of CRCs would nearly double namespace performance, both in CPU and memory.
> 2) Since CRCs are transparent to HDFS, it cannot actively detect corrupted blocks. With block-level CRCs, the Datanode can periodically verify the checksums and report corruptions to the namenode so that new replicas can be created.
> We propose to maintain CRCs for all HDFS data in much the same way as GFS does. I will update the JIRA with detailed requirements and a design. This will include the same guarantees provided by the current implementation, as well as an upgrade of existing data.
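For illustration, a minimal, hypothetical Java sketch of the write-side window mentioned in the comment above. It uses java.util.zip.CRC32 and invented class names (SummingOutputStream, BufferWindowDemo) rather than the real FSOutputSummer; the point is only that bytes held in a buffer that sits outside the checksumming layer are not yet covered by any CRC, so a bit flip there is later checksummed as if it were correct data.

import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.CRC32;

// Stand-in for FSOutputSummer: checksums bytes as they pass through.
class SummingOutputStream extends FilterOutputStream {
    private final CRC32 crc = new CRC32();

    SummingOutputStream(OutputStream out) { super(out); }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        crc.update(b, off, len);   // bytes become CRC-protected only here
        out.write(b, off, len);
    }

    @Override
    public void write(int b) throws IOException {
        crc.update(b);
        out.write(b);
    }

    long checksum() { return crc.getValue(); }
}

public class BufferWindowDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        SummingOutputStream summer = new SummingOutputStream(sink);

        // This buffer sits *before* the summer, like the client-side
        // buffering described in the comment. Bytes parked here are not
        // covered by the CRC until the buffer flushes into the summer.
        BufferedOutputStream clientBuffer = new BufferedOutputStream(summer, 4096);

        clientBuffer.write("hello hdfs".getBytes("UTF-8"));
        // A memory error flipping a bit in clientBuffer's internal array
        // at this point would later be checksummed as if it were correct.
        clientBuffer.flush();

        System.out.println("crc=" + summer.checksum() + " bytes=" + sink.size());
    }
}

The read side is symmetric: bytes verified by FSInputChecker can still be corrupted in any buffer that sits between the checker and the application.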
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.