From: "Doug Cutting (JIRA)"
To: hadoop-dev@lucene.apache.org
Subject: [jira] Commented: (HADOOP-1134) Block level CRCs in HDFS
Date: Thu, 29 Mar 2007 13:35:25 -0700 (PDT)
Message-ID: <30758421.1175200525381.JavaMail.jira@brutus>
In-Reply-To: <2906341.1174343312447.JavaMail.jira@brutus>

    [ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485341 ]

Doug Cutting commented on HADOOP-1134:
--------------------------------------

> Yes, but in each of those 100 real data corruptions, data can be salvaged by switching to a valid instance of the block.

Assuming the corruption happened after replication.

> Data corruption before data reaches the Datanode would occur either in RAM or during network transmission; the likelihood of this happening is orders of magnitude lower than 1 out of 3 replicas on disk becoming corrupt.

That's not the universal experience. Many, if not most, of the checksum errors I've heard of traced back to memory errors. Someone recently reported a non-reproducible checksum error from the InMemoryFileSystem, didn't they?

> Block level CRCs in HDFS
> ------------------------
>
>                 Key: HADOOP-1134
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1134
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Raghu Angadi
>         Assigned To: Raghu Angadi
>
> Currently, CRCs are handled at the FileSystem level and are transparent to core HDFS. See the recent improvement HADOOP-928 (which can add checksums to a given filesystem) for more about it. Though this has served us well, there are a few disadvantages:
> 1) It doubles the namespace in HDFS (or in other filesystem implementations). In many cases, it nearly doubles the number of blocks. Taking the namenode out of CRCs would nearly double namespace performance, both in terms of CPU and memory.
> 2) Since CRCs are transparent to HDFS, it cannot actively detect corrupted blocks. With block-level CRCs, the Datanode can periodically verify the checksums and report corruptions to the namenode so that new replicas can be created.
> We propose to maintain CRCs for all HDFS data in much the same way as GFS does. I will update the JIRA with detailed requirements and a design. This will provide the same guarantees as the current implementation and will include an upgrade of current data.
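To make point 1 of the quoted description concrete: with file-level checksums (as in HADOOP-928's ChecksumFileSystem), every user file gets a hidden sidecar checksum file, so the namenode tracks two namespace entries (and roughly twice the blocks) per file. A minimal sketch of that naming convention; the helper class below is invented for illustration, though the "." + name + ".crc" pattern matches the convention ChecksumFileSystem uses:

```java
// Illustrative only: shows why file-level CRCs double the namespace.
// For every data file, a hidden ".<name>.crc" sidecar is created, so the
// namenode must track two entries per file.
import java.nio.file.Path;
import java.nio.file.Paths;

public class ChecksumNaming {
    /** Returns the sidecar checksum path for a data file, e.g. /a/b/f -> /a/b/.f.crc */
    static Path checksumFileFor(Path dataFile) {
        return dataFile.resolveSibling("." + dataFile.getFileName() + ".crc");
    }

    public static void main(String[] args) {
        Path data = Paths.get("/user/raghu/part-00000");
        System.out.println(checksumFileFor(data));  // prints /user/raghu/.part-00000.crc
    }
}
```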
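And to illustrate point 2, here is a minimal sketch of what per-chunk CRC verification of a block might look like on a Datanode. This is an assumption-laden illustration, not the design proposed in this issue: the class name, the 512-byte chunk size, and the checksum-file layout are all invented.

```java
// Hypothetical sketch of per-block, per-chunk CRC verification on a Datanode.
// Names (BlockCrcVerifier, CHUNK_SIZE, the checksum-file layout) are invented
// for illustration and do not reflect the actual HADOOP-1134 design.
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.CRC32;

public class BlockCrcVerifier {

    // One CRC32 value is stored for every CHUNK_SIZE bytes of block data.
    private static final int CHUNK_SIZE = 512;

    /**
     * Verifies a block file against its checksum file: for each chunk of
     * block data, recompute the CRC32 and compare it with the stored value.
     * Returns true only if every chunk matches.
     */
    public static boolean verify(String blockFile, String metaFile) throws IOException {
        try (FileInputStream block = new FileInputStream(blockFile);
             DataInputStream meta = new DataInputStream(new FileInputStream(metaFile))) {

            byte[] chunk = new byte[CHUNK_SIZE];
            CRC32 crc = new CRC32();
            int n;
            while ((n = block.read(chunk)) > 0) {
                crc.reset();
                crc.update(chunk, 0, n);
                long stored;
                try {
                    // Assumes each checksum is stored as a 4-byte big-endian int.
                    stored = meta.readInt() & 0xFFFFFFFFL;
                } catch (EOFException e) {
                    return false;  // checksum file shorter than the block data
                }
                if (stored != crc.getValue()) {
                    return false;  // corrupt chunk: would be reported to the namenode
                }
            }
            return true;
        }
    }
}
```

A background scanner thread could call verify() on each stored block on a rolling schedule and report any mismatch to the namenode, which would then schedule re-replication from a healthy copy.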