Message-ID: <4868884.1192832330637.JavaMail.jira@brutus>
Date: Fri, 19 Oct 2007 15:18:50 -0700 (PDT)
From: "Raghu Angadi (JIRA)"
To: hadoop-dev@lucene.apache.org
Reply-To: hadoop-dev@lucene.apache.org
Subject: [jira] Updated: (HADOOP-2012) Periodic verification at the Datanode
In-Reply-To: <10538159.1191891590670.JavaMail.jira@brutus>

     [ https://issues.apache.org/jira/browse/HADOOP-2012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-2012:
---------------------------------

    Attachment: HADOOP-2012.patch

Updated the patch. SCAN_PERIOD was set to a negative value by mistake.

> Periodic verification at the Datanode
> -------------------------------------
>
>                 Key: HADOOP-2012
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2012
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>         Attachments: HADOOP-2012.patch, HADOOP-2012.patch
>
>
> Currently, on-disk corruption of data blocks is detected only when a block is read by the client or by another datanode. These errors would be detected much earlier if the datanode periodically verified the checksums of its local blocks. A minimal sketch of such a scanner follows at the end of this message.
> Some of the issues to consider:
> - How often should we scan the blocks (no more often than once every couple of weeks?)
> - How do we keep track of when a block was last verified (there is a .meta file associated with each block)?
> - What action to take once corruption is detected.
> - Scanning should run at very low priority, with the rest of the datanode's disk traffic in mind.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
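
To make the proposal concrete, here is a minimal, self-contained sketch in plain Java of a periodic block scanner along the lines described above. It is not the attached HADOOP-2012.patch and does not use the real datanode APIs; the class name BlockScanner, the "blk_*" file naming, and the ".lastVerified" timestamp side file are assumptions made for this example. It only illustrates the three mechanical points raised in the description: a scan period of roughly two weeks per block, tracking when each block was last verified, and throttling so the scan stays low priority.

// Illustrative sketch only: BlockScanner, the "blk_*" convention, and the
// ".lastVerified" side file are assumptions for this example, not the real
// Hadoop datanode code or the attached patch.
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.*;
import java.util.zip.CRC32;

public class BlockScanner implements Runnable {

    // Verify each block no more often than roughly once every two weeks.
    private static final long SCAN_PERIOD_MS = 14L * 24 * 60 * 60 * 1000;
    // Small reads with a pause between them keep the scan's disk impact low.
    private static final int CHUNK_SIZE = 64 * 1024;
    private static final long THROTTLE_SLEEP_MS = 50;

    private final Path blockDir;

    public BlockScanner(Path blockDir) {
        this.blockDir = blockDir;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                scanOnce();
                Thread.sleep(60_000);            // rest between full passes
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // shut down quietly
        }
    }

    // One pass over the block directory; only blocks that are "due" are re-read.
    private void scanOnce() throws InterruptedException {
        try (DirectoryStream<Path> files = Files.newDirectoryStream(blockDir)) {
            for (Path block : files) {
                String name = block.getFileName().toString();
                // Skip checksum metadata and our own timestamp side files.
                if (!name.startsWith("blk_") || name.endsWith(".meta")
                        || name.endsWith(".lastVerified")) {
                    continue;
                }
                if (!isDue(block)) {
                    continue;                    // verified recently enough
                }
                long crc = computeCrc(block);
                // A real datanode would compare this value with the checksum stored
                // in the block's .meta file and report corrupt blocks; here we only
                // demonstrate the scan-and-track cycle.
                System.out.printf("verified %s crc=%08x%n", name, crc);
                recordVerified(block);
            }
        } catch (IOException e) {
            System.err.println("scan pass failed: " + e);
        }
    }

    // Due if never verified, or last verified more than SCAN_PERIOD_MS ago.
    private boolean isDue(Path block) throws IOException {
        Path stamp = block.resolveSibling(block.getFileName() + ".lastVerified");
        if (!Files.exists(stamp)) {
            return true;
        }
        long lastVerified = Long.parseLong(Files.readString(stamp).trim());
        return System.currentTimeMillis() - lastVerified > SCAN_PERIOD_MS;
    }

    // Re-read the whole block in small chunks, throttling between chunks.
    private long computeCrc(Path block) throws IOException, InterruptedException {
        CRC32 crc = new CRC32();
        byte[] buf = new byte[CHUNK_SIZE];
        try (InputStream in = Files.newInputStream(block)) {
            int n;
            while ((n = in.read(buf)) != -1) {
                crc.update(buf, 0, n);
                Thread.sleep(THROTTLE_SLEEP_MS);
            }
        }
        return crc.getValue();
    }

    // Record the wall-clock time of this verification next to the block file.
    private void recordVerified(Path block) throws IOException {
        Path stamp = block.resolveSibling(block.getFileName() + ".lastVerified");
        Files.writeString(stamp, Long.toString(System.currentTimeMillis()));
    }

    public static void main(String[] args) {
        // Scan "./blocks" (or the directory passed as the first argument).
        Path dir = Paths.get(args.length > 0 ? args[0] : "blocks");
        Thread scanner = new Thread(new BlockScanner(dir), "block-scanner");
        scanner.setPriority(Thread.MIN_PRIORITY); // hint that this is background work
        scanner.start();
    }
}

A full implementation would also have to answer the remaining open questions in the issue: compare the recomputed checksum against the existing .meta file rather than just logging it, decide how to report or quarantine a corrupt block, and coordinate the throttle with live read/write traffic on the datanode.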