From: "Yoram Arnon (JIRA)"
To: hadoop-dev@lucene.apache.org
Reply-To: hadoop-dev@lucene.apache.org
Date: Wed, 29 Nov 2006 10:12:24 -0800 (PST)
Subject: [jira] Resolved: (HADOOP-760) HDFS edits log file corrupted can lead to a major loss of data.
Message-ID: <6019835.1164823944505.JavaMail.jira@brutus>
In-Reply-To: <12345392.1164793641046.JavaMail.jira@brutus>

    [ http://issues.apache.org/jira/browse/HADOOP-760?page=all ]

Yoram Arnon resolved HADOOP-760.
--------------------------------
    Resolution: Duplicate

This is a duplicate of HADOOP-227, which requests periodic checkpointing of the namenode image (and starting a fresh edits file).

> HDFS edits log file corrupted can lead to a major loss of data.
> ---------------------------------------------------------------
>
>                 Key: HADOOP-760
>                 URL: http://issues.apache.org/jira/browse/HADOOP-760
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.6.1
>            Reporter: Philippe Gassmann
>            Priority: Critical
>
> On one of our test systems, HDFS became corrupted after its edits log file was corrupted (I can tell how).
> When we restarted HDFS, the namenode refused to start, throwing an exception into hadoop-namenode-xxx.out.
> Unfortunately, an rm mistake was made, and I was not able to save that exception anywhere.
> It was, however, an ArrayIndexOutOfBoundsException thrown from a UTF8 method called from FSEditLog.loadFSEdits.
> The result: the namenode was unable to start, and the only way to fix it was to remove the edits log file.
> As it was a test machine, we had no backup, so all files created in HDFS since the last start of the namenode were lost.
> Is there a way to periodically commit changes to fsimage instead of keeping a huge edits log file? (e.g. every 10 minutes or so.)
> Even if the namenode files are rsync'ed, what can be done in that particular case? (if we periodically rsync the fsimage and its corrupted edits file)
> This issue affects HDFS version 0.6.1. After looking at the Hadoop trunk code, I am not able to say whether this can still happen...
> (I would say yes, because the UTF8 class is used in the same way as in 0.6.1.)
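The failure mode described above -- a length-prefixed record cut off mid-write, so that deserialization reads past the bytes actually present -- lends itself to a defensive loader that keeps everything up to the last cleanly parsed record instead of aborting namenode startup. Below is a minimal, self-contained Java sketch of that idea. It is NOT Hadoop's actual FSEditLog code, and the record format (an int opcode followed by a length-prefixed UTF-8 path) is a hypothetical stand-in chosen for illustration only.

    // Sketch of a truncation-tolerant edits loader (hypothetical format).
    import java.io.*;
    import java.util.ArrayList;
    import java.util.List;

    public class TolerantEditsLoader {

        static List<String> loadEdits(byte[] editsLog) {
            List<String> applied = new ArrayList<String>();
            DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(editsLog));
            try {
                while (in.available() > 0) {
                    int opcode = in.readInt();         // hypothetical opcode
                    int len = in.readUnsignedShort();  // declared string length
                    byte[] pathBytes = new byte[len];
                    in.readFully(pathBytes);           // throws EOFException if truncated
                    applied.add(opcode + ":" + new String(pathBytes, "UTF-8"));
                }
            } catch (IOException e) {
                // A truncated tail (e.g. a partial write before a crash) lands
                // here; everything parsed so far is kept rather than discarded.
                System.err.println("edits log truncated after "
                        + applied.size() + " records: " + e);
            }
            return applied;
        }

        public static void main(String[] args) throws IOException {
            // Build two well-formed records, then chop off the tail of the
            // second one to simulate the corruption the report describes.
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            for (String p : new String[] {"/user/a", "/user/b"}) {
                out.writeInt(0);                       // opcode, "create" say
                byte[] b = p.getBytes("UTF-8");
                out.writeShort(b.length);
                out.write(b);
            }
            byte[] full = buf.toByteArray();
            byte[] corrupt = new byte[full.length - 4];
            System.arraycopy(full, 0, corrupt, 0, corrupt.length);

            System.out.println(loadEdits(corrupt));    // prints [0:/user/a]
        }
    }

A production loader would additionally need checksummed or fsync'ed record boundaries to distinguish a clean truncation from corruption in the middle of the file; stopping at the first bad record only recovers the prefix that was written intact.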
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira