Return-Path:
Delivered-To: apmail-lucene-hadoop-dev-archive@locus.apache.org
Received: (qmail 69593 invoked from network); 15 Sep 2007 20:24:55 -0000
Received: from hermes.apache.org (HELO mail.apache.org) (140.211.11.2)
  by minotaur.apache.org with SMTP; 15 Sep 2007 20:24:55 -0000
Received: (qmail 40168 invoked by uid 500); 15 Sep 2007 20:24:47 -0000
Delivered-To: apmail-lucene-hadoop-dev-archive@lucene.apache.org
Received: (qmail 40127 invoked by uid 500); 15 Sep 2007 20:24:47 -0000
Mailing-List: contact hadoop-dev-help@lucene.apache.org; run by ezmlm
Precedence: bulk
List-Help:
List-Unsubscribe:
List-Post:
List-Id:
Reply-To: hadoop-dev@lucene.apache.org
Delivered-To: mailing list hadoop-dev@lucene.apache.org
Received: (qmail 40118 invoked by uid 99); 15 Sep 2007 20:24:47 -0000
Received: from nike.apache.org (HELO nike.apache.org) (192.87.106.230)
  by apache.org (qpsmtpd/0.29) with ESMTP; Sat, 15 Sep 2007 13:24:47 -0700
X-ASF-Spam-Status: No, hits=-100.0 required=10.0 tests=ALL_TRUSTED
X-Spam-Check-By: apache.org
Received: from [140.211.11.4] (HELO brutus.apache.org) (140.211.11.4)
  by apache.org (qpsmtpd/0.29) with ESMTP; Sat, 15 Sep 2007 20:26:40 +0000
Received: from brutus (localhost [127.0.0.1])
  by brutus.apache.org (Postfix) with ESMTP id 2CB217141F2
  for ; Sat, 15 Sep 2007 13:24:32 -0700 (PDT)
Message-ID: <22328140.1189887872180.JavaMail.jira@brutus>
Date: Sat, 15 Sep 2007 13:24:32 -0700 (PDT)
From: "Hadoop QA (JIRA)"
To: hadoop-dev@lucene.apache.org
Subject: [jira] Commented: (HADOOP-1903) [hbase] Possible data loss if Exception happens between snapshot and flush to disk.
In-Reply-To: <3455784.1189819952380.JavaMail.jira@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-Virus-Checked: Checked by ClamAV on apache.org

    [ https://issues.apache.org/jira/browse/HADOOP-1903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12527778 ]

Hadoop QA commented on HADOOP-1903:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12365914/1903.patch
against trunk revision r575950.

    @author +1.  The patch does not contain any @author tags.

    javadoc +1.  The javadoc tool did not generate any warning messages.

    javac +1.  The applied patch does not generate any new compiler warnings.

    findbugs +1.  The patch does not introduce any new Findbugs warnings.

    core tests +1.  The patch passed core unit tests.

    contrib tests +1.  The patch passed contrib unit tests.

Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/772/testReport/
Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/772/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/772/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/772/console

This message is automatically generated.

> [hbase] Possible data loss if Exception happens between snapshot and flush to disk.
> ------------------------------------------------------------------------------------
>
>             Key: HADOOP-1903
>             URL: https://issues.apache.org/jira/browse/HADOOP-1903
>         Project: Hadoop
>      Issue Type: Bug
>        Reporter: stack
>        Assignee: stack
>        Priority: Minor
>         Fix For: 0.15.0
>
>     Attachments: 1903.patch
>
>
> There exists a little window during which we can lose data.
> During a memcache flush, we make an in-memory copy, a 'snapshot'. The memcache is then zeroed and off we go again taking updates. Meanwhile, in the background we are supposed to flush the snapshot to disk. If this process is interrupted -- e.g. HDFS is yanked out from under us, or an OOME occurs in the flush thread -- then the contents of the snapshot are lost.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
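
For readers unfamiliar with the flush path, below is a minimal, hypothetical Java sketch of the window described in the issue and of one way to close it: keep a reference to the snapshot until the flush has succeeded, instead of letting the snapshot be the only copy of the edits while the write is in flight. This is not the HBase code and not the attached 1903.patch; the class and method names (SimpleMemcache, flushToDisk) are invented for illustration, and a single flusher thread is assumed.

    // Hypothetical illustration of HADOOP-1903's data-loss window; not HBase code.
    import java.io.IOException;
    import java.util.TreeMap;

    class SimpleMemcache {
      // Live map taking updates; guarded by 'this'.
      private TreeMap<String, byte[]> memcache = new TreeMap<String, byte[]>();
      // Snapshot kept until it is known to be safely on disk.
      private TreeMap<String, byte[]> snapshot = null;

      synchronized void put(String key, byte[] value) {
        memcache.put(key, value);
      }

      synchronized byte[] get(String key) {
        byte[] v = memcache.get(key);
        return (v != null || snapshot == null) ? v : snapshot.get(key);
      }

      // Risky ordering: once the live memcache is zeroed, the snapshot is the
      // only copy of those edits, so a failure in flushToDisk() loses them.
      void flushUnsafe() throws IOException {
        TreeMap<String, byte[]> snap;
        synchronized (this) {
          snap = memcache;
          memcache = new TreeMap<String, byte[]>();  // zeroed; updates continue
        }
        flushToDisk(snap);                           // if this throws, 'snap' is gone
      }

      // Safer ordering: the snapshot is only discarded after the flush succeeds,
      // so if flushToDisk() is interrupted the edits are still held in memory
      // and a later flush attempt can retry them.
      void flushSafe() throws IOException {
        synchronized (this) {
          if (snapshot == null) {                    // don't overwrite an unflushed snapshot
            snapshot = memcache;
            memcache = new TreeMap<String, byte[]>();
          }
        }
        flushToDisk(snapshot);                       // may throw; snapshot stays referenced
        synchronized (this) {
          snapshot = null;                           // safe on disk, now let it go
        }
      }

      private void flushToDisk(TreeMap<String, byte[]> snap) throws IOException {
        // Stand-in for writing the snapshot out as a store file on HDFS.
      }
    }

The point of the contrast is ordering, not the data structures: flushUnsafe() discards the only in-memory copy of the edits before they are durable, while flushSafe() drops the snapshot only after flushToDisk() returns, so an HDFS outage or an error in the flush thread leaves the edits recoverable.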