Subject: Re: the occasion of the major compact?
From: yonghu <yongyong313@gmail.com>
To: user@hbase.apache.org
Date: Thu, 26 Jan 2012 22:00:19 +0100

Nicolas,

In my use case, I want to extract the deleted data. Hence, if I disable
major compaction, I can prevent HBase from actually deleting the data.
After extracting the deleted data, I can issue the major compact myself.

Regards

Yong

On Thu, Jan 26, 2012 at 8:02 PM, Nicolas Spiegelberg wrote:
> Yong,
>
> Can you please explain why you want to disable major compactions? What
> are the problems that you're currently seeing, or what are you worried
> will happen if a major compaction is allowed to occur? Right now, there
> is only an extremely small subset of cases where you must explicitly
> disable compactions. The use cases I know of are very complicated and
> require building StoreFile analysis tools underneath HBase, so I'm
> pretty sure you don't need this.
>
> Please also read my follow-up commentary explaining the major compaction
> logic:
> http://search-hadoop.com/m/JR9sK1xnbj21
> http://search-hadoop.com/m/X7W7q1xnbj21
>
> The vast majority of users need features completely unrelated to
> compactions. The compaction algorithm is an easy target to worry about.
>
> On 1/26/12 7:06 AM, "yonghu" wrote:
>
>>Hello Mikael,
>>
>>I think disabling major compaction in the timed and client-issued
>>cases is not a problem. The problem is the size-based case. From the
>>mailing list, it only talks about minor compaction, not major
>>compaction, if I understand right. So I want to know if someone can
>>tell me how to disable major compaction in the size-based case.
>>
>>Thanks
>>
>>Yong
>>
>>I saw a description indicating that the size of the store file can also
>>trigger a major compaction.
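For Yong's use case of reading deleted-but-not-yet-compacted data, a raw scan may be an alternative to suppressing compactions entirely: the client `Scan.setRaw` API (exposed in the shell as the `RAW` scan option in recent HBase versions; check that your version has it) returns cells together with their delete markers, as long as a major compaction has not yet physically removed them. A rough sketch, where the table name 't1' is a placeholder and a running cluster is assumed:

```shell
# Raw scan: returns deleted cells and their delete markers, provided a
# major compaction has not yet purged them from the store files.
# 't1' is a hypothetical table name.
echo "scan 't1', {RAW => true, VERSIONS => 10}" | hbase shell
```

This only works until a major compaction runs, which matches the plan described above: extract the deleted data first, then issue the major compact manually.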
>>
>>On Thu, Jan 26, 2012 at 3:54 PM, Mikael Sitruk wrote:
>>> Yong hi
>>>
>>> As far as I know, setting hbase.hregion.majorcompaction to 0 will
>>> disable the time-based trigger only.
>>> Clients are always able to invoke a major compact, no matter what the
>>> value of hbase.hregion.majorcompaction is.
>>>
>>> Perhaps client invocation of compaction can be disabled with the
>>> security package.
>>>
>>> Anyway, I'm digging into 0.92; I hope to get those insights soon.
>>>
>>> Mikael.S
>>>
>>> On Thu, Jan 26, 2012 at 4:39 PM, yonghu wrote:
>>>
>>>> Thanks for your response.
>>>>
>>>> I knew that major compaction can be triggered by a client, by time,
>>>> and by size. In my situation, I have to disable major compaction
>>>> entirely. So, if I set 'hbase.hregion.majorcompaction' to 0, will it
>>>> disable all three triggers, or do I have to set them separately for
>>>> each case? BTW, my HBase version is 0.92.
>>>>
>>>> Thanks!
>>>>
>>>> Yong
>>>>
>>>> On Thu, Jan 26, 2012 at 3:09 PM, Mikael Sitruk
>>>> wrote:
>>>> > Look at the thread http://search-hadoop.com/m/GHUWQ1xnbj21; it
>>>> > explains a lot about major compaction and the enhancements made
>>>> > over versions.
>>>> >
>>>> > Mikael.S
>>>> >
>>>> >
>>>> > On Thu, Jan 26, 2012 at 3:51 PM, Damien Hardy
>>>> wrote:
>>>> >
>>>> >> Le 26/01/2012 14:43, yonghu a écrit :
>>>> >> > Hello,
>>>> >> >
>>>> >> > I read this blog: http://outerthought.org/blog/465-ot.html. It
>>>> >> > mentions that a major compaction will occur every 24 hours. My
>>>> >> > question is whether there are any other conditions which can
>>>> >> > trigger a major compaction. For example, when the size of a
>>>> >> > store file reaches the threshold (I think this causes a minor
>>>> >> > compaction or a region split, not a major compaction, but I'm
>>>> >> > not quite sure).
>>>> >> >
>>>> >> > Thanks!
>>>> >> >
>>>> >> > Yong
>>>> >>
>>>> >> Hello,
>>>> >> I think a major compaction is also triggered when there is a
>>>> >> massive delete on the table, or when a table attribute is changed,
>>>> >> like TTL (which is susceptible of removing a lot of versions/rows)
>>>> >> or COMPRESSION, which gains a lot of disk space on each region.
>>>> >>
>>>> >> Cheers,
>>>> >>
>>>> >> --
>>>> >> Damien
>>>> >>
>>>> >>
>>>> >
>>>> >
>>>> > --
>>>> > Mikael.S
>>>
>>>
>>>
>>> --
>>> Mikael.S
>
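To summarize the knobs discussed in this thread, here is a minimal sketch of the 0.92-era configuration and commands, assuming a running cluster; the table name 't1' is a placeholder, and the property name is the one named by Mikael above:

```shell
# hbase-site.xml: setting the period to 0 disables only the time-based
# major compaction trigger (the ~24h default mentioned in the thread):
#
#   <property>
#     <name>hbase.hregion.majorcompaction</name>
#     <value>0</value>
#   </property>
#
# Clients can still force a major compaction explicitly, e.g. from the
# shell ('t1' is a hypothetical table name):
echo "major_compact 't1'" | hbase shell
```

As Mikael notes, there is no configuration that blocks a client-issued major compact; the setting above only stops the periodic trigger.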