hbase-user mailing list archives

From Tianying Chang <tych...@gmail.com>
Subject Re: no-flush based snapshot policy?
Date Tue, 25 Mar 2014 23:47:24 GMT
Cool. Thanks for the confirmation. For my case, I can just add the new
config key in hbase-site.xml and restart the RS for it to take effect,
and then use it all the time. I'm not sure that is good enough for people
who also want both flush and no-flush snapshots available without
restarting the RS.
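Concretely, something like this in hbase-site.xml (the key name here is
just what I used in my hack, not an official HBase property):

  <property>
    <name>hbase.snapshot.region.skipflush</name>
    <value>true</value>
  </property>

followed by a rolling restart of the RSs to pick it up.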

Thanks
Tian-Ying


On Tue, Mar 25, 2014 at 4:17 PM, Matteo Bertozzi <theo.bertozzi@gmail.com> wrote:

> There is no data corruption or any other kind of problem from skipping
> the flush.
> It will just not include the memstore data in the snapshot, which is
> basically what you are asking for.
> So it sounds good to me if you want to add that flag.
>
> Having a shell flag will probably be "harder" to implement, since you
> have to pass it to the master, add it to the snapshot information in the
> ZK procedure, and then read it from the RS.
> Not a big thing, but you have to touch lots of different places. It is
> not like a static conf property that you read on the RS and you are done.
>
> Matteo
>
>
>
> On Tue, Mar 25, 2014 at 2:38 PM, Tianying Chang <tychang@gmail.com> wrote:
>
> > Hi,
> >
> > I need a new snapshot policy. Basically, I cannot disable the table,
> > but I also don't need the snapshot to be so "consistent" that all RSs
> > coordinate to flush the regions before taking the snapshot, since that
> > slows down the production cluster when the flush takes too long. It is
> > OK for me if the snapshot misses the data in the memstore, because I
> > will use WALPlayer to fill the data gap that is not in the snapshot but
> > has been persisted (in the WAL). So I should have no data loss.
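> > For reference, the gap-fill step I have in mind is just the stock
> > WALPlayer MapReduce job, run on the new cluster after the snapshot is
> > restored (the WAL directory and table name below are from my setup, so
> > adjust to yours):
> >
> >   hbase org.apache.hadoop.hbase.mapreduce.WALPlayer \
> >     /hbase/oldWALs my_table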
> >
> > As a quick hack to test this in my hbase backup workflow, I just added
> > a config key and skipped the flushcache() call in
> > *regionserver/snapshot/FlushSnapshotSubprocedure.java*, something like
> > below. It seems to work fine for me: all data is recovered in a new
> > cluster after running WALPlayer.
> >
> > Does anyone see any problem with this, like data corruption, etc.?
> >
> >
> > LOG.debug("Flush Snapshotting region " + region.toString()
> >     + " started...");
> > if (noFlushNeeded) {
> >   LOG.debug("No flush before taking snapshot");
> > } else {
> >   region.flushcache();
> > }
> >
> > If there is no data corruption issue with this policy, I can add a
> > parameter to the hbase shell, so that people can dynamically decide
> > when to use a no-flush snapshot.
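> > For example, with a hypothetical shell option (the SKIP_FLUSH name is
> > just a sketch, nothing implemented yet):
> >
> >   hbase> snapshot 'my_table', 'my_table-snap1', {SKIP_FLUSH => true}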
> >
> > Thanks
> > Tian-Ying
> >
> > On Tue, Mar 25, 2014 at 2:08 PM, Tianying Chang <tychang@gmail.com>
> wrote:
> >
> > > Hi,
> > >
> > > I need a new snapshot policy which sits in between the disabled and
> > > flushed versions. So, basically:
> > > I cannot disable the table, but I also don't need the snapshot to be
> > > so "consistent" that all RSs coordinate to flush the regions before
> > > taking the snapshot.
> > >
> >
>
