hbase-dev mailing list archives

From lars hofhansl <la...@apache.org>
Subject Re: Changing it so we do NOT archive hfiles by default
Date Thu, 20 Nov 2014 20:21:43 GMT
Interesting that removing the files (which is just a metadata operation in the NN) is slower
than writing the files with all their data in the first place. Is it really the NN that is
the gating factor, or is it the algorithm we have in HBase? I remember we had a similar issue
with the HLog removal, where we rescanned the WAL directory over and over for no good reason,
and the nice guys from Flurry did a fix.
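(A sketch of the two shapes, with made-up names -- this is not the actual cleaner code, just
an illustration of the pattern:

import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LogCleaningSketch {

  // The bad shape: re-list the directory from the NN for every file we remove.
  static void cleanRescanning(FileSystem fs, Path dir, long ttlMs) throws IOException {
    boolean removed = true;
    while (removed) {
      removed = false;
      for (FileStatus f : fs.listStatus(dir)) {      // full NN listing each pass
        if (System.currentTimeMillis() - f.getModificationTime() > ttlMs) {
          fs.delete(f.getPath(), false);
          removed = true;
          break;                                     // ...then start the scan over
        }
      }
    }
  }

  // The fixed shape: one NN listing per cycle, then delete everything expired.
  static void cleanBatched(FileSystem fs, Path dir, long ttlMs) throws IOException {
    long now = System.currentTimeMillis();
    for (FileStatus f : fs.listStatus(dir)) {        // single listing
      if (now - f.getModificationTime() > ttlMs) {
        fs.delete(f.getPath(), false);
      }
    }
  }
}

Either way each delete is a single NN op; the difference is how many listings we pay for.)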
We have a lot of stuff relying on this now, so it should be done carefully. You thinking 1.0+,
or even earlier releases?

-- Lars
      From: Stack <stack@duboce.net>
 To: HBase Dev List <dev@hbase.apache.org> 
 Sent: Thursday, November 20, 2014 11:08 AM
 Subject: Changing it so we do NOT archive hfiles by default
   
I think we should swap the default that has us archive hfiles rather than
just outright delete them when we are done with them. The current
configuration works for the minority of us who are running backup tools.
For the rest of us, our clusters are doing unnecessary extra work.

Background:

Since 0.94 (https://issues.apache.org/jira/browse/HBASE-5547), when we are
done with an hfile, it is moved to the 'archive' (hbase/.archive)
directory. A thread in the master then removes hfiles older than some
configured time. We do this, rather than just deleting hfiles, to facilitate
backup tools -- to let them have a say in when an hfile is safe to remove.
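(For reference, the hook that gives backup tools that say is the cleaner plugin chain on the
master. A rough sketch of how a backup tool could plug in -- this assumes the
BaseHFileCleanerDelegate API as it looks around 0.98/1.0, exact signatures differ between
versions, and every name below other than the base class is made up:

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.master.cleaner.BaseHFileCleanerDelegate;

public class BackupAwareHFileCleaner extends BaseHFileCleanerDelegate {

  @Override
  public boolean isFileDeletable(FileStatus fStat) {
    // Only let the master's cleaner chore delete this archived hfile once the
    // backup tool's own bookkeeping says it has been copied off-cluster.
    return alreadyBackedUp(fStat.getPath());
  }

  // Hypothetical stand-in for the backup tool's catalog lookup.
  private boolean alreadyBackedUp(Path hfile) {
    return false;
  }
}

A delegate like this gets registered via hbase.master.hfilecleaner.plugins in hbase-site.xml;
the TTL-based cleaner the master runs by default is just another member of the same chain.)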

The subject of HBASE-5547 says the archiving behavior should only happen
when the cluster is in 'backup mode', but as it turned out later in the
issue discussion, the implementation was significantly easier if we just
always archive, and that is what we ended up implementing and committing.

These last few days, a few of us have been helping a user on a large
cluster who is (temporarily) doing loads of compactions, with the replaced
hfiles being moved to hbase/.archive. The cleaning thread in the master
cannot delete the hfiles fast enough, so there is a buildup going on -- so
much so that it's slowing the whole cluster down (NN operations over tens
of millions of files).
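(Rough numbers, just for a sense of scale: if the cleaner chore deletes files one RPC at a
time and the NN can absorb, say, a few hundred such ops a second on top of its regular load,
then working off tens of millions of archived files takes the better part of a day -- plenty
of time for compactions to pile the backlog higher.)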

Any problem swapping the default and having users opt in to archiving?
(I'd leave it as is in released software.) I will also take a look at
having the cleaner thread do more work per cycle.

Thanks,
St.Ack


  