hbase-issues mailing list archives

From "Dave Latham (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-9208) ReplicationLogCleaner slow at large scale
Date Wed, 14 Aug 2013 17:05:49 GMT

    [ https://issues.apache.org/jira/browse/HBASE-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13739883#comment-13739883 ]

Dave Latham commented on HBASE-9208:
------------------------------------

The CleanerChore is shared code between cleaning up HLogs in .oldlogs (a single directory
with many files) and cleaning up HFiles in archive (which can have many nested subdirectories).
The proposal here assumes that as long as we get the file listings from HDFS first and check
for references to those files second, we're safe to delete any file that has no such references
(of course, the existing code makes the same assumption).  So that leaves a choice:

(a) We can do this batching one directory at a time. That should still solve the issue for
HLogs (so long as they stay in a flat directory) but wouldn't let the HFile archive benefit
as much from the optimization. Or:

(b) We can first do a full recursive load of all files under the base directory and call a
batch filter across all of them.  Then, in order to remove now-empty subdirectories, we need
some bookkeeping to track which ones we believe may now be empty (of course, some
subdirectories may have had new entries created in the meantime).

Since I'm focused for the moment on HLogs, I'm planning to stick with the simpler (a) for
now, unless I hear some clamoring for the fuller solution (b).  A rough sketch of what (a)
might look like is below.
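
To make (a) concrete, here's one possible shape for it.  This is only a sketch: the names
(BatchedDirectoryCleaner, checkAndDeleteBatch, BatchedCleanerDelegate, getDeletableFiles) are
made up for illustration and are not the actual CleanerChore or FileCleanerDelegate API.

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BatchedDirectoryCleaner {

  // Hypothetical batch-aware delegate, mirroring the List<FileStatus>
  // proposal from the issue description below.
  public interface BatchedCleanerDelegate {
    /** Returns the subset of the given files that are safe to delete. */
    List<FileStatus> getDeletableFiles(List<FileStatus> files);
  }

  // Option (a): list one directory, hand the whole listing to the delegate in
  // a single batch, then recurse into subdirectories.  For a flat directory
  // like .oldlogs this means one reference lookup (e.g. one read of the
  // ZooKeeper replication queues) per chore run instead of one per file.
  public void checkAndDeleteBatch(FileSystem fs, Path dir,
      BatchedCleanerDelegate cleaner) throws IOException {
    FileStatus[] entries = fs.listStatus(dir);
    if (entries == null) {
      return; // directory vanished between listings
    }
    List<FileStatus> files = new ArrayList<FileStatus>();
    for (FileStatus entry : entries) {
      if (entry.isDir()) {
        // Nested archive directories still get their own, smaller batches,
        // which is why (a) helps the HFile archive less than (b) would.
        checkAndDeleteBatch(fs, entry.getPath(), cleaner);
      } else {
        files.add(entry);
      }
    }
    // One batched reference check covers every file in this directory.
    for (FileStatus deletable : cleaner.getDeletableFiles(files)) {
      fs.delete(deletable.getPath(), false);
    }
  }
}
{code}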

One other question came up while doing some work on CleanerChore for this.  If an IOException
occurs while deleting an entry in the top-level directory, the chore continues attempting to
delete the other entries.  However, if an IOException is thrown while deleting an entry in a
subdirectory, it gives up on the remaining entries in that subdirectory.  I'd prefer to see
consistency (and it would make it easier to share the code): either we give up the entire
current chore iteration on an unexpected file system IOException (i.e. we still tolerate the
IOException from deleting a non-empty directory), or we attempt to continue crawling all
subentries, as in the sketch below.  Does anyone have an opinion on which is better (or
disagree about making the behavior consistent between top-level entries and sublevel entries)?
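
To illustrate the second option, the consistent "continue crawling" behavior could look
roughly like the following.  Again, traverseAndDelete and checkAndDelete are made-up names
for the sketch, not the existing CleanerChore methods.

{code}
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ContinueOnErrorTraversal {

  private static final Log LOG = LogFactory.getLog(ContinueOnErrorTraversal.class);

  // Consistent "attempt all subentries" behavior: an IOException on one entry
  // is logged and we move on to its siblings, at every level of the tree,
  // rather than aborting the rest of the subdirectory.
  public void traverseAndDelete(FileSystem fs, Path dir) {
    FileStatus[] entries;
    try {
      entries = fs.listStatus(dir);
    } catch (IOException e) {
      LOG.warn("Couldn't list " + dir + ", skipping it this iteration", e);
      return;
    }
    if (entries == null) {
      return;
    }
    for (FileStatus entry : entries) {
      try {
        if (entry.isDir()) {
          traverseAndDelete(fs, entry.getPath());
        }
        checkAndDelete(fs, entry);
      } catch (IOException e) {
        // Same tolerance at every level: log it and keep going.  This also
        // covers the expected failure of deleting a still-non-empty directory.
        LOG.warn("Couldn't delete " + entry.getPath() + ", continuing", e);
      }
    }
  }

  // Placeholder for the real check: delete only if nothing references the
  // entry.  Deleting a directory non-recursively fails unless it's empty.
  private void checkAndDelete(FileSystem fs, FileStatus entry) throws IOException {
    fs.delete(entry.getPath(), false);
  }
}
{code}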
                
> ReplicationLogCleaner slow at large scale
> -----------------------------------------
>
>                 Key: HBASE-9208
>                 URL: https://issues.apache.org/jira/browse/HBASE-9208
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Dave Latham
>            Assignee: Dave Latham
>             Fix For: 0.94.12
>
>
> At a large scale the ReplicationLogCleaner fails to clean up .oldlogs as fast as the
cluster is producing them.  For each old HLog file that has been replicated and should be
deleted, the ReplicationLogCleaner checks every replication queue in ZooKeeper before removing
it.  As a cluster scales up, both the number of files to delete and the number of replication
queues to check per file grow, so the cleanup chore's work scales quadratically.  In our case
it reached the point where the oldlogs were growing faster than they were being cleaned up.
> We're now running with a patch that allows the ReplicationLogCleaner to refresh its list
of files in the replication queues from ZooKeeper just once for each batch of files the CleanerChore
wants to evaluate.
> I'd propose updating FileCleanerDelegate to take a List<FileStatus> rather than a single
file at a time.  This would allow file cleaners that check an external resource for
references, such as ZooKeeper (for ReplicationLogCleaner) or HDFS (for SnapshotLogCleaner,
which looks like it may have similar trouble at scale), to load those references once per
batch rather than once for every log.
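
> A rough sketch of that interface change, with the caveat that the exact method name and
signature are illustrative rather than final:

{code}
import java.util.List;

import org.apache.hadoop.fs.FileStatus;

public interface FileCleanerDelegate {
  /**
   * Returns the subset of the given files that are safe to delete.  Seeing
   * the whole batch at once lets an implementation load its external
   * references (the ZooKeeper replication queues, snapshot files in HDFS,
   * ...) a single time per batch.  Name and signature here are illustrative.
   */
  List<FileStatus> getDeletableFiles(List<FileStatus> files);
}
{code}
> A ReplicationLogCleaner built on this would read the replication queues from ZooKeeper once
at the top of getDeletableFiles and check every file in the batch against that single snapshot,
instead of hitting ZooKeeper once per log.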

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
