hbase-issues mailing list archives

From "Dave Latham (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HBASE-9208) ReplicationLogCleaner slow at large scale
Date Thu, 15 Aug 2013 21:45:52 GMT

     [ https://issues.apache.org/jira/browse/HBASE-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dave Latham updated HBASE-9208:

    Attachment: HBASE-9208.patch

Attaching HBASE-9208.patch with the following changes:
 - Updated FileCleanerDelegate from {{boolean isFileDeletable(FileStatus fStat);}} to {{Iterable<FileStatus>
getDeletableFiles(Iterable<FileStatus> files);}}
 - Added an abstract BaseFileCleanerDelegate that implements the new batch method in terms
of the old per-file method, so that existing cleaners keep working by extending this base class.
 - Updated CleanerChore to make a single call to each cleaner with a batch of all files for
each directory (option (a) in comment above).  It also now catches and logs unexpected IOExceptions
for each subdirectory entry rather than aborting, to be consistent with the behavior of the
top-level directory as mentioned above and suggested by Jesse.

It is also available on review board at:

I have not yet updated SnapshotLogCleaner to take advantage of the batching interface; I intend
to create a separate JIRA for that.
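For readers without the patch in front of them, the interface change above can be sketched
roughly as follows.  This is a minimal, self-contained illustration, not the patch itself:
FileStatus is stubbed here (the real class is org.apache.hadoop.fs.FileStatus), and only the
two methods under discussion are shown:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stub so the sketch compiles on its own; the real type is
// org.apache.hadoop.fs.FileStatus.
class FileStatus {
    private final String path;
    FileStatus(String path) { this.path = path; }
    String getPath() { return path; }
}

// The batched delegate interface, per the change described above.
interface FileCleanerDelegate {
    Iterable<FileStatus> getDeletableFiles(Iterable<FileStatus> files);
}

// Base class implementing the batch method in terms of the old per-file
// check, so existing single-file cleaners keep working unchanged by
// extending this class and overriding isFileDeletable.
abstract class BaseFileCleanerDelegate implements FileCleanerDelegate {
    @Override
    public Iterable<FileStatus> getDeletableFiles(Iterable<FileStatus> files) {
        List<FileStatus> deletable = new ArrayList<>();
        for (FileStatus f : files) {
            if (isFileDeletable(f)) {
                deletable.add(f);
            }
        }
        return deletable;
    }

    // Old per-file hook, retained for backward compatibility.
    protected abstract boolean isFileDeletable(FileStatus fStat);
}
```

A cleaner that can answer the question more cheaply for a whole batch (like
ReplicationLogCleaner) overrides getDeletableFiles directly instead.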

A couple more questions arose during the work:
 - ReplicationLogCleaner.stop contains:
// Not sure why we're deleting a connection that we never acquired or used
Is this correct with the latest work on connection management?
 - The cleaner hierarchy may now be deeper than needed.  In particular there is a FileCleanerDelegate
interface which is implemented by a BaseFileCleanerDelegate, which is subclassed by each of
BaseHFileCleanerDelegate (which adds only the stopped field) and BaseLogCleanerDelegate (which
includes a deprecated isLogDeletable method).  In turn these are subclassed by the concrete
implementations.  Should the base classes be consolidated?

Reviews and input would be greatly appreciated.
> ReplicationLogCleaner slow at large scale
> -----------------------------------------
>                 Key: HBASE-9208
>                 URL: https://issues.apache.org/jira/browse/HBASE-9208
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Dave Latham
>            Assignee: Dave Latham
>             Fix For: 0.94.12
>         Attachments: HBASE-9208.patch
> At a large scale the ReplicationLogCleaner fails to clean up .oldlogs as fast as the
> cluster is producing them.  For each old HLog file that has been replicated and should be
> deleted, the ReplicationLogCleaner checks every replication queue in ZooKeeper before removing
> it.  This means that as a cluster scales up, both the number of files to delete and the
> time to delete each file grow, so the cleanup chore scales quadratically.  In our case it
> reached the point where the oldlogs were growing faster than they were being cleaned up.
> We're now running with a patch that allows the ReplicationLogCleaner to refresh its list
> of files in the replication queues from ZooKeeper just once for each batch of files the
> CleanerChore wants to evaluate.
> I'd propose updating FileCleanerDelegate to take a List<FileStatus> rather than
> one file at a time.  This would allow file cleaners that check an external resource for
> references, such as ZooKeeper (for ReplicationLogCleaner) or HDFS (for SnapshotLogCleaner,
> which looks like it may also have similar trouble at scale), to load those references once
> per batch rather than for every log.
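The batched approach described in the quoted report can be sketched as follows.  This is an
illustration of the idea only, not the actual ReplicationLogCleaner code: the ZooKeeper scan
is replaced by a pre-loaded set of log names, and all names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Sketch of the batched strategy: snapshot the set of logs still referenced
// in replication queues once per batch, then answer every per-file question
// from that in-memory set.  One queue scan serves the whole batch, so the
// cost is O(files + queue entries) instead of O(files * queue entries).
class BatchedReplicationLogCleaner {
    private final Set<String> queuedLogs;

    // In the real cleaner this set would be refreshed from ZooKeeper at the
    // start of each batch; here it is injected for illustration.
    BatchedReplicationLogCleaner(Set<String> queuedLogs) {
        this.queuedLogs = queuedLogs;
    }

    // A log is deletable only if no replication queue still references it.
    List<String> getDeletableFiles(List<String> candidates) {
        List<String> deletable = new ArrayList<>();
        for (String log : candidates) {
            if (!queuedLogs.contains(log)) {
                deletable.add(log);
            }
        }
        return deletable;
    }
}
```

The per-file variant instead re-reads every replication queue for each candidate log, which
is what produced the quadratic behavior described above.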

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
