hbase-issues mailing list archives

From "Ted Yu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-18084) Improve CleanerChore to clean from directory which consumes more disk space
Date Sat, 20 May 2017 14:45:04 GMT

    [ https://issues.apache.org/jira/browse/HBASE-18084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16018488#comment-16018488 ]

Ted Yu commented on HBASE-18084:

171	          return -1;
172	        } else if (f1ConsumedSpace < f2ConsumedSpace) {
The 'else' can be omitted since the previous if block returns.
164	      HashMap<FileStatus, Long> directorySpaces = new HashMap<FileStatus, Long>();
The map is declared inside the comparator, which is passed the dirs List. How many directories would
actually find their cached lengths?
224	      LOG.debug("Prepared to delete files in directory: " + dirs);
Would the entire list of directories be logged here? nit: directory -> directories
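
To illustrate the caching concern above: a hypothetical sketch of the pattern under review, with the space consumption precomputed once into a map *before* sorting (so every directory finds its cached length) and with the redundant 'else' after return dropped. Directory names, sizes, and the String-keyed map are illustrative stand-ins for the patch's FileStatus-based code, not the actual HBASE-18084 implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DirectorySpaceSort {

  // Sort directories so the largest space consumer is cleaned first.
  // consumedSpace is populated once, outside the comparator, so lookups
  // during sorting always hit the cache.
  static List<String> sortBySpaceDesc(Map<String, Long> consumedSpace) {
    List<String> dirs = new ArrayList<>(consumedSpace.keySet());
    dirs.sort((d1, d2) -> {
      long s1 = consumedSpace.get(d1);
      long s2 = consumedSpace.get(d2);
      if (s1 > s2) {
        return -1;
      }
      // no 'else' needed: the previous branch already returned
      if (s1 < s2) {
        return 1;
      }
      return 0;
    });
    return dirs;
  }

  public static void main(String[] args) {
    // Illustrative sizes only
    Map<String, Long> sizes = new HashMap<>();
    sizes.put("archive/table-a", 500L);
    sizes.put("archive/table-b", 1800L);
    sizes.put("archive/table-c", 20L);
    System.out.println(sortBySpaceDesc(sizes));
  }
}
```

If instead the map were declared inside the comparator (as in the patch at line 164), it would be re-created for each sort invocation and no directory would ever see a cached length, which is the point of the question above.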

> Improve CleanerChore to clean from directory which consumes more disk space
> ---------------------------------------------------------------------------
>                 Key: HBASE-18084
>                 URL: https://issues.apache.org/jira/browse/HBASE-18084
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Yu Li
>            Assignee: Yu Li
>         Attachments: HBASE-18084.patch
> Currently CleanerChore cleans directories in dictionary order rather than starting from the
directory with the largest space usage, so when data abnormally accumulates to a huge volume
in the archive directory, the cleaning speed might not be enough.
> This proposal is another improvement, working together with HBASE-18083, to resolve our
online issue (the archive dir consumed more than 1.8PB of SSD space).

