hadoop-common-dev mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2636) [hbase] Make cache flush triggering less simplistic
Date Tue, 29 Jan 2008 20:12:34 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12563645#action_12563645 ]

stack commented on HADOOP-2636:

In HLogKey, was it just a case of a misnamed data member?  All along it was a store name but
we were calling it a region name?  See below:

-  Text regionName = new Text();
+  Text storeName = new Text();

Can this string creation be avoided in HStore; e.g. can storeName be Text?

+            || (key.getStoreName().toString().compareTo(storeName) != 0)
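The per-key String creation could be avoided along these lines. A minimal sketch with plain java.lang stand-ins (not hadoop's actual Text API): keep the comparand in byte form once and compare raw bytes, so no String is allocated per log entry.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Sketch with hypothetical names: compare the store name's raw bytes
// instead of calling toString() on every key.
public class StoreNameCompare {
    // Compares the bytes directly; no intermediate String objects.
    public static boolean sameStore(byte[] keyStoreName, byte[] storeName) {
        return Arrays.equals(keyStoreName, storeName);
    }

    public static void main(String[] args) {
        byte[] key = "info".getBytes(StandardCharsets.UTF_8);
        byte[] store = "info".getBytes(StandardCharsets.UTF_8);
        System.out.println(sameStore(key, store)); // prints "true"
    }
}
```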

Logging below at INFO level seems inappropriate:

+      LOG.info("Not flushing cache for " + storeName +
+          " because it has 0 entries");

This kind of logging doesn't help (though I think this log is just a line moved from elsewhere):

+            LOG.debug("nothing to compact for " + this.storeName);

Should say why there is nothing to compact -- e.g. only one file present or holds references.
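For example, the skip message could carry the reason along these lines; a hypothetical helper, names illustrative rather than from the patch:

```java
// Hypothetical helper: build a "nothing to compact" message that
// says why the compaction was skipped.
public class CompactSkipMessage {
    public static String skipMessage(String storeName, int fileCount, boolean holdsReferences) {
        String reason = holdsReferences ? "store holds references"
                : "only " + fileCount + " file(s) present";
        return "nothing to compact for " + storeName + " (" + reason + ")";
    }

    public static void main(String[] args) {
        System.out.println(skipMessage("info", 1, false));
        // prints "nothing to compact for info (only 1 file(s) present)"
    }
}
```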

Just remove rather than comment out?

-      HStoreKey rowKey = new HStoreKey(row, timestamp);
+/*      HStoreKey rowKey = new HStoreKey(row, timestamp); */

HStoreSize inner class is no longer needed because the check is local to HStore where before
it was higher up in HRegion?  The info HStoreSize carried is now all available in the context
where the check is being done?

Nice how you cleaned up lease-making/updating.

Why make RowMap non-private?  It's used by inner classes?

The below no longer makes use of TextSequences?  Any reason for that?  (TS was a means of
cutting down on object creation.  In profiling, using TSs made a big difference.)
-      Text qualifier = HStoreKey.extractQualifier(col);
+      Text member = HStoreKey.extractMember(col);
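The idea behind TextSequence can be sketched in plain Java; the class and method names below are hypothetical stand-ins, not the actual HStoreKey API. The point is to hand back a zero-copy view of the member part of "family:member" rather than allocating a new object per column.

```java
import java.nio.CharBuffer;

// Sketch (hypothetical names): expose the member/qualifier portion of
// a column name as a view over the original sequence, not a copy.
public class ColumnView {
    public static CharSequence extractMember(String column) {
        int colon = column.indexOf(':');
        // CharBuffer.wrap returns a read-only view over the existing
        // sequence; nothing is copied until toString() is called.
        return CharBuffer.wrap(column, colon + 1, column.length());
    }

    public static void main(String[] args) {
        System.out.println(extractMember("info:col")); // prints "col"
    }
}
```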

> [hbase] Make cache flush triggering less simplistic
> ---------------------------------------------------
>                 Key: HADOOP-2636
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2636
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: contrib/hbase
>    Affects Versions: 0.16.0
>            Reporter: stack
>            Assignee: Jim Kellerman
>             Fix For: 0.17.0
>         Attachments: patch.txt, patch.txt, patch.txt
> When flusher runs -- it's triggered when the sum of all Stores in a Region > a configurable
> max size -- we flush all Stores though a Store memcache might have but a few bytes.
> I would think Stores should only dump their memcache to disk if they have some substance.
> The problem becomes more acute, the more families you have in a Region.
> Possible behaviors would be to dump the biggest Store only, or only those Stores >
> 50% of max memcache size.  Behavior would vary depending on the prompt that provoked the
> flush.  It would also log why the flush is running: optional or > max size.
> This issue comes out of HADOOP-2621.
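The selection policies floated in the description could be sketched as follows; class and method names are hypothetical, not from the patch:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Sketch of the two candidate flush policies: flush only the largest
// store, or flush every store whose memcache exceeds half the
// configured maximum.
public class FlushPolicy {
    // Stores whose memcache size is greater than half of maxSize.
    public static List<String> storesOverHalfMax(Map<String, Long> memcacheSizes, long maxSize) {
        List<String> toFlush = new ArrayList<>();
        for (Map.Entry<String, Long> e : memcacheSizes.entrySet()) {
            if (e.getValue() > maxSize / 2) {
                toFlush.add(e.getKey());
            }
        }
        return toFlush;
    }

    // The single store with the biggest memcache.
    public static String biggestStore(Map<String, Long> memcacheSizes) {
        return Collections.max(memcacheSizes.entrySet(), Map.Entry.comparingByValue()).getKey();
    }
}
```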

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
