hadoop-common-dev mailing list archives

From "Jim Kellerman (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1644) [hbase] Compactions should not block updates
Date Fri, 03 Aug 2007 17:24:52 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12517588 ]

Jim Kellerman commented on HADOOP-1644:
---------------------------------------

Compactions should never block updates.

What follows is a proposed solution:

1. A compaction is needed. This is the hard part. What conditions should trigger a compaction?
Right after a split? Any other time?

2. A new thread is started to do the compaction. Since all the MapFiles (SSTables) are immutable,
the HStore can continue to service reads from the existing MapFiles while the compaction thread
is creating the new compacted MapFile. The thread takes a snapshot of the MapFiles that exist at
the time it is started and acts only on those; cache flushes will create new MapFiles that are
not part of the compaction (see the sketch after this list).

3. When the compaction is complete, the thread grabs a write lock on the HStore. (It may have
to wait a bit if there are scans in progress, but that's ok.)

4. Once the lock is acquired, the newly created compacted MapFile is put into place and the
MapFiles it was built from are removed from the store's active set. (This holds the lock for
only a very short time.)

5. The lock is then released, and the HStore services requests from the newly compacted MapFile
plus any new MapFiles created by cache flushes since the compaction started.

6. The MapFiles that were the input to the compaction can now be deleted.
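
Taken together, steps 2 through 6 amount to: snapshot the current set of MapFiles, merge them
in the background with no lock held, then swap the result in under a brief write lock and delete
the inputs. Below is a minimal Java sketch of that flow, assuming hypothetical names
(CompactableStore, StoreFile, compactInto); this is not the actual HStore/MapFile API, just an
illustration of the locking pattern.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    /** Hypothetical stand-in for an immutable on-disk MapFile. */
    interface StoreFile {
        /** Placeholder for the long-running merge of the input files. */
        static StoreFile compactInto(List<StoreFile> inputs) throws IOException {
            throw new UnsupportedOperationException("sketch only");
        }
        void delete() throws IOException;
    }

    class CompactableStore {
        // Readers hold the read lock; the swap in step 4 takes the write lock.
        private final ReadWriteLock lock = new ReentrantReadWriteLock();
        // Active on-disk files. Cache flushes append new files while a compaction runs.
        private final List<StoreFile> activeFiles = new ArrayList<StoreFile>();

        /** Steps 2-6: intended to run in a separate thread so reads and flushes continue. */
        void compact() throws IOException {
            // Step 2: snapshot the files that exist now; later flushes are not included.
            final List<StoreFile> inputs;
            lock.readLock().lock();
            try {
                inputs = new ArrayList<StoreFile>(activeFiles);
            } finally {
                lock.readLock().unlock();
            }
            if (inputs.size() < 2) {
                return; // nothing worth compacting
            }

            // Long-running merge of the immutable inputs into one new file; no lock held.
            StoreFile compacted = StoreFile.compactInto(inputs);

            // Steps 3-5: swap the compacted file into place under a brief write lock.
            lock.writeLock().lock();
            try {
                activeFiles.removeAll(inputs);
                activeFiles.add(0, compacted); // the oldest data now lives in the compacted file
            } finally {
                lock.writeLock().unlock();
            }

            // Step 6: the inputs are no longer referenced and can be deleted.
            for (StoreFile f : inputs) {
                f.delete();
            }
        }
    }

The key point is that the write lock is held only for the list swap, so reads and cache flushes
are blocked for milliseconds rather than for the duration of the merge.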

> [hbase] Compactions should not block updates
> --------------------------------------------
>
>                 Key: HADOOP-1644
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1644
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: contrib/hbase
>    Affects Versions: 0.15.0
>            Reporter: stack
>            Assignee: stack
>             Fix For: 0.15.0
>
>
> Currently, compactions take a long time.  During compaction, updates are carried by the
> HRegion's memcache (+ backing HLog).  The memcache is unable to flush to disk until the
> compaction completes.
> Under sustained, substantial updates -- rows that contain multiple columns, one of which is
> a web page -- by multiple concurrent clients (10 in this case), a common hbase usage
> scenario, the memcache grows fast, often to orders of magnitude in excess of the configured
> 'flush-to-disk' threshold.
> This throws the whole system out of kilter.  When the memcache does get to flush after the
> compaction completes -- assuming you have sufficient RAM and the region server doesn't
> OOME -- the resulting on-disk file will be way larger than any other on-disk HStoreFile,
> bringing on a region split... but the resulting split will produce regions that themselves
> need to be immediately split because each half is beyond the configured limit, and so on...
> In another issue yet to be posted, tuning and some pointed memcache flushes make the above
> condition less extreme, but until compaction durations come close to the memcache flush
> threshold, compactions will remain disruptive.
> It's allowed that compactions may never be fast enough, as per the bigtable paper (this is
> a 'wish' issue).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

