hbase-issues mailing list archives

From "ramkrishna.s.vasudevan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-13082) Coarsen StoreScanner locks to RegionScanner
Date Tue, 01 Dec 2015 12:21:11 GMT

    [ https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15033596#comment-15033596 ]

ramkrishna.s.vasudevan commented on HBASE-13082:

bq. sortCompactedfiles - This has already been called on the new temp ArrayList. Do we still
need to create a new list? Maybe the best way is to create an ImmutableList here at the end
for setting to the instance variable.
Stack suggested not to use Guava data structures and APIs.
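A minimal JDK-only sketch of one way to get immutability without Guava (class and field
names here are illustrative, not the actual patch code):
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class CompactedFilesSketch {
  // hypothetical instance variable holding the compacted-files list
  private volatile List<String> compactedFiles = Collections.emptyList();

  void setCompactedFiles(List<String> files) {
    List<String> temp = new ArrayList<>(files); // the new temp ArrayList
    Collections.sort(temp);                     // the sortCompactedfiles step
    // JDK-only immutability instead of Guava's ImmutableList
    this.compactedFiles = Collections.unmodifiableList(temp);
  }
}
{code}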
bq. Can the marking be done within addCompactionResults?
If we did it that way, we would have to add the markCompactedAway call in every compaction
implementation; doing it this way is better. It is also better that we do it inside the
store-level lock.
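Roughly like this, as a sketch (the types and the lock field are illustrative, not the
actual patch code):
{code}
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class StoreSketch {
  interface CompactedMarkable { void markCompactedAway(); }

  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  void markFilesCompactedAway(List<? extends CompactedMarkable> compactedFiles) {
    lock.writeLock().lock(); // store-level lock held around the marking
    try {
      for (CompactedMarkable f : compactedFiles) {
        f.markCompactedAway(); // one call site, shared by every compaction impl
      }
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}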
bq. That looks bad - this API checks the status of compactedAway also.
The name is OK, I think. Is there any other name you would suggest?
bq. isCompactedAway() {
This is used in tests only, because we wanted to check some conditions in the tests.
bq. Better add a similar API in StoreFile also and use that, rather than using it directly
from the reader.
This cannot be done so easily, because all our StoreFileScanners are created over the Reader.
So if we want to access this from StoreFile, it would be a bigger change. In any case, there
is a follow-up that was discussed, to make things work with StoreFileManager and with
StoreFileInfo being used by the manager.
bq. removeCompactedFiles -> Every time the chore runs, we seem to make a ThreadPoolExecutor
and ExecutorService, which seems expensive. Why do we need a multi-threaded reader close?
close() is a costly operation, hence I did not want to do these closes serially when we know
we can do them in parallel. The reason for creating the executor every time was that I was
not sure what thread count should be allocated to the executor.
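A minimal sketch of the parallel close (illustrative only; a generic Closeable stands in
for the reader type, and the pool sizing is a placeholder):
{code}
import java.io.Closeable;
import java.io.IOException;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ParallelCloseSketch {
  static void closeAll(List<? extends Closeable> readers) throws InterruptedException {
    // built per chore run, since a good fixed thread count is not known up front
    ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, readers.size()));
    try {
      for (Closeable r : readers) {
        pool.submit(() -> {
          try {
            r.close(); // the costly part, now overlapped across readers
          } catch (IOException e) {
            // log and continue; one failed close should not block the others
          }
        });
      }
    } finally {
      pool.shutdown();
      pool.awaitTermination(1, TimeUnit.MINUTES);
    }
  }
}
{code}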
bq. Should be OK as it is a rare case also.. Just saying.
Yes, if there are very frequent flushes then this will happen; until the scan is over, that
snapshot will not be garbage collected.
bq. HRegion region; -> Do we need this ref to be of HRegion type? Can it be Region?
Okie, that can be done I think.

> Coarsen StoreScanner locks to RegionScanner
> -------------------------------------------
>                 Key: HBASE-13082
>                 URL: https://issues.apache.org/jira/browse/HBASE-13082
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>            Assignee: ramkrishna.s.vasudevan
>         Attachments: 13082-test.txt, 13082-v2.txt, 13082-v3.txt, 13082-v4.txt, 13082.txt,
13082.txt, HBASE-13082.pdf, HBASE-13082_1.pdf, HBASE-13082_12.patch, HBASE-13082_13.patch,
HBASE-13082_14.patch, HBASE-13082_15.patch, HBASE-13082_16.patch, HBASE-13082_17.patch, HBASE-13082_1_WIP.patch,
HBASE-13082_2.pdf, HBASE-13082_2_WIP.patch, HBASE-13082_3.patch, HBASE-13082_4.patch, HBASE-13082_9.patch,
HBASE-13082_9.patch, HBASE-13082_withoutpatch.jpg, HBASE-13082_withpatch.jpg, LockVsSynchronized.java,
gc.png, gc.png, gc.png, hits.png, next.png, next.png
> Continuing where HBASE-10015 left off.
> We can avoid locking (and memory fencing) inside StoreScanner by deferring to the lock
already held by the RegionScanner.
> In tests this shows quite a scan improvement and reduced CPU (the fences make the cores
wait for memory fetches).
> There are some drawbacks too:
> * All calls to RegionScanner need to remain synchronized.
> * Implementors of coprocessors need to be diligent in following the locking contract.
For example, Phoenix does not lock around RegionScanner.nextRaw() as required in the
documentation (not picking on Phoenix; this one is my fault, as I told them it's OK).
> * Possible starvation of flushes and compactions under heavy read load: RegionScanner
operations would keep getting the locks, and the flushes/compactions would not be able to
finalize the set of files.
> I'll have a patch soon.
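As a rough illustration of the locking idea above (a hypothetical simplification, not the
actual patch): entry points synchronize on the RegionScanner, and the StoreScanner underneath
relies on that outer lock instead of taking its own.
{code}
import java.util.List;

class RegionScannerSketch {
  private final StoreScannerSketch storeScanner = new StoreScannerSketch();

  // the single coarse lock: every entry point synchronizes here
  public synchronized boolean next(List<String> results) {
    return storeScanner.next(results); // runs under the RegionScanner lock
  }

  // flush/compaction-driven reader updates take the same lock
  public synchronized void updateReaders() {
    storeScanner.updateReaders();
  }
}

class StoreScannerSketch {
  // no locking or fencing here; the contract is that the caller holds the lock
  boolean next(List<String> results) { results.add("cell"); return false; }
  void updateReaders() { /* swap in the new set of store files */ }
}
{code}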

This message was sent by Atlassian JIRA
