hbase-issues mailing list archives

From "Nick Dimiduk (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-15248) BLOCKSIZE 4k should result in 4096 bytes on disk; i.e. fit inside a BucketCache 'block' of 4k
Date Fri, 16 Dec 2016 20:34:58 GMT

    [ https://issues.apache.org/jira/browse/HBASE-15248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15755447#comment-15755447 ]

Nick Dimiduk commented on HBASE-15248:
--------------------------------------

Yeah, we have a disconnect -- BLOCKSIZE doesn't take compression or encoding into account.
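
To make the disconnect concrete: BLOCKSIZE is a threshold on the uncompressed payload at which the writer closes a block, so the bytes actually written per block do not track the configured number. A minimal sketch using the 1.x-era client API (the family name "f1" is illustrative):

    import org.apache.hadoop.hbase.HColumnDescriptor;

    public class BlocksizeExample {
        public static void main(String[] args) {
            // BLOCKSIZE caps the *uncompressed* payload at which the writer
            // closes a block; compression/encoding is applied after that
            // check, so on-disk bytes per block can be smaller (or, with
            // header overhead, larger) than this value.
            HColumnDescriptor cf = new HColumnDescriptor("f1");
            cf.setBlocksize(4096); // same intent as BLOCKSIZE => '4096' in the shell
            System.out.println(cf.getBlocksize()); // prints 4096
        }
    }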

> BLOCKSIZE 4k should result in 4096 bytes on disk; i.e. fit inside a BucketCache 'block' of 4k
> ---------------------------------------------------------------------------------------------
>
>                 Key: HBASE-15248
>                 URL: https://issues.apache.org/jira/browse/HBASE-15248
>             Project: HBase
>          Issue Type: Sub-task
>          Components: BucketCache
>            Reporter: stack
>
> Chatting w/ a gentleman named Daniel Pol who is messing w/ bucketcache, he wants blocks
> to be the size specified in the configuration and no bigger. His hardware setup fetches
> pages of 4k, so a block that has 4k of payload but then carries its own header plus the
> header of the next block (which helps figure what's next when scanning) ends up being 4203
> bytes or something, and this then translates into two seeks per block fetch.
> This issue is about what it would take to stay inside our configured size boundary when
> writing out blocks.
> If that's not possible, give back a better signal on what to do so you can fit inside a
> particular constraint.
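
For reference, a back-of-the-envelope sketch of the alignment math in the description above. The 33-byte figure is an assumption (the HFile v2/v3 block header size with checksums enabled); the exact overhead varies with encoding and with how far the writer overshoots BLOCKSIZE before closing the block:

    public class BlockAlignmentSketch {
        static final int PAGE_SIZE = 4096;   // hardware fetch unit from the report above
        static final int HEADER_SIZE = 33;   // assumed HFileBlock header size (v2/v3)

        public static void main(String[] args) {
            int payload = 4096;                 // BLOCKSIZE as configured
            int onDisk = HEADER_SIZE + payload; // header + data; the real number is
                                                // bigger still once the next block's
                                                // header is read along with this one
            int pagesTouched = (onDisk + PAGE_SIZE - 1) / PAGE_SIZE;
            System.out.printf("~%d bytes on disk -> %d page fetches%n",
                    onDisk, pagesTouched);      // 4129 bytes -> 2 fetches, i.e. two seeks
        }
    }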



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
