hbase-issues mailing list archives

From "Enis Soztutar (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HBASE-16288) HFile intermediate block level indexes might recurse forever creating multi TB files
Date Fri, 29 Jul 2016 21:32:20 GMT

     [ https://issues.apache.org/jira/browse/HBASE-16288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-16288:
----------------------------------
    Attachment: hbase-16288_v4.patch

v4. 

Slight change to the algorithm. We now enforce a minimum number of entries per index block
(defaulting to 16) in addition to the maximum size. The max size is ignored while a block has
fewer than the desired number of entries. This is useful because the index is supposed to be a
B-Tree-like index, and it does not make sense to have index levels with only 1-2 entries each.
We don't want to end up with 50-level indices, which would make seeking very inefficient.
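
A minimal sketch of the new split criterion (method and parameter names here are illustrative,
not taken from the patch itself):

{code}
// Sketch of the v4 split criterion (illustrative names, not the patch):
// an index block only counts as full once it holds at least the minimum
// number of entries; until then the max-size limit is ignored, so a single
// oversized key can no longer force the creation of a new index level.
static boolean shouldWriteBlock(int numEntries, long curSizeBytes,
    int minIndexNumEntries, long maxChunkSize) {
  return numEntries >= minIndexNumEntries && curSizeBytes >= maxChunkSize;
}
{code}

With the defaults, a chunk holding a single 2 MB entry is no longer split on its own; it must
accumulate at least 16 entries before the size limit kicks in, which bounds the depth of the index.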

> HFile intermediate block level indexes might recurse forever creating multi TB files
> ------------------------------------------------------------------------------------
>
>                 Key: HBASE-16288
>                 URL: https://issues.apache.org/jira/browse/HBASE-16288
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Enis Soztutar
>            Assignee: Enis Soztutar
>            Priority: Critical
>             Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 1.2.3
>
>         Attachments: hbase-16288_v1.patch, hbase-16288_v2.patch, hbase-16288_v3.patch, hbase-16288_v4.patch
>
>
> Mighty [~elserj] was debugging an opentsdb cluster where some region directory ended up having 5TB+ files under <regiondir>/.tmp/
> After further debugging and analysis, we were able to reproduce the problem locally: we keep recursing forever in this code path for writing intermediate-level indices:
> {code:title=HFileBlockIndex.java}
> if (curInlineChunk != null) {
>   while (rootChunk.getRootSize() > maxChunkSize) {
>     rootChunk = writeIntermediateLevel(out, rootChunk);
>     numLevels += 1;
>   }
> }
> {code}
> The problem happens if we end up with a very large row key (larger than "hfile.index.block.max.size") as the first key in a block; the oversized entry then propagates all the way up through root-level index building. We keep writing and building the next level of intermediate-level indices, each containing that single very large key. This can happen during flush / compaction / region recovery, causing cluster inoperability due to ever-growing files.
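> As a self-contained illustration of the runaway loop (the demo below only simulates the behavior; it is not HBase code):
> {code:title=RunawayIndexDemo.java}
> public class RunawayIndexDemo {
>   public static void main(String[] args) {
>     long maxChunkSize = 128 * 1024;   // hfile.index.block.max.size default
>     long rootSize = 2L * 1024 * 1024; // a single 2 MB first key in the chunk
>     int numLevels = 1;
>     // Mirrors the while loop in HFileBlockIndex: with one oversized entry,
>     // writing an intermediate level re-emits that same entry, so the root
>     // chunk never shrinks and the real loop never terminates.
>     while (rootSize > maxChunkSize && numLevels < 50) { // capped for demo
>       numLevels += 1;
>     }
>     System.out.println("index levels built: " + numLevels);
>   }
> }
> {code}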
> It seems the issue was also reported earlier, with a temporary workaround:
> https://github.com/OpenTSDB/opentsdb/issues/490



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
