hbase-issues mailing list archives

From "Lars Hofhansl (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-11165) Scaling so cluster can host 1M regions and beyond (50M regions?)
Date Tue, 03 Mar 2015 02:47:07 GMT

    [ https://issues.apache.org/jira/browse/HBASE-11165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344384#comment-14344384 ]

Lars Hofhansl commented on HBASE-11165:

Just saying we should do what solves the problem with the least amount of effort.
I think Flurry had abnormally small regions (1g), and that's why they have so many (with 20g
regions they'd have 5.8pb :))
If having many regions and fixing all the issues that come with that is easiest, we should do
that. If the other ways are easier, we should do those.
There have been other ideas floating around (have regions share a memstore, groups of regions
for assignment, etc.).
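To make the arithmetic behind the "5.8pb" aside concrete: the region count below is inferred from Lars's figure, not stated anywhere in the thread, so treat it as an illustrative assumption only.

```python
# Back-of-envelope check of the region-size comment above. The region
# count is an ASSUMPTION inferred from the 5.8 PB figure, not a number
# reported by Flurry in this thread.
GB = 1
TB = 1_000 * GB
PB = 1_000 * TB

regions = 290_000                  # assumed: implied by 5.8 PB / 20 GB

data_at_1g = regions * 1 * GB      # abnormally small (1g) regions
data_at_20g = regions * 20 * GB    # same region count at 20g each

print(data_at_1g / TB)    # 290.0 (TB)
print(data_at_20g / PB)   # 5.8 (PB)
```

The point of the sketch: the same dataset needs 20x fewer regions at 20g than at 1g, so unusually small regions inflate region counts without implying a correspondingly large cluster.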

I'm not against this. Exploring what new problems we're exchanging for the old ones:
# splittable META
# scalable assignment manager
# handle many memstores
# multi-master
# potential NN scaling issues
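The "handle many memstores" item is easy to quantify roughly: HBase reserves heap per active memstore (the default per-region flush threshold, hbase.hregion.memstore.flush.size, is 128 MB), so even mostly-idle regions add up. The cluster size and average memstore size below are illustrative assumptions, not figures from this thread.

```python
# Rough sketch of memstore heap pressure at 1M regions. All inputs
# except the 1M target are ASSUMPTIONS chosen for illustration.
MB = 1
GB = 1_024 * MB

total_regions = 1_000_000    # the target in this issue's title
region_servers = 500         # assumed cluster size
regions_per_rs = total_regions // region_servers

avg_memstore = 2 * MB        # assumed: mostly-idle regions, tiny memstores
heap_for_memstores = regions_per_rs * avg_memstore

print(regions_per_rs)              # 2000 regions per server
print(heap_for_memstores / GB)     # ~3.9 GB of heap just for memstores
```

Even at an average of 2 MB per memstore (far below the 128 MB flush threshold), memstore heap alone is several GB per server, which is why sharing a memstore across regions comes up as one of the alternatives above.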

If those are easier to solve than the issues raised in the previous comments, we know what we should do.

> Scaling so cluster can host 1M regions and beyond (50M regions?)
> ----------------------------------------------------------------
>                 Key: HBASE-11165
>                 URL: https://issues.apache.org/jira/browse/HBASE-11165
>             Project: HBase
>          Issue Type: Brainstorming
>            Reporter: stack
>         Attachments: HBASE-11165.zip, Region Scalability test.pdf, ScalableMeta.pdf,
> This discussion issue comes out of "Co-locate Meta And Master HBASE-10569" and comments
on the doc posted there.
> A user -- our Francis Liu -- needs to be able to scale a cluster to do 1M regions maybe
even 50M later.  This issue is about discussing how we will do that (or if not 50M on a cluster,
how otherwise we can attain same end).
> More detail to follow.

This message was sent by Atlassian JIRA
