hbase-dev mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HBASE-1901) "General" partitioner for "hbase-48" bulk (behind the api, write hfiles direct) uploader
Date Mon, 12 Oct 2009 15:21:31 GMT

    [ https://issues.apache.org/jira/browse/HBASE-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12764713#action_12764713 ]

stack commented on HBASE-1901:

Sampling would make for better partitioning.  That's what the TotalOrderPartitioner up in Hadoop
does.  We could do a sampling partitioner in a different issue?

> "General" partitioner for "hbase-48" bulk (behind the api, write hfiles direct) uploader
> ----------------------------------------------------------------------------------------
>                 Key: HBASE-1901
>                 URL: https://issues.apache.org/jira/browse/HBASE-1901
>             Project: Hadoop HBase
>          Issue Type: Wish
>            Reporter: stack
> For users to bulk upload by writing hfiles directly to the filesystem, they currently
need to write a partitioner that is intimate with how their key schema works.  This issue
is about providing a general partitioner, one that could never be as fair as a custom-written
partitioner but that might just work for many cases.  The idea is that a user would supply
the first and last keys in their dataset to upload.  We'd then use BigDecimal arithmetic on the
range between the start and end rowids, dividing it by the number of reducers to come up with
key ranges per reducer.
> (I thought jgray had done some BigDecimal work dividing keys already but I can't find
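The range-splitting idea described above can be sketched as follows. This is a hypothetical illustration, not code from the issue: the class and method names are invented, and BigInteger stands in for the BigDecimal arithmetic the description mentions, treating the raw key bytes as unsigned big-endian integers.

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the "general" partitioner idea: given the first and
// last row keys in the dataset, divide the keyspace into equal-sized numeric
// ranges, one per reducer.
public class SimpleRangePartitioner {

    // Returns the start key of each reducer's range (numReducers entries).
    // Keys are interpreted as unsigned big-endian integers.
    static List<byte[]> splitKeys(byte[] first, byte[] last, int numReducers) {
        BigInteger lo = new BigInteger(1, first);   // signum 1 = non-negative
        BigInteger hi = new BigInteger(1, last);
        BigInteger range = hi.subtract(lo);
        List<byte[]> splits = new ArrayList<>();
        for (int i = 0; i < numReducers; i++) {
            // lo + range * i / numReducers
            BigInteger split = lo.add(
                range.multiply(BigInteger.valueOf(i))
                     .divide(BigInteger.valueOf(numReducers)));
            splits.add(split.toByteArray());
        }
        return splits;
    }

    public static void main(String[] args) {
        // Split the key range "a".."z" among 5 reducers.
        for (byte[] k : splitKeys("a".getBytes(), "z".getBytes(), 5)) {
            System.out.println(new String(k));
        }
    }
}
```

Note this sketch assumes all keys have the same length; a real partitioner would need to right-pad shorter keys to a common length before comparing them numerically, and as the comment above points out, sampling the actual key distribution (as Hadoop's TotalOrderPartitioner does) would give fairer ranges than a uniform numeric split.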

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
