hbase-issues mailing list archives

From "Nick Dimiduk (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-8073) HFileOutputFormat support for offline operation
Date Mon, 31 Mar 2014 19:28:16 GMT

    [ https://issues.apache.org/jira/browse/HBASE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955565#comment-13955565 ]

Nick Dimiduk commented on HBASE-8073:

bq. We could also expose additional API for HFileOutputFormat.configureIncrementalLoad() so
that we outsource it and give the user/caller the flexibility to supply split points or other
info, so that HFileOutputFormat does not have to figure this out internally.

Yes, this is a good step. The partitions file could be passed to the TotalOrderPartitioner (TOP)
directly, and the same file could be parsed to determine the number of reducers.
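To make the reducer-count half of that concrete, here is a minimal plain-Java sketch. It uses a hypothetical one-key-per-line text format standing in for the SequenceFile that TotalOrderPartitioner actually reads; the point is just that N split points divide the key space into N + 1 ranges, one per reducer.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class PartitionsFile {
    // Parse split points, one key per line, sorted. Hypothetical text format
    // for illustration; a real job would read the partitioner's SequenceFile.
    static List<String> readSplitPoints(BufferedReader in) throws IOException {
        List<String> splits = new ArrayList<>();
        String line;
        while ((line = in.readLine()) != null) {
            if (!line.isEmpty()) {
                splits.add(line);
            }
        }
        return splits;
    }

    // N split points divide the key space into N + 1 ranges, one per reducer.
    static int reducerCount(List<String> splitPoints) {
        return splitPoints.size() + 1;
    }

    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new StringReader("row-100\nrow-200\n"));
        List<String> splits = readSplitPoints(in);
        System.out.println(reducerCount(splits)); // prints 3
    }
}
```

The same count would then be fed to Job.setNumReduceTasks(), so the partitions file becomes the single source of truth for both the partitioner and the reduce parallelism.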

> HFileOutputFormat support for offline operation
> -----------------------------------------------
>                 Key: HBASE-8073
>                 URL: https://issues.apache.org/jira/browse/HBASE-8073
>             Project: HBase
>          Issue Type: Sub-task
>          Components: mapreduce
>            Reporter: Nick Dimiduk
> When using HFileOutputFormat to generate HFiles, it inspects the region topology of the
target table. The split points from that table are used to guide the TotalOrderPartitioner.
If the target table does not exist, it is first created. This imposes an unnecessary dependency
on an online HBase instance and an existing table.
> If the table exists, it can be used. However, the job can be smarter. For example, if
there's far more data going into the HFiles than the table currently contains, the table regions
aren't very useful for data split points. Instead, the input data can be sampled to produce
split points more meaningful to the dataset. LoadIncrementalHFiles is already capable of handling
divergence between HFile boundaries and table regions, so this should not pose any additional
burden at load time.
> The proper method of sampling the data likely requires a custom input format and an additional
map-reduce job to perform the sampling. See a relevant implementation: https://github.com/alexholmes/hadoop-book/blob/master/src/main/java/com/manning/hip/ch4/sampler/ReservoirSamplerInputFormat.java
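The linked ReservoirSamplerInputFormat is built on reservoir sampling. As a self-contained illustration of the core idea (a sketch, not the book's actual code), Algorithm R keeps a uniform k-element sample of a key stream of unknown length in a single pass; a sampling job could run this per map task, merge the samples, and sort them to pick split points.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ReservoirSampler {
    // Classic Algorithm R: after seeing `seen` keys, each key has probability
    // k / seen of being in the reservoir, so the sample is uniform.
    static List<String> sample(Iterable<String> keys, int k, Random rng) {
        List<String> reservoir = new ArrayList<>(k);
        long seen = 0;
        for (String key : keys) {
            seen++;
            if (reservoir.size() < k) {
                // Fill the reservoir with the first k keys.
                reservoir.add(key);
            } else {
                // Replace a random slot with probability k / seen.
                long j = (long) (rng.nextDouble() * seen);
                if (j < k) {
                    reservoir.set((int) j, key);
                }
            }
        }
        return reservoir;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            keys.add(String.format("row-%04d", i));
        }
        // 9 sorted sample keys -> candidate split points for 10 reducers.
        List<String> splits = sample(keys, 9, new Random());
        Collections.sort(splits);
        System.out.println(splits.size()); // prints 9
    }
}
```

Sorting the merged sample and taking every (n/k)-th element would yield split points that reflect the actual key distribution of the input rather than the existing region boundaries.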

This message was sent by Atlassian JIRA
