hbase-issues mailing list archives

From "Weichen Ye (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs
Date Mon, 03 Nov 2014 12:37:34 GMT

     [ https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weichen Ye updated HBASE-12394:
-------------------------------
    Description: 
Review Board: https://reviews.apache.org/r/27519/

On a Hadoop cluster, a job that takes a large HBase table as input consumes a large amount of
computing resources. For example, scanning a table with 1000 regions requires a job with 1000
mappers. This patch adds support for a single mapper taking multiple regions as input.
 
The following new files are included in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

To support multiple regions per mapper, this patch introduces a new configuration property: "hbase.mapreduce.scan.regionspermapper".

hbase.mapreduce.scan.regionspermapper controls how many regions are used as input for each mapper.
For example, if an HBase table has 300 regions and we set hbase.mapreduce.scan.regionspermapper = 3,
then a job scanning the table will use only 300 / 3 = 100 mappers.

In this way, we can control the number of mappers with the following formula:
Number of mappers = (total number of regions) / hbase.mapreduce.scan.regionspermapper
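The formula above can be sketched as a small Java helper. Note that rounding any remainder up to one extra mapper (for tables whose region count is not an exact multiple of regionspermapper) is my assumption, not something stated by the patch:

```java
public class MapperCount {
    // Number of mappers = total regions / hbase.mapreduce.scan.regionspermapper.
    // The integer ceiling for leftover regions is an assumption; the patch's
    // exact split logic may differ.
    static int numMappers(int totalRegions, int regionsPerMapper) {
        return (totalRegions + regionsPerMapper - 1) / regionsPerMapper;
    }

    public static void main(String[] args) {
        System.out.println(numMappers(300, 3)); // prints 100, as in the example above
    }
}
```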

This is an example of the configuration:
<property>
     <name>hbase.mapreduce.scan.regionspermapper</name>
     <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, Text.class, Text.class, job);
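For context, a fuller driver setup might look like the configuration sketch below. The property can also be set programmatically rather than in hbase-site.xml. Only TableMultiRegionMapReduceUtil and the property name come from this patch; the surrounding setup mirrors a standard HBase MapReduce driver, and the job name and variables are illustrative:

```
// Illustrative driver fragment (variable names are hypothetical).
Configuration conf = HBaseConfiguration.create();
conf.setInt("hbase.mapreduce.scan.regionspermapper", 3);
Job job = Job.getInstance(conf, "multi-region-scan");
Scan scan = new Scan();
TableMultiRegionMapReduceUtil.initTableMapperJob(
    tablename, scan, Map.class, Text.class, Text.class, job);
```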



 
      


> Support multiple regions as input to each mapper in map/reduce jobs
> -------------------------------------------------------------------
>
>                 Key: HBASE-12394
>                 URL: https://issues.apache.org/jira/browse/HBASE-12394
>             Project: HBase
>          Issue Type: Improvement
>          Components: mapreduce
>    Affects Versions: 2.0.0, 0.98.6.1
>            Reporter: Weichen Ye
>         Attachments: HBASE-12394-v2.patch, HBASE-12394.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
