hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "Hbase/MapReduce" by JimKellerman
Date Wed, 15 Oct 2008 16:42:32 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The following page has been changed by JimKellerman:
http://wiki.apache.org/hadoop/Hbase/MapReduce

The comment on the change is:
fix broken links

------------------------------------------------------------------------------
  
  = Hbase as MapReduce job data source and sink =
  
- Hbase can be used as a data source, [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/mapred/TableInputFormat.html TableInputFormat], and data sink, [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/mapred/TableOutputFormat.html TableOutputFormat], for mapreduce jobs.  Writing mapreduce jobs that read or write hbase, you'll probably want to subclass [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/mapred/TableMap.html TableMap] and/or [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/mapred/TableReduce.html TableReduce].  See the do-nothing passthrough classes [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/mapred/IdentityTableMap.html IdentityTableMap] and [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/mapred/IdentityTableReduce.html IdentityTableReduce] for basic usage.  For a more involved example, see [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/javadoc/org/apache/hadoop/hbase/mapred/BuildTableIndex.html BuildTableIndex] from the same package or review the org.apache.hadoop.hbase.mapred.TestTableMapReduce unit test.
+ Hbase can be used as a data source, [http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapred/TableInputFormat.html TableInputFormat], and data sink, [http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapred/TableOutputFormat.html TableOutputFormat], for mapreduce jobs.  Writing mapreduce jobs that read or write hbase, you'll probably want to subclass [http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapred/TableMap.html TableMap] and/or [http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapred/TableReduce.html TableReduce].  See the do-nothing passthrough classes [http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapred/IdentityTableMap.html IdentityTableMap] and [http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapred/IdentityTableReduce.html IdentityTableReduce] for basic usage.  For a more involved example, see [http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/mapred/BuildTableIndex.html BuildTableIndex] from the same package or review the org.apache.hadoop.hbase.mapred.!TestTableMapReduce unit test.
  
  Running mapreduce jobs that have hbase as source or sink, you'll need to specify source/sink table and column names in your configuration.
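
A minimal sketch of what that job wiring might look like, using the era's org.apache.hadoop.hbase.mapred classes that the page links to (TableMap, TableReduce, and the identity passthroughs). The static initJob helpers set TableInputFormat/TableOutputFormat and the table/column names on the JobConf; the table names "source_table" and "sink_table" and the column family "contents:" are hypothetical placeholders, and exact signatures may differ between HBase releases:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.io.RowResult;
import org.apache.hadoop.hbase.mapred.IdentityTableMap;
import org.apache.hadoop.hbase.mapred.IdentityTableReduce;
import org.apache.hadoop.hbase.mapred.TableMap;
import org.apache.hadoop.hbase.mapred.TableReduce;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class CopyTableJob {
  public static void main(String[] args) throws Exception {
    JobConf job = new JobConf(new HBaseConfiguration(), CopyTableJob.class);
    job.setJobName("copy-table");

    // Source side: scan the (hypothetical) "source_table", reading the
    // "contents:" column family. initJob configures TableInputFormat and
    // records the table/column names in the job configuration.
    TableMap.initJob("source_table", "contents:",
        IdentityTableMap.class,
        ImmutableBytesWritable.class, RowResult.class, job);

    // Sink side: write rows to the (hypothetical) "sink_table" via
    // TableOutputFormat, again configured by initJob.
    TableReduce.initJob("sink_table", IdentityTableReduce.class, job);

    JobClient.runJob(job);
  }
}
```

With the identity map and reduce classes this does nothing but copy rows from source to sink; a real job would subclass TableMap and/or TableReduce as the page suggests.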
  
