Subject: Re: request for mapreduce with hbase examples
From: Stack
To: hbase-user@hadoop.apache.org
Date: Thu, 11 Feb 2010 15:46:15 -0800

On Thu, Feb 11, 2010 at 3:07 PM, David Hawthorne wrote:
> Perhaps.  I figured it would be easier to go mapreduce -> hbase instead of
> mapreduce -> output file in hdfs -> load output file into hbase as a
> separate job.  Then again, for performance, maybe hdfs -> hbase is better
> than mapreduce -> hbase and I should just plan to do that instead.
>

There is a bit of a miscommunication going on here I think.
For a "mapreduce to hbase" example, see the task in the Importer class. See how it configures an input that in this case is files in hdfs but it could be anything -- just change the jobs configuration -- and then see how it hooks up TableOutputFormat to catch the map emissions into hbase: i.e. "mapreduce to hbase". St.Ack > > On Feb 11, 2010, at 2:27 PM, Stack wrote: > >> On Thu, Feb 11, 2010 at 12:38 PM, David Hawthorne >> wrote: >>> >>> I was under the impression that you could read from/write to an hbase >>> table >>> from within a mapreduce job. =A0Import and Export look like methods for >>> reading HDFS files into hbase and dumping hbase into an HDFS file. >>> >> >> Yes. =A0Isn't that what you want? =A0Export shows how to use hbase as a >> mapreduce source and Import as a mapreduce sink. >> St.Ack >> >> >> >>> On Feb 11, 2010, at 12:25 PM, Guohua Hao wrote: >>> >>>> Hello there, >>>> >>>> Did =A0you take a look at the Import and Export classes under package >>>> org.apache.hadoop.hbase.mapreduce? They are mostly using new APIs in m= y >>>> mind. Correct me if I am wrong. >>>> >>>> Thanks, >>>> Guohua >>>> >>>> On Thu, Feb 11, 2010 at 2:13 PM, David Hawthorne >>>> wrote: >>>> >>>>> I'm looking for some examples for reading data out of hbase for use >>>>> with >>>>> mapreduce and for inserting data into hbase from a mapreduce job. =A0= I've >>>>> seen >>>>> the example shipped with hbase, and, well, it doesn't exactly make >>>>> things >>>>> click for me. =A0It also looks like it's using the old API, so maybe >>>>> that's >>>>> why. >>>>> >>>>> Can someone please send some example code for reading/writing from/to >>>>> hbase >>>>> with a mapreduce job? =A0The more examples the better. >>>>> >>>>> Thanks! >>>>> >>> >>> > >