Date: Mon, 6 Jan 2014 19:37:03 -0800
Subject: Re: Facing problem while using MultiTableOutputFormat
From: Ted Yu
To: user@hbase.apache.org

System.out.println(" Running with on tables " + args[1] + " and " + args[2] + " with zk " + args[3]);

What was the output from the above?

I would expect a call similar to the following in your run() method - this comes from TestTableMapReduce.java:

    TableMapReduceUtil.initTableReducerJob(
        Bytes.toString(table.getTableName()), IdentityTableReducer.class, job);

On Mon, Jan 6, 2014 at 7:12 PM, AnilKumar B wrote:

> Hi,
>
> In my MR job, I need to write output into multiple tables, so I am
> using MultiTableOutputFormat as below, but I am getting
> TableNotFoundException.
>
> I am attaching a code snippet below. Is this the correct way to use
> MultiTableOutputFormat?
>
> Job class:
>
>     public int run(String[] args) throws Exception {
>       System.out.println(" Running with on tables " + args[1] + " and "
>           + args[2] + " with zk " + args[3]);
>       Configuration hbaseConf = HBaseConfiguration.create(getConf());
>       // hbaseConf.set(Constants.HBASE_ZOOKEEPER_QUORUM_PROP,
>       //     Constants.HBASE_OS_CL1_QUORUM);
>       hbaseConf.set(Constants.HBASE_ZOOKEEPER_QUORUM_PROP, args[3]);
>       Job job = new Job(hbaseConf);
>       job.setJarByClass(MultiTableTestJob.class);
>       job.setInputFormatClass(TextInputFormat.class);
>       job.setMapperClass(MultiTableTestMapper.class);
>       job.setMapOutputKeyClass(Text.class);
>       job.setMapOutputValueClass(Text.class);
>       job.setReducerClass(MultiTableTestReducer.class);
>       job.setOutputKeyClass(Text.class);
>       job.setOutputValueClass(Text.class);
>       FileInputFormat.setInputPaths(job, new Path(args[0]));
>       job.setOutputFormatClass(MultiTableOutputFormat.class);
>       TableMapReduceUtil.addDependencyJars(job);
>       TableMapReduceUtil.addDependencyJars(job.getConfiguration());
>       return job.waitForCompletion(true) ? 0 : -1;
>     }
>
>     public static void main(String[] args) throws Exception {
>       Configuration configuration = new Configuration();
>       configuration.set("HBASE_DEST_TABLE", args[1]);
>       configuration.set("HBASE_LOOKUP_TABLE", args[2]);
>       ToolRunner.run(configuration, new CISuperSessionJob(), args);
>     }
>
> Reducer class:
>
>     private ImmutableBytesWritable tbl1;
>     private ImmutableBytesWritable tbl2;
>
>     protected void setup(Context context) throws IOException,
>         InterruptedException {
>       Configuration c = context.getConfiguration();
>       tbl1 = new ImmutableBytesWritable(
>           Bytes.toBytes(c.get("HBASE_DEST_TABLE")));
>       tbl2 = new ImmutableBytesWritable(
>           Bytes.toBytes(c.get("HBASE_LOOKUP_TABLE")));
>     }
>
>     protected void reduce(Text key, Iterable<Text> values, Context context)
>         throws IOException, InterruptedException {
>       // ...
>       if (some condition) {
>         Put put = getSessionPut(key, vc);
>         if (put != null) {
>           context.write(tbl1, put);
>         }
>       } else {
>         // ...
>         Put put = getEventPut(key, vc);
>         context.write(tbl2, put);
>       }
>     }
>
> Exception:
>
>     org.apache.hadoop.hbase.TableNotFoundException: mapred.reduce.tasks=100
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:999)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:864)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:821)
>         at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:234)
>         at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:174)
>         at org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat$MultiTableRecordWriter.getTable(MultiTableOutputFormat.java:101)
>         at org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat$MultiTableRecordWriter.write(MultiTableOutputFormat.java:127)
>         at org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat$MultiTableRecordWriter.write(MultiTableOutputFormat.java:68)
>         at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:586)
>         at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>
> Thanks & Regards,
> B Anil Kumar.
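The exception names `mapred.reduce.tasks=100` as the missing table, which is a configuration property, not a table name. One plausible cause (hypothetical; the thread does not show the actual command line) is that a `-D mapred.reduce.tasks=100` token pair was placed after the positional arguments, where it is not consumed as a generic option, so every later args index shifts and args[2] becomes the property string that the reducer then uses as a table name. A minimal, self-contained sketch of that index shift, with illustrative argument values:

```java
// Hypothetical sketch: how a stray "-D mapred.reduce.tasks=100" token pair
// left in args shifts the positional table-name arguments. Class name and
// argument values are illustrative, not from the thread.
public class ArgShiftSketch {

    // Mirrors the indexing in the posted main(): args[1] is the dest
    // table, args[2] the lookup table.
    static String destTable(String[] args) {
        return args[1];
    }

    static String lookupTable(String[] args) {
        return args[2];
    }

    public static void main(String[] argv) {
        // Intended invocation: <input path> <dest> <lookup> <zk quorum>
        String[] ok = {"/input", "dest", "lookup", "zkhost"};
        System.out.println(lookupTable(ok)); // prints "lookup"

        // If "-D mapred.reduce.tasks=100" survives argument parsing,
        // every later index shifts by two, and args[2] is now the
        // property string -- the exact "table" named in the stack trace.
        String[] shifted = {"/input", "-D", "mapred.reduce.tasks=100",
                            "dest", "lookup", "zkhost"};
        System.out.println(lookupTable(shifted)); // prints "mapred.reduce.tasks=100"
    }
}
```

Printing the arguments at the top of run(), as the reply suggests, would confirm or rule this out immediately.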