From: Andrew Purtell
Date: Tue, 23 Jun 2009 14:51:15 -0700 (PDT)
To: hbase-user@hadoop.apache.org
Subject: Re: Running programs under HBase 0.20.0 alpha
In-Reply-To: <24171189.post@talk.nabble.com>

This looks like HBASE-1560:

https://issues.apache.org/jira/browse/HBASE-1560

   - Andy

________________________________
From: llpind
To: hbase-user@hadoop.apache.org
Sent: Tuesday, June 23, 2009 12:04:59 PM
Subject: Re: Running programs under HBase 0.20.0 alpha

Okay, thanks. I'm still unable to run a map/reduce job. I changed the API to
use Put instead of BatchUpdate, along with some of the moved classes. This is
a map/reduce program which reads from an input table and outputs to another
HBase table. It is still using JobConf:

======================================================================
JobConf c = new JobConf(getConf(), getClass());
c.setJobName(getClass().getName());
String inputTableName = args[0];
System.out.println(" STARTING... 7");
c.setInputFormat(org.apache.hadoop.hbase.mapreduce.TableInputFormat.class);
c.setOutputFormat(org.apache.hadoop.hbase.mapreduce.TableOutputFormat.class);
c.setInputFormat(TableInputFormat.class);
c.setMapOutputKeyClass(ImmutableBytesWritable.class);
c.setMapOutputValueClass(IntWritable.class);
c.setMapperClass(MyMapper.class);
FileInputFormat.addInputPaths(c, inputTableName);
c.set(TableInputFormat.COLUMN_LIST, "link:");
c.setOutputFormat(TableOutputFormat.class);
c.setReducerClass(MyReducer.class);
c.set(TableOutputFormat.OUTPUT_TABLE, outputTableName);
c.setOutputKeyClass(ImmutableBytesWritable.class);
c.setOutputValueClass(Put.class);
JobClient.runJob(c);
=====================================================================
@Override
public void map(ImmutableBytesWritable key, Result row,
    OutputCollector collector, Reporter r) throws IOException {
  collector.collect(new ImmutableBytesWritable(extractEntity(key.get())), one);
}
=============================================================================
private Put put = new Put();

@Override
public void reduce(ImmutableBytesWritable k, Iterator v,
    OutputCollector c, Reporter r) throws IOException {
  put = new Put(k.get());
  int sum = 0;
  while (v.hasNext()) {
    sum += v.next().get();
  }
  put.add(Bytes.toBytes("link"), Bytes.toBytes("count"), Bytes.toBytes(sum));
  c.collect(k, put);
}
====================================================================
2009-06-23 11:37:30,083 INFO org.apache.zookeeper.ClientCnxn: Server connection successful
2009-06-23 11:37:30,410 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 6
2009-06-23 11:37:30,419 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 100
2009-06-23 11:37:30,587 INFO org.apache.hadoop.mapred.MapTask: data buffer = 79691776/99614720
2009-06-23 11:37:30,587 INFO org.apache.hadoop.mapred.MapTask: record buffer = 262144/327680
2009-06-23 11:44:20,251 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server null for region , row '', but failed after 10 attempts.
Exceptions:
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
  at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:935)
  at org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1797)
  at org.apache.hadoop.hbase.client.HTable$ClientScanner.initialize(HTable.java:1745)
  at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:369)
  at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.restart(TableInputFormatBase.java:118)
  at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.next(TableInputFormatBase.java:219)
  at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.next(TableInputFormatBase.java:87)
  at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:191)
  at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:175)
  at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
  at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:356)
  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
  at org.apache.hadoop.mapred.Child.main(Child.java:170)
2009-06-23 11:44:20,253 INFO org.apache.hadoop.mapred.TaskRunner: Runnning cleanup for the task
============================================================================

It looks like the ScannerCallable is coming into getRegionServerWithRetries() as null?


stack-3 wrote:
>
> On Tue, Jun 23, 2009 at 10:07 AM, llpind wrote:
>
>> One other question: do I list all my servers in zoo.cfg? I'm not sure
>> what role ZooKeeper plays in map/reduce; please explain.
>
> It plays no role in MR.
>
> Please read the 'Getting Started' document. It has pointers to what
> ZooKeeper is and its role in HBase.
>
> St.Ack
>
>> --
>> View this message in context:
>> http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24167811.html
>> Sent from the HBase User mailing list archive at Nabble.com.

--
View this message in context:
http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24171189.html
Sent from the HBase User mailing list archive at Nabble.com.
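[Editor's note] The job above mixes the two MapReduce APIs: org.apache.hadoop.hbase.mapreduce.TableInputFormat implements the new org.apache.hadoop.mapreduce interfaces, so wiring it into an old-style JobConf/JobClient job cannot work. A sketch of porting the job end to end to the new API is below. This is not the poster's code or a verified fix for HBASE-1560; it is a hypothetical reconstruction against the 0.20-era API (extractEntity() is the poster's own helper and is omitted here; table names, the "link-count" job name, and the LinkCount class name are made up for illustration).

```java
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;

// Hypothetical sketch: the whole job uses the new-API classes
// (Job, TableMapper, TableReducer) instead of JobConf/JobClient.
public class LinkCount {

  static class MyMapper extends TableMapper<ImmutableBytesWritable, IntWritable> {
    private static final IntWritable one = new IntWritable(1);

    @Override
    protected void map(ImmutableBytesWritable key, Result row, Context ctx)
        throws IOException, InterruptedException {
      // Emit a count of 1 per row; the poster would apply extractEntity()
      // to key.get() here.
      ctx.write(key, one);
    }
  }

  static class MyReducer
      extends TableReducer<ImmutableBytesWritable, IntWritable, ImmutableBytesWritable> {
    @Override
    protected void reduce(ImmutableBytesWritable k, Iterable<IntWritable> vals,
        Context ctx) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : vals) {
        sum += v.get();
      }
      // Write the total back into the link:count cell of the output table.
      Put put = new Put(k.get());
      put.add(Bytes.toBytes("link"), Bytes.toBytes("count"), Bytes.toBytes(sum));
      ctx.write(k, put);
    }
  }

  public static void main(String[] args) throws Exception {
    // In 0.20 the client config was constructed directly; it reads
    // hbase-site.xml from the classpath.
    Job job = new Job(new HBaseConfiguration(), "link-count");
    job.setJarByClass(LinkCount.class);

    // The new-API input format takes its column list from a Scan,
    // replacing the old TableInputFormat.COLUMN_LIST string.
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("link"));

    TableMapReduceUtil.initTableMapperJob(args[0], scan, MyMapper.class,
        ImmutableBytesWritable.class, IntWritable.class, job);
    TableMapReduceUtil.initTableReducerJob(args[1], MyReducer.class, job);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Note that initTableMapperJob/initTableReducerJob also set the input/output formats and the output table, so the manual setInputFormat/setOutputFormat/OUTPUT_TABLE calls from the original job disappear. Running this requires a live HBase cluster and its jars on the classpath.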
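[Editor's note] On the zoo.cfg question: in 0.20 the HBase client locates the cluster through ZooKeeper, and it reads the quorum from hbase-site.xml rather than zoo.cfg. A minimal fragment (the host names are placeholders) might look like:

```xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
</configuration>
```

If this configuration is not on the map/reduce task's classpath, the client cannot locate any region server, which is one common way to end up with "Trying to contact region server null" retry failures like the one above.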