hbase-user mailing list archives

From Ryan Rawson <ryano...@gmail.com>
Subject Re: Running programs under HBase 0.20.0 alpha
Date Tue, 23 Jun 2009 19:08:54 GMT
hey,

right now for TIF you have to specify full column names, not just family specs
like 'link:'.   Hopefully in a future iteration you will be able to do so.
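For example, something like this (a sketch only; it assumes the 0.20-era space-delimited COLUMN_LIST format, and the 'link:count' / 'link:anchor' qualifiers are made-up names for illustration):

```java
public class ColumnList {
    // Build the space-delimited value expected by TableInputFormat's
    // COLUMN_LIST key: full "family:qualifier" names, not a bare
    // family spec like "link:".
    static String columnList(String... columns) {
        StringBuilder sb = new StringBuilder();
        for (String col : columns) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(col);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Hypothetical qualifiers; substitute the ones in your table.
        String cols = columnList("link:count", "link:anchor");
        // You would then pass this to the job config, e.g.:
        // c.set(TableInputFormat.COLUMN_LIST, cols);
        System.out.println(cols);  // prints "link:count link:anchor"
    }
}
```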

-ryan

On Tue, Jun 23, 2009 at 12:04 PM, llpind <sonny_heer@hotmail.com> wrote:

>
> Okay thanks.
>
> Still unable to run a map/reduce job.  I changed the code to use Put instead
> of BatchUpdate, along with some of the classes that moved.  This is a
> map/reduce program that reads from an input table and writes to another
> HBase table.  It still uses JobConf:
>
> ======================================================================
>                JobConf c = new JobConf(getConf(), getClass());
>                c.setJobName(getClass().getName());
>
>                String inputTableName = args[0];
>
>                System.out.println(" STARTING... 7");
>
>                // Input: scan the source HBase table
>                c.setInputFormat(org.apache.hadoop.hbase.mapreduce.TableInputFormat.class);
>                FileInputFormat.addInputPaths(c, inputTableName);
>                c.set(TableInputFormat.COLUMN_LIST, "link:");
>                c.setMapperClass(MyMapper.class);
>                c.setMapOutputKeyClass(ImmutableBytesWritable.class);
>                c.setMapOutputValueClass(IntWritable.class);
>
>                // Output: write Puts to the destination HBase table
>                c.setOutputFormat(org.apache.hadoop.hbase.mapreduce.TableOutputFormat.class);
>                c.set(TableOutputFormat.OUTPUT_TABLE, outputTableName);
>                c.setReducerClass(MyReducer.class);
>                c.setOutputKeyClass(ImmutableBytesWritable.class);
>                c.setOutputValueClass(Put.class);
>
>                JobClient.runJob(c);
>
> =====================================================================
>
>                @Override
>                public void map(ImmutableBytesWritable key, Result row,
>                                OutputCollector<ImmutableBytesWritable, IntWritable> collector,
>                                Reporter r) throws IOException {
>
>                        collector.collect(new ImmutableBytesWritable(extractEntity(key.get())), one);
>                }
>
>
> =============================================================================
>                @Override
>                public void reduce(ImmutableBytesWritable k, Iterator<IntWritable> v,
>                                OutputCollector<ImmutableBytesWritable, Put> c,
>                                Reporter r) throws IOException {
>
>                        Put put = new Put(k.get());
>                        int sum = 0;
>                        while (v.hasNext()) {
>                                sum += v.next().get();
>                        }
>                        put.add(Bytes.toBytes("link"), Bytes.toBytes("count"), Bytes.toBytes(sum));
>                        c.collect(k, put);
>                }
> ====================================================================
> 2009-06-23 11:37:30,083 INFO org.apache.zookeeper.ClientCnxn: Server connection successful
> 2009-06-23 11:37:30,410 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 6
> 2009-06-23 11:37:30,419 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 100
> 2009-06-23 11:37:30,587 INFO org.apache.hadoop.mapred.MapTask: data buffer = 79691776/99614720
> 2009-06-23 11:37:30,587 INFO org.apache.hadoop.mapred.MapTask: record buffer = 262144/327680
> 2009-06-23 11:44:20,251 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact
> region server null for region , row '', but failed after 10 attempts.
> Exceptions:
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
> java.lang.NullPointerException
>
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:935)
>        at org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1797)
>        at org.apache.hadoop.hbase.client.HTable$ClientScanner.initialize(HTable.java:1745)
>        at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:369)
>        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.restart(TableInputFormatBase.java:118)
>        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.next(TableInputFormatBase.java:219)
>        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$TableRecordReader.next(TableInputFormatBase.java:87)
>        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:191)
>        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:175)
>        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
>        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:356)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
> 2009-06-23 11:44:20,253 INFO org.apache.hadoop.mapred.TaskRunner: Runnning cleanup for the task
>
>
>
>
> ============================================================================
>
>
> Looks like the ScannerCallable is coming into getRegionServerWithRetries as
> null?
>
>
>
>
> stack-3 wrote:
> >
> > On Tue, Jun 23, 2009 at 10:07 AM, llpind <sonny_heer@hotmail.com> wrote:
> >
> >>
> >> One other question: do I list all my servers in zoo.cfg?  Not sure what
> >> role ZooKeeper plays in map/reduce; please explain.
> >
> >
> >
> > It plays no role in MR.
> >
> > Please read the 'Getting Started' document.  It has pointers to what ZK
> > is and its role in HBase.
> >
> > St.Ack
> >
> >
> >
> >>
> >> --
> >> View this message in context:
> >>
> http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24167811.html
> >> Sent from the HBase User mailing list archive at Nabble.com.
> >>
> >>
> >
> >
>
> --
> View this message in context:
> http://www.nabble.com/Running-programs-under-HBase-0.20.0-alpha-tp24152144p24171189.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
>
