hbase-user mailing list archives

From Stack <st...@duboce.net>
Subject Re: Problem with importtsv on transferring data from HDFS to HBase table:
Date Wed, 15 Jun 2011 17:28:33 GMT
What is null? this.table? If so, can you figure out why it is not being created?
St.Ack
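
For anyone following along: on line 127 (quoted below), value has already passed an instanceof check, so the only thing that can be null there is this.table, meaning the record writer's table handle was never created. A minimal pre-flight sketch, assuming the 0.90.x client API and the "movies" table from this thread (the CheckTable class name is made up), that hits the same root cause TableOutputFormat would if the table can't be reached:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;

    // Hypothetical pre-flight check: if the HTable constructor throws here
    // (table missing, or hbase-site.xml/ZooKeeper not visible on the
    // classpath), TableOutputFormat's setup hits the same failure and its
    // record writer is left holding a null table.
    public class CheckTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "movies");
        System.out.println("connected to " + new String(table.getTableName()));
        table.close();
      }
    }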

On Tue, Jun 14, 2011 at 10:32 PM, Prashant Sharma
<meetprashant007@gmail.com> wrote:
> Hi all,
>
> I have converted the whole CSV to a TSV using sed, and it is still throwing a
> NullPointerException at TableOutputFormat:127.
>
> Output on stderr:
> 11/06/15 10:39:26 INFO mapreduce.TableOutputFormat: Created table instance for movies
> 11/06/15 10:39:26 INFO input.FileInputFormat: Total input paths to process : 1
> 11/06/15 10:39:27 INFO mapred.JobClient: Running job: job_201106151021_0002
> 11/06/15 10:39:28 INFO mapred.JobClient:  map 0% reduce 0%
> 11/06/15 10:42:19 INFO mapred.JobClient: Task Id : attempt_201106151021_0002_m_000000_0, Status : FAILED
> java.lang.NullPointerException
>   at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:127)
>   at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:82)
>   at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:638)
>   at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>   at org.apache.hadoop.hbase.mapreduce.ImportTsv$TsvImporter.map(ImportTsv.java:259)
>   at org.apache.hadoop.hbase.mapreduce.ImportTsv$TsvImporter.map(ImportTsv.java:192)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:369)
>   at org.apache.hadoop.mapred.Child$4.run(Child.java:259)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>   at org.apache.hadoop.mapred.Child.main(Child.java:253)
>
> 11/06/15 10:43:43 INFO mapred.JobClient: Task Id : attempt_201106151021_0002_m_000000_1, Status : FAILED
> java.lang.NullPointerException
>   at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:127)
>   at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:82)
>   at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:638)
>   at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>   at org.apache.hadoop.hbase.mapreduce.ImportTsv$TsvImporter.map(ImportTsv.java:259)
>   at org.apache.hadoop.hbase.mapreduce.ImportTsv$TsvImporter.map(ImportTsv.java:192)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:369)
>   at org.apache.hadoop.mapred.Child$4.run(Child.java:259)
>   at java.security.AccessController.doPrivileged(Native Method)
> .................................
> Line 127 of TableOutputFormat:
>
>   @Override
>   public void write(KEY key, Writable value)
>   throws IOException {
>     if (value instanceof Put) this.table.put(new Put((Put) value));
>     else if (value instanceof Delete) this.table.delete(new Delete((Delete) value));
>     else throw new IOException("Pass a Delete or a Put");
>   }
> .........
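
A hedged reading of that method: the instanceof tests guarantee value is non-null by the time it is dereferenced, so this.table is the only candidate for the NPE on line 127. Sketching the same method with an explicit guard (illustration only, not the shipped code) makes the failure mode visible:

    @Override
    public void write(KEY key, Writable value) throws IOException {
      // this.table is built earlier from the job configuration; if that
      // construction failed, this guard turns the bare NPE into a readable
      // error instead of dying inside table.put().
      if (this.table == null) {
        throw new IOException("output table was never initialized; check that "
            + "the table exists and HBase config is visible to the task");
      }
      if (value instanceof Put) this.table.put(new Put((Put) value));
      else if (value instanceof Delete) this.table.delete(new Delete((Delete) value));
      else throw new IOException("Pass a Delete or a Put");
    }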
> Command line:
> bin/hadoop jar ../../hbase/hbase-0.90.3/hbase-0.90.3.jar importtsv
> -Dimporttsv.columns=HBASE_ROW_KEY,year,name movies /user/hadoop/movies/
>
> Thanks,
> Prashant
>
> On Wed, Jun 15, 2011 at 6:55 AM, Todd Lipcon <todd@cloudera.com> wrote:
>
>> Plus, I'm not sure it will parse right if you say:
>> -Dimporttsv.separator=','
>>
>> Try: -Dimporttsv.separator=,
>>
>> (no quotes)
>>
>> -Todd
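
For context on why flag placement matters as much as quoting: Hadoop tools read -D options through GenericOptionsParser, which stops parsing at the first non-option argument. A small hedged demo (the ShowOpts class is hypothetical; GenericOptionsParser and the property name are real):

    import java.util.Arrays;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.util.GenericOptionsParser;

    // Hypothetical demo: prints what actually reaches the job configuration.
    // Invoked as "ShowOpts -Dimporttsv.separator=, movies /in" the separator
    // is set; as "ShowOpts movies /in -Dimporttsv.separator=," the parser
    // stops at "movies" and the separator stays null.
    public class ShowOpts {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] rest = new GenericOptionsParser(conf, args).getRemainingArgs();
        System.out.println("importtsv.separator = " + conf.get("importtsv.separator"));
        System.out.println("remaining args = " + Arrays.toString(rest));
      }
    }

Under a POSIX shell the single quotes in -Dimporttsv.separator=',' are stripped before the JVM sees them, so either spelling delivers a bare comma; the version without quotes is simply safer across shells.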
>>
>> On Tue, Jun 14, 2011 at 2:35 PM, Buttler, David <buttler1@llnl.gov> wrote:
>>
>> > Maybe because you misspelled an input parameter: importtsv.columns
>> >
>> >
>> > -----Original Message-----
>> > From: Prashant Sharma [mailto:meetprashant007@gmail.com]
>> > Sent: Tuesday, June 14, 2011 10:39 AM
>> > To: user@hbase.apache.org
>> > Subject: Re: Problem with importtsv on transferring data from HDFS to
>> > HBase table:
>> >
>> > My input file is a CSV with 3 fields:
>> > uniqueID,year,name
>> >
>> > Is there a problem with the format? I have checked it like 10 times and
>> > everything seems fine... I can't figure out what's wrong. Any input
>> > would be very helpful.
>> >
>> > Thanks in advance,
>> > Prashant
>> >
>> > On Tue, Jun 14, 2011 at 3:44 PM, Prashant Sharma
>> > <prashant.s@imaginea.com> wrote:
>> >
>> > > Hi,
>> > >  I am getting the following errors while trying to transfer data from
>> > > HDFS to HBase.
>> > >
>> > >  Table in HBase:
>> > > hbase(main):007:0> describe 'movies'
>> > > DESCRIPTION                                                     ENABLED
>> > >  {NAME => 'movies', FAMILIES => [{NAME => 'HBASE_ROW_KEY',      true
>> > >  BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION
>> > >  => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE =>
>> > >  '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME =>
>> > >  'name', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0',
>> > >  COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647',
>> > >  BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE =>
>> > >  'true'}, {NAME => 'year', BLOOMFILTER => 'NONE',
>> > >  REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS =>
>> > >  '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY =>
>> > >  'false', BLOCKCACHE => 'true'}]}
>> > > 1 row(s) in 0.1820 seconds
>> > >
>> > >
>> > > hbase(main):006:0> scan 'movies'
>> > > ROW                   COLUMN+CELL
>> > >  1                    column=name:, timestamp=1308044917482, value=new
>> > >  1                    column=year:, timestamp=1308044926957, value=2055
>> > > 1 row(s) in 0.0710 seconds
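
Note the empty qualifiers in that scan (column=name:, column=year:): giving bare names in importtsv.columns makes each one a column family with an empty qualifier rather than a family:qualifier pair. A hedged read-back sketch of row 1 (the ReadMovie class is made up; values taken from the scan above):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    // Hypothetical read-back: the qualifier is the empty byte array,
    // matching "column=year:" in the scan output.
    public class ReadMovie {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "movies");
        Result r = table.get(new Get(Bytes.toBytes("1")));
        byte[] year = r.getValue(Bytes.toBytes("year"), Bytes.toBytes(""));
        System.out.println("year = " + Bytes.toString(year));  // expect 2055
        table.close();
      }
    }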
>> > >
>> > > Command line: hadoop@hadoop:~/work/hadoop/hadoop-0.20.203.0$ bin/hadoop jar
>> > > ../../hbase/hbase-0.90.3/hbase-0.90.3.jar importtsv
>> > > -Dimporttscolumns=HBASE_ROW_KEY,year,name movies
>> > > /user/hadoop/movies/movie.csv -Dimporttsv.separator=',' 2>log
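
Two things in that invocation line up with the replies above: the columns flag is misspelled (-Dimporttscolumns rather than -Dimporttsv.columns, David's point), and both -D options sit after the positional arguments, where the option parser never sees them, so the separator stays at its default regardless of quoting (Todd's point). A hedged reconstruction of a shape the parser would accept, keeping the jar, table, and paths exactly as quoted:

    bin/hadoop jar ../../hbase/hbase-0.90.3/hbase-0.90.3.jar importtsv \
        -Dimporttsv.columns=HBASE_ROW_KEY,year,name \
        -Dimporttsv.separator=, \
        movies /user/hadoop/movies/movie.csv 2>log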
>> > >
>> > > Output on stderr:
>> > > ..some lines omitted..
>> > > 11/06/14 15:35:21 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/work/hadoop/hadoop-0.20.203.0/bin/../lib/native/Linux-i386-32
>> > > 11/06/14 15:35:21 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
>> > > 11/06/14 15:35:21 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
>> > > 11/06/14 15:35:21 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
>> > > 11/06/14 15:35:21 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
>> > > 11/06/14 15:35:21 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.35-25-generic
>> > > 11/06/14 15:35:21 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
>> > > 11/06/14 15:35:21 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
>> > > 11/06/14 15:35:21 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop/work/hadoop/hadoop-0.20.203.0
>> > > 11/06/14 15:35:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
>> > > 11/06/14 15:35:21 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181
>> > > 11/06/14 15:35:21 INFO zookeeper.ClientCnxn: Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
>> > > 11/06/14 15:35:21 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x1308d8861600014, negotiated timeout = 180000
>> > > 11/06/14 15:35:22 INFO mapreduce.TableOutputFormat: Created table instance for movies
>> > > 11/06/14 15:35:22 INFO input.FileInputFormat: Total input paths to process : 1
>> > > 11/06/14 15:35:22 INFO mapred.JobClient: Running job: job_201106141233_0042
>> > > 11/06/14 15:35:23 INFO mapred.JobClient:  map 0% reduce 0%
>> > > 11/06/14 15:38:16 INFO mapred.JobClient: Task Id : attempt_201106141233_0042_m_000000_0, Status : FAILED
>> > > java.lang.NullPointerException
>> > >        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.close(TableOutputFormat.java:107)
>> > >        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:650)
>> > >        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:765)
>> > >        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:369)
>> > >        at org.apache.hadoop.mapred.Child$4.run(Child.java:259)
>> > >        at java.security.AccessController.doPrivileged(Native Method)
>> > >        at javax.security.auth.Subject.doAs(Subject.java:396)
>> > >        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>> > >        at org.apache.hadoop.mapred.Child.main(Child.java:253)
>> > >
>> > > attempt_201106141233_0042_m_000000_0: Bad line at offset: 0:
>> > > attempt_201106141233_0042_m_000000_0: No delimiter
>> > > attempt_201106141233_0042_m_000000_0: Bad line at offset: 34:
>> > > attempt_201106141233_0042_m_000000_0: No delimiter
>> > > attempt_201106141233_0042_m_000000_0: Bad line at offset: 51:
>> > > .......................... x33123 lines
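
Those rejections are consistent with the separator option never taking effect: it sits after the positional arguments in the command above, so ImportTsv fell back to its default tab separator and found no tab in any comma-separated line, one "Bad line ... No delimiter" pair per input row. A trivial hedged illustration (sample line invented to match the thread's schema):

    // Hypothetical illustration of the "No delimiter" rejections: scanning
    // a CSV row for ImportTsv's default tab separator finds nothing.
    public class NoDelimiterDemo {
      public static void main(String[] args) {
        String csvLine = "1,2055,new";  // uniqueID,year,name, per the thread
        boolean hasTab = csvLine.indexOf('\t') >= 0;
        System.out.println(hasTab ? "line parses" : "Bad line: No delimiter");
      }
    }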
>> > >
>> > >
>> > >
>> > >
>> > >
>> >
>> >
>> > --
>> > Prashant Sharma
>> > Development Engineer
>> > Pramati Technologies
>> > Begumpet
>> >
>> > "Hare Krishna"
>> >
>>
>>
>>
>> --
>> Todd Lipcon
>> Software Engineer, Cloudera
>>
>
