hadoop-general mailing list archives

From Jeff Zhang <zjf...@gmail.com>
Subject Re: readFields() throws a NullPointerException
Date Mon, 20 Sep 2010 01:34:07 GMT
Do you have a no-argument constructor for your custom class, and do
you initialize the members in that constructor? Otherwise those
members will be null when readFields() is called.
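
For example, a minimal sketch of such a class (the field names here
are made up for illustration, not taken from the real
OutputAggregator):

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;

    public class OutputAggregator implements Writable {

      // Initialize the members here (or in the constructor) so that
      // readFields() never dereferences a null field.
      private Text name = new Text();
      private long count = 0;

      // Hadoop creates Writables via reflection, so a public
      // no-argument constructor is required.
      public OutputAggregator() {
      }

      public void write(DataOutput out) throws IOException {
        name.write(out);
        out.writeLong(count);
      }

      public void readFields(DataInput in) throws IOException {
        name.readFields(in);  // would NPE here if 'name' were null
        count = in.readLong();
      }
    }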


On Mon, Sep 20, 2010 at 1:21 AM,  <Christopher.Shain@sungard.com> wrote:
> How are you constructing your WritableDeserializer?  The reason I
> ask is that on the line where you are seeing the error:
>
> writable.readFields(dataIn);
>
> the only thing there that could throw a NullPointerException is
> writable itself being null.  writable is constructed like this:
>
>    public Writable deserialize(Writable w) throws IOException {
>      Writable writable;
>      if (w == null) {
>        writable = (Writable) ReflectionUtils.newInstance(
>            writableClass, getConf());
>      } else {
>        writable = w;
>      }
>      writable.readFields(dataIn);
>      return writable;
>    }
>
> So I suspect that you are passing null as the argument to
> deserialize(), and that when constructing your WritableDeserializer
> you also passed null as the second argument (Class<?> c).  That
> combination would result in writable being null, and you'd see that
> error.  In one of those two places you must supply your Writable
> class.
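>
> In practice, with the old mapred API, the usual way to get the right
> class to the deserializer is to declare it on the JobConf.  A minimal
> sketch, assuming the class names used elsewhere in this thread:
>
>    JobConf conf = new JobConf(xxxParallelizer.class);
>    // Tell Hadoop which Writables to instantiate (via reflection)
>    // when deserializing intermediate keys and values:
>    conf.setMapOutputKeyClass(Text.class);
>    conf.setMapOutputValueClass(OutputAggregator.class);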
>
> Hope this helps,
>
> Chris
>
>
> -----Original Message-----
> From: Rakesh Ramakrishnan [mailto:raks.mail@gmail.com]
> Sent: Sunday, September 19, 2010 1:12 PM
> To: general@hadoop.apache.org
> Subject: readFields() throws a NullPointerException
>
> I have a simple map-reduce program in which my map and reduce
> primitives look like this:
>
> map(K,V) = (Text, OutputAggregator)
> reduce(Text, OutputAggregator) = (Text, Text)
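>
> In the 0.18 mapred API those signatures correspond to something like
> the following (a sketch, not my actual code):
>
>    public static class Reduce extends MapReduceBase
>        implements Reducer<Text, OutputAggregator, Text, Text> {
>      public void reduce(Text key, Iterator<OutputAggregator> values,
>          OutputCollector<Text, Text> output, Reporter reporter)
>          throws IOException {
>        // aggregate the OutputAggregator values for this key and
>        // emit a (Text, Text) pair via output.collect(...)
>      }
>    }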
>
> The important point is that my map function emits an object of type
> OutputAggregator, which is a custom class that implements the
> Writable interface.  However, my reduce fails with the following
> exception; more specifically, the readFields() function is throwing
> it.  Any clue why?  I am using Hadoop 0.18.3.
>
>
> 10/09/19 04:04:59 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
> 10/09/19 04:04:59 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
> 10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1
> 10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1
> 10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1
> 10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1
> 10/09/19 04:04:59 INFO mapred.JobClient: Running job: job_local_0001
> 10/09/19 04:04:59 INFO mapred.MapTask: numReduceTasks: 1
> 10/09/19 04:04:59 INFO mapred.MapTask: io.sort.mb = 100
> 10/09/19 04:04:59 INFO mapred.MapTask: data buffer = 79691776/99614720
> 10/09/19 04:04:59 INFO mapred.MapTask: record buffer = 262144/327680
> Length = 10
> 10
> 10/09/19 04:04:59 INFO mapred.MapTask: Starting flush of map output
> 10/09/19 04:04:59 INFO mapred.MapTask: bufstart = 0; bufend = 231; bufvoid = 99614720
> 10/09/19 04:04:59 INFO mapred.MapTask: kvstart = 0; kvend = 10; length = 327680
> gl_books
> 10/09/19 04:04:59 WARN mapred.LocalJobRunner: job_local_0001
> java.lang.NullPointerException
>  at org.myorg.OutputAggregator.readFields(OutputAggregator.java:46)
>  at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
>  at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
>  at org.apache.hadoop.mapred.Task$ValuesIterator.readNextValue(Task.java:751)
>  at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:691)
>  at org.apache.hadoop.mapred.Task$CombineValuesIterator.next(Task.java:770)
>  at org.myorg.xxxParallelizer$Reduce.reduce(xxxParallelizer.java:117)
>  at org.myorg.xxxParallelizer$Reduce.reduce(xxxParallelizer.java:1)
>  at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.combineAndSpill(MapTask.java:904)
>  at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:785)
>  at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:698)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:228)
>  at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:157)
> java.io.IOException: Job failed!
>  at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1113)
>  at org.myorg.xxxParallelizer.main(xxxParallelizer.java:145)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>  at java.lang.reflect.Method.invoke(Unknown Source)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
>  at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>  at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
>
>



-- 
Best Regards

Jeff Zhang
