hadoop-common-user mailing list archives

From Nick Cen <cenyo...@gmail.com>
Subject Re: What's the cause of this Exception
Date Mon, 02 Mar 2009 02:13:33 GMT
Hi,

my key has the format "key1,key2,key3", and I call
conf.setKeyFieldPartitionerOptions("-k 1,1"). When I limit the input size
it works fine; I think that is because limiting the input also limits the total
number of possible "key1,key2,key3" combinations. But when I increase the input
size, this exception is thrown.
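For reference, a minimal sketch of the kind of job setup being discussed, assuming Hadoop 0.19's old mapred API; the separator property name and the closed-range "-k1,1" spec are my reading of the thread, not something it confirms:

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.KeyFieldBasedComparator;
import org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner;

public class KeyFieldSetup {
    public static void configure(JobConf conf) {
        // Assumption: fields of the "key1,key2,key3" key are comma-separated.
        conf.set("map.output.key.field.separator", ",");

        // Sort on the first field only. The closed range -k1,1 terminates
        // explicitly, so the comparator stops at the field's end instead of
        // running past the last byte of the key.
        conf.setOutputKeyComparatorClass(KeyFieldBasedComparator.class);
        conf.setKeyFieldComparatorOptions("-k1,1");

        // Partition on the same field so equal keys reach the same reducer.
        conf.setPartitionerClass(KeyFieldBasedPartitioner.class);
        conf.setKeyFieldPartitionerOptions("-k1,1");
    }
}
```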

2009/3/2 jason hadoop <jason.hadoop@gmail.com>

> The way you are specifying the section of your key to compare is reaching
> beyond the end of the last part of the key.
>
> Your key specification is not terminating explicitly on the last character
> of the final field of the key.
>
> If your key splits into N parts and you are comparing on the Nth part,
> -kN,N will work while -kN will throw the exception.
>
> By default the comparator picks up a piece together with its trailing
> separator. The last piece has no trailing separator, so you get the
> array-out-of-bounds exception.
>
>
>
> On Sun, Mar 1, 2009 at 5:38 PM, Nick Cen <cenyongh@gmail.com> wrote:
>
> > java.lang.ArrayIndexOutOfBoundsException: 4096
> >        at org.apache.hadoop.io.WritableComparator.compareBytes(WritableComparator.java:129)
> >        at org.apache.hadoop.mapred.lib.KeyFieldBasedComparator.compareByteSequence(KeyFieldBasedComparator.java:109)
> >        at org.apache.hadoop.mapred.lib.KeyFieldBasedComparator.compare(KeyFieldBasedComparator.java:85)
> >        at org.apache.hadoop.mapred.Merger$MergeQueue.lessThan(Merger.java:308)
> >        at org.apache.hadoop.util.PriorityQueue.downHeap(PriorityQueue.java:139)
> >        at org.apache.hadoop.util.PriorityQueue.adjustTop(PriorityQueue.java:103)
> >        at org.apache.hadoop.mapred.Merger$MergeQueue.adjustPriorityQueue(Merger.java:270)
> >        at org.apache.hadoop.mapred.Merger$MergeQueue.next(Merger.java:285)
> >        at org.apache.hadoop.mapred.Task$ValuesIterator.readNextKey(Task.java:870)
> >        at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:829)
> >        at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.moveToNext(ReduceTask.java:237)
> >        at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.next(ReduceTask.java:233)
> >        at ufida.ReduceTask.reduce(ReduceTask.java:39)
> >        at ufida.ReduceTask.reduce(ReduceTask.java:1)
> >        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
> >        at org.apache.hadoop.mapred.Child.main(Child.java:155)
> >
> > My hadoop version is 0.19.0; if I limit the number of input files, the
> > exception is not thrown.
> > --
> > http://daily.appspot.com/food/
> >
>
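To make the trailing-separator behaviour described above concrete, here is a small self-contained sketch. It is illustrative only, not Hadoop's actual comparator code: `endOfFieldInclusive` is a hypothetical helper that mimics an unbounded -kN spec by taking each field plus its trailing separator.

```java
// Illustrative sketch of why an open-ended field spec can overrun the key.
public class KeySliceDemo {
    // Returns the end offset (exclusive) of 1-based field `n`, including
    // the trailing separator, the way an unbounded -kN spec would.
    static int endOfFieldInclusive(byte[] key, int n, byte sep) {
        int seen = 0;
        for (int i = 0; i < key.length; i++) {
            if (key[i] == sep && ++seen == n) {
                return i + 1; // field plus its trailing separator
            }
        }
        // The last field has no trailing separator: a caller that assumes
        // one extra byte here reads past the end of the buffer.
        return key.length + 1;
    }

    public static void main(String[] args) {
        byte[] key = "key1,key2,key3".getBytes();
        // Field 1 ends at its separator: offset 5, safely inside the key.
        System.out.println(endOfFieldInclusive(key, 1, (byte) ','));
        // Field 3 (the last) yields 15, one past the 14-byte key:
        // the analogue of the ArrayIndexOutOfBoundsException in the trace.
        System.out.println(endOfFieldInclusive(key, 3, (byte) ','));
    }
}
```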



-- 
http://daily.appspot.com/food/
