hadoop-common-user mailing list archives

From "Juan P." <gordoslo...@gmail.com>
Subject Re: Comparing
Date Thu, 26 May 2011 13:21:51 GMT
Harsh,
Thanks for your response, it was very helpful.
There are still a couple of things which are not really clear to me though.
You say that "Keys have got to be compared by the MR framework". But I'm
still not 100% sure why the keys are sorted. I thought that during
shuffling Hadoop chose which keys went to which reducer: for each
key/value pair it checked the key and sent the pair to the correct node.
If that were the case, a good equals() implementation would be enough. So
why does the MR framework *sort* the items instead of just *shuffling*
them?
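
One way to see why sorting (and not just partitioning) matters: sorted input
lets a reducer collect all values for a key in a single streaming pass,
because every key's values arrive contiguously. The following is a minimal
sketch in plain Java, not the Hadoop API, and the class and method names are
illustrative only:

```java
import java.util.*;

public class SortedGrouping {
    // Group values by key from a SORTED list of (key, value) pairs in one
    // streaming pass. Only the current group needs to be in memory, which is
    // what the reduce phase relies on -- and why the framework sorts keys
    // rather than merely routing them by hash/equals.
    static Map<String, List<Integer>> group(List<Map.Entry<String, Integer>> sorted) {
        Map<String, List<Integer>> out = new LinkedHashMap<>();
        String current = null;
        List<Integer> bucket = null;
        for (Map.Entry<String, Integer> e : sorted) {
            if (!e.getKey().equals(current)) { // key boundary: a new group starts
                current = e.getKey();
                bucket = new ArrayList<>();
                out.put(current, bucket);
            }
            bucket.add(e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>(List.of(
                Map.entry("b", 2), Map.entry("a", 1), Map.entry("a", 3)));
        // Sorting guarantees that all values for "a" arrive contiguously;
        // equals() alone could not guarantee that.
        pairs.sort(Map.Entry.comparingByKey());
        System.out.println(group(pairs)); // {a=[1, 3], b=[2]}
    }
}
```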

Also, you were very clear about the use of RawComparator, thank you. Do you
know how RawComparable works though?

Again, thanks for your help!
Cheers,
Pony

On Thu, May 26, 2011 at 1:58 AM, Harsh J <harsh@cloudera.com> wrote:

> Pony,
>
> Keys have got to be compared by the MR framework somehow, and the way
> it does when you use Writables is by ensuring that your Key is of a
> Writable + Comparable type (WritableComparable).
>
> If you specify a specific comparator class, then that will be used;
> else the default WritableComparator will get asked if it can supply a
> comparator for use with your key type.
>
> AFAIK, the default WritableComparator implements the RawComparator
> interface but does indeed deserialize the writables before applying
> the compare operation. RawComparator's primary idea is to give you a
> pair of raw byte sequences to compare directly. Certain other
> serialization libraries (Apache Avro is one) provide ways to compare
> using the serialized bytes themselves (across different types), which
> can end up being faster in jobs.
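
The raw-byte comparison idea described above can be sketched in plain Java.
This is not the actual Hadoop RawComparator API, just a simplified
illustration of the principle: for ASCII text, serialized UTF-8 keys can be
ordered by comparing their bytes directly, with no deserialization at all.

```java
import java.nio.charset.StandardCharsets;

public class RawCompareSketch {
    // Compare two serialized byte sequences lexicographically, treating each
    // byte as unsigned -- the same idea a raw comparator uses to order keys
    // during the sort without deserializing them.
    static int compareBytes(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        byte[] x = "apple".getBytes(StandardCharsets.UTF_8);
        byte[] y = "banana".getBytes(StandardCharsets.UTF_8);
        // For ASCII strings the raw-byte order agrees with String.compareTo,
        // so the sort never needs the deserialized objects.
        System.out.println(compareBytes(x, y) < 0); // true
    }
}
```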
>
> Hope this clears up your confusion.
>
> On Tue, May 24, 2011 at 2:06 AM, Juan P. <gordoslocos@gmail.com> wrote:
> > Hi guys,
> > I wanted to get your help with a couple of questions which came up while
> > looking at the Hadoop Comparator/Comparable architecture.
> >
> > As I see it, before each reducer operates on its keys, a sorting
> > algorithm is applied to them. *Why does Hadoop need to do that?*
> >
> > If I implement my own class and intend to use it as a Key, I must allow
> > instances of my class to be compared. So I have two choices: I can
> > implement WritableComparable or I can register a WritableComparator for
> > my class. Should I fail to do either, would the Job fail?
> > If I register my WritableComparator which does not use the Comparable
> > interface at all, does my Key need to implement WritableComparable?
> > If I don't implement my Comparator and my Key implements
> > WritableComparable, does it mean that Hadoop will deserialize my Keys
> > twice? (once for sorting, and once for reducing)
> > What is RawComparable used for?
> >
> > Thanks for your help!
> > Pony
> >
>
>
>
> --
> Harsh J
>
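
For reference, a custom key of the shape discussed in the thread might look
like the sketch below. It uses only java.io interfaces so it stands alone:
the real Hadoop interface is org.apache.hadoop.io.WritableComparable, whose
contract is exactly these three methods, and the class and field names here
are hypothetical.

```java
import java.io.*;

// Sketch of a custom key in the WritableComparable style, using only
// java.io types. A real Hadoop key would declare
// "implements WritableComparable<PageKey>" instead of Comparable.
public class PageKey implements Comparable<PageKey> {
    private String url = "";   // hypothetical fields, for illustration only
    private int rank;

    public PageKey() {}        // no-arg constructor required for deserialization
    public PageKey(String url, int rank) { this.url = url; this.rank = rank; }

    // Serialize the fields in a fixed order.
    public void write(DataOutput out) throws IOException {
        out.writeUTF(url);
        out.writeInt(rank);
    }

    // Deserialize the fields in the same order they were written.
    public void readFields(DataInput in) throws IOException {
        url = in.readUTF();
        rank = in.readInt();
    }

    @Override
    public int compareTo(PageKey o) {  // defines the sort order of the keys
        int c = url.compareTo(o.url);
        return c != 0 ? c : Integer.compare(rank, o.rank);
    }

    public String getUrl() { return url; }
    public int getRank() { return rank; }
}
```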
