hadoop-hdfs-user mailing list archives

From Ajay Srivastava <Ajay.Srivast...@guavus.com>
Subject Re: Cartesian product in hadoop
Date Thu, 18 Apr 2013 11:45:44 GMT
Yes, that's a crucial part.

Write a class that extends WritableComparator and override its compare method.
You need to set this class on the job as -
job.setGroupingComparatorClass(<grouping comparator class>).

This will make sure that records having the same Ki are grouped together and go to the same
invocation of reduce.
I forgot to mention in my previous post that you also need to write a partitioner which partitions data
on the first part of the key.
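To make the comparator and partitioner semantics concrete, here is a minimal pure-Python sketch of what the two classes do (this is an illustrative simulation, not actual Hadoop code; the tuple keys and function names are assumptions):

```python
# Composite keys are (Ki, dataset_tag) tuples; both functions below
# deliberately look only at Ki, the first part of the key.

def partition(key, num_reducers):
    """Partitioner: route on the first part of the key only, so that
    (Ki, DATASET1) and (Ki, DATASET2) land on the same reducer."""
    ki, _tag = key
    return hash(ki) % num_reducers

def grouping_compare(key_a, key_b):
    """Grouping comparator: keys compare equal (return 0) when their
    first parts match, so all records sharing Ki enter one reduce call."""
    return (key_a[0] > key_b[0]) - (key_a[0] < key_b[0])
```

In real Hadoop these would be a WritableComparator subclass registered via job.setGroupingComparatorClass and a Partitioner subclass registered via job.setPartitionerClass.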

Ajay Srivastava

On 18-Apr-2013, at 4:42 PM, zheyi rong wrote:

Hi Ajay Srivastava,

Thank you for your reply.

Could you please explain a little bit more on "Write a grouping comparator which group records
on first part of key i.e. Ki."  ?
I guess it is a crucial part, which could filter some pairs before passing them to the reducer.

Zheyi Rong

On Thu, Apr 18, 2013 at 12:50 PM, Ajay Srivastava <Ajay.Srivastava@guavus.com>
Hi Rong,
You can use the following simple method.

Let's say dataset1 has m records; when you emit these records from the mapper, the keys are K1, K2,
…, Km for the respective records. Also add an identifier marking which dataset the
record is being emitted from.
So if R1 is a record in dataset1, the mapper will emit key (K1, DATASET1) and value R1.

For dataset2, which has n records, emit m records for each record, with keys K1, K2, …, Km and
identifier DATASET2.
So if R1' is a record from dataset2, emit m records with key (Ki, DATASET2) and value
R1', where i runs from 1 to m.

Write a grouping comparator which groups records on the first part of the key, i.e. Ki.

In the reducer, each invocation of reduce will see one record from dataset1 and n records
from dataset2. Compute the cartesian product, apply the filter, and then output.

Note -- You may not know the keys (K1, K2, …, Km) beforehand. If so, you need one more
pass over dataset1 to identify the keys and store them for use with dataset2.
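The whole scheme above (map emission, shuffle, grouped reduce) can be simulated end to end in a few lines of pure Python; again this is only a sketch of the data flow, not Hadoop code, and the sample data and `keep` filter are illustrative assumptions:

```python
from itertools import groupby

def map_phase(dataset1, dataset2):
    """Emit (key, value) pairs: each dataset1 record once under its own
    key Ki, each dataset2 record replicated under every key K1..Km."""
    keys = ["K%d" % i for i in range(1, len(dataset1) + 1)]
    out = [((ki, "DATASET1"), r) for ki, r in zip(keys, dataset1)]
    for r in dataset2:
        for ki in keys:
            out.append(((ki, "DATASET2"), r))
    return out

def reduce_phase(pairs, keep):
    """Sort by full key (DATASET1 sorts before DATASET2), then group on
    Ki alone, mimicking the grouping comparator; each group holds one
    dataset1 record followed by the n dataset2 records."""
    pairs.sort(key=lambda kv: kv[0])
    results = []
    for _ki, group in groupby(pairs, key=lambda kv: kv[0][0]):
        values = [v for _k, v in group]
        r1, rest = values[0], values[1:]
        # Cartesian pairs for this group, filtered before output.
        results.extend((r1, r2) for r2 in rest if keep(r1, r2))
    return results
```

For example, `reduce_phase(map_phase(["a", "b"], ["x", "y"]), keep=lambda a, b: True)` yields all four cross pairs, while a selective `keep` discards most of them before output, matching the filtered cartesian product described above.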

Ajay Srivastava

On 18-Apr-2013, at 3:51 PM, Azuryy Yu wrote:

This is not suitable for his large dataset.

--Sent from my Sony mobile.

On Apr 18, 2013 5:58 PM, "Jagat Singh" <jagatsingh@gmail.com>

Can you have a look at



On Thu, Apr 18, 2013 at 7:47 PM, zheyi rong <zheyi.rong@gmail.com>
Dear all,

I am writing to kindly ask for ideas on doing a cartesian product in hadoop.
Specifically, I have two datasets, each of which contains 20 million lines.
I want to do a cartesian product on these two datasets, comparing lines pairwise.

The output of each comparison can mostly be filtered out by a function (we do not output the
whole result of the cartesian product, but only a small part).

I guess one good way is to pass one block from dataset1 and another block from dataset2
to a mapper, then let the mapper do the product in memory to avoid IO.

Any suggestions?
Thank you very much.

Zheyi Rong
