hadoop-common-dev mailing list archives

From "Michel Tourn (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-372) should allow to specify different inputformat classes for different input dirs for Map/Reduce jobs
Date Sat, 26 Aug 2006 02:45:23 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-372?page=comments#action_12430683 ] 
            
Michel Tourn commented on HADOOP-372:
-------------------------------------

Support for "all-pairs" joins is a far-reaching requirement that I don't touch here. 

But I agree with Doug's last comment:

>In other words, if you're going to define custom mappers anyway, 
>then it's no more work to define custom Input formats.

Moreover, unless I am missing something, the current APIs already nicely address the requirement
in the JIRA issue title.

JobConf.addInputPath(Path dir)

JobConf.setInputFormat(Class theClass)

InputFormat {
  FileSplit[] getSplits(FileSystem, JobConf, preferredNumSplits)
  RecordReader getRecordReader(FileSystem fs, FileSplit split,
                               JobConf job, Reporter reporter)
  ...
}

Given the current API, the flow looks like this:

During Task execution ( InputFormat.getRecordReader() ):

task's FileSplit + job's single InputFormat --> Path context --> input-directory context
--> dispatched "sub" InputFormat --> getRecordReader() --> RecordReader instance.


During JobTracker splits computation ( InputFormat.getSplits() ):

job's single InputFormat + job's list of input directories --> list of input dirs/files
--> list of sub-InputFormats --> dispatch and "aggregate" the results from each
sub-InputFormat's getSplits()
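For illustration, the two flows above can be sketched in plain Java. The interfaces below are simplified stand-ins for the real org.apache.hadoop.mapred classes (splits and record readers are modeled as strings), so this only shows the shape of the dispatch and aggregation, not a working Hadoop InputFormat:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class DispatchSketch {
  // Simplified stand-in for org.apache.hadoop.mapred.InputFormat:
  // splits and record readers are modeled as plain strings.
  interface InputFormat {
    List<String> getSplits(List<String> inputDirs); // e.g. "dir/part-0"
    String getRecordReader(String split);           // a reader id
  }

  // The generic dispatching class sketched in this comment: it routes
  // each input dir to its own sub-InputFormat.
  static class DispatchInputFormat implements InputFormat {
    private final Map<String, InputFormat> dirToFormat;

    DispatchInputFormat(Map<String, InputFormat> dirToFormat) {
      this.dirToFormat = dirToFormat;
    }

    // JobTracker side: dispatch per input dir, then aggregate
    // the sub-InputFormats' results into one split list.
    public List<String> getSplits(List<String> inputDirs) {
      List<String> all = new ArrayList<>();
      for (String dir : inputDirs) {
        all.addAll(dirToFormat.get(dir).getSplits(List.of(dir)));
      }
      return all;
    }

    // Task side: recover the input-dir context from the split's
    // path, then delegate to that dir's sub-InputFormat.
    public String getRecordReader(String split) {
      String dir = split.substring(0, split.indexOf('/'));
      return dirToFormat.get(dir).getRecordReader(split);
    }
  }
}
```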


This is enough to implement the special case discussed in the HADOOP-372 title:

InputDirectory  --> InputFormat


A framework class, the examples, or the Wiki FAQ could demonstrate how one can write such
a *generic* dispatching class:


class DispatchInputFormat(InputFormat[], JobConf) implements InputFormat


It is generic but not universal: different applications will need to use different
information to make the InputFormat dispatch/aggregation decisions.

Here are three ways to customize this DispatchInputFormat.

1./3. Writing zero Java code:

replace:
>job.addInputPath("foo", FooInput.class); 
with:
job.set("DispatchInputFormat.inputdirmap", "foo=org.example.FooInput bar=org.example.BarInput")
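A minimal sketch of how DispatchInputFormat might decode that packed property on the other side (the property name and "dir=classname" format come from this comment; the parsing code itself is illustrative, not an existing Hadoop API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class InputDirMap {
  // Decode "dir1=ClassName1 dir2=ClassName2 ..." into an
  // input-dir -> InputFormat-classname map, preserving order.
  static Map<String, String> parse(String packed) {
    Map<String, String> m = new LinkedHashMap<>();
    for (String entry : packed.trim().split("\\s+")) {
      int eq = entry.indexOf('=');
      m.put(entry.substring(0, eq), entry.substring(eq + 1));
    }
    return m;
  }
}
```

Each classname would then be loaded reflectively and instantiated as the sub-InputFormat for its directory.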

If you want client-side type checking for the classnames, do it in a helper method.
For example:

static void DispatchInputFormat.addDir(
  JobConf job, Path dir, Class<? extends InputFormat> clazz)
Call:
DispatchInputFormat.addDir(job, new Path("foo"), FooInput.class);
DispatchInputFormat.addDir(job, new Path("bar"), BarInput.class);

Where Class<? extends InputFormat> uses Generics to enforce at compile time that FooInput
implements InputFormat.
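A self-contained sketch of that helper, with a plain Map standing in for JobConf (all names here are illustrative). The bounded wildcard Class<? extends InputFormat> is what makes FooInput.class acceptable while rejecting unrelated classes at compile time:

```java
import java.util.Map;

public class AddDirSketch {
  // Stand-ins for the Hadoop types referenced in the comment.
  interface InputFormat {}
  static class TextInput implements InputFormat {}

  // The packed property name proposed in this comment.
  static final String KEY = "DispatchInputFormat.inputdirmap";

  // The bounded wildcard rejects, at compile time, any class that
  // does not implement InputFormat; e.g. String.class would not compile.
  static void addDir(Map<String, String> job, String dir,
                     Class<? extends InputFormat> clazz) {
    String prev = job.getOrDefault(KEY, "");
    job.put(KEY, (prev.isEmpty() ? "" : prev + " ") + dir + "=" + clazz.getName());
  }
}
```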


2./3. Code reuse without copy-paste

A few well-placed hooks could allow users to reuse and customize the DispatchInputFormat code
without duplicating it:

class MyInputFormat extends DispatchInputFormat {

  //override
   protected InputFormat inputDirToFormat(Path inputDir) {
     ...
   } 
}

3./3. Code reuse with copy-paste

For more complex requirements that do not fit well with inputDirToFormat(), one would
instead use DispatchInputFormat as a starting point: a source-code example to copy and adapt.



> should allow to specify different inputformat classes for different input dirs for Map/Reduce jobs
> --------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-372
>                 URL: http://issues.apache.org/jira/browse/HADOOP-372
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.4.0
>         Environment: all
>            Reporter: Runping Qi
>         Assigned To: Owen O'Malley
>
> Right now, the user can specify multiple input directories for a map reduce job. 
> However, the files under all the directories are assumed to be in the same format, 
> with the same key/value classes. This proves to be a serious limitation in many situations.

> Here is an example. Suppose I have three simple tables: 
> one has URLs and their rank values (page ranks), 
> another has URLs and their classification values, 
> and the third one has the URL meta data such as crawl status, last crawl time, etc. 
> Suppose now I need a job to generate a list of URLs to be crawled next. 
> The decision depends on the info in all the three tables.
> Right now, there is no easy way to accomplish this.
> However, this job can be done if the framework allows specifying different inputformats for different input dirs.
> Suppose my three tables are in the following directories respectively: rankTable, classificationTable, and metaDataTable.
> If we extend JobConf class with the following method (as Owen suggested to me):
>     addInputPath(aPath, anInputFormatClass, anInputKeyClass, anInputValueClass)
> Then I can specify my job as follows:
>     addInputPath(rankTable, SequenceFileInputFormat.class, UTF8.class, DoubleWritable.class)
>     addInputPath(classificationTable, TextInputFormat.class, UTF8.class, UTF8.class)
>     addInputPath(metaDataTable, SequenceFileInputFormat.class, UTF8.class, MyRecord.class)
> If an input directory is added through the current API, it will have the same meaning as it is now.
> Thus this extension will not affect any applications that do not need this new feature.
> It is relatively easy for the M/R framework to create an appropriate record reader for a map task based on the above information.
> And that is the only change needed for supporting this extension.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
