hadoop-common-dev mailing list archives

From "Owen O'Malley (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-331) map outputs should be written to a single output file with an index
Date Wed, 18 Oct 2006 17:08:11 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-331?page=comments#action_12443298 ] 
            
Owen O'Malley commented on HADOOP-331:
--------------------------------------

Ok, I've been convinced that we should start with BufferedEntry as a baseline. I think the
overhead/record should be 40 rather than 20 for the size metric, but of course we should use
a constant for it anyway:

final int BUFFERED_KEY_VALUE_OVERHEAD = 40;

And I think BufferedKeyValue is a little more informative than BufferedEntry.

// condition for a spill would be
buffer.size() + BUFFERED_KEY_VALUE_OVERHEAD * numKeyValues > conf.getMapOutputBufferSize()
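To make that accounting concrete, here is a minimal, self-contained sketch. The constant and the spill condition are quoted from above; the class shape, field names, and `maxBufferSize` parameter (standing in for `conf.getMapOutputBufferSize()`) are my assumptions, not the eventual patch:

```java
// Sketch only: per-record accounting for buffered map output.
// BUFFERED_KEY_VALUE_OVERHEAD approximates bookkeeping cost per
// record (object headers, offsets) beyond the raw key/value bytes.
class MapOutputBuffer {
    static final int BUFFERED_KEY_VALUE_OVERHEAD = 40;

    private long bufferedBytes = 0;   // raw serialized key/value bytes
    private long numKeyValues = 0;    // record count
    private final long maxBufferSize; // would come from conf.getMapOutputBufferSize()

    MapOutputBuffer(long maxBufferSize) {
        this.maxBufferSize = maxBufferSize;
    }

    void collect(byte[] key, byte[] value) {
        bufferedBytes += key.length + value.length;
        numKeyValues++;
    }

    // The spill condition proposed above.
    boolean shouldSpill() {
        return bufferedBytes + BUFFERED_KEY_VALUE_OVERHEAD * numKeyValues
            > maxBufferSize;
    }
}
```

With a 1000-byte budget, two 100-byte records (200 bytes + 40 overhead) stay in memory, while two more 400-byte records (1000 bytes + 80 overhead) cross the threshold and trigger a spill.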


The records would be stored in an array of ArrayLists:

List<BufferedKeyValue>[numReduces]
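In Java that declaration needs an unchecked cast, since generic arrays cannot be instantiated directly. A sketch under that caveat (the fields of BufferedKeyValue are my guess; only the class name comes from this thread):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: one in-memory list of buffered records per reduce partition.
class BufferedKeyValue {
    final byte[] key;
    final byte[] value;

    BufferedKeyValue(byte[] key, byte[] value) {
        this.key = key;
        this.value = value;
    }
}

class PartitionedBuffer {
    final List<BufferedKeyValue>[] partitions;

    @SuppressWarnings("unchecked")
    PartitionedBuffer(int numReduces) {
        // Java cannot create a List<BufferedKeyValue>[] directly,
        // hence the raw-typed array and the suppressed warning.
        partitions = new List[numReduces];
        for (int i = 0; i < numReduces; i++) {
            partitions[i] = new ArrayList<>();
        }
    }

    void add(int partition, BufferedKeyValue kv) {
        partitions[partition].add(kv);
    }
}
```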

Spills would be written as SequenceFile<PartKey<Key>, Value>

The spills would be merged (using the iterator output form of merge) to write:

SequenceFile<Key,Value> and partition index

If there have been no spills, you just write the SequenceFile<Key,Value> and partition
index from memory.
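The partition index itself is just a (startOffset, length) pair per reduce, so each reduce can fetch only its byte range from the single merged file. A toy sketch of that layout, writing to an in-memory stream rather than the SequenceFile format the comment refers to (the method and parameter names are mine):

```java
import java.io.ByteArrayOutputStream;

// Sketch of the "single output file + partition index" layout.
// Records are written partition by partition into one stream; the
// index records each partition's byte range in that stream.
class IndexedOutput {
    // recordsPerPartition[p] holds the serialized records for reduce p.
    // Returns index[p] = {startOffset, length} for each partition.
    static long[][] writeAll(byte[][][] recordsPerPartition,
                             ByteArrayOutputStream out) {
        long[][] index = new long[recordsPerPartition.length][2];
        for (int p = 0; p < recordsPerPartition.length; p++) {
            long start = out.size();
            for (byte[] record : recordsPerPartition[p]) {
                out.writeBytes(record);
            }
            index[p][0] = start;              // byte offset of partition p
            index[p][1] = out.size() - start; // length of partition p
        }
        return index;
    }
}
```

A reduce fetching partition p then needs only one open and one seek, which is the scaling win described in the issue below.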

It will give a fixed usage of memory, a single dump to disk in the common case, and a reasonable
behavior for large cases.



> map outputs should be written to a single output file with an index
> -------------------------------------------------------------------
>
>                 Key: HADOOP-331
>                 URL: http://issues.apache.org/jira/browse/HADOOP-331
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.3.2
>            Reporter: eric baldeschwieler
>         Assigned To: Devaraj Das
>
> The current strategy of writing a file per target map is consuming a lot of unused buffer
space (causing out of memory crashes) and puts a lot of burden on the FS (many opens, inodes
used, etc).  
> I propose that we write a single file containing all output and also write an index file
IDing which byte range in the file goes to each reduce.  This will remove the issue of buffer
waste, address scaling issues with number of open files and generally set us up better for
scaling.  It will also have advantages with very small inputs, since the buffer cache will
reduce the number of seeks needed and the data serving node can open a single file and just
keep it open rather than needing to do directory and open ops on every request.
> The only issue I see is that in cases where the task output is substantially larger
than its input, we may need to spill multiple times.  In this case, we can do a merge after
all spills are complete (or during the final spill).

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
