hadoop-common-dev mailing list archives

From "Owen O'Malley (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5299) Reducer inputs should be spilled to HDFS rather than local disk.
Date Sun, 22 Feb 2009 09:17:02 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12675640#action_12675640 ]

Owen O'Malley commented on HADOOP-5299:
---------------------------------------

Using one big file won't work. The pread performance of HDFS is too low to be usable in the
shuffle. Furthermore, with a single file you can't free up the sections that have already been
merged. On clusters with tight free space, doubling (or more) the size of the reduce input on
disk is not ok.
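
(Illustrative sketch, not part of the original comment.) In the HDFS API, a positioned read
("pread") goes through FSDataInputStream.readFully(position, buffer, offset, length), which
fetches a byte range without moving the stream's own offset. A minimal sketch of the access
pattern a single big spill file would force on every shuffle fetch:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PreadSketch {
        // Fetch one merged segment out of a single large spill file.
        // Every fetch becomes a positioned read into the middle of the
        // file, which is the access pattern argued above to be too slow
        // for the shuffle.
        public static byte[] readSegment(Configuration conf, Path spillFile,
                                         long offset, int length) throws Exception {
            FileSystem fs = spillFile.getFileSystem(conf);
            byte[] buf = new byte[length];
            FSDataInputStream in = fs.open(spillFile);
            try {
                // Positioned read: does not disturb the stream's offset,
                // so concurrent fetchers could share one open stream.
                in.readFully(offset, buf, 0, length);
            } finally {
                in.close();
            }
            return buf;
        }
    }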

In order for other task attempts to reuse the shuffled data, the shuffle would also need to
continuously store its state in HDFS, leading to even more traffic...
 
Furthermore, I'm against a change that ties map/reduce to work only with HDFS. It should continue
to work with just S3 or KFS or even the local file system. It would be possible to make this work
by using sub-directories of the system dir, but I'd imagine it will interact badly with security.

> Reducer inputs should be spilled to HDFS rather than local disk.
> ----------------------------------------------------------------
>
>                 Key: HADOOP-5299
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5299
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.19.0
>         Environment: All
>            Reporter: Milind Bhandarkar
>
> Currently, both map outputs and reduce inputs are stored on the local disks of tasktrackers.
> (Un)availability of local disk space for intermediate data is seen as a major factor in job
> failures.
> The suggested solution is to store this intermediate data on HDFS (maybe with a replication
> factor of 1). However, the main blocker for that solution is that lots of temporary file names
> (proportional to the total number of maps) can overwhelm the namenode, especially since the
> map outputs are typically small (most produce one block of output).
> Also, as we see in many applications, map output sizes can be estimated fairly accurately,
> so users can plan accordingly based on available local disk space.
> However, reduce input sizes can vary a lot, especially for skewed data (or because of bad
> partitioning).
> So, I suggest that it makes more sense to keep map outputs on local disks, but the reduce
> inputs (when spilled from reducer memory) should go to HDFS.
> Adding a configuration variable to indicate the filesystem to be used for reduce-side spills
> would let us experiment and compare the efficiency of this new scheme.
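
(Illustrative sketch, not part of the original report; the key name mapred.reduce.spill.fs is
invented here purely for illustration.) Resolving the spill filesystem from a URI-valued
configuration variable, as proposed, would keep the code neutral between hdfs://, s3://,
kfs://, and file:///:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SpillFsSketch {
        // Hypothetical key for the proposed configuration variable; it does
        // not exist in Hadoop. Defaulting to the local filesystem preserves
        // today's behavior.
        public static final String SPILL_FS_KEY = "mapred.reduce.spill.fs";

        public static FileSystem getSpillFs(Configuration conf) throws Exception {
            String uri = conf.get(SPILL_FS_KEY, "file:///");
            return FileSystem.get(URI.create(uri), conf);
        }

        // One possible layout: per-attempt subdirectories of the system dir,
        // echoing the suggestion (and the security caveat) in the comment above.
        public static Path spillPath(Configuration conf, String attemptId, int spill) {
            Path systemDir = new Path(conf.get("mapred.system.dir",
                                               "/tmp/hadoop/mapred/system"));
            return new Path(systemDir, attemptId + "/spill" + spill + ".out");
        }
    }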

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

