hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3441) Pass the size of the MapReduce input to JobInProgress
Date Fri, 23 May 2008 20:41:55 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12599499#action_12599499 ]

Doug Cutting commented on HADOOP-3441:

 - The field and methods should be named 'length', not 'size', to be consistent with the InputSplit API, which is the source of the data.
 - I'd prefer to see this change included in a patch that makes good use of the data.  Adding features that merely have possible uses leads to bloat.  So perhaps this patch should instead be bundled with a scheduler implementation that needs this information?

> Pass the size of the MapReduce input to JobInProgress
> -----------------------------------------------------
>                 Key: HADOOP-3441
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3441
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.17.0
>         Environment: all
>            Reporter: Ari Rabkin
>            Assignee: Ari Rabkin
>            Priority: Minor
>             Fix For: 0.18.0
>         Attachments: addDataSize.patch
> Currently, there's no easy way for the JobInProgress to know how large the job's input data is.
> This patch corrects the problem by storing the size of the input split's data through the RawSplit. The sizes of the splits are then totaled and made available via JobInProgress.getInputSize().
> This is needed, among other reasons, so that the JobInProgress knows how much data it's being run on, which will help build smarter schedulers.
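The mechanism the description outlines boils down to summing the per-split lengths reported by the InputSplit API. A minimal Java sketch of that idea follows; `SimpleSplit` and `totalInputLength` are hypothetical stand-ins for illustration, not Hadoop's actual RawSplit or JobInProgress code:

```java
import java.util.Arrays;
import java.util.List;

public class InputSizeSketch {
    // Hypothetical stand-in for an input split; only its length matters here.
    static class SimpleSplit {
        private final long length; // bytes of input covered by this split
        SimpleSplit(long length) { this.length = length; }
        long getLength() { return length; }
    }

    // Totals the split lengths, in the spirit of JobInProgress.getInputSize().
    static long totalInputLength(List<SimpleSplit> splits) {
        long total = 0;
        for (SimpleSplit split : splits) {
            total += split.getLength();
        }
        return total;
    }

    public static void main(String[] args) {
        // Two full 64 MB blocks plus a 10 MB tail split.
        List<SimpleSplit> splits = Arrays.asList(
                new SimpleSplit(64L * 1024 * 1024),
                new SimpleSplit(64L * 1024 * 1024),
                new SimpleSplit(10L * 1024 * 1024));
        System.out.println(totalInputLength(splits));
    }
}
```

A scheduler holding this total could, for example, favor short jobs or budget slots proportionally to input size, which is the "smarter schedulers" motivation given above.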

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
