hadoop-mapreduce-issues mailing list archives

From "Amareshwari Sriramadasu (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1555) Maybe some code of MapTask is wrong.
Date Mon, 05 Apr 2010 05:57:27 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853314#action_12853314 ]

Amareshwari Sriramadasu commented on MAPREDUCE-1555:
----------------------------------------------------

bq. so that MapTask object in JobTracker doesn't have a reference to the input-split. Why?


This is to save memory on the JobTracker. We have seen scenarios where the input split size is so
huge that it causes an OOM on the JT. See HADOOP-3670.
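
To make the failure mode concrete, here is a minimal, self-contained sketch (not the actual Hadoop source; the toy SplitMetaInfo and the harness are made up for illustration) that mirrors the write()-then-null pattern from the excerpt below and shows how a second serialization of the same task object hits the cleared field:

{code:title=SecondWriteNpe.java|borderStyle=solid}
import java.io.ByteArrayOutputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Toy stand-in for the real split meta info; just writes a few bytes.
class SplitMetaInfo {
  void write(DataOutput out) throws IOException {
    out.writeUTF("split offsets and locations");
  }
}

// Mirrors the write()-then-null pattern from MapTask.write() quoted below.
class TaskSketch {
  private SplitMetaInfo splitMetaInfo = new SplitMetaInfo();

  public void write(DataOutput out) throws IOException {
    splitMetaInfo.write(out);
    splitMetaInfo = null;  // reference dropped so the JT-resident copy
                           // does not keep the (possibly huge) split bytes
  }
}

public class SecondWriteNpe {
  public static void main(String[] args) throws IOException {
    TaskSketch task = new TaskSketch();
    DataOutput out = new DataOutputStream(new ByteArrayOutputStream());
    task.write(out);  // first serialization succeeds
    task.write(out);  // second serialization: NullPointerException
  }
}
{code}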

If we have to remove the nullification from serialization to avoid the NPE, we should nullify
splitMetaInfo either after task completion or after the first status update from the map task.
Thoughts?
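
For concreteness, a rough sketch of that ordering (hypothetical hook name, not the actual Hadoop code paths; it reuses the toy SplitMetaInfo from the sketch above): write() no longer clears the field, and a JT-side hook drops the reference once the attempt has reported in, so repeated serialization no longer trips over a null field.

{code:title=DeferredNullSketch.java|borderStyle=solid}
import java.io.ByteArrayOutputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of deferred nullification; reuses the toy SplitMetaInfo above.
class DeferredNullSketch {
  private SplitMetaInfo splitMetaInfo = new SplitMetaInfo();

  public void write(DataOutput out) throws IOException {
    if (splitMetaInfo != null) {   // guard: skip once the field was cleared
      splitMetaInfo.write(out);    // serialize, but keep the reference
    }
  }

  // Hypothetical hook: called by the JT-side bookkeeping on the first status
  // update from the map attempt, or on task completion.
  public void onFirstStatusUpdate() {
    splitMetaInfo = null;          // drop the split bytes to save JT heap
  }

  public static void main(String[] args) throws IOException {
    DeferredNullSketch task = new DeferredNullSketch();
    DataOutput out = new DataOutputStream(new ByteArrayOutputStream());
    task.write(out);               // first serialization
    task.write(out);               // re-serialization is now safe
    task.onFirstStatusUpdate();    // JT drops the reference later
  }
}
{code}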

> Maybe some code of MapTask is wrong.
> ------------------------------------
>
>                 Key: MAPREDUCE-1555
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1555
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: task
>            Reporter: Ruyue Ma
>
> {code:title=MapTask.java|borderStyle=solid}
>   public void write(DataOutput out) throws IOException {
>     super.write(out);
>     if (isMapOrReduce()) {
>       splitMetaInfo.write(out);
>       splitMetaInfo = null;  // HERE:  why set null ??????
>     }
>   }
> {code} 
> In the above code, if splitMetaInfo is set to null, a second serialization (which invokes
> write() again) will throw a NullPointerException.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

