hadoop-mapreduce-issues mailing list archives

From "Hong Tang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1317) Reducing memory consumption of rumen objects
Date Wed, 23 Dec 2009 10:05:29 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12793994#action_12793994 ]

Hong Tang commented on MAPREDUCE-1317:
--------------------------------------

The 2 failed unit tests in rumen were caused by my false assumption that LoggedXXX objects
are immutable; in fact, HadoopLogAnalyzer mutates the List<LoggedTaskAttempt> object
returned from the getter method. I restored the original semantics by creating an empty
list instead of using Collections.emptyList().
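
For context, here is a minimal Java sketch (not the rumen code itself, just an
illustration) of why the two kinds of empty list behave differently for a caller
that mutates the list returned from a getter:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class EmptyListSketch {
      public static void main(String[] args) {
        // Memory-saving variant: a shared, immutable singleton.
        List<String> shared = Collections.emptyList();

        // Original semantics: a fresh list the caller may mutate,
        // as HadoopLogAnalyzer does with the returned attempt list.
        List<String> fresh = new ArrayList<String>();

        fresh.add("attempt_0");   // fine
        shared.add("attempt_0");  // throws UnsupportedOperationException
      }
    }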

I filed MAPREDUCE-1330 to propose making the LoggedXXX APIs more consistent in this regard.

> Reducing memory consumption of rumen objects
> --------------------------------------------
>
>                 Key: MAPREDUCE-1317
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1317
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>    Affects Versions: 0.21.0, 0.22.0
>            Reporter: Hong Tang
>            Assignee: Hong Tang
>             Fix For: 0.21.0, 0.22.0
>
>         Attachments: mapreduce-1317-20091218.patch, mapreduce-1317-20091222-2.patch, mapreduce-1317-20091222.patch, mapreduce-1317-20091223.patch
>
>
> We have encountered OutOfMemoryErrors in mumak and gridmix when dealing with very large jobs. The purpose of this jira is to optimize memory consumption of rumen-produced job objects.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

