hadoop-mapreduce-issues mailing list archives

From "Hong Tang (JIRA)" <j...@apache.org>
Subject [jira] Updated: (MAPREDUCE-1317) Reducing memory consumption of rumen objects
Date Sat, 19 Dec 2009 01:22:18 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-1317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hong Tang updated MAPREDUCE-1317:

    Status: Open  (was: Patch Available)

I spoke too soon. Although we expect LoggedLocation objects to be created through the JSON library and to be read-only afterwards, the cache itself may be accessed concurrently, and thus must be properly synchronized.
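As a rough illustration of the kind of synchronization being discussed, a canonicalizing cache can be made thread-safe with a ConcurrentMap. The class and method names below are hypothetical, not the actual rumen code; this is a minimal sketch assuming the cache's job is to make equal values share one instance:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical intern-style cache: equal values are collapsed onto a single
// canonical instance, reducing memory when many duplicates exist.
public class InternCache<T> {
    private final ConcurrentMap<T, T> cache = new ConcurrentHashMap<T, T>();

    // Thread-safe: putIfAbsent atomically installs the first instance seen,
    // and every later caller receives that same canonical instance.
    public T intern(T value) {
        T prev = cache.putIfAbsent(value, value);
        return prev == null ? value : prev;
    }
}
```

With an unsynchronized HashMap, two threads interning the same value at once could each install their own copy, defeating the deduplication; the atomic putIfAbsent avoids that race.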

I also found a few other minor improvements that I should incorporate.

With these changes, I think we also need to add a unit test to ensure the code runs properly with multiple threads.
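Such a multithreaded unit test might look like the following sketch. It is not the actual patch's test; it assumes a ConcurrentMap-based intern cache and simply checks that many threads interning equal values all observe one canonical instance:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical concurrency test: every thread must get back the same
// canonical object, or the cache is not properly synchronized.
public class InternCacheConcurrencyTest {
    public static void main(String[] args) throws Exception {
        final ConcurrentMap<String, String> cache =
            new ConcurrentHashMap<String, String>();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<String>> results = new ArrayList<Future<String>>();
        for (int i = 0; i < 100; i++) {
            results.add(pool.submit(new Callable<String>() {
                public String call() {
                    // Each thread builds its own equal-but-distinct instance.
                    String v = new String("rack-1/host-1");
                    String prev = cache.putIfAbsent(v, v);
                    return prev == null ? v : prev;
                }
            }));
        }
        pool.shutdown();
        String canonical = results.get(0).get();
        for (Future<String> f : results) {
            if (f.get() != canonical) {
                throw new AssertionError("threads saw different instances");
            }
        }
    }
}
```

A failure here would show up as an AssertionError whenever the cache admitted more than one instance for the same value.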

> Reducing memory consumption of rumen objects
> --------------------------------------------
>                 Key: MAPREDUCE-1317
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1317
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>    Affects Versions: 0.21.0, 0.22.0
>            Reporter: Hong Tang
>            Assignee: Hong Tang
>             Fix For: 0.21.0, 0.22.0
>         Attachments: mapreduce-1317-20091218.patch
> We have encountered OutOfMemoryErrors in mumak and gridmix when dealing with very large
> jobs. The purpose of this jira is to optimize memory consumption of rumen-produced job objects.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
