hadoop-mapreduce-issues mailing list archives

From "Todd Lipcon (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-3205) MR2 memory limits should be pmem, not vmem
Date Wed, 26 Oct 2011 21:29:33 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-3205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13136413#comment-13136413 ]

Todd Lipcon commented on MAPREDUCE-3205:

bq. vmem.to.pmem.limit.ratio should be vmem-pmem-limit-ratio (better: vmem-pmem-ratio)
renamed to vmem-pmem-ratio
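For context, this knob ended up as a NodeManager setting in yarn-site.xml; a sketch of how it is configured (the key `yarn.nodemanager.vmem-pmem-ratio` and the 2.1 default are what YARN ultimately shipped, not something in this patch):

```xml
<!-- yarn-site.xml: virtual memory allowed per unit of physical memory
     granted to a container -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>
```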

bq. Should we default the ratio to 1.0 to be compatible with current CS in 0.20?
I don't think so, since then it will be yet another config that everyone has to set before
their Hadoop will work right :) We have too many of those today, and everyone is going to
need to revamp their configs when deploying MR2 anyway. Let's use it as a forcing function
to fix what we don't like.

bq. Do you want to make resource.memory-gb as resource.memory-mb i.e. incorporate MAPREDUCE-3266?

bq. The 80% limit on available RAM needs to be more conservative? I shudder to think it probably
should be configurable...
This is just a WARN level message, not a true limit. I think on large machines 80% is reasonable,
even if not especially conservative. (e.g. on a 48G machine it leaves 9.6GB free for other
processes, which isn't bad.)

bq. Might be helpful to add both pmem and vmem in error msgs for both exceptional conditions
for users?
I changed the message to look like:

Container [pid=19843,containerID=container_0_0000_01_000000] is running beyond virtual memory
limits. Current usage: 1.9mb of 0b physical memory used; 20.2mb of 0b virtual memory used.
Killing container.
Dump of the process-tree for container_0_0000_01_000000 :
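For reference, the two figures in that message correspond to a process's resident (physical) and virtual sizes. A minimal Linux sketch of reading them follows; this is only an illustration, not the NodeManager's actual monitor, which walks the whole container process tree:

```python
# Sketch: read physical (VmRSS) and virtual (VmSize) memory usage of a
# process from /proc on Linux. The NodeManager's real check aggregates
# these figures over the container's entire process tree.
def mem_usage_kb(pid="self"):
    """Return {'VmSize': kB, 'VmRSS': kB} for the given pid."""
    sizes = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split(":", 1)
                sizes[key] = int(value.split()[0])  # reported in kB
    return sizes

usage = mem_usage_kb()
print(usage)  # VmSize (vmem) typically exceeds VmRSS (pmem) severalfold
```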
> MR2 memory limits should be pmem, not vmem
> ------------------------------------------
>                 Key: MAPREDUCE-3205
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3205
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: mrv2, nodemanager
>    Affects Versions: 0.23.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Blocker
>             Fix For: 0.23.0
>         Attachments: mr-3205.txt, mr-3205.txt, mr-3205.txt, mr-3205.txt, mr-3205.txt,
> Currently, the memory resources requested for a container limit the amount of virtual
memory used by the container. On my test clusters, at least, Java processes take up nearly
twice as much vmem as pmem - a Java process running with -Xmx500m uses 935m of vmem and only
about 560m of pmem.
> This will force admins to either under-utilize available physical memory, or oversubscribe
it by configuring the available resources on a TT to be larger than the true amount of physical
memory.
> Instead, I would propose that the resource limit apply to pmem, and allow the admin to
configure a "vmem overcommit ratio" which sets the vmem limit as a function of pmem limit.


