hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2764) specify different heap size for namenode/jobtracker vs. tasktracker/datanodes
Date Fri, 01 Feb 2008 19:17:08 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12564889#action_12564889 ]

Doug Cutting commented on HADOOP-2764:
--------------------------------------

> best to keep heap setting low.

The JVM doesn't generally use more heap unless it has to.  Are you seeing datanodes and tasktrackers
that use a lot of memory?  The purpose of limiting the heap size is to cause the JVM to promptly
crash instead of making the entire machine slow down by paging.
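
For context, the heap limit in question is HADOOP_HEAPSIZE from hadoop-env.sh, the maximum JVM heap
in megabytes; paraphrasing bin/hadoop rather than quoting it, the setting ends up as an -Xmx flag on
every daemon the scripts start:

    # conf/hadoop-env.sh (value in MB; illustrative, not a recommendation)
    export HADOOP_HEAPSIZE=1000

    # bin/hadoop (paraphrased): the setting becomes the JVM's max-heap flag
    JAVA_HEAP_MAX="-Xmx${HADOOP_HEAPSIZE}m"

Once that heap is exhausted the JVM throws OutOfMemoryError and the daemon dies, rather than the
kernel swapping the whole box to a crawl.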

Reducing the heap size limit for the tasktracker and datanode will only help if these are
running amok, e.g., have a memory leak.  It would help us identify such problems sooner, but
such problems would debilitate the node either way.

Also, one can use a different hadoop-env.sh on the namenode and jobtracker than on the other nodes,
but this can complicate deployments.
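
For example (illustrative numbers only), the hadoop-env.sh pushed to the master box could carry a
large heap while the one pushed to the slaves carries a small one:

    # conf/hadoop-env.sh on the namenode/jobtracker box (illustrative value)
    export HADOOP_HEAPSIZE=4000

    # conf/hadoop-env.sh on datanode/tasktracker boxes (illustrative value)
    export HADOOP_HEAPSIZE=256

Keeping two copies of the file in sync for everything except the heap line is the deployment
complication mentioned above.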

I think someone once proposed adding per-command config files, so that, if a file named 'hadoop-namenode-env.sh'
existed in the config directory, then 'bin/hadoop namenode' would include it after hadoop-env.sh.
That would be simple to add and would enable the feature desired here and a lot of other things.
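
A sketch of how bin/hadoop could do that (hypothetical; nothing like this is in the scripts today),
where COMMAND stands for the first argument, e.g. namenode, jobtracker, datanode:

    # ... after sourcing ${HADOOP_CONF_DIR}/hadoop-env.sh ...
    if [ -f "${HADOOP_CONF_DIR}/hadoop-${COMMAND}-env.sh" ]; then
      . "${HADOOP_CONF_DIR}/hadoop-${COMMAND}-env.sh"
    fi

The per-command file would then only need to override the handful of variables (e.g. HADOOP_HEAPSIZE)
that differ from the shared defaults.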

> specify different heap size for namenode/jobtracker vs. tasktracker/datanodes
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-2764
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2764
>             Project: Hadoop Core
>          Issue Type: New Feature
>    Affects Versions: 0.15.3
>            Reporter: Joydeep Sen Sarma
>            Priority: Minor
>
> tasktrackers/datanodes should be run with low memory settings. there's a lot of competition
> for memory on slave nodes, these tasks don't need much memory anyway, and it's best to keep the
> heap setting low.
> namenode needs higher memory and there's usually lots to spare on a separate box.
> hadoop-env.sh can provide different heap settings for central vs. slave daemons.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

