hadoop-common-dev mailing list archives

From "Robert Chansler (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-2499) Modify datanode configs to specify minimum JVM heapsize
Date Fri, 28 Dec 2007 23:07:45 GMT
Modify datanode configs to specify minimum JVM heapsize
-------------------------------------------------------

                 Key: HADOOP-2499
                 URL: https://issues.apache.org/jira/browse/HADOOP-2499
             Project: Hadoop
          Issue Type: Bug
          Components: dfs
            Reporter: Robert Chansler


>Y! 1524346

Currently the Hadoop DataNodes are running with the option -Xmx1000m. They
should also (or instead) be running with the option -Xms1000m (if 1000m is
even the right value; it seems high).
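
For illustration only (a sketch; the class name, the elided options, and the
1000m figure are just stand-ins for the current default), the difference
between the two launches is whether the JVM merely caps the heap or also
reserves it up front:

  # today: heap may grow up to 1000 MB, but starts small
  java -Xmx1000m ... org.apache.hadoop.dfs.DataNode

  # requested for datanodes: commit the full 1000 MB at startup as well
  java -Xms1000m -Xmx1000m ... org.apache.hadoop.dfs.DataNode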

This turns out to be a sticky request. The place where Hadoop DFS picks up
that 1000m value is the hadoop-env.sh file. Here is the relevant code from
bin/hadoop, the script used to start all Hadoop processes:

) JAVA_HEAP_MAX=-Xmx1000m 
) 
) # check envvars which might override default args
) if [ "$HADOOP_HEAPSIZE" != "" ]; then
)   #echo "run with heapsize $HADOOP_HEAPSIZE"
)   JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"
)   #echo $JAVA_HEAP_MAX
) fi

And here's the entry from hadoop-env.sh:
) # The maximum amount of heap to use, in MB. Default is 1000.
) export HADOOP_HEAPSIZE=1000
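
Putting the two together: with the shipped default of HADOOP_HEAPSIZE=1000,
every daemon started through bin/hadoop gets the same -Xmx1000m and no -Xms
flag at all. The exact way a daemon is launched varies (bin/hadoop directly,
or via bin/hadoop-daemon.sh, which in turn calls bin/hadoop), but the heap
flag always comes from this one place. A rough illustration:

  # override the default for this shell, value in MB
  export HADOOP_HEAPSIZE=2000
  bin/hadoop datanode    # launched with -Xmx2000m, still with no -Xms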

The problem is that I believe we want to specify -Xms for datanodes ONLY, but
the same script is used to start datanodes, tasktrackers, etc. This isn't
simply a matter of distributing different config files; the heap option is
coded into the bin/hadoop script itself. So this is an enhancement request.


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

