hadoop-common-user mailing list archives

From Rekha Joshi <rekha...@yahoo-inc.com>
Subject RE: Out of Java heap space
Date Tue, 08 Dec 2009 04:24:37 GMT

If it is Hadoop 0.20, the files to modify are core-site.xml, hdfs-site.xml, and mapred-site.xml;
the default configs are in core-default.xml, hdfs-default.xml, and mapred-default.xml.
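
For example (a minimal sketch only, with an illustrative -Xmx value), a per-task heap setting
in 0.20 would go into mapred-site.xml rather than hadoop-site.xml:

<configuration>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>
</configuration>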

Otherwise, are you saying that providing -D on the command line works with the same memory,
but setting it via the config does not? If not, then for memory issues you may need to tune the
buffer/merge params, depending on your mappers/reducers. Some other params which can help
(they can also be passed on the command line, as sketched after the list) -
-Dmapred.task.default.maxvmem
-Dmapred.job.map.memory.mb
-Dmapred.job.reduce.memory.mb
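
A rough sketch of passing these on the command line (the jar, class, values, and paths are
placeholders, and this assumes the job goes through ToolRunner/GenericOptionsParser so that
-D options are picked up):

hadoop jar my-job.jar MyJob \
  -Dmapred.job.map.memory.mb=1536 \
  -Dmapred.job.reduce.memory.mb=1536 \
  input output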

Thanks!
________________________________________
From: Mark Kerzner [markkerzner@gmail.com]
Sent: Tuesday, December 08, 2009 5:56 AM
To: core-user@hadoop.apache.org
Subject: Out of Java heap space

Hi, guys,

First of all, I have added this section to hadoop-site.xml:

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>

Secondly, I am running on EC2 Hadoop clusters using the Apache distribution, and I have
modified hadoop-ec2-init-remote.sh in src/contrib/ec2 so that it creates the right setting
in hadoop-site.xml on the cluster nodes.

That is how I actually added the section to hadoop-site.xml on each machine, and I verified
that the setting is correct on all nodes.
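
For reference, this is roughly how I checked (the conf path is a placeholder, and the ps line
is just a sketch of how one might confirm that running task JVMs actually pick up the -Xmx flag):

# confirm the property is present in the node's config
grep -A 1 mapred.child.java.opts /path/to/hadoop/conf/hadoop-site.xml

# while a job is running, check the child task JVMs for the heap flag
ps -ef | grep org.apache.hadoop.mapred.Child | grep Xmx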

Still, I run out of memory, as if this setting did not take effect. What
could I check?

Thank you,
Mark
