hadoop-mapreduce-user mailing list archives

From Juwei Shi <shiju...@gmail.com>
Subject Re: Out of memory error.
Date Wed, 20 Oct 2010 02:08:11 GMT
You should increase the heap size of the child JVMs that the TaskTracker spawns
to run your tasks, not the heap size of the Hadoop daemons (JobTracker,
TaskTracker) themselves. By default, Hadoop allocates 1000 MB of memory to each
daemon it runs; this is controlled by the HADOOP_HEAPSIZE setting in
hadoop-env.sh. Note that this value does not apply to the child JVMs that run
your map and reduce tasks.
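
For reference, that daemon setting lives in conf/hadoop-env.sh and looks like
the line below (2000 MB is only an illustrative value; size it for your
machines):

    # Heap, in MB, for each Hadoop daemon (NameNode, JobTracker, TaskTracker, ...).
    # It does not set the heap of the child JVMs that run map and reduce tasks.
    export HADOOP_HEAPSIZE=2000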

The memory given to each of these child JVMs can be changed by setting the
mapred.child.java.opts property. The default setting is -Xmx200m, which
gives each task 200 MB of memory.
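
As a sketch, the cluster-wide setting would go in conf/mapred-site.xml roughly
like this (the -Xmx1024m value is only an example; set it to what your tasks
actually need):

    <!-- JVM options passed to each child JVM that runs a map or reduce task -->
    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx1024m</value>
    </property>

The same property can also be overridden per job from the command line, as
Yin's -D example below shows.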

2010/10/20 Shrijeet Paliwal <shrijeet@rocketfuel.com>

> Where is it failing exactly? Are the map/reduce tasks failing, or something
> else?
>
>
> On Tue, Oct 19, 2010 at 9:28 AM, Yin Lou <yin.lou.07@gmail.com> wrote:
>
>> Hi,
>>
>> You can increase the heap size with -D mapred.child.java.opts="-d64 -Xmx4096m"
>>
>> Hope it helps.
>> Yin
>>
>>
>> On Tue, Oct 19, 2010 at 12:03 PM, web service <wbsrvc@gmail.com> wrote:
>>
>>> I have a simple map-reduce program, which runs fine under Eclipse.
>>> However, when I execute it using Hadoop, it gives me an out of memory error.
>>> HADOOP_HEAPSIZE is 2000 MB.
>>>
>>> Not sure what the problem is.
>>>
>>
>>
>
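
One caveat on the -D form above: it only takes effect if the job driver runs
through ToolRunner / GenericOptionsParser. Assuming a hypothetical driver class
WordCount (implementing Tool) packaged in wordcount.jar, the full invocation
would look something like:

    hadoop jar wordcount.jar WordCount \
        -D mapred.child.java.opts="-d64 -Xmx4096m" \
        input output

Here -d64 asks for the 64-bit JVM and -Xmx4096m gives each task a 4 GB heap.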
