Subject: Re: Out of memory error.
From: Juwei Shi <shijuwei@gmail.com>
To: mapreduce-user@hadoop.apache.org
Date: Wed, 20 Oct 2010 10:08:11 +0800

You should increase the heap size of the child JVMs that the tasktracker
spawns to run your tasks, not the heap of the jobtracker or tasktracker
daemons. By default, Hadoop allocates 1000 MB of memory to each daemon it
runs; this is controlled by the HADOOP_HEAPSIZE setting in hadoop-env.sh.
Note that this value does not apply to the child JVMs that run the map and
reduce tasks.

The memory given to each of these child JVMs is controlled by the
mapred.child.java.opts property. The default setting is -Xmx200m, which
gives each task only 200 MB of memory.
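Concretely, a minimal sketch (the conf/ paths are the usual 0.20-era
defaults, and the 4096m value just echoes Yin Lou's suggestion below;
adjust both to your cluster): the per-task heap goes in
conf/mapred-site.xml,

  <!-- conf/mapred-site.xml: heap for each map/reduce child JVM -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx4096m</value>
  </property>

whereas HADOOP_HEAPSIZE in conf/hadoop-env.sh only sizes the daemons:

  # conf/hadoop-env.sh: heap (in MB) for the daemons themselves,
  # NOT for the child JVMs that run your map and reduce tasks
  export HADOOP_HEAPSIZE=2000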
2010/10/20 Shrijeet Paliwal <shrijeet@rocketfuel.com>

> Where is it failing exactly? Are the map/reduce tasks failing, or
> something else?
>
> On Tue, Oct 19, 2010 at 9:28 AM, Yin Lou <yin.lou.07@gmail.com> wrote:
>
>> Hi,
>>
>> You can increase the heap size with
>> -D mapred.child.java.opts="-d64 -Xmx4096m".
>>
>> Hope it helps.
>> Yin
>>
>> On Tue, Oct 19, 2010 at 12:03 PM, web service <wbsrvc@gmail.com> wrote:
>>
>>> I have a simple map-reduce program which runs fine under Eclipse.
>>> However, when I execute it with Hadoop, it gives me an out of memory
>>> error. HADOOP_HEAPSIZE is 2000 MB.
>>>
>>> Not sure what the problem is.
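One caveat on the -D suggestion above (a sketch; the jar and driver names
are placeholders, not taken from this thread): generic options such as -D
are only picked up when the driver is run through ToolRunner /
GenericOptionsParser, e.g.

  # Assumes the MyJob driver implements the Tool interface, so that
  # GenericOptionsParser parses the -D option before the job arguments.
  hadoop jar myjob.jar MyJob \
      -D mapred.child.java.opts="-d64 -Xmx4096m" \
      input output

If your driver does not implement Tool, set the property in
mapred-site.xml, or on the JobConf directly, instead.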