Date: Mon, 19 Sep 2011 18:45:35 +0500
From: Uma Maheswara Rao G 72686 <maheswara@huawei.com>
Subject: Re: Out of heap space errors on TTs
To: mapreduce-user@hadoop.apache.org
Cc: common-user@hadoop.apache.org, js1987.smith@gmail.com

Hello John,

You can use the properties below:

mapred.tasktracker.map.tasks.maximum
mapred.tasktracker.reduce.tasks.maximum

By default those values are 2. AFAIK you can also reduce io.sort.mb, but disk usage will then be higher. Since this is related to mapred, I have moved this discussion to mapreduce-user and cc'ed common-user.

Regards,
Uma

----- Original Message -----
From: john smith
Date: Monday, September 19, 2011 7:02 pm
Subject: Re: Out of heap space errors on TTs
To: common-user@hadoop.apache.org

> Hi all,
>
> Thanks for the inputs...
>
> Can I reduce the ? (owing to the fact that I have less RAM, 2 GB)
>
> My conf files don't have an entry for mapred.child.java.opts, so I guess
> it's taking the default value of 200 MB.
>
> Also, how do I decide the number of tasks per TT? I have 4 cores per node
> and 2 GB of total memory, so what maximum tasks per node should I set?
>
> Thanks
>
> On Mon, Sep 19, 2011 at 6:28 PM, Uma Maheswara Rao G 72686 <
> maheswara@huawei.com> wrote:
>
> > Hello,
> >
> > You need to configure the heap size for child tasks using the property below:
> > "mapred.child.java.opts" in mapred-site.xml
> >
> > By default it is 200 MB, but your io.sort.mb (300) is larger than that,
> > so configure more heap space for the child tasks.
> >
> > ex:
> > -Xmx512m
> >
> > Regards,
> > Uma
> >
> > ----- Original Message -----
> > From: john smith
> > Date: Monday, September 19, 2011 6:14 pm
> > Subject: Out of heap space errors on TTs
> > To: common-user@hadoop.apache.org
> >
> > > Hey guys,
> > >
> > > I am running Hive and trying to join two tables (2.2 GB and 136 MB)
> > > on a cluster of 9 nodes (replication = 3).
> > >
> > > Hadoop version - 0.20.2
> > > Each data node memory - 2 GB
> > > HADOOP_HEAPSIZE - 1000 MB
> > >
> > > Other heap settings are defaults. My Hive query launches 40 map tasks,
> > > and every task failed with the same error:
> > >
> > > 2011-09-19 18:37:17,110 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 300
> > > 2011-09-19 18:37:17,223 FATAL org.apache.hadoop.mapred.TaskTracker: Error running child : java.lang.OutOfMemoryError: Java heap space
> > >     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:781)
> > >     at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:350)
> > >     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
> > >     at org.apache.hadoop.mapred.Child.main(Child.java:170)
> > >
> > > Looks like I need to tweak some of the heap settings for the TTs to
> > > handle the memory efficiently. I am unable to understand which
> > > variables to modify (there are too many related to heap sizes).
> > >
> > > Any specific things I must look at?
> > >
> > > Thanks,
> > >
> > > jS
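[Editor's note] The settings discussed in this thread can be sketched as one mapred-site.xml fragment. The property names are from Hadoop 0.20; the values below are illustrative for a 2 GB, 4-core node, not recommended defaults:

```xml
<!-- mapred-site.xml: illustrative values only, sized for a 2 GB node -->
<configuration>
  <!-- Limit concurrent map/reduce child JVMs per TaskTracker -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
  <!-- Raise child heap above io.sort.mb so the sort buffer fits -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>
  <!-- Alternatively, shrink the sort buffer (more disk spills) -->
  <property>
    <name>io.sort.mb</name>
    <value>100</value>
  </property>
</configuration>
```

The key constraint is the one Uma points out: io.sort.mb is allocated inside the child JVM heap, so it must be comfortably smaller than the -Xmx value in mapred.child.java.opts.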
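[Editor's note] The "how many tasks per TT?" question is back-of-envelope arithmetic: concurrent child heaps must fit in the RAM the daemons leave free. A hypothetical check, assuming roughly 1 GB remains free on the 2 GB node after the TaskTracker and DataNode daemons (that overhead figure is an assumption, not from the thread):

```python
# Back-of-envelope slot sizing. All numbers are illustrative assumptions.

def max_task_slots(free_ram_mb: int, child_heap_mb: int) -> int:
    """Largest number of concurrent child JVMs whose heaps fit in free RAM."""
    return free_ram_mb // child_heap_mb

# 2 GB node; assume ~1 GB left after TaskTracker/DataNode daemons.
free_ram_mb = 1024

# Why the job failed: with the default 200 MB child heap,
# a 300 MB io.sort.mb buffer cannot be allocated at all.
assert 300 > 200  # sort buffer alone exceeds the heap -> OutOfMemoryError

# With -Xmx512m per child, two concurrent tasks fit on this node:
print(max_task_slots(free_ram_mb, 512))  # -> 2
```

On a 4-core node this means the cores, not the CPUs, are the spare resource here: memory caps the node at about 2 concurrent 512 MB tasks, so setting the per-TT maximums to 2 (or 2 maps + 1 reduce) is a reasonable starting point.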