Mailing-List: common-user@hadoop.apache.org
Subject: OutOfMemoryError: Cannot create GC thread. Out of system resources
From: Edson Ramiro
To: Hadoop User <common-user@hadoop.apache.org>
Date: Wed, 31 Mar 2010 15:48:05 -0300

Hi all,

When I run the pi Hadoop sample I get this error:

10/03/31 15:46:13 WARN mapred.JobClient: Error reading task outputhttp://h04.ctinfra.ufpr.br:50060/tasklog?plaintext=true&taskid=attempt_201003311545_0001_r_000002_0&filter=stdout
10/03/31 15:46:13 WARN mapred.JobClient: Error reading task outputhttp://h04.ctinfra.ufpr.br:50060/tasklog?plaintext=true&taskid=attempt_201003311545_0001_r_000002_0&filter=stderr
10/03/31 15:46:20 INFO mapred.JobClient: Task Id : attempt_201003311545_0001_m_000006_1, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 134.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)

Maybe it's because the datanode can't create more threads:

ramiro@lcpad:~/hadoop-0.20.2$ cat logs/userlogs/attempt_201003311457_0001_r_000001_2/stdout
#
# A fatal error has been detected by the Java Runtime Environment:
#
# java.lang.OutOfMemoryError: Cannot create GC thread. Out of system resources.
#
# Internal Error (gcTaskThread.cpp:38), pid=28840, tid=140010745776400
# Error: Cannot create GC thread. Out of system resources.
#
# JRE version: 6.0_17-b04
# Java VM: Java HotSpot(TM) 64-Bit Server VM (14.3-b01 mixed mode linux-amd64)
#
# An error report file with more information is saved as:
# /var-host/tmp/hadoop-ramiro/mapred/local/taskTracker/jobcache/job_201003311457_0001/attempt_201003311457_0001_r_000001_2/work/hs_err_pid28840.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#

I configured the limits below, but I'm still getting the same error:

  fs.inmemory.size.mb = 100
  mapred.child.java.opts = -Xmx128M

Do you know which limit I should configure to fix it?

Thanks in advance,

Edson Ramiro
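A note for readers hitting the same crash: "Cannot create GC thread. Out of system resources" means the JVM failed to create a native thread at the OS level, not that the Java heap ran out, so the per-user process/thread limit on the worker nodes is worth checking alongside the Hadoop settings above. A minimal sketch of that check (assumes a Linux node; the `grep` pattern is only illustrative):

```shell
# Show the per-user limit on processes/threads for the current shell;
# a low value here can make pthread_create (and thus GC thread startup) fail.
ulimit -u
# Show the virtual-memory limit in KB; "unlimited" or a number. A tight
# limit can also block thread creation, since each thread reserves a stack.
ulimit -v
# Count native threads already in use by running Java processes
# (the bracketed pattern keeps grep from matching itself).
ps -eLf | grep -c '[j]ava'
```

If the `ulimit -u` value is small relative to the number of task JVMs times their thread counts, raising it (e.g. via /etc/security/limits.conf) or running fewer concurrent tasks per node is the usual remedy; the specific numbers are deployment-dependent and not taken from the original post.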