From: Harsh J
Date: Thu, 19 Jul 2012 22:14:31 +0530
Subject: Re: location of Java heap dumps
To: mapreduce-user@hadoop.apache.org

You need to ask your job not to discard failed task files; otherwise they
get cleared away (except for the logs), which is why you no longer see the
file afterwards.

If you're using 1.x/0.20.x, set "keep.failed.task.files" to true in your
JobConf/Job.getConfiguration object before submitting your job. Then visit
the failed task's node and cd into the right mapred.local.dir subdirectory
for the attempt, and you should find your file in the task's working
directory there. The config prevents the failed task's files from being
removed.

Hope this helps!

On Thu, Jul 19, 2012 at 1:37 PM, Marek Miglinski wrote:
> Thanks Markus,
>
> But as I said, I have only read access on the nodes and I can't make that
> change. So the question is still open.
>
>
> Marek M.
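As a sketch of Harsh's suggestion, the same flag can be supplied as a configuration property rather than in driver code. The fragment below is hypothetical and assumes Hadoop 1.x/0.20.x:

```xml
<!-- Hypothetical job-configuration fragment (Hadoop 1.x / 0.20.x). -->
<!-- Keeps a failed task attempt's working files on the node instead of -->
<!-- deleting them after the attempt fails, so dumps in the working dir survive. -->
<property>
  <name>keep.failed.task.files</name>
  <value>true</value>
</property>
```

If the driver uses ToolRunner/GenericOptionsParser, the same setting can also be passed at submit time, e.g. `hadoop jar job.jar MyDriver -D keep.failed.task.files=true ...` (driver name illustrative).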
> ________________________________________
> From: Markus Jelsma [markus.jelsma@openindex.io]
> Sent: Wednesday, July 18, 2012 9:06 PM
> To: mapreduce-user@hadoop.apache.org
> Subject: RE: location of Java heap dumps
>
> -XX:HeapDumpPath=/path/to/heap.dump
>
>
> -----Original message-----
>> From: Marek Miglinski
>> Sent: Wed 18-Jul-2012 19:51
>> To: mapreduce-user@hadoop.apache.org
>> Subject: location of Java heap dumps
>>
>> Hi all,
>>
>> I have -XX:+HeapDumpOnOutOfMemoryError set on all nodes, and I don't
>> have permission to add a location where those dumps will be saved, so I
>> get this message in my mapred process:
>>
>> java.lang.OutOfMemoryError: Java heap space
>> Dumping heap to java_pid10687.hprof ...
>> Heap dump file created [1385031743 bytes in 30.259 secs]
>>
>> Where can I find those dumps? I can't locate them anywhere.
>>
>>
>> Thanks,
>> Marek M.

--
Harsh J
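Since node-wide JVM options can't be changed here, one possible workaround (a sketch assuming Hadoop 1.x and that the cluster allows jobs to override child JVM options) is to set Markus's -XX:HeapDumpPath per job via mapred.child.java.opts; the heap size and path below are examples only:

```xml
<!-- Hypothetical per-job override (Hadoop 1.x): redirect task heap dumps -->
<!-- to a directory the user can read; values are illustrative. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdumps</value>
</property>
```

Without -XX:HeapDumpPath, the JVM writes java_pid&lt;pid&gt;.hprof into the process's working directory, which for a task is the attempt's working directory under mapred.local.dir; that directory is deleted after a failed attempt unless keep.failed.task.files is set, which is why the dump seems to vanish.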