Subject: Re: Mapper always hangs at the same spot
From: Chris Hokamp <chris.hokamp@gmail.com>
To: user@hadoop.apache.org
Date: Sun, 14 Apr 2013 01:28:25 +0100

The UDF and our Pig scripts work fine for most languages' wikidumps; this hanging-mapper issue only shows up with the English wikidumps. It is certainly an issue with the wiki parser getting stuck in a recursive loop, and it must be a markup-related bug since this only happens with English. We're working on tracking it down now. Thanks for the help and advice!

CH

On Sun, Apr 14, 2013 at 1:20 AM, Azuryy Yu <azuryyu@gmail.com> wrote:
> Agreed. Just check your app, or paste the map code here.
>
> --Sent from my Sony mobile.
> On Apr 14, 2013 4:08 AM, "Edward Capriolo" <edlinuxguru@gmail.com> wrote:
>> Your application logic is likely stuck in a loop.
>>
>> On Sat, Apr 13, 2013 at 12:47 PM, Chris Hokamp <chris.hokamp@gmail.com> wrote:
>>> > When you say "never progresses", do you see the MR framework kill it
>>> > automatically after 10 minutes of inactivity, or does it never ever
>>> > exit?
>>>
>>> The latter -- it never exits. Killing it manually seems like a good
>>> option for now. We already have mapred.max.map.failures.percent set to
>>> a non-zero value, but because the task never fails, this never comes
>>> into effect.
>>>
>>> Thanks for the help,
>>> Chris
>>>
>>> On Sat, Apr 13, 2013 at 5:00 PM, Harsh J <harsh@cloudera.com> wrote:
>>>> When you say "never progresses", do you see the MR framework kill it
>>>> automatically after 10 minutes of inactivity, or does it never ever
>>>> exit?
>>>>
>>>> You can lower the timeout period on tasks via mapred.task.timeout,
>>>> set in milliseconds. You could also set mapred.max.map.failures.percent
>>>> to a non-zero value to allow that percentage of map tasks to fail
>>>> without also marking the whole job as a failure.
>>>>
>>>> If the task itself does not get killed by the framework for
>>>> inactivity, try running hadoop job -fail-task on its attempt ID
>>>> manually.
>>>>
>>>> On Sat, Apr 13, 2013 at 8:45 PM, Chris Hokamp <chris.hokamp@gmail.com> wrote:
>>>> > Hello,
>>>> >
>>>> > We have a job where all mappers finish except for one, which always
>>>> > hangs at the same spot (i.e. it reaches 49%, then never progresses).
>>>> >
>>>> > This is likely due to a bug in the wiki parser in our Pig UDF. We can
>>>> > afford to lose the data this mapper is working on if that would allow
>>>> > the job to finish. Question: is there a Hadoop configuration parameter
>>>> > similar to mapred.skip.map.max.skip.records that would let us skip a
>>>> > map task that doesn't progress after X amount of time? Any other
>>>> > possible workarounds for this case would also be useful.
>>>> >
>>>> > We are currently using Hadoop 1.1.0 and Pig 0.10.1.
>>>> >
>>>> > Thanks,
>>>> > Chris
>>>>
>>>> --
>>>> Harsh J
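[Archive note: the two job properties Harsh suggests can be set per-job from the Pig script itself via Pig's set command, rather than cluster-wide in mapred-site.xml. A minimal sketch for Hadoop 1.x / Pig 0.10; the values are illustrative, not recommendations:]

```pig
-- Kill any task attempt that reports no progress for 2 minutes
-- (value is in milliseconds; the Hadoop 1.x default is 600000 = 10 min).
-- The framework retries the attempt and fails the task after max attempts.
set mapred.task.timeout 120000;

-- Tolerate up to 5% of map tasks failing without failing the whole job,
-- so the job can finish even though the stuck mapper's data is lost.
set mapred.max.map.failures.percent 5;
```

[If a hung attempt is not killed automatically, it can be failed by hand with, e.g., `hadoop job -fail-task attempt_201304130001_0042_m_000017_0` -- the attempt ID here is hypothetical; the real one is shown on the JobTracker's task page.]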