From: Ulul <hadoop@ulul.org>
To: user@hadoop.apache.org
Date: Sun, 07 Sep 2014 23:13:45 +0200
Subject: Re: Map job not finishing
Message-ID: <540CCA89.3000502@ulul.org>

Oops, you're using HDP 2.1, which means Hadoop 2.4, so the property name is
mapreduce.tasktracker.map.tasks.maximum. More importantly, it should be
irrelevant: you're using YARN, for which map slots don't matter.
Explanation, anyone?

Ulul

On 07/09/2014 22:35, Ulul wrote:
> Hi
>
> Adding another TT may not be the only way; increasing
> mapred.tasktracker.map.tasks.maximum could also do the trick.
>
> Explanation here:
> http://www.thecloudavenue.com/2014/01/oozie-hangs-on-single-node-for-work-flow-with-fork.html
>
> Cheers
> Ulul
>
> On 07/09/2014 01:01, Rich Haase wrote:
>> You're welcome. Glad I could help.
>>
>> On Sep 6, 2014 9:56 AM, "Charles Robertson"
>> <charles.robertson@gmail.com> wrote:
>>
>> Hi Rich,
>>
>> Default setup, so presumably one. I opted to add a node rather
>> than change the number of task trackers, and it now runs successfully.
>>
>> Thank you!
>> Charles
>>
>> On 5 September 2014 16:44, Rich Haase <rdhaase@gmail.com> wrote:
>>
>> How many tasktrackers do you have set up for your single-node
>> cluster? Oozie runs each action as a Java program on an
>> arbitrary cluster node, so running a workflow requires a
>> minimum of two tasktrackers.
>>
>> On Fri, Sep 5, 2014 at 7:33 AM, Charles Robertson
>> <charles.robertson@gmail.com> wrote:
>>
>> Hi all,
>>
>> I'm using Oozie to run a Hive script, but the map job is
>> not completing.
>> The tracking page shows its progress as
>> 100%, and there are no warnings or errors in the logs; it's
>> just sitting there with a state of 'RUNNING'.
>>
>> As best I can make out from the logs, the last statement
>> in the Hive script has been successfully parsed and it
>> tries to start the command, saying "launching job 1 of
>> 3". That job is sitting there in the "ACCEPTED" state,
>> but doing nothing.
>>
>> This is on a single-node cluster running Hortonworks Data
>> Platform 2.1. Can anyone suggest what might be the cause,
>> or where else to look for diagnostic information?
>>
>> Thanks,
>> Charles
>>
>> --
>> *Kernighan's Law*
>> "Debugging is twice as hard as writing the code in the first
>> place. Therefore, if you write the code as cleverly as
>> possible, you are, by definition, not smart enough to debug it."
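If that is the bottleneck, the usual workaround on a single-node sandbox is to raise the AM budget in capacity-scheduler.xml. A hedged sketch: the property name is the standard CapacityScheduler one, but the 0.5 value is only an example for a test box, not a tuned recommendation.

```xml
<!-- capacity-scheduler.xml: let ApplicationMasters use up to 50% of cluster
     memory so an Oozie launcher AM and the job it launches can run together.
     0.5 is an illustrative single-node value, not production guidance. -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.5</value>
</property>
```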
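[Editor's note] Ulul's open question (why a YARN cluster with no fixed map slots still stalls) has a common explanation, offered here as an assumption rather than a confirmed diagnosis of this thread: the CapacityScheduler caps the share of cluster memory that ApplicationMasters may hold (yarn.scheduler.capacity.maximum-am-resource-percent, default 0.1). On a small single node, the Oozie launcher's AM can consume that whole budget, so the Hive job's AM waits in ACCEPTED, much as the launcher occupied the only map slot in the MR1 story. A minimal sketch of the arithmetic, with made-up example values:

```python
def max_concurrent_ams(cluster_memory_mb, max_am_resource_percent, am_container_mb):
    """Rough upper bound on concurrently running ApplicationMasters under the
    CapacityScheduler's maximum-am-resource-percent limit. Illustrative only:
    in practice YARN admits at least one AM regardless of the budget."""
    am_budget_mb = cluster_memory_mb * max_am_resource_percent
    return int(am_budget_mb // am_container_mb)

# Hypothetical single node offering 8 GB to YARN, with 1.5 GB AM containers:
print(max_concurrent_ams(8192, 0.10, 1536))  # default 10% budget -> 0 (only the guaranteed AM runs)
print(max_concurrent_ams(8192, 0.50, 1536))  # budget raised to 50% -> 2: launcher AM + job AM fit
```

With the default 10% budget, the launcher's AM alone exhausts the allowance, and the launched job cannot leave ACCEPTED until the launcher finishes, which it never does because it is waiting on the job.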