From: Sean Curtis <sean.curtis@gmail.com>
Subject: Re: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
Date: Mon, 20 Dec 2010 23:04:51 -0500
To: user@hive.apache.org

In the failed/killed task attempts, I see the following:

Task Attempt                           Task                              Machine        Status   Error
attempt_201012141048_0023_m_000000_0   task_201012141048_0023_m_000000   172.24.10.91   FAILED   Too many fetch-failures
attempt_201012141048_0023_m_000000_1   task_201012141048_0023_m_000000   172.24.10.91   FAILED   Too many fetch-failures
attempt_201012141048_0023_m_000001_0   task_201012141048_0023_m_000001   172.24.10.91   FAILED   Too many fetch-failures
attempt_201012141048_0023_m_000001_1   task_201012141048_0023_m_000001   172.24.10.91   FAILED   Too many fetch-failures
attempt_201012141048_0023_r_000000_0   task_201012141048_0023_r_000000   172.24.10.91   FAILED   Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
attempt_201012141048_0023_r_000000_1   task_201012141048_0023_r_000000   172.24.10.91   FAILED   Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
attempt_201012141048_0023_r_000000_2   task_201012141048_0023_r_000000   172.24.10.91   FAILED   Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
attempt_201012141048_0023_r_000000_3   task_201012141048_0023_r_000000   172.24.10.91   FAILED   Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
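As a next diagnostic step, a minimal sketch (assuming Hadoop's default TaskTracker HTTP port 50060 and the 172.24.10.91 address from the table above; both are guesses about this setup): "Too many fetch-failures" on the map side and the MAX_FAILED_UNIQUE_FETCHES shuffle error on the reduce side usually mean the reducers cannot pull map output from the TaskTracker's HTTP server, so hostname resolution and reachability of that port are worth checking first:

  # the box's hostname should resolve to the address the TaskTracker advertises,
  # not only to a 127.0.0.1 loopback entry in /etc/hosts
  hostname
  cat /etc/hosts

  # the TaskTracker's map-output HTTP server should answer on its port
  # (50060 is the default; adjust if mapred.task.tracker.http.address is overridden)
  curl -sI http://172.24.10.91:50060/ | head -1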
On Dec 20, 2010, at 11:01 PM, Adarsh Sharma wrote:

> Sean Curtis wrote:
>> just running a simple select count(1) from a table (using movielens as an example) doesn't seem to work for me. Anyone know why this doesn't work? I'm using hive trunk:
>>
>> hive> select avg(rating) from movierating where movieid=43;
>> Total MapReduce jobs = 1
>> Launching Job 1 out of 1
>> Number of reduce tasks determined at compile time: 1
>> In order to change the average load for a reducer (in bytes):
>>   set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>>   set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>>   set mapred.reduce.tasks=<number>
>> Starting Job = job_201012141048_0023, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201012141048_0023
>> Kill Command = /Users/Sean/dev/hadoop-0.20.2+737/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:8021 -kill job_201012141048_0023
>> 2010-12-20 15:15:03,295 Stage-1 map = 0%, reduce = 0%
>> 2010-12-20 15:15:09,420 Stage-1 map = 50%, reduce = 0%
>> ... eventually fails after a couple of minutes with:
>>
>> 2010-12-20 17:33:01,113 Stage-1 map = 100%, reduce = 0%
>> 2010-12-20 17:33:32,182 Stage-1 map = 100%, reduce = 100%
>> Ended Job = job_201012141048_0023 with errors
>> FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
>> hive>
>>
>> It almost seems like the reduce task never starts. Any help would be appreciated.
>>
>> sean
> To know the root cause of the problem, go to the JobTracker web UI (IP:50030) and check the Job Tracker History section at the bottom for the entry corresponding to this Job ID.
>
> Best Regards
>
> Adarsh Sharma
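For reference, the reducer knobs Hive prints above can be set directly in the CLI session before re-running the query; a minimal sketch (the value 1 is only illustrative, not a recommendation for this job):

  hive> set mapred.reduce.tasks=1;
  hive> select avg(rating) from movierating where movieid=43;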
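Following Adarsh's pointer, the same failed-task details can also be pulled with the job client instead of the web UI; a minimal sketch, assuming the Hadoop 0.20 "hadoop job" command and a placeholder output directory (replace /path/to/job/output with the job's real one):

  # overall state and counters for the failing job
  hadoop job -Dmapred.job.tracker=localhost:8021 -status job_201012141048_0023

  # failed/killed task details from the job history files
  hadoop job -history all /path/to/job/output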