Subject: Re: Hadoop cluster hangs on big hive job
From: Daning Wang <daning@netseer.com>
To: user@hadoop.apache.org
Date: Mon, 11 Mar 2013 13:32:37 -0700

[hive@mr3-033 ~]$ hadoop version
Hadoop 1.0.4
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290
Compiled by hortonfo on Wed Oct  3 05:13:58 UTC 2012

On Sun, Mar 10, 2013 at 8:16 AM, Suresh Srinivas <suresh@hortonworks.com> wrote:

> What is the version of hadoop?
>
> Sent from phone
>
> On Mar 7, 2013, at 11:53 AM, Daning Wang <daning@netseer.com> wrote:
>
> We have a Hive query processing zipped CSV files. The query scans 10 days
> of data (partitioned by date), roughly 130 GB per day. The problem is not
> consistent: if you run the job again, it may go through. It has never
> happened on smaller jobs (e.g. processing only one day's data).
>
> We don't have a disk-space issue.
>
> I have attached the log file from when the problem happened.
> It is stuck like the following (just search for "19706 of 49964"):
>
> 2013-03-05 15:13:51,587 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000019_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
> 2013-03-05 15:13:51,811 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000039_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
> 2013-03-05 15:13:52,551 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000032_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
> 2013-03-05 15:13:52,760 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000000_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
> 2013-03-05 15:13:52,946 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000024_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
> 2013-03-05 15:13:54,742 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000008_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>
> Thanks,
>
> Daning
>
> On Thu, Mar 7, 2013 at 12:21 AM, Håvard Wahl Kongsgård <haavard.kongsgaard@gmail.com> wrote:
>
>> hadoop logs?
>>
>> On 6. mars 2013 21:04, "Daning Wang" <daning@netseer.com> wrote:
>>
>>> We have a 5-node cluster (Hadoop 1.0.4). It hung a couple of times while
>>> running big jobs. Basically all the nodes are dead; from the
>>> tasktracker's log it looks like it went into some kind of loop forever.
>>>
>>> All the log entries look like this when the problem happened.
>>>
>>> Any idea how to debug the issue?
>>>
>>> Thanks in advance.
>>> 2013-03-05 15:13:19,526 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000012_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:19,552 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000028_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:20,858 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000036_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:21,141 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000016_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:21,486 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000019_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:21,692 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000039_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:22,448 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000032_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:22,643 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000000_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:22,840 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000024_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:24,628 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000008_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:24,723 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000039_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:25,336 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000004_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:25,539 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000043_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:25,545 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000012_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:25,569 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000028_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:25,855 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000024_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:26,876 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000036_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:27,159 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000016_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:27,505 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000019_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:28,464 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000032_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:28,553 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000043_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:28,561 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000012_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:28,659 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000000_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:30,519 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000019_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:30,644 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000008_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:30,741 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000039_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:31,369 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000004_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:31,675 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000000_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:31,875 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000024_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:32,372 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000028_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:32,893 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000036_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>
>>> <hadoop-hadoop3-tasktracker.log.gz>
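[Editor's note: a stall like the one above can be confirmed mechanically by grouping the TaskTracker progress lines by attempt ID and flagging attempts whose "copied" counter never advances. The sketch below is not from the original thread; it only assumes the log line format shown in the excerpt.]

```python
import re
from collections import defaultdict

# Matches TaskTracker reduce-progress lines of the form seen above, e.g.:
#   ... attempt_201302270947_0010_r_000019_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
LINE = re.compile(
    r"(attempt_\d+_\d+_r_\d+_\d+) [\d.]+% reduce > copy \((\d+) of (\d+)"
)

def stalled_attempts(lines, min_repeats=3):
    """Return attempt IDs whose copied-map counter repeats at least
    min_repeats times without ever changing (i.e. the shuffle is stuck)."""
    counts = defaultdict(list)
    for line in lines:
        m = LINE.search(line)
        if m:
            counts[m.group(1)].append(int(m.group(2)))
    return sorted(
        att for att, copied in counts.items()
        if len(copied) >= min_repeats and len(set(copied)) == 1
    )
```

Feeding it the grep'd TaskTracker log would report every reducer in the excerpt, since they all sit at 19706 of 49964 indefinitely.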