Subject: Re: Hadoop cluster hangs on big hive job
From: Luke Lu <vicaya@gmail.com>
To: user@hadoop.apache.org
Date: Mon, 11 Mar 2013 14:44:22 -0700

You mean HDFS-4479? The log seems to indicate the infamous jetty hang issue
(MAPREDUCE-2386) though.

On Mon, Mar 11, 2013 at 1:52 PM, Suresh Srinivas <suresh@hortonworks.com> wrote:

> I have seen one such problem related to big hive jobs that open a lot of
> files. See HDFS-4496 for more details. Snippet from the description:
>
> The following issue was observed in a cluster that was running a Hive job
> and writing to 100,000 temporary files (each task writing to 1000s of
> files). When this job is killed, a large number of files are left open
> for write. Eventually, when the lease for the open files expires, lease
> recovery is started for all these files in a very short duration of time.
> This causes a large number of commitBlockSynchronization calls, in which
> logSync is performed with the FSNamesystem lock held. This overloads the
> namenode, resulting in slowdown.
>
> Could this be the cause? Can you check the namenode log to see whether you
> have lease recovery activity? If not, can you send some information about
> what is happening in the namenode logs at the time of this slowdown?
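A hedged way to check for the lease-recovery storm described above: count commitBlockSynchronization entries in the namenode log. The exact log-message format and the sample file name below are assumptions (based on typical 1.x namenode output), so the sketch runs against an inline sample rather than a real log; point the `grep` at your actual namenode log file instead.

```shell
# Sketch: look for a burst of commitBlockSynchronization entries.
# The sample lines below stand in for a real namenode log.
cat > namenode-sample.log <<'EOF'
2013-03-05 15:14:02,101 INFO org.apache.hadoop.hdfs.StateChange: commitBlockSynchronization(lastblock=blk_1001_1, newgenerationstamp=1002, newlength=4096, newtargets=[10.0.0.5:50010]) successful
2013-03-05 15:14:02,188 INFO org.apache.hadoop.hdfs.StateChange: commitBlockSynchronization(lastblock=blk_1003_1, newgenerationstamp=1004, newlength=8192, newtargets=[10.0.0.6:50010]) successful
2013-03-05 15:14:02,245 INFO org.apache.hadoop.mapred.JobTracker: unrelated entry
EOF

# Thousands of these in a short window, right after a big job was killed,
# would match the HDFS-4496 timeline. Here the count is 2:
grep -c 'commitBlockSynchronization' namenode-sample.log
```

If the matches cluster tightly around the time the job was killed plus the lease-expiry interval, that fits the scenario Suresh describes.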
>
> On Mon, Mar 11, 2013 at 1:32 PM, Daning Wang <daning@netseer.com> wrote:
>
>> [hive@mr3-033 ~]$ hadoop version
>> Hadoop 1.0.4
>> Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290
>> Compiled by hortonfo on Wed Oct  3 05:13:58 UTC 2012
>>
>> On Sun, Mar 10, 2013 at 8:16 AM, Suresh Srinivas <suresh@hortonworks.com> wrote:
>>
>>> What is the version of hadoop?
>>>
>>> Sent from phone
>>>
>>> On Mar 7, 2013, at 11:53 AM, Daning Wang <daning@netseer.com> wrote:
>>>
>>> We have a hive query processing zipped csv files. The query was scanning
>>> 10 days of data (partitioned by date), around 130G per day. The problem
>>> is not consistent: if you run it again, it might go through. But it has
>>> never happened on the smaller jobs (like processing only one day's data).
>>>
>>> We don't have a space issue.
>>>
>>> I have attached the log file from when the problem was happening. It is
>>> stuck like the following (just search for "19706 of 49964"):
>>>
>>> 2013-03-05 15:13:51,587 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000019_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:51,811 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000039_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:52,551 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000032_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:52,760 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000000_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:52,946 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000024_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>> 2013-03-05 15:13:54,742 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000008_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>
>>> Thanks,
>>>
>>> Daning
>>>
>>> On Thu, Mar 7, 2013 at 12:21 AM, Håvard Wahl Kongsgård
>>> <haavard.kongsgaard@gmail.com> wrote:
>>>
>>>> hadoop logs?
>>>>
>>>> On 6. mars 2013 21:04, "Daning Wang" <daning@netseer.com> wrote:
>>>>
>>>>> We have a 5-node cluster (Hadoop 1.0.4). It hung a couple of times
>>>>> while running big jobs. Basically all the nodes are dead; from the
>>>>> tasktracker's log it looks like it went into some kind of loop forever.
>>>>>
>>>>> All the log entries look like this when the problem happened.
>>>>>
>>>>> Any idea how to debug the issue?
>>>>>
>>>>> Thanks in advance.
>>>>>
>>>>> 2013-03-05 15:13:19,526 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000012_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:19,552 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000028_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:20,858 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000036_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:21,141 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000016_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:21,486 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000019_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:21,692 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000039_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:22,448 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000032_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:22,643 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000000_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:22,840 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000024_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:24,628 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000008_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:24,723 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000039_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:25,336 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000004_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:25,539 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000043_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:25,545 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000012_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:25,569 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000028_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:25,855 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000024_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:26,876 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000036_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:27,159 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000016_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:27,505 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000019_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:28,464 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000032_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:28,553 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000043_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:28,561 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000012_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:28,659 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000000_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:30,519 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000019_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:30,644 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000008_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:30,741 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000039_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:31,369 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000004_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:31,675 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000000_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:31,875 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000024_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:32,372 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000028_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>>>>> 2013-03-05 15:13:32,893 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000036_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
>
> --
> http://hortonworks.com/download/
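The stuck state in the excerpts above can be confirmed mechanically: every reduce attempt reports the identical shuffle position at zero throughput, which is what the "just search '19706 of 49964'" tip is pointing at. A small self-contained sketch (the sample file name is made up; the three lines are copied from the log above):

```shell
# Reproduce three of the TaskTracker lines above and count how many
# distinct "copy (X of Y" shuffle positions appear across all attempts.
cat > tt-sample.log <<'EOF'
2013-03-05 15:13:19,526 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000012_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
2013-03-05 15:13:19,552 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000028_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
2013-03-05 15:13:20,858 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201302270947_0010_r_000036_0 0.131468% reduce > copy (19706 of 49964 at 0.00 MB/s) >
EOF

# A healthy shuffle shows many distinct positions over time; a result of
# 1 across a long window means no reducer is fetching anything at all.
grep -o 'copy ([0-9]* of [0-9]*' tt-sample.log | sort -u | wc -l
```

Run against the real tasktracker log over, say, the last hour, a count of 1 is consistent with every fetcher being wedged (as in the jetty hang, MAPREDUCE-2386) rather than with a slow but live shuffle.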