Subject: Re: TaskStatus Exception using HFileOutputFormat
From: Ted Yu
To: user@hadoop.apache.org
Date: Wed, 6 Feb 2013 13:25:01 -0800

Thanks for this information. Here is the related code:

  public static void configureIncrementalLoad(Job job, HTable table)
      throws IOException {
    Configuration conf = job.getConfiguration();
    ...
    Path partitionsPath = new Path(job.getWorkingDirectory(),
                                   "partitions_" + UUID.randomUUID());
    LOG.info("Writing partition information to " + partitionsPath);
    FileSystem fs = partitionsPath.getFileSystem(conf);
    writePartitions(conf, partitionsPath, startKeys);
    partitionsPath.makeQualified(fs);

Can you check whether the HDFS-related config was passed to the Job correctly?

Thanks

On Wed, Feb 6, 2013 at 1:15 PM, Sean McNamara wrote:

> Ok, a bit more info- From what I can tell, the partitions file
> is being placed into the working dir on the node I launch from, and the
> task trackers are trying to look for that file, which doesn't exist where
> they run (since they are on other nodes.)
>
> Here is the exception on the TT in case it is helpful:
>
> 2013-02-06 17:05:13,002 WARN org.apache.hadoop.mapred.TaskTracker:
> Exception while localization java.io.FileNotFoundException: File
> /opt/jobs/MyMapreduceJob/partitions_1360170306728 does not exist.
>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
>         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
>         at org.apache.hadoop.filecache.TaskDistributedCacheManager.setupCache(TaskDistributedCacheManager.java:179)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1212)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:662)
>
> From: Sean McNamara
> Reply-To: "user@hadoop.apache.org"
> Date: Wednesday, February 6, 2013 9:35 AM
> To: "user@hadoop.apache.org"
> Subject: Re: TaskStatus Exception using HFileOutputFormat
>
> > Using the below construct, do you still get exception ?
>
> Correct, I am still getting this exception.
>
> Sean
>
> From: Ted Yu
> Reply-To: "user@hadoop.apache.org"
> Date: Tuesday, February 5, 2013 7:50 PM
> To: "user@hadoop.apache.org"
> Subject: Re: TaskStatus Exception using HFileOutputFormat
>
> Using the below construct, do you still get exception ?
>
> Please consider upgrading to hadoop 1.0.4
>
> Thanks
>
> On Tue, Feb 5, 2013 at 4:55 PM, Sean McNamara wrote:
>
>> > Can you tell us the HBase and hadoop versions you were using ?
>>
>> Ahh yes, sorry I left that out:
>>
>> Hadoop: 1.0.3
>> HBase: 0.92.0
>>
>> > I guess you have used the above construct
>>
>> Our code is as follows:
>>
>> HTable table = new HTable(conf, configHBaseTable);
>> FileOutputFormat.setOutputPath(job, outputDir);
>> HFileOutputFormat.configureIncrementalLoad(job, table);
>>
>> Thanks!
>>
>> From: Ted Yu
>> Reply-To: "user@hadoop.apache.org"
>> Date: Tuesday, February 5, 2013 5:46 PM
>> To: "user@hadoop.apache.org"
>> Subject: Re: TaskStatus Exception using HFileOutputFormat
>>
>> Can you tell us the HBase and hadoop versions you were using ?
>> From TestHFileOutputFormat:
>>
>> HFileOutputFormat.configureIncrementalLoad(job, table);
>> FileOutputFormat.setOutputPath(job, outDir);
>>
>> I guess you have used the above construct ?
>>
>> Cheers
>>
>> On Tue, Feb 5, 2013 at 4:31 PM, Sean McNamara wrote:
>>
>>> We're trying to use HFileOutputFormat for bulk HBase loading. When
>>> using HFileOutputFormat's setOutputPath or configureIncrementalLoad, the
>>> job is unable to run. The error I see in the jobtracker logs is: Trying to
>>> set finish time for task attempt_201301030046_123198_m_000002_0 when no
>>> start time is set, stackTrace is : java.lang.Exception
>>>
>>> If I remove any references to HFileOutputFormat and
>>> use FileOutputFormat.setOutputPath, things seem to run great. Does anyone
>>> know what could be causing the TaskStatus error when
>>> using HFileOutputFormat?
>>>
>>> Thanks,
>>>
>>> Sean
>>>
>>> What I see on the Job Tracker:
>>>
>>> 2013-02-06 00:17:33,685 ERROR org.apache.hadoop.mapred.TaskStatus:
>>> Trying to set finish time for task attempt_201301030046_123198_m_000002_0
>>> when no start time is set, stackTrace is : java.lang.Exception
>>>         at org.apache.hadoop.mapred.TaskStatus.setFinishTime(TaskStatus.java:145)
>>>         at org.apache.hadoop.mapred.TaskInProgress.incompleteSubTask(TaskInProgress.java:670)
>>>         at org.apache.hadoop.mapred.JobInProgress.failedTask(JobInProgress.java:2945)
>>>         at org.apache.hadoop.mapred.JobInProgress.updateTaskStatus(JobInProgress.java:1162)
>>>         at org.apache.hadoop.mapred.JobTracker.updateTaskStatuses(JobTracker.java:4739)
>>>         at org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3683)
>>>         at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:3378)
>>>         at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
>>>
>>> What I see from the console:
>>>
>>> 391   [main] INFO  org.apache.hadoop.hbase.mapreduce.HFileOutputFormat - Looking up current regions for table org.apache.hadoop.hbase.client.HTable@3a083b1b
>>> 1284  [main] INFO  org.apache.hadoop.hbase.mapreduce.HFileOutputFormat - Configuring 41 reduce partitions to match current region count
>>> 1285  [main] INFO  org.apache.hadoop.hbase.mapreduce.HFileOutputFormat - Writing partition information to file:/opt/webtrends/oozie/jobs/Lab/O/VisitorAnalytics.MapReduce/bin/partitions_1360109875112
>>> 1319  [main] INFO  org.apache.hadoop.util.NativeCodeLoader - Loaded the native-hadoop library
>>> 1328  [main] INFO  org.apache.hadoop.io.compress.zlib.ZlibFactory - Successfully loaded & initialized native-zlib library
>>> 1329  [main] INFO  org.apache.hadoop.io.compress.CodecPool - Got brand-new compressor
>>> 1588  [main] INFO  org.apache.hadoop.hbase.mapreduce.HFileOutputFormat - Incremental table output configured.
>>> 2896  [main] INFO  org.apache.hadoop.hbase.mapreduce.TableOutputFormat - Created table instance for Lab_O_VisitorHistory
>>> 2910  [main] INFO  org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
>>> Job Name: job_201301030046_123199
>>> Job Id:   http://strack01.staging.dmz:50030/jobdetails.jsp?jobid=job_201301030046_123199
>>> Job URL:  VisitorHistory MapReduce (soozie01.Lab.O)
>>> 3141  [main] INFO  org.apache.hadoop.mapred.JobClient - Running job: job_201301030046_123199
>>> 4145  [main] INFO  org.apache.hadoop.mapred.JobClient -  map 0% reduce 0%
>>> 10162 [main] INFO  org.apache.hadoop.mapred.JobClient - Task Id : attempt_201301030046_123199_m_000002_0, Status : FAILED
>>> 10196 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata01.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_m_000002_0&filter=stdout
>>> 10199 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata01.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_m_000002_0&filter=stderr
>>> 10199 [main] INFO  org.apache.hadoop.mapred.JobClient - Task Id : attempt_201301030046_123199_r_000042_0, Status : FAILED
>>> 10203 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata01.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_r_000042_0&filter=stdout
>>> 10205 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata01.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_r_000042_0&filter=stderr
>>> 10206 [main] INFO  org.apache.hadoop.mapred.JobClient - Task Id : attempt_201301030046_123199_m_000002_1, Status : FAILED
>>> 10210 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata05.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_m_000002_1&filter=stdout
>>> 10213 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata05.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_m_000002_1&filter=stderr
>>> 10213 [main] INFO  org.apache.hadoop.mapred.JobClient - Task Id : attempt_201301030046_123199_r_000042_1, Status : FAILED
>>> 10217 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata05.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_r_000042_1&filter=stdout
>>> 10219 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata05.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_r_000042_1&filter=stderr
>>> 10220 [main] INFO  org.apache.hadoop.mapred.JobClient - Task Id : attempt_201301030046_123199_m_000002_2, Status : FAILED
>>> 10224 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata03.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_m_000002_2&filter=stdout
>>> 10226 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata03.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_m_000002_2&filter=stderr
>>> 10227 [main] INFO  org.apache.hadoop.mapred.JobClient - Task Id : attempt_201301030046_123199_r_000042_2, Status : FAILED
>>> 10236 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata03.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_r_000042_2&filter=stdout
>>> 10239 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata03.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_r_000042_2&filter=stderr
>>> 10239 [main] INFO  org.apache.hadoop.mapred.JobClient - Task Id : attempt_201301030046_123199_m_000001_0, Status : FAILED
>>> 10244 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata02.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_m_000001_0&filter=stdout
>>> 10247 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata02.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_m_000001_0&filter=stderr
>>> 10247 [main] INFO  org.apache.hadoop.mapred.JobClient - Task Id : attempt_201301030046_123199_r_000041_0, Status : FAILED
>>> 10250 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata02.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_r_000041_0&filter=stdout
>>> 10252 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata02.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_r_000041_0&filter=stderr
>>> 11255 [main] INFO  org.apache.hadoop.mapred.JobClient - Task Id : attempt_201301030046_123199_m_000001_1, Status : FAILED
>>> 11259 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata05.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_m_000001_1&filter=stdout
>>> 11262 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata05.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_m_000001_1&filter=stderr
>>> 11262 [main] INFO  org.apache.hadoop.mapred.JobClient - Task Id : attempt_201301030046_123199_r_000041_1, Status : FAILED
>>> 11265 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata05.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_r_000041_1&filter=stdout
>>> 11267 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata05.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_r_000041_1&filter=stderr
>>> 11267 [main] INFO  org.apache.hadoop.mapred.JobClient - Task Id : attempt_201301030046_123199_m_000001_2, Status : FAILED
>>> 11271 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata03.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_m_000001_2&filter=stdout
>>> 11273 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata03.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_m_000001_2&filter=stderr
>>> 11274 [main] INFO  org.apache.hadoop.mapred.JobClient - Task Id : attempt_201301030046_123199_r_000041_2, Status : FAILED
>>> 11277 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata03.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_r_000041_2&filter=stdout
>>> 11279 [main] WARN  org.apache.hadoop.mapred.JobClient - Error reading task output http://sdata03.staging.dmz:50060/tasklog?plaintext=true&attemptid=attempt_201301030046_123199_r_000041_2&filter=stderr
>>> 11280 [main] INFO  org.apache.hadoop.mapred.JobClient - Job complete: job_201301030046_123199
>>> 11291 [main] INFO  org.apache.hadoop.mapred.JobClient - Counters: 4
>>> 11292 [main] INFO  org.apache.hadoop.mapred.JobClient -   Job Counters
>>> 11292 [main] INFO  org.apache.hadoop.mapred.JobClient -     SLOTS_MILLIS_MAPS=0
>>> 11292 [main] INFO  org.apache.hadoop.mapred.JobClient -     Total time spent by all reduces waiting after reserving slots (ms)=0
>>> 11292 [main] INFO  org.apache.hadoop.mapred.JobClient -     Total time spent by all maps waiting after reserving slots (ms)=0
>>> 11293 [main] INFO  org.apache.hadoop.mapred.JobClient -     SLOTS_MILLIS_REDUCES=0
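
[Archive editor's note] The console log above shows the partitions file being written to a file:/ URI (file:/opt/webtrends/oozie/jobs/.../partitions_1360109875112), i.e. the local filesystem of the submitting node, which matches the FileNotFoundException the task trackers report. In Hadoop, the job's working directory is resolved against the default filesystem, so if the Configuration handed to the Job lacks an hdfs:// default, the relative partitions path lands on local disk. A minimal, self-contained sketch of that resolution behavior using plain java.net.URI (the base URIs and hostname below are illustrative placeholders, not taken from this thread):

```java
import java.net.URI;

public class PartitionPathResolution {
    public static void main(String[] args) {
        // The same relative name resolves to very different places depending
        // on the base (default filesystem) URI. With a file:/// default, the
        // partitions file exists only on the submitting node; with an hdfs://
        // default, every task tracker can see it.
        URI localDefault = URI.create("file:///opt/jobs/MyMapreduceJob/");
        URI hdfsDefault  = URI.create("hdfs://namenode:8020/user/hadoop/");
        String partitions = "partitions_1360170306728";

        System.out.println(localDefault.resolve(partitions)); // local-disk location
        System.out.println(hdfsDefault.resolve(partitions));  // cluster-wide location
    }
}
```

So one thing to verify, per Ted's question, is that fs.default.name in the configuration used to construct the Job points at the cluster rather than defaulting to file:///. As a side observation (not a confirmed root cause): Path.makeQualified(fs) returns a new qualified Path rather than mutating the receiver, and in the configureIncrementalLoad excerpt above its return value is unused.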