Subject: Map Reduce job failing - Hbase Bulk load
From: Shashi Vishwakarma
To: user@hadoop.apache.org
Date: Wed, 3 Jun 2015 11:43:56 +0530

Hi,

I have a MapReduce job that does an HBase bulk load: it converts the input data into HFiles and loads them into HBase, but after a certain map percentage the job fails.
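For context, the driver is set up roughly along these lines. This is a simplified sketch: the table name ("my_table"), column family ("cf"), and the mapper logic are placeholders rather than my exact code.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class BulkLoadDriver {

      // Placeholder mapper: assumes tab-separated lines of "rowkey<TAB>value"
      // and emits one KeyValue per line for HFileOutputFormat2.
      public static class HFileMapper
          extends Mapper<LongWritable, Text, ImmutableBytesWritable, KeyValue> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
            throws IOException, InterruptedException {
          String[] parts = line.toString().split("\t", 2);
          if (parts.length < 2) return;  // skip malformed lines
          byte[] row = Bytes.toBytes(parts[0]);
          KeyValue kv = new KeyValue(row, Bytes.toBytes("cf"),
              Bytes.toBytes("q"), Bytes.toBytes(parts[1]));
          ctx.write(new ImmutableBytesWritable(row), kv);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "hbase-bulk-load");
        job.setJarByClass(BulkLoadDriver.class);
        job.setMapperClass(HFileMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(KeyValue.class);

        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        TableName table = TableName.valueOf("my_table");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table htable = conn.getTable(table);
             RegionLocator locator = conn.getRegionLocator(table)) {
          // Wires in TotalOrderPartitioner and a sort reducer so the
          // generated HFiles line up with the table's region boundaries.
          HFileOutputFormat2.configureIncrementalLoad(job, htable, locator);
        }

        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }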
Below is the exception I am getting:

Error: java.io.FileNotFoundException: /var/mapr/local/tm4/mapred/nodeManager/spill/job_1433110149357_0005/attempt_1433110149357_0005_m_000000_0/spill83.out.index
    at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:198)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:800)
    at org.apache.hadoop.io.SecureIOUtils.openFSDataInputStream(SecureIOUtils.java:156)
    at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:74)
    at org.apache.hadoop.mapred.MapRFsOutputBuffer.mergeParts(MapRFsOutputBuffer.java:1382)
    at org.apache.hadoop.mapred.MapRFsOutputBuffer.flush(MapRFsOutputBuffer.java:1627)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:709)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:779)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:345)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1566)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)

The error says the file was not found, but I was able to locate that particular spill file on disk. The only other thing I have noticed is that the job runs fine on a small data set; as the data grows, it starts failing.
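In case the spill-related settings matter, a quick sketch like this will print what the job actually sees (these are the standard Hadoop 2.x property names; I am not sure which of them MapR overrides):

    import org.apache.hadoop.mapred.JobConf;

    public class PrintSpillSettings {
      public static void main(String[] args) {
        // JobConf pulls in mapred-default.xml / mapred-site.xml from the
        // classpath, i.e. the same defaults a submitted job would see.
        JobConf conf = new JobConf();
        String[] keys = {
            "mapreduce.task.io.sort.mb",        // size of the in-memory sort buffer
            "mapreduce.map.sort.spill.percent", // buffer fill ratio that triggers a spill
            "mapreduce.task.io.sort.factor",    // number of spill files merged per pass
            "mapreduce.cluster.local.dir"       // local dirs where spill files land
        };
        for (String key : keys) {
          System.out.println(key + " = " + conf.get(key));
        }
      }
    }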
Let me know if anyone has faced this issue.

Thanks
Shashi