From: Shashi Vishwakarma
To: user@hbase.apache.org
Date: Wed, 3 Jun 2015 20:02:45 +0530
Subject: Re: Hbase Bulk load - Map Reduce job failing

Hi,

Yes, I am using MapR FS. I have posted this problem on their forum but I
haven't received any reply yet. Is there any other MapR mailing list apart
from the forum?

Here is the link to my post:
http://answers.mapr.com/questions/163440/hbase-bulk-load-map-reduce-job-failing-on-mapr.html

Thanks.

On Wed, Jun 3, 2015 at 7:15 PM, Ted Yu wrote:

> Looks like you're using MapR FS.
>
> Have you considered posting this question on their mailing list?
>
> Cheers
>
> On Tue, Jun 2, 2015 at 11:14 PM, Shashi Vishwakarma <
> shashi.vish123@gmail.com> wrote:
>
> > Hi,
> >
> > I have a map reduce job for HBase bulk load. The job converts data into
> > HFiles and loads them into HBase, but after a certain map % the job
> > fails. Below is the exception that I am getting.
> >
> > Error: java.io.FileNotFoundException:
> > /var/mapr/local/tm4/mapred/nodeManager/spill/job_1433110149357_0005/attempt_1433110149357_0005_m_000000_0/spill83.out.index
> >     at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:198)
> >     at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:800)
> >     at org.apache.hadoop.io.SecureIOUtils.openFSDataInputStream(SecureIOUtils.java:156)
> >     at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:74)
> >     at org.apache.hadoop.mapred.MapRFsOutputBuffer.mergeParts(MapRFsOutputBuffer.java:1382)
> >     at org.apache.hadoop.mapred.MapRFsOutputBuffer.flush(MapRFsOutputBuffer.java:1627)
> >     at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:709)
> >     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:779)
> >     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:345)
> >     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> >     at java.security.AccessController.doPrivileged(Native Method)
> >     at javax.security.auth.Subject.doAs(Subject.java:415)
> >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1566)
> >     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
> >
> > The error above says the file was not found, but I was able to locate
> > that particular spill file on disk.
> >
> > One thing I noticed is that the job works fine for a small set of data,
> > but as the data grows the job starts failing.
> >
> > Let me know if anyone has faced this issue.
> >
> > Thanks
> >
> > Shashi
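
[Editor's note] For readers hitting similar spill-file errors on stock Hadoop/YARN: the failing path above is where map-side spill output is written, and in a vanilla YARN deployment that location is governed by the standard properties below. This is a hedged sketch, not a fix for the MapR case in this thread — MapR's direct shuffle writes spills to local MapR-FS volumes (as the `/var/mapr/local/...` path suggests) and may not honor these settings; the directory values shown are placeholders.

```xml
<!-- yarn-site.xml: where NodeManager places intermediate/spill data.
     Multiple comma-separated directories spread spill I/O across disks;
     running out of space or inodes here commonly surfaces only on larger
     inputs, matching the "fails as data grows" symptom. -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/disk1/yarn/local,/disk2/yarn/local</value>
</property>

<!-- mapred-site.xml: legacy MapReduce local-dir setting (pre-YARN shuffle). -->
<property>
  <name>mapreduce.cluster.local.dir</name>
  <value>/disk1/mapred/local,/disk2/mapred/local</value>
</property>
```

If spill files visibly exist but the task still reports FileNotFoundException, checking free space, inode usage, and filesystem health on each configured local directory is a reasonable first step before digging into the job itself.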