Subject: LeaseExpiredException: Lease mismatch in Hadoop MapReduce | How to solve?
From: unmesha sreeveni <unmeshabiju@gmail.com>
To: user@hadoop.apache.org
Date: Tue, 12 Nov 2013 16:22:14 +0530

While running a job with a 90 MB input file, I am getting a LeaseExpiredException:

13/11/12 15:46:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/11/12 15:46:42 INFO input.FileInputFormat: Total input paths to process : 1
13/11/12 15:46:43 INFO mapred.JobClient: Running job: job_201310301645_25033
13/11/12 15:46:44 INFO mapred.JobClient:  map 0% reduce 0%
13/11/12 15:46:56 INFO mapred.JobClient: Task Id : attempt_201310301645_25033_m_000000_0, Status : FAILED
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): Lease mismatch on /user/hdfs/in/map owned by DFSClient_NONMAPREDUCE_-1622335545_1 but is accessed by DFSClient_NONMAPREDUCE_-1561990512_1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2459)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2437)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2503)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2480)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:556)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:337)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44958)
    at org.
attempt_201310301645_25033_m_000000_0: SLF4J: Class path contains multiple SLF4J bindings.
attempt_201310301645_25033_m_000000_0: SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201310301645_25033_m_000000_0: SLF4J: Found binding in [jar:file:/tmp/hadoop-mapred/mapred/local/taskTracker/hdfs/jobcache/job_201310301645_25033/jars/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201310301645_25033_m_000000_0: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
13/11/12 15:47:02 INFO mapred.JobClient: Task Id : attempt_201310301645_25033_m_000000_1, Status : FAILED
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): Lease mismatch on /user/hdfs/in/map owned by DFSClient_NONMAPREDUCE_-1622335545_1 but is accessed by DFSClient_NONMAPREDUCE_-1662926329_1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2459)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2262)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2175)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java
attempt_201310301645_25033_m_000000_1: SLF4J: Class path contains multiple SLF4J bindings.
attempt_201310301645_25033_m_000000_1: SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201310301645_25033_m_000000_1: SLF4J: Found binding in [jar:file:/tmp/hadoop-mapred/mapred/local/taskTracker/hdfs/jobcache/job_201310301645_25033/jars/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201310301645_25033_m_000000_1: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
attempt_201310301645_25033_m_000000_1: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201310301645_25033_m_000000_1: log4j:WARN Please initialize the log4j system properly.
attempt_201310301645_25033_m_000000_1: log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
13/11/12 15:47:10 INFO mapred.JobClient: Task Id : attempt_201310301645_25033_m_000001_0, Status : FAILED
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/hdfs/in/map: File is not open for writing. Holder DFSClient_NONMAPREDUCE_-1622335545_1 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2452)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2437)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2503)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2480)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:556)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:337)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44958)

Why is this happening? My mapper code is:

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Write the incoming value to a fixed HDFS path...
        Path inputfile = new Path("in/map");
        BufferedWriter getdatabuffer =
                new BufferedWriter(new OutputStreamWriter(fs.create(inputfile)));
        getdatabuffer.write(value.toString());
        getdatabuffer.close();

        Path Attribute = new Path("in/Attribute");
        int row = 0;
        // ...then read it back, counting rows and columns.
        BufferedReader read = new BufferedReader(new InputStreamReader(fs.open(inputfile)));
        String str = null;
        while ((str = read.readLine()) != null) {
            row++; // total row count
            StringTokenizer st = new StringTokenizer(str, " ");
            col = st.countTokens(); // col is a field of the mapper class, not declared in this snippet
        }
        read.close();
        ...........
        ...........
        .............
        ............

The further computation is based on the "map" file written above.

Why does this happen? I think the tasks fail because in/map is written to repeatedly, once per map() call. How can I get rid of this?

Any suggestions?

--
Thanks & Regards

Unmesha Sreeveni U.B
Junior Developer
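
P.S. Could the root cause be that every map() call (and every speculative
re-execution of the same split) opens the same fixed path in/map for
writing, so two DFSClients end up fighting over one lease? If so, would
giving each task attempt its own file avoid the conflict? A minimal sketch
of what I mean is below; LeaseSafeMapper and the "in/map_" prefix are
illustrative names, not my actual code.

    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.io.OutputStreamWriter;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class LeaseSafeMapper extends Mapper<Object, Text, Text, Text> {

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Reuse the job's configuration instead of building a new one.
            Configuration conf = context.getConfiguration();
            FileSystem fs = FileSystem.get(conf);

            // One file per task attempt: concurrent mappers and speculative
            // re-executions of the same split never share an HDFS write lease.
            Path inputfile = new Path("in/map_" + context.getTaskAttemptID());

            BufferedWriter out =
                    new BufferedWriter(new OutputStreamWriter(fs.create(inputfile)));
            try {
                out.write(value.toString());
            } finally {
                out.close(); // complete the file so the lease is released
            }
            // ... the further computation would read this task-local file back ...
        }
    }

I guess disabling speculative execution (setting
mapred.map.tasks.speculative.execution to false) might also reduce the
contention, but the per-attempt file looks safer to me.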