From: "Edward J. Yoon"
To: user@hama.apache.org, Kiru Pakkirisamy
Date: Wed, 21 May 2014 17:01:28 +0900
Subject: Re: Running Hama on Hortonworks 2.0 distribution

Hi,

> attempt_201405201413_0006_000000_0: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hama-parts/job_201405201413_0005/part-3/file-0 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.

First of all, this means that HDFS could not find a healthy DataNode to write the block to. Usually the DataNode process is not running, or your single DataNode is out of disk space or unreachable from the writer. See http://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo

P.S. Also, please replace the Hadoop jar files in the ${HAMA_HOME}/lib folder with the ones from your Hadoop 2.2.0 cluster.
On Wed, May 21, 2014 at 11:45 AM, Kiru Pakkirisamy wrote:
> I have been able to run the PageRank example on my Apache pseudo cluster on my laptop (1.0.x)
> But unable to run it on my dev cluster running Hortonworks 2.0 (I built 0.6.4 src with mvn clean install -Phadoop2 -Dhadoop.version=2.2.0)
> I get the following error - (even though I have no trouble with hdfs in putting/getting files)
>
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 WARN conf.Configuration: /tmp/hadoop-kiru/bsp/local/groomServer/attempt_201405201413_0006_000000_0/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: Starting Socket Reader #1 for port 61001
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: IPC Server Responder: starting
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: IPC Server listener on 61001: starting
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: IPC Server handler 0 on 61001: starting
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: IPC Server handler 1 on 61001: starting
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: IPC Server handler 2 on 61001: starting
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: IPC Server handler 3 on 61001: starting
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO message.HamaMessageManagerImpl: BSPPeer address:server02.infnet port:61001
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: IPC Server handler 4 on 61001: starting
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:32 INFO sync.ZKSyncClient: Initializing ZK Sync Client
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:32 INFO sync.ZooKeeperSyncClientImpl: Start connecting to Zookeeper! At server02.infnet/192.168.1.85:61001
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 WARN hdfs.DFSClient: DataStreamer Exception
> attempt_201405201413_0006_000000_0: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hama-parts/job_201405201413_0005/part-3/file-0 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2503)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2053)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> attempt_201405201413_0006_000000_0: at java.security.AccessController.doPrivileged(Native Method)
> attempt_201405201413_0006_000000_0: at javax.security.auth.Subject.doAs(Subject.java:415)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2047)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Client.call(Client.java:1347)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> attempt_201405201413_0006_000000_0: at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
> attempt_201405201413_0006_000000_0: at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> attempt_201405201413_0006_000000_0: at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> attempt_201405201413_0006_000000_0: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> attempt_201405201413_0006_000000_0: at java.lang.reflect.Method.invoke(Method.java:606)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> attempt_201405201413_0006_000000_0: at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 ERROR bsp.BSPTask: Error running bsp setup and bsp function.
> attempt_201405201413_0006_000000_0: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hama-parts/job_201405201413_0005/part-3/file-0 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
> attempt_201405201413_0006_000000_0: ... (same RemoteException stack trace as above) ...
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: Stopping server on 61001
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: IPC Server handler 1 on 61001: exiting
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: IPC Server handler 0 on 61001: exiting
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: IPC Server handler 2 on 61001: exiting
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: IPC Server handler 4 on 61001: exiting
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: IPC Server handler 3 on 61001: exiting
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: Stopping IPC Server listener on 61001
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: Stopping IPC Server Responder
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO Configuration.deprecation: mapred.cache.localFiles is deprecated. Instead, use mapreduce.job.cache.local.files
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 ERROR bsp.BSPTask: Shutting down ping service.
> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 FATAL bsp.GroomServer: Error running child
> attempt_201405201413_0006_000000_0: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hama-parts/job_201405201413_0005/part-3/file-0 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
> attempt_201405201413_0006_000000_0: ... (same RemoteException stack trace as above) ...
> attempt_201405201413_0006_000000_0: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hama-parts/job_201405201413_0005/part-3/file-0 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
> attempt_201405201413_0006_000000_0: ... (same RemoteException stack trace as above) ...
> attempt_201405201413_0006_000000_0: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
> attempt_201405201413_0006_000000_0: log4j:WARN Please initialize the log4j system properly.
> attempt_201405201413_0006_000000_0: log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
> 14/05/20 14:42:40 INFO bsp.BSPJobClient: Job failed.
> 14/05/20 14:42:40 ERROR bsp.BSPJobClient: Error partitioning the input path.
> java.io.IOException: Runtime partition failed for the job.
> at org.apache.hama.bsp.BSPJobClient.partition(BSPJobClient.java:478)
> at org.apache.hama.bsp.BSPJobClient.submitJobInternal(BSPJobClient.java:341)
> at org.apache.hama.bsp.BSPJobClient.submitJob(BSPJobClient.java:296)
> at org.apache.hama.bsp.BSPJob.submit(BSPJob.java:219)
> at org.apache.hama.graph.GraphJob.submit(GraphJob.java:208)
> at org.apache.hama.bsp.BSPJob.waitForCompletion(BSPJob.java:226)
> at org.apache.hama.examples.PageRank.main(PageRank.java:160)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hama.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> at org.apache.hama.util.ProgramDriver.driver(ProgramDriver.java:139)
> at org.apache.hama.examples.ExampleDriver.main(ExampleDriver.java:45)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hama.util.RunJar.main(RunJar.java:146)
>
> Regards,
> - kiru
>
> Kiru Pakkirisamy | webcloudtech.wordpress.com

--
Best Regards, Edward J. Yoon
CEO at DataSayer Co., Ltd.