hama-user mailing list archives

From "Edward J. Yoon" <edwardy...@apache.org>
Subject Re: Running Hama on Hortonworks 2.0 distribution
Date Thu, 22 May 2014 07:20:18 GMT
OK. According to Kiru, Hama 0.6.4 works with the Hortonworks 2.0 distribution.

On Wed, May 21, 2014 at 5:01 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
> Hi,
>
>> attempt_201405201413_0006_000000_0: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hama-parts/job_201405201413_0005/part-3/file-0 could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
>
> First of all, this means that you do not have a DataNode process
> running. See http://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo
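To check whether a DataNode is actually registered with the NameNode and has usable space, the standard Hadoop tools can be used (a diagnostic sketch; run these on a cluster node):

```shell
# List the Java daemons on this node; a healthy HDFS node shows a
# DataNode process (and NameNode on the master).
jps

# Ask the NameNode for a cluster report. The "Live datanodes" count and
# the remaining capacity per node explain the "could only be replicated
# to 0 nodes" error: either no DataNode is registered, or the registered
# one has no usable space.
hdfs dfsadmin -report
```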
>
> P.S. Please also replace the Hadoop jar files in the ${HAMA_HOME}/lib folder.
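A sketch of that jar swap, assuming a typical Hadoop 2 layout under $HADOOP_HOME (the exact jar names, glob patterns, and paths vary by distribution, so treat these as illustrative):

```shell
# Remove the bundled Hadoop 1.x jars from Hama's lib folder.
rm -f "$HAMA_HOME"/lib/hadoop-*.jar

# Copy the matching Hadoop 2.x jars from the installed distribution.
cp "$HADOOP_HOME"/share/hadoop/common/hadoop-common-*.jar "$HAMA_HOME"/lib/
cp "$HADOOP_HOME"/share/hadoop/hdfs/hadoop-hdfs-*.jar "$HAMA_HOME"/lib/

# Restart the BSP daemons so they pick up the new jars.
"$HAMA_HOME"/bin/stop-bspd.sh && "$HAMA_HOME"/bin/start-bspd.sh
```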
>
>
> On Wed, May 21, 2014 at 11:45 AM, Kiru Pakkirisamy
> <kirupakkirisamy@yahoo.com> wrote:
>> I have been able to run the PageRank example on my Apache pseudo cluster on my laptop (1.0.x),
>> but I am unable to run it on my dev cluster running Hortonworks 2.0 (I built the 0.6.4 src with: mvn clean install -Phadoop2 -Dhadoop.version=2.2.0).
>> I get the following error, even though I have no trouble with HDFS in putting/getting files:
>>
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 WARN conf.Configuration: /tmp/hadoop-kiru/bsp/local/groomServer/attempt_201405201413_0006_000000_0/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: Starting Socket Reader #1 for port 61001
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: IPC Server Responder: starting
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: IPC Server listener on 61001: starting
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: IPC Server handler 0 on 61001: starting
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: IPC Server handler 1 on 61001: starting
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: IPC Server handler 2 on 61001: starting
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: IPC Server handler 3 on 61001: starting
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO message.HamaMessageManagerImpl: BSPPeer address:server02.infnet port:61001
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:31 INFO ipc.Server: IPC Server handler 4 on 61001: starting
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:32 INFO sync.ZKSyncClient: Initializing ZK Sync Client
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:32 INFO sync.ZooKeeperSyncClientImpl: Start connecting to Zookeeper! At server02.infnet/192.168.1.85:61001
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 WARN hdfs.DFSClient: DataStreamer Exception
>> attempt_201405201413_0006_000000_0: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hama-parts/job_201405201413_0005/part-3/file-0 could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2503)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2053)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>> attempt_201405201413_0006_000000_0: at java.security.AccessController.doPrivileged(Native Method)
>> attempt_201405201413_0006_000000_0: at javax.security.auth.Subject.doAs(Subject.java:415)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2047)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>> attempt_201405201413_0006_000000_0: at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>> attempt_201405201413_0006_000000_0: at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> attempt_201405201413_0006_000000_0: at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> attempt_201405201413_0006_000000_0: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> attempt_201405201413_0006_000000_0: at java.lang.reflect.Method.invoke(Method.java:606)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>> attempt_201405201413_0006_000000_0: at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 ERROR bsp.BSPTask: Error running bsp setup and bsp function.
>> attempt_201405201413_0006_000000_0: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hama-parts/job_201405201413_0005/part-3/file-0 could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2503)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2053)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>> attempt_201405201413_0006_000000_0: at java.security.AccessController.doPrivileged(Native Method)
>> attempt_201405201413_0006_000000_0: at javax.security.auth.Subject.doAs(Subject.java:415)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2047)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>> attempt_201405201413_0006_000000_0: at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>> attempt_201405201413_0006_000000_0: at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> attempt_201405201413_0006_000000_0: at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> attempt_201405201413_0006_000000_0: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> attempt_201405201413_0006_000000_0: at java.lang.reflect.Method.invoke(Method.java:606)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>> attempt_201405201413_0006_000000_0: at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: Stopping server on 61001
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: IPC Server handler 1 on 61001: exiting
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: IPC Server handler 0 on 61001: exiting
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: IPC Server handler 2 on 61001: exiting
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: IPC Server handler 4 on 61001: exiting
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: IPC Server handler 3 on 61001: exiting
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: Stopping IPC Server listener on 61001
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO ipc.Server: Stopping IPC Server Responder
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 INFO Configuration.deprecation: mapred.cache.localFiles is deprecated. Instead, use mapreduce.job.cache.local.files
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 ERROR bsp.BSPTask: Shutting down ping service.
>> attempt_201405201413_0006_000000_0: 14/05/20 14:41:33 FATAL bsp.GroomServer: Error running child
>> attempt_201405201413_0006_000000_0: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hama-parts/job_201405201413_0005/part-3/file-0 could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2503)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2053)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>> attempt_201405201413_0006_000000_0: at java.security.AccessController.doPrivileged(Native Method)
>> attempt_201405201413_0006_000000_0: at javax.security.auth.Subject.doAs(Subject.java:415)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2047)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>> attempt_201405201413_0006_000000_0: at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>> attempt_201405201413_0006_000000_0: at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> attempt_201405201413_0006_000000_0: at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> attempt_201405201413_0006_000000_0: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> attempt_201405201413_0006_000000_0: at java.lang.reflect.Method.invoke(Method.java:606)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>> attempt_201405201413_0006_000000_0: at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
>> attempt_201405201413_0006_000000_0: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hama-parts/job_201405201413_0005/part-3/file-0 could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2503)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2053)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>> attempt_201405201413_0006_000000_0: at java.security.AccessController.doPrivileged(Native Method)
>> attempt_201405201413_0006_000000_0: at javax.security.auth.Subject.doAs(Subject.java:415)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2047)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>> attempt_201405201413_0006_000000_0: at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>> attempt_201405201413_0006_000000_0: at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> attempt_201405201413_0006_000000_0: at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> attempt_201405201413_0006_000000_0: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> attempt_201405201413_0006_000000_0: at java.lang.reflect.Method.invoke(Method.java:606)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>> attempt_201405201413_0006_000000_0: at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
>> attempt_201405201413_0006_000000_0: at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
>> attempt_201405201413_0006_000000_0: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
>> attempt_201405201413_0006_000000_0: log4j:WARN Please initialize the log4j system properly.
>> attempt_201405201413_0006_000000_0: log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
>> 14/05/20 14:42:40 INFO bsp.BSPJobClient: Job failed.
>> 14/05/20 14:42:40 ERROR bsp.BSPJobClient: Error partitioning the input path.
>> java.io.IOException: Runtime partition failed for the job.
>> at org.apache.hama.bsp.BSPJobClient.partition(BSPJobClient.java:478)
>> at org.apache.hama.bsp.BSPJobClient.submitJobInternal(BSPJobClient.java:341)
>> at org.apache.hama.bsp.BSPJobClient.submitJob(BSPJobClient.java:296)
>> at org.apache.hama.bsp.BSPJob.submit(BSPJob.java:219)
>> at org.apache.hama.graph.GraphJob.submit(GraphJob.java:208)
>> at org.apache.hama.bsp.BSPJob.waitForCompletion(BSPJob.java:226)
>> at org.apache.hama.examples.PageRank.main(PageRank.java:160)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at org.apache.hama.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>> at org.apache.hama.util.ProgramDriver.driver(ProgramDriver.java:139)
>> at org.apache.hama.examples.ExampleDriver.main(ExampleDriver.java:45)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at org.apache.hama.util.RunJar.main(RunJar.java:146)
>>
>>
>>
>> Regards,
>> - kiru
>>
>>
>> Kiru Pakkirisamy | webcloudtech.wordpress.com
>
>
>
> --
> Best Regards, Edward J. Yoon
> CEO at DataSayer Co., Ltd.



-- 
Best Regards, Edward J. Yoon
CEO at DataSayer Co., Ltd.
