hama-user mailing list archives

From 邓凯 <hbd...@gmail.com>
Subject Re: Error partitioning the input path
Date Thu, 05 Sep 2013 01:15:42 GMT
Sorry, I sent the previous mail before it was complete.
Here is the Partitioner:
public static class WeiboRankPartioner extends
    HashPartitioner<LongWritable, Text> {

  @Override
  public int getPartition(LongWritable key, Text value, int numTasks) {
    String[] keyvaluePair = value.toString().split("\t");
    System.out.println(keyvaluePair[0] + " " + keyvaluePair.length);
    return Math.abs(keyvaluePair[0].hashCode() % numTasks);
  }
}
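For what it's worth, "Runtime partition failed for the job" usually means an exception was thrown inside the partitioner (or reader) while the client pre-partitioned the input. One possibility to check (an assumption on my part, since the task logs aren't shown): a blank line, or a line consisting only of a tab, makes `split("\t")` return an array without a usable first element, so `keyvaluePair[0]` throws. A minimal standalone sketch of a guarded key extraction; the class and method names (`PartitionKeyUtil`, `partitionKey`, `partitionFor`) are hypothetical, not part of Hama:

```java
public class PartitionKeyUtil {

  // Extract the partition key (the text before the first tab), guarding
  // against lines with no tab at all. Note "\t".split("\t") yields a
  // zero-length array because Java drops trailing empty strings.
  public static String partitionKey(String line) {
    String[] kv = line.split("\t");
    return kv.length > 0 ? kv[0] : "";
  }

  // Map the key to a task index in [0, numTasks). Masking with
  // Integer.MAX_VALUE avoids the negative result Math.abs() can still
  // return when hashCode() happens to be Integer.MIN_VALUE.
  public static int partitionFor(String line, int numTasks) {
    int h = partitionKey(line).hashCode();
    return (h & Integer.MAX_VALUE) % numTasks;
  }
}
```

If the guarded version stops the failure, the input file likely contains at least one malformed line.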
And here are the job properties:
                GraphJob weiboJob = new GraphJob(conf, WeiboRank.class);
weiboJob.setJobName("Weiborank");

weiboJob.setVertexClass(WeiboRankVertex.class);
weiboJob.setInputPath(new Path(args[0]));
weiboJob.setOutputPath(new Path(args[1]));

// set the defaults
weiboJob.setMaxIteration(30);
weiboJob.set("hama.weiborank.alpha", "0.85");
// reference vertices to themselves, because we don't have a dangling-node
// contribution here
weiboJob.set("hama.graph.self.ref", "true");
weiboJob.set("hama.graph.max.convergence.error", "0.001");

if (args.length == 3) {
  weiboJob.setNumBspTask(Integer.parseInt(args[2]));
}

// error
weiboJob.setAggregatorClass(AverageAggregator.class);

// Vertex reader
weiboJob.setVertexInputReaderClass(WeiboRankReader.class);

weiboJob.setVertexIDClass(Text.class);
weiboJob.setVertexValueClass(DoubleWritable.class);
weiboJob.setEdgeValueClass(NullWritable.class);

weiboJob.setInputFormat(TextInputFormat.class);

weiboJob.setPartitioner(WeiboRankPartioner.class);
weiboJob.setOutputFormat(TextOutputFormat.class);
weiboJob.setOutputKeyClass(Text.class);
weiboJob.setOutputValueClass(DoubleWritable.class);

That's all. Thank you.
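The reader quoted below has the same split() pitfall as the partitioner: for a line that is only a tab, `keyvaluePair` is empty and the `else` branch's `keyvaluePair[0]` throws. A standalone sketch of the parsing logic with that guard added; the class and method names (`VertexLineParser`, `parseLine`) are hypothetical helpers for illustration, not Hama API:

```java
import java.util.Arrays;
import java.util.List;

public class VertexLineParser {

  // Parse "<id>\t<e1,e2,...>" into the vertex id followed by its edge
  // targets. Returns null when the line yields no id at all (e.g. a lone
  // tab), mirroring the guard parseVertex would need before setVertexID.
  public static List<String> parseLine(String line) {
    String[] kv = line.split("\t");
    if (kv.length == 0 || kv[0].isEmpty()) {
      return null; // skip the malformed line instead of throwing
    }
    if (kv.length > 1 && !kv[1].isEmpty()) {
      String[] edges = kv[1].split(",");
      String[] out = new String[edges.length + 1];
      out[0] = kv[0];
      System.arraycopy(edges, 0, out, 1, edges.length);
      return Arrays.asList(out);
    }
    return Arrays.asList(kv[0]); // vertex with no outgoing edges
  }
}
```

In parseVertex itself, the equivalent guard would be to return false (skip the record) when the id is missing, rather than always returning true.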


2013/9/5 邓凯 <hbdkzj@gmail.com>

> Hi,
>    Here is the output in the console; I can't find any more in
> HAMA_HOME/logs.
>
> hadoop@datanode4:/usr/local/hama$ bin/hama jar
> /home/datanode4/Desktop/WeiboRank.jar vertexresult weiborankresult
> 13/09/05 08:55:08 INFO mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 13/09/05 08:55:11 INFO bsp.FileInputFormat: Total input paths to process :
> 1
> 13/09/05 08:55:11 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> 13/09/05 08:55:11 WARN snappy.LoadSnappy: Snappy native library not loaded
> 13/09/05 08:55:11 INFO bsp.FileInputFormat: Total input paths to process :
> 1
> 13/09/05 08:55:13 INFO bsp.BSPJobClient: Running job: job_201309041029_0025
> 13/09/05 08:55:13 INFO bsp.BSPJobClient: Job failed.
> 13/09/05 08:55:13 ERROR bsp.BSPJobClient: Error partitioning the input
> path.
> Exception in thread "main" java.io.IOException: Runtime partition failed
> for the job.
> at org.apache.hama.bsp.BSPJobClient.partition(BSPJobClient.java:465)
>  at
> org.apache.hama.bsp.BSPJobClient.submitJobInternal(BSPJobClient.java:333)
> at org.apache.hama.bsp.BSPJobClient.submitJob(BSPJobClient.java:293)
>  at org.apache.hama.bsp.BSPJob.submit(BSPJob.java:229)
> at org.apache.hama.graph.GraphJob.submit(GraphJob.java:203)
>  at org.apache.hama.bsp.BSPJob.waitForCompletion(BSPJob.java:236)
> at WeiboRank.main(WeiboRank.java:161)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
>  at org.apache.hama.util.RunJar.main(RunJar.java:146)
>
> The class extends Vertex, just like the PageRank example.
> Here is the InputReader:
> public static class WeiboRankReader
>     extends
>     VertexInputReader<LongWritable, Text, Text, NullWritable, DoubleWritable> {
>
>   @Override
>   public boolean parseVertex(LongWritable key, Text value,
>       Vertex<Text, NullWritable, DoubleWritable> vertex)
>       throws Exception {
>     String[] keyvaluePair = value.toString().split("\t");
>     if (keyvaluePair.length > 1) {
>       vertex.setVertexID(new Text(keyvaluePair[0]));
>       String edgeString = keyvaluePair[1];
>       if (!edgeString.equals("")) {
>         String[] edges = edgeString.split(",");
>         for (String e : edges) {
>           vertex.addEdge(new Edge<Text, NullWritable>(
>               new Text(e), null));
>         }
>       }
>     } else {
>       vertex.setVertexID(new Text(keyvaluePair[0]));
>     }
>     return true;
>   }
> }
>
>
>
> 2013/9/5 Edward J. Yoon <edwardyoon@apache.org>
>
> Can you provide full client console logs?
>>
>> On Wed, Sep 4, 2013 at 10:21 PM, 邓凯 <hbdkzj@gmail.com> wrote:
>> > Hi,
>> >       I have a hadoop-1.1.2 cluster with one namenode and four
>> > datanodes. I built hama-0.6.2 on it. When I run the benchmarks and
>> > the examples such as PageRank, everything goes well.
>> >       But today when I ran my own code it hit an exception.
>> >       The log says: ERROR bsp.BSPJobClient: Error partitioning the
>> > input path
>> >       The exception is: Exception in thread "main" java.io.IOException:
>> > Runtime partition failed for the job.
>> >       Based on this, I think there is something wrong with my code.
>> >       My Hama cluster has 4 groomservers and the task capacity is 12.
>> >       I use the command: bin/hama jar Weiborank.jar vertexresult
>> > weiborankresult 12
>> >       The directory vertexresult has only one file in it, and I use
>> > HashPartitioner.class as the partitioner.
>> >       I wonder whether this is caused by having only one file in the
>> > input path while there are 12 BSP tasks. If so, can I fix it by
>> > increasing the number of files in the input path?
>> >       Thanks a lot.
>>
>>
>>
>> --
>> Best Regards, Edward J. Yoon
>> @eddieyoon
>>
>
>
