giraph-user mailing list archives

From Mirko Kämpf <mirko.kae...@cloudera.com>
Subject Re: Using the RandomEdge ... & RandomVertex InputFormat
Date Tue, 05 Nov 2013 13:38:57 GMT
Claudio,
this was the missing piece I was looking for.
The simple random graph generator now works well.
Thanks again.

Best wishes
Mirko
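
[For the archives: the fix discussed in this thread was to set giraph.pseudoRandomInputFormat.aggregateVertices to the desired vertex count. A sketch of the corrected invocation, reusing the jar names and class names from the call quoted below — these are specific to this particular setup and may differ elsewhere:]

```shell
# Sketch only: identical to the original call below, except that
# aggregateVertices is raised from 2 to the intended 1000 vertices.
# Jar names, example class, and paths are taken from the quoted call.
hadoop jar giraph-ex.jar org.apache.giraph.GiraphRunner \
  -Dgiraph.zkList=127.0.0.1:2181 \
  -libjars giraph-core.jar \
  org.apache.giraph.examples.SSPV2 \
  -vif org.apache.giraph.io.formats.PseudoRandomVertexInputFormat2 \
  -eif org.apache.giraph.io.formats.PseudoRandomEdgeInputFormat2 \
  -of org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
  -op /user/cloudera/goutput/shortestpaths_rand_$NOW \
  -w 1 \
  -ca giraph.pseudoRandomInputFormat.aggregateVertices=1000 \
  -ca giraph.pseudoRandomInputFormat.edgesPerVertex=100 \
  -ca giraph.pseudoRandomInputFormat.localEdgesMinRatio=200
```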



On Tue, Nov 5, 2013 at 8:55 PM, Claudio Martella <claudio.martella@gmail.com> wrote:

> I don't see any errors in your logs. The parameter that defines the total
> number of vertices created is
> giraph.pseudoRandomInputFormat.aggregateVertices.
>
>
> On Tue, Nov 5, 2013 at 1:05 PM, Mirko Kämpf <mirko.kaempf@cloudera.com> wrote:
>
>> Hi,
>>
>> after I changed the code of the PseudoRandomVertex.... and
>> PseudoRandomEdgeInputFormat, I can compile and run the job, but it stops
>> after the 10-minute timeout.
>>
>> I think a parameter is still missing or set incorrectly.
>>
>> Claudio wrote that one has to set the number of vertices, but how?
>>
>> This is my call:
>>
>> hadoop jar giraph-ex.jar org.apache.giraph.GiraphRunner -Dgiraph.zkList=
>> 127.0.0.1:2181 -libjars giraph-core.jar org.apache.giraph.examples.SSPV2
>> -vif org.apache.giraph.io.formats.PseudoRandomVertexInputFormat2
>> -eif org.apache.giraph.io.formats.PseudoRandomEdgeInputFormat2
>> -of org.apache.giraph.io.formats.IdWithValueTextOutputFormat
>> -op /user/cloudera/goutput/shortestpaths_rand_$NOW
>> -w 1
>> -ca giraph.pseudoRandomInputFormat.edgesPerVertex=100
>> -ca giraph.pseudoRandomInputFormat.aggregateVertices=2
>> -ca giraph.pseudoRandomInputFormat.localEdgesMinRatio=200
>>
>> Which parameter defines the number of vertices I want to create?
>> My test graph should have 1000 nodes.
>>
>> There are no exceptions, but no vertices are created either.
>>
>> The code runs and gives the following log:
>>
>> mapper 1:
>>
>>
>> 2013-11-05 03:43:30,666 WARN mapreduce.Counters: Group
>> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
>> org.apache.hadoop.mapreduce.TaskCounter instead
>> 2013-11-05 03:43:32,535 WARN org.apache.hadoop.conf.Configuration:
>> session.id is deprecated. Instead, use dfs.metrics.session-id
>> 2013-11-05 03:43:32,536 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>> Initializing JVM Metrics with processName=MAP, sessionId=
>> 2013-11-05 03:43:33,393 INFO org.apache.hadoop.util.ProcessTree: setsid
>> exited with exit code 0
>> 2013-11-05 03:43:33,420 INFO org.apache.hadoop.mapred.Task:  Using
>> ResourceCalculatorPlugin :
>> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@307b4703
>> 2013-11-05 03:43:33,965 INFO org.apache.hadoop.mapred.MapTask: Processing
>> split: 'org.apache.giraph.bsp.BspInputSplit, index=-1, num=-1
>> 2013-11-05 03:43:33,992 INFO org.apache.giraph.graph.GraphTaskManager:
>> setup: Log level remains at info
>> 2013-11-05 03:43:34,088 INFO org.apache.giraph.graph.GraphTaskManager:
>> Distributed cache is empty. Assuming fatjar.
>> 2013-11-05 03:43:34,088 INFO org.apache.giraph.graph.GraphTaskManager:
>> setup: classpath @
>> /mapred/local/taskTracker/cloudera/jobcache/job_201311040001_0018/jars/job.jar
>> for job Giraph: org.apache.giraph.examples.SSPV2
>> 2013-11-05 03:43:34,088 INFO org.apache.giraph.graph.GraphTaskManager:
>> setup: Starting up BspServiceWorker...
>> 2013-11-05 03:43:34,136 INFO org.apache.giraph.bsp.BspService:
>> BspService: Connecting to ZooKeeper with job job_201311040001_0018, 1 on
>> 127.0.0.1:2181
>> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:zookeeper.version=3.4.5-cdh4.4.0--1, built on 09/03/2013 16:14
>> GMT
>> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:host.name=localhost.localdomain
>> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.version=1.6.0_32
>> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.vendor=Sun Microsystems Inc.
>> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.home=/usr/java/jdk1.6.0_32/jre
>> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.class.path=/var/run/cloudera-scm-agent/process/10-mapreduce-TASKTRACKER:/usr/java/jdk1.6.0_32/lib/tools.jar:/usr/lib/hadoop-0.20-mapreduce:/usr/lib/hadoop-0.20-mapreduce/hadoop-core-2.0.0-mr1-cdh4.4.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/activation-1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/ant-contrib-1.0b3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/avro-1.7.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/avro-compiler-1.7.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-cli-1.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-codec-1.4.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-configuration-1.6.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-digester-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-el-1.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-io-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-lang-2.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-math-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/commons-net-3.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/guava-11.0.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/hadoop-fairscheduler-2.0.0-mr1-cdh4.4.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop-0.20-mapreduce/lib/hue-plugins-2.5.0-cdh4.4.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jasper-runtime-5.5.23.jar
:/usr/lib/hadoop-0.20-mapreduce/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-core-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-json-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jersey-server-1.8.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jets3t-0.6.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jettison-1.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jline-0.9.94.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsch-0.1.42.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsp-api-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-0.20-mapreduce/lib/junit-4.8.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/kfs-0.2.2.jar:/usr/lib/hadoop-0.20-mapreduce/lib/kfs-0.3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-0.20-mapreduce/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-0.20-mapreduce/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-0.20-mapreduce/lib/servlet-api-2.5.jar:/usr/lib/hadoop-0.20-mapreduce/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/stax-api-1.0.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/xmlenc-0.52.jar:/usr/lib/hadoop-0.20-mapreduce/lib/xz-1.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/zookeeper-3.4.5-cdh4.4.0.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsp-2.1/jsp-2.1.jar:/usr/lib/hadoop-0.20-mapreduce/lib/jsp-2.1/jsp-api-2.1.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-4.7.2.jar:/usr/share/cmf/lib/plugins/navigator-plugin-4.7.2-shaded.jar:/usr/share/cmf/lib/plugins/event-publish-4.7.2-shaded.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-
hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/jline-0.9.94.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.8.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.1.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.8.jar:/usr/lib/hadoop-hdfs/lib/zookeeper-3.4.5-cdh4.4.0.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.3.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.5.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.0.0-cdh4.4.0.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.0.0-cdh4.4.0-tests.jar:/usr/lib/hadoop/lib/hue-plugins-2.5.0-cdh4.4.0.jar:/usr/lib/hadoop/lib/stax-api-1.0.1.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/jline-0.9.94.jar:/usr/lib/hadoop/lib/kfs-0.3.jar:/usr/lib/hadoop/li
b/asm-3.2.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/commons-io-2.1.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/jersey-json-1.8.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/jersey-server-1.8.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh4.4.0.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/commons-math-2.1.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/commons-lang-2.5.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/hadoop-auth-2.0.0-cdh4.4.0.jar:/usr/lib/hadoop/hadoop-common-2.0.0-cdh4.4.0.jar:/usr/lib/hadoop/hadoop-annotations-2.0.0-cdh4.4.0.jar:/usr/lib/hadoop/hadoop-annotations.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/hadoop/hadoop-auth.jar:/usr/lib/hadoop/hadoop-common-2.0.0-cdh4.4.0-tests.jar:/mapred/local/taskTracker/cloudera/jobcache/job_201311040001_0018/jars/classes:/mapred/local/taskTracker/cloudera/jobcache/job_201311040001_0018/jars/job.jar:/mapred/local/taskTracker/cloudera/distcache/-7916763288898539971_1067008346_672332826/localhost.localdomain/user/cloudera/.staging/job_201311040001_0018/libjars/giraph-core.jar:/mapred/local/taskTracker/cloudera/jobcache/job_201311040001_0018/attempt_201311040001_0018_m_000001_0/work
>> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.library.path=/usr/lib/hadoop-0.20-mapreduce/lib/native/Linux-amd64-64:/mapred/local/taskTracker/cloudera/jobcache/job_201311040001_0018/attempt_201311040001_0018_m_000001_0/work
>> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.io.tmpdir=/mapred/local/taskTracker/cloudera/jobcache/job_201311040001_0018/attempt_201311040001_0018_m_000001_0/work/tmp
>> 2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.compiler=<NA>
>> 2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:os.name=Linux
>> 2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:os.arch=amd64
>> 2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:os.version=2.6.32-220.el6.x86_64
>> 2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:user.name=mapred
>> 2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:user.home=/usr/lib/hadoop
>> 2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:user.dir=/mapred/local/taskTracker/cloudera/jobcache/job_201311040001_0018/attempt_201311040001_0018_m_000001_0/work
>> 2013-11-05 03:43:34,153 INFO org.apache.zookeeper.ZooKeeper: Initiating
>> client connection, connectString=127.0.0.1:2181 sessionTimeout=60000
>> watcher=org.apache.giraph.worker.BspServiceWorker@6098f192
>> 2013-11-05 03:43:34,209 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server localhost.localdomain/127.0.0.1:2181. Will
>> not attempt to authenticate using SASL (Unable to locate a login
>> configuration)
>> 2013-11-05 03:43:34,210 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to localhost.localdomain/127.0.0.1:2181,
>> initiating session
>> 2013-11-05 03:43:34,227 INFO org.apache.zookeeper.ClientCnxn: Session
>> establishment complete on server localhost.localdomain/127.0.0.1:2181,
>> sessionid = 0x14222216d540030, negotiated timeout = 60000
>> 2013-11-05 03:43:34,232 INFO org.apache.giraph.bsp.BspService: process:
>> Asynchronous connection complete.
>> 2013-11-05 03:43:34,239 INFO
>> org.apache.giraph.comm.netty.NettyWorkerServer: createMessageStoreFactory:
>> Using ByteArrayMessagesPerVertexStore since there is no combiner
>> 2013-11-05 03:43:34,394 INFO org.apache.giraph.comm.netty.NettyServer:
>> NettyServer: Using execution handler with 8 threads after
>> requestFrameDecoder.
>> 2013-11-05 03:43:34,480 INFO org.apache.giraph.comm.netty.NettyServer:
>> start: Started server communication server: localhost.localdomain/
>> 127.0.0.1:30001 with up to 16 threads on bind attempt 0 with
>> sendBufferSize = 32768 receiveBufferSize = 524288 backlog = 1
>> 2013-11-05 03:43:34,505 INFO org.apache.giraph.comm.netty.NettyClient:
>> NettyClient: Using execution handler with 8 threads after requestEncoder.
>> 2013-11-05 03:43:34,539 INFO org.apache.giraph.graph.GraphTaskManager:
>> setup: Registering health of this worker...
>> 2013-11-05 03:43:34,607 INFO org.apache.giraph.bsp.BspService:
>> getJobState: Job state already exists
>> (/_hadoopBsp/job_201311040001_0018/_masterJobState)
>> 2013-11-05 03:43:34,631 INFO org.apache.giraph.bsp.BspService:
>> getApplicationAttempt: Node
>> /_hadoopBsp/job_201311040001_0018/_applicationAttemptsDir already exists!
>> 2013-11-05 03:43:34,633 INFO org.apache.giraph.bsp.BspService:
>> getApplicationAttempt: Node
>> /_hadoopBsp/job_201311040001_0018/_applicationAttemptsDir already exists!
>> 2013-11-05 03:43:34,652 INFO org.apache.giraph.worker.BspServiceWorker:
>> registerHealth: Created my health node for attempt=0, superstep=-1 with
>> /_hadoopBsp/job_201311040001_0018/_applicationAttemptsDir/0/_superstepDir/-1/_workerHealthyDir/localhost.localdomain_1
>> and workerInfo= Worker(hostname=localhost.localdomain, MRtaskID=1,
>> port=30001)
>> 2013-11-05 03:43:34,725 INFO org.apache.giraph.comm.netty.NettyServer:
>> start: Using Netty without authentication.
>> 2013-11-05 03:43:34,761 INFO org.apache.giraph.bsp.BspService: process:
>> partitionAssignmentsReadyChanged (partitions are assigned)
>> 2013-11-05 03:43:34,773 INFO org.apache.giraph.worker.BspServiceWorker:
>> startSuperstep: Master(hostname=localhost.localdomain, MRtaskID=0,
>> port=30000)
>> 2013-11-05 03:43:34,773 INFO org.apache.giraph.worker.BspServiceWorker:
>> startSuperstep: Ready for computation on superstep -1 since worker
>> selection and vertex range assignments are done in
>> /_hadoopBsp/job_201311040001_0018/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
>> 2013-11-05 03:43:34,775 INFO org.apache.giraph.comm.netty.NettyClient:
>> Using Netty without authentication.
>> 2013-11-05 03:43:34,806 INFO org.apache.giraph.comm.netty.NettyClient:
>> connectAllAddresses: Successfully added 1 connections, (1 total connected)
>> 0 failed, 0 failures total.
>> 2013-11-05 03:43:34,818 INFO org.apache.giraph.worker.BspServiceWorker:
>> loadInputSplits: Using 1 thread(s), originally 1 threads(s) for 1 total
>> splits.
>> 2013-11-05 03:43:34,831 INFO org.apache.giraph.comm.SendPartitionCache:
>> SendPartitionCache: maxVerticesPerTransfer = 10000
>> 2013-11-05 03:43:34,832 INFO org.apache.giraph.comm.SendPartitionCache:
>> SendPartitionCache: maxEdgesPerTransfer = 80000
>> 2013-11-05 03:43:34,844 INFO org.apache.giraph.worker.InputSplitsHandler:
>> reserveInputSplit: Reserved input split path
>> /_hadoopBsp/job_201311040001_0018/_vertexInputSplitDir/0, overall roughly
>> 0.0% input splits reserved
>> 2013-11-05 03:43:34,851 INFO
>> org.apache.giraph.worker.InputSplitsCallable: getInputSplit: Reserved
>> /_hadoopBsp/job_201311040001_0018/_vertexInputSplitDir/0 from ZooKeeper and
>> got input split ''org.apache.giraph.bsp.BspInputSplit, index=0, num=1'
>> 2013-11-05 03:43:34,855 INFO org.apache.giraph.partition.PartitionUtils:
>> computePartitionCount: Creating 1, default would have been 1 partitions.
>> 2013-11-05 03:44:34,842 INFO org.apache.giraph.utils.ProgressableUtils:
>> waitFor: Future result not ready yet
>> java.util.concurrent.FutureTask@2b8ca663
>> 2013-11-05 03:44:34,844 INFO org.apache.giraph.utils.ProgressableUtils:
>> waitFor: Waiting for
>> org.apache.giraph.utils.ProgressableUtils$FutureWaitable@29978933
>> 2013-11-05 03:45:34,845 INFO org.apache.giraph.utils.ProgressableUtils:
>> waitFor: Future result not ready yet
>> java.util.concurrent.FutureTask@2b8ca663
>> 2013-11-05 03:45:34,845 INFO org.apache.giraph.utils.ProgressableUtils:
>> waitFor: Waiting for
>> org.apache.giraph.utils.ProgressableUtils$FutureWaitable@29978933
>>
>>
>>
>> Thanks for any hint that might help me use the RandomInputFormat.
>>
>> Best wishes
>> Mirko
>>
>>
>>
>>
>>
>> On Mon, Nov 4, 2013 at 7:38 PM, Mirko Kämpf <mirko.kaempf@cloudera.com> wrote:
>>
>>> That helped me a lot. Thanks.
>>>
>>> Mirko
>>>
>>>
>>> On Mon, Nov 4, 2013 at 7:33 PM, Claudio Martella <claudio.martella@gmail.com> wrote:
>>>
>>>> Yes, you'll have to make sure that the PseudoRandomEdgeInputFormat
>>>> provides the right types.
>>>> The code for the Watts-Strogatz model is in the same package as the
>>>> pseudorandom... but in trunk and not in 1.0.
>>>>
>>>>
>>>> On Mon, Nov 4, 2013 at 12:14 PM, Mirko Kämpf <mirko.kaempf@cloudera.com> wrote:
>>>>
>>>>> Thanks, Claudio.
>>>>>
>>>>> I conclude from your mail that I have to create my own
>>>>> PseudoRandomEdgeInputFormat and PseudoRandomVertexInputFormat with types
>>>>> that fit the algorithm I want to use. So I misunderstood the concept:
>>>>> not every InputFormat fits any given algorithm. Is this right?
>>>>>
>>>>> But what about the *config parameters* I have to provide for the
>>>>> PseudoRandom ... InputFormat, and where is the code for the
>>>>> *Watts-Strogatz model* you mentioned in a previous post?
>>>>>
>>>>> Best wishes
>>>>>  Mirko
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>    Claudio Martella
>>>>    claudio.martella@gmail.com
>>>>
>>>
>>>
>>>
>>>
>
>
> --
>    Claudio Martella
>    claudio.martella@gmail.com
>



-- 
Mirko Kämpf

*Trainer* @ Cloudera

tel: +49 *176 20 63 51 99*
skype: *kamir1604*
mirko@cloudera.com
