From: Claudio Martella <claudio.martella@gmail.com>
Date: Tue, 5 Nov 2013 13:55:54 +0100
Subject: Re: Using the RandomEdge ... & RandomVertex InputFormat
To: user@giraph.apache.org

I don't see errors in your logs. The parameter that defines the total
number of vertices created is
giraph.pseudoRandomInputFormat.aggregateVertices.

On Tue, Nov 5, 2013 at 1:05 PM, Mirko Kämpf <mirko.kaempf@cloudera.com> wrote:

> Hi,
>
> after I changed the code of the PseudoRandomVertex... and
> PseudoRandomEdgeInputFormat, I can compile and run the job, but it will
> stop after the 10-minute timeout.
>
> I think there is still some parameter missing or defined wrong.
>
> Claudio wrote that one has to set the number of vertices, but how?
>
> This is my call:
>
> hadoop jar giraph-ex.jar org.apache.giraph.GiraphRunner \
>   -Dgiraph.zkList=127.0.0.1:2181 \
>   -libjars giraph-core.jar org.apache.giraph.examples.SSPV2 \
>   -vif org.apache.giraph.io.formats.PseudoRandomVertexInputFormat2 \
>   -eif org.apache.giraph.io.formats.PseudoRandomEdgeInputFormat2 \
>   -of org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
>   -op /user/cloudera/goutput/shortestpaths_rand_$NOW \
>   -w 1 \
>   -ca giraph.pseudoRandomInputFormat.edgesPerVertex=100 \
>   -ca giraph.pseudoRandomInputFormat.aggregateVertices=2 \
>   -ca giraph.pseudoRandomInputFormat.localEdgesMinRatio=200
>
> What parameter defines the number of vertices I want to create?
> My test graph should have 1000 nodes.
>
> There are no exceptions, but also no vertices are created.
>
> The code runs and gives the following log:
>
> mapper 1:
>
> 2013-11-05 03:43:30,666 WARN mapreduce.Counters: Group
> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
> org.apache.hadoop.mapreduce.TaskCounter instead
> 2013-11-05 03:43:32,535 WARN org.apache.hadoop.conf.Configuration:
> session.id is deprecated. Instead, use dfs.metrics.session-id
> 2013-11-05 03:43:32,536 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=MAP, sessionId=
> 2013-11-05 03:43:33,393 INFO org.apache.hadoop.util.ProcessTree: setsid
> exited with exit code 0
> 2013-11-05 03:43:33,420 INFO org.apache.hadoop.mapred.Task: Using
> ResourceCalculatorPlugin :
> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@307b4703
> 2013-11-05 03:43:33,965 INFO org.apache.hadoop.mapred.MapTask: Processing
> split: 'org.apache.giraph.bsp.BspInputSplit, index=-1, num=-1
> 2013-11-05 03:43:33,992 INFO org.apache.giraph.graph.GraphTaskManager:
> setup: Log level remains at info
> 2013-11-05 03:43:34,088 INFO org.apache.giraph.graph.GraphTaskManager:
> Distributed cache is empty. Assuming fatjar.
> 2013-11-05 03:43:34,088 INFO org.apache.giraph.graph.GraphTaskManager:
> setup: classpath @
> /mapred/local/taskTracker/cloudera/jobcache/job_201311040001_0018/jars/job.jar
> for job Giraph: org.apache.giraph.examples.SSPV2
> 2013-11-05 03:43:34,088 INFO org.apache.giraph.graph.GraphTaskManager:
> setup: Starting up BspServiceWorker...
> 2013-11-05 03:43:34,136 INFO org.apache.giraph.bsp.BspService: BspService:
> Connecting to ZooKeeper with job job_201311040001_0018, 1 on 127.0.0.1:2181
> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.5-cdh4.4.0--1, built on 09/03/2013 16:14 GMT
> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:host.name=localhost.localdomain
> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.version=1.6.0_32
> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.vendor=Sun Microsystems Inc.
> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.home=/usr/java/jdk1.6.0_32/jre
> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.class.path=[full CDH 4.4.0 Hadoop/Giraph classpath listing trimmed]
> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.library.path=/usr/lib/hadoop-0.20-mapreduce/lib/native/Linux-amd64-64:/mapred/local/taskTracker/cloudera/jobcache/job_201311040001_0018/attempt_201311040001_0018_m_000001_0/work
> 2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/mapred/local/taskTracker/cloudera/jobcache/job_201311040001_0018/attempt_201311040001_0018_m_000001_0/work/tmp
> 2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.compiler=
> 2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.name=Linux
> 2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.version=2.6.32-220.el6.x86_64
> 2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.name=mapred
> 2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.home=/usr/lib/hadoop
> 2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.dir=/mapred/local/taskTracker/cloudera/jobcache/job_201311040001_0018/attempt_201311040001_0018_m_000001_0/work
> 2013-11-05 03:43:34,153 INFO org.apache.zookeeper.ZooKeeper: Initiating
> client connection, connectString=127.0.0.1:2181 sessionTimeout=60000
> watcher=org.apache.giraph.worker.BspServiceWorker@6098f192
> 2013-11-05 03:43:34,209 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server localhost.localdomain/127.0.0.1:2181. Will
> not attempt to authenticate using SASL (Unable to locate a login
> configuration)
> 2013-11-05 03:43:34,210 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to localhost.localdomain/127.0.0.1:2181,
> initiating session
> 2013-11-05 03:43:34,227 INFO org.apache.zookeeper.ClientCnxn: Session
> establishment complete on server localhost.localdomain/127.0.0.1:2181,
> sessionid = 0x14222216d540030, negotiated timeout = 60000
> 2013-11-05 03:43:34,232 INFO org.apache.giraph.bsp.BspService: process:
> Asynchronous connection complete.
> 2013-11-05 03:43:34,239 INFO
> org.apache.giraph.comm.netty.NettyWorkerServer: createMessageStoreFactory:
> Using ByteArrayMessagesPerVertexStore since there is no combiner
> 2013-11-05 03:43:34,394 INFO org.apache.giraph.comm.netty.NettyServer:
> NettyServer: Using execution handler with 8 threads after
> requestFrameDecoder.
> 2013-11-05 03:43:34,480 INFO org.apache.giraph.comm.netty.NettyServer:
> start: Started server communication server: localhost.localdomain/127.0.0.1:30001
> with up to 16 threads on bind attempt 0 with
> sendBufferSize = 32768 receiveBufferSize = 524288 backlog = 1
> 2013-11-05 03:43:34,505 INFO org.apache.giraph.comm.netty.NettyClient:
> NettyClient: Using execution handler with 8 threads after requestEncoder.
> 2013-11-05 03:43:34,539 INFO org.apache.giraph.graph.GraphTaskManager:
> setup: Registering health of this worker...
> 2013-11-05 03:43:34,607 INFO org.apache.giraph.bsp.BspService:
> getJobState: Job state already exists
> (/_hadoopBsp/job_201311040001_0018/_masterJobState)
> 2013-11-05 03:43:34,631 INFO org.apache.giraph.bsp.BspService:
> getApplicationAttempt: Node
> /_hadoopBsp/job_201311040001_0018/_applicationAttemptsDir already exists!
> 2013-11-05 03:43:34,633 INFO org.apache.giraph.bsp.BspService:
> getApplicationAttempt: Node
> /_hadoopBsp/job_201311040001_0018/_applicationAttemptsDir already exists!
> 2013-11-05 03:43:34,652 INFO org.apache.giraph.worker.BspServiceWorker:
> registerHealth: Created my health node for attempt=0, superstep=-1 with
> /_hadoopBsp/job_201311040001_0018/_applicationAttemptsDir/0/_superstepDir/-1/_workerHealthyDir/localhost.localdomain_1
> and workerInfo= Worker(hostname=localhost.localdomain, MRtaskID=1,
> port=30001)
> 2013-11-05 03:43:34,725 INFO org.apache.giraph.comm.netty.NettyServer:
> start: Using Netty without authentication.
> 2013-11-05 03:43:34,761 INFO org.apache.giraph.bsp.BspService: process:
> partitionAssignmentsReadyChanged (partitions are assigned)
> 2013-11-05 03:43:34,773 INFO org.apache.giraph.worker.BspServiceWorker:
> startSuperstep: Master(hostname=localhost.localdomain, MRtaskID=0,
> port=30000)
> 2013-11-05 03:43:34,773 INFO org.apache.giraph.worker.BspServiceWorker:
> startSuperstep: Ready for computation on superstep -1 since worker
> selection and vertex range assignments are done in
> /_hadoopBsp/job_201311040001_0018/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
> 2013-11-05 03:43:34,775 INFO org.apache.giraph.comm.netty.NettyClient:
> Using Netty without authentication.
> 2013-11-05 03:43:34,806 INFO org.apache.giraph.comm.netty.NettyClient:
> connectAllAddresses: Successfully added 1 connections, (1 total connected)
> 0 failed, 0 failures total.
> 2013-11-05 03:43:34,818 INFO org.apache.giraph.worker.BspServiceWorker:
> loadInputSplits: Using 1 thread(s), originally 1 threads(s) for 1 total
> splits.
> 2013-11-05 03:43:34,831 INFO org.apache.giraph.comm.SendPartitionCache:
> SendPartitionCache: maxVerticesPerTransfer = 10000
> 2013-11-05 03:43:34,832 INFO org.apache.giraph.comm.SendPartitionCache:
> SendPartitionCache: maxEdgesPerTransfer = 80000
> 2013-11-05 03:43:34,844 INFO org.apache.giraph.worker.InputSplitsHandler:
> reserveInputSplit: Reserved input split path
> /_hadoopBsp/job_201311040001_0018/_vertexInputSplitDir/0, overall roughly
> 0.0% input splits reserved
> 2013-11-05 03:43:34,851 INFO org.apache.giraph.worker.InputSplitsCallable:
> getInputSplit: Reserved
> /_hadoopBsp/job_201311040001_0018/_vertexInputSplitDir/0 from ZooKeeper and
> got input split 'org.apache.giraph.bsp.BspInputSplit, index=0, num=1'
> 2013-11-05 03:43:34,855 INFO org.apache.giraph.partition.PartitionUtils:
> computePartitionCount: Creating 1, default would have been 1 partitions.
> 2013-11-05 03:44:34,842 INFO org.apache.giraph.utils.ProgressableUtils:
> waitFor: Future result not ready yet
> java.util.concurrent.FutureTask@2b8ca663
> 2013-11-05 03:44:34,844 INFO org.apache.giraph.utils.ProgressableUtils:
> waitFor: Waiting for
> org.apache.giraph.utils.ProgressableUtils$FutureWaitable@29978933
> 2013-11-05 03:45:34,845 INFO org.apache.giraph.utils.ProgressableUtils:
> waitFor: Future result not ready yet
> java.util.concurrent.FutureTask@2b8ca663
> 2013-11-05 03:45:34,845 INFO org.apache.giraph.utils.ProgressableUtils:
> waitFor: Waiting for
> org.apache.giraph.utils.ProgressableUtils$FutureWaitable@29978933
>
> Thanks for any hint which might help me use the RandomInputFormat.
>
> Best wishes
> Mirko
>
>
> On Mon, Nov 4, 2013 at 7:38 PM, Mirko Kämpf <mirko.kaempf@cloudera.com> wrote:
>
>> That helped me a lot. Thanks.
>>
>> Mirko
>>
>>
>> On Mon, Nov 4, 2013 at 7:33 PM, Claudio Martella <
>> claudio.martella@gmail.com> wrote:
>>
>>> Yes, you'll have to make sure that the PseudoRandomEdgeInputFormat
>>> provides the right types.
>>> The code for the Watts-Strogatz model is in the same package as the
>>> PseudoRandom... formats, but in trunk and not in 1.0.
>>>
>>>
>>> On Mon, Nov 4, 2013 at 12:14 PM, Mirko Kämpf wrote:
>>>
>>>> Thanks, Claudio.
>>>>
>>>> I conclude from your mail that I have to create my own
>>>> PseudoRandomEdgeInputFormat and PseudoRandomVertexInputFormat with
>>>> types that fit the algorithm I want to use. So I misunderstood the
>>>> concept, and not every InputFormat fits any given algorithm. Is this
>>>> right?
>>>>
>>>> But what about the *config parameters* I have to provide for the
>>>> PseudoRandom... InputFormat, and where is the code for the *Watts-Strogatz
>>>> model* you mentioned in a previous post?
>>>>
>>>> Best wishes
>>>> Mirko
>>>
>>>
>>> --
>>> Claudio Martella
>>> claudio.martella@gmail.com

--
Claudio Martella
claudio.martella@gmail.com
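[Editorial note] To make Claudio's answer concrete: assuming that giraph.pseudoRandomInputFormat.aggregateVertices is indeed the total number of generated vertices (as stated above) and that edgesPerVertex is the out-degree of each vertex (inferred from the parameter name, not verified against the Giraph source), Mirko's call would only need aggregateVertices raised from 2 to 1000. The sketch below prints the adjusted command rather than executing it, since running it requires the cluster, jars, and classes from the thread; all jar, class, and path names are copied verbatim from Mirko's message and are otherwise unverified.

```shell
# Assumed semantics (hypothetical, inferred from the thread):
#   aggregateVertices - total vertices the pseudo-random input format generates
#   edgesPerVertex    - out-degree of each generated vertex
AGGREGATE_VERTICES=1000
EDGES_PER_VERTEX=100

# Sanity check: the generated graph should then contain V * d edges.
echo "expected edges: $(( AGGREGATE_VERTICES * EDGES_PER_VERTEX ))"

# Build and print the adjusted invocation (run it manually on the cluster).
CMD="hadoop jar giraph-ex.jar org.apache.giraph.GiraphRunner \
 -Dgiraph.zkList=127.0.0.1:2181 \
 -libjars giraph-core.jar org.apache.giraph.examples.SSPV2 \
 -vif org.apache.giraph.io.formats.PseudoRandomVertexInputFormat2 \
 -eif org.apache.giraph.io.formats.PseudoRandomEdgeInputFormat2 \
 -of org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
 -op /user/cloudera/goutput/shortestpaths_rand_\$NOW \
 -w 1 \
 -ca giraph.pseudoRandomInputFormat.edgesPerVertex=${EDGES_PER_VERTEX} \
 -ca giraph.pseudoRandomInputFormat.aggregateVertices=${AGGREGATE_VERTICES} \
 -ca giraph.pseudoRandomInputFormat.localEdgesMinRatio=200"
echo "$CMD"
```

The only substantive change from the original call is the aggregateVertices value; the other -ca options are left exactly as Mirko set them.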
2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/hadoop-0.20-mapreduce/lib/native/Linux-amd64-64:/mapred/local/taskTracker/cloudera/jobcache/job_201311040001_0018/attempt_201311040001_0018_m_000001_0/work
2013-11-05 03:43:34,147 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/mapred/local/taskTracker/cloudera/jobcache/job_201311040001_0018/attempt_201311040001_0018_m_000001_0/work/tmp
2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=2.6.32-220.el6.x86_64
2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=mapred
2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/usr/lib/hadoop
2013-11-05 03:43:34,152 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/mapred/local/taskTracker/cloudera/jobcache/job_201311040001_0018/attempt_201311040001_0018_m_000001_0/work
2013-11-05 03:43:34,153 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=60000 watcher=org.apache.giraph.worker.BspServiceWorker@6098f192
2013-11-05 03:43:34,209 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
2013-11-05 03:43:34,210 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to localhost.localdomain/127.0.0.1:2181, initiating session
2013-11-05 03:43:34,227 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost.localdomain/127.0.0.1:2181, sessionid = 0x14222216d540030, negotiated timeout = 60000
2013-11-05 03:43:34,232 INFO org.apache.giraph.bsp.BspService: process: Asynchronous connection complete.
2013-11-05 03:43:34,239 INFO org.apache.giraph.comm.netty.NettyWorkerServer: createMessageStoreFactory: Using ByteArrayMessagesPerVertexStore since there is no combiner
2013-11-05 03:43:34,394 INFO org.apache.giraph.comm.netty.NettyServer: NettyServer: Using execution handler with 8 threads after requestFrameDecoder.
2013-11-05 03:43:34,480 INFO org.apache.giraph.comm.netty.NettyServer: start: Started server communication server: localhost.localdomain/127.0.0.1:30001 with up to 16 threads on bind attempt 0 with sendBufferSize = 32768 receiveBufferSize = 524288 backlog = 1
2013-11-05 03:43:34,505 INFO org.apache.giraph.comm.netty.NettyClient: NettyClient: Using execution handler with 8 threads after requestEncoder.
2013-11-05 03:43:34,539 INFO org.apache.giraph.graph.GraphTaskManager: setup: Registering health of this worker...
2013-11-05 03:43:34,607 INFO org.apache.giraph.bsp.BspService: getJobState: Job state already exists (/_hadoopBsp/job_201311040001_0018/_masterJobState)
2013-11-05 03:43:34,631 INFO org.apache.giraph.bsp.BspService: getApplicationAttempt: Node /_hadoopBsp/job_201311040001_0018/_applicationAttemptsDir already exists!
2013-11-05 03:43:34,633 INFO org.apache.giraph.bsp.BspService: getApplicationAttempt: Node /_hadoopBsp/job_201311040001_0018/_applicationAttemptsDir already exists!
2013-11-05 03:43:34,652 INFO org.apache.giraph.worker.BspServiceWorker: registerHealth: Created my health node for attempt=0, superstep=-1 with /_hadoopBsp/job_201311040001_0018/_applicationAttemptsDir/0/_superstepDir/-1/_workerHealthyDir/localhost.localdomain_1 and workerInfo=Worker(hostname=localhost.localdomain, MRtaskID=1, port=30001)
2013-11-05 03:43:34,725 INFO org.apache.giraph.comm.netty.NettyServer: start: Using Netty without authentication.
2013-11-05 03:43:34,761 INFO org.apache.giraph.bsp.BspService: process: partitionAssignmentsReadyChanged (partitions are assigned)
2013-11-05 03:43:34,773 INFO org.apache.giraph.worker.BspServiceWorker: startSuperstep: Master(hostname=localhost.localdomain, MRtaskID=0, port=30000)
2013-11-05 03:43:34,773 INFO org.apache.giraph.worker.BspServiceWorker: startSuperstep: Ready for computation on superstep -1 since worker selection and vertex range assignments are done in /_hadoopBsp/job_201311040001_0018/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
2013-11-05 03:43:34,775 INFO org.apache.giraph.comm.netty.NettyClient: Using Netty without authentication.
2013-11-05 03:43:34,806 INFO org.apache.giraph.comm.netty.NettyClient: connectAllAddresses: Successfully added 1 connections, (1 total connected) 0 failed, 0 failures total.
2013-11-05 03:43:34,818 INFO org.apache.giraph.worker.BspServiceWorker: loadInputSplits: Using 1 thread(s), originally 1 threads(s) for 1 total splits.
2013-11-05 03:43:34,831 INFO org.apache.giraph.comm.SendPartitionCache: SendPartitionCache: maxVerticesPerTransfer = 10000
2013-11-05 03:43:34,832 INFO org.apache.giraph.comm.SendPartitionCache: SendPartitionCache: maxEdgesPerTransfer = 80000
2013-11-05 03:43:34,844 INFO org.apache.giraph.worker.InputSplitsHandler: reserveInputSplit: Reserved input split path /_hadoopBsp/job_201311040001_0018/_vertexInputSplitDir/0, overall roughly 0.0% input splits reserved
2013-11-05 03:43:34,851 INFO org.apache.giraph.worker.InputSplitsCallable: getInputSplit: Reserved /_hadoopBsp/job_201311040001_0018/_vertexInputSplitDir/0 from ZooKeeper and got input split 'org.apache.giraph.bsp.BspInputSplit, index=0, num=1'
2013-11-05 03:43:34,855 INFO org.apache.giraph.partition.PartitionUtils: computePartitionCount: Creating 1, default would have been 1 partitions.
2013-11-05 03:44:34,842 INFO org.apache.giraph.utils.ProgressableUtils: waitFor: Future result not ready yet java.util.concurrent.FutureTask@2b8ca663
2013-11-05 03:44:34,844 INFO org.apache.giraph.utils.ProgressableUtils: waitFor: Waiting for org.apache.giraph.utils.ProgressableUtils$FutureWaitable@29978933
2013-11-05 03:45:34,845 INFO org.apache.giraph.utils.ProgressableUtils: waitFor: Future result not ready yet java.util.concurrent.FutureTask@2b8ca663
2013-11-05 03:45:34,845 INFO org.apache.giraph.utils.ProgressableUtils: waitFor: Waiting for org.apache.giraph.utils.ProgressableUtils$FutureWaitable@29978933
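[Editor's note: the repeating waitFor lines above show the worker idling while the vertex input split is generated, which matches Claudio's answer earlier in the thread that giraph.pseudoRandomInputFormat.aggregateVertices sets the total number of generated vertices. As an untested sketch only, the original command rewritten for a 1000-vertex graph would look roughly like the following; the SSPV2 and PseudoRandom*InputFormat2 class names are taken from Mirko's own command, and this assumes his modified formats read the same custom arguments as the stock Giraph 1.0 PseudoRandom formats.]

```sh
# Hedged sketch, not verified on a cluster: same invocation as in the
# thread, with aggregateVertices (the total-vertex knob) set to 1000.
hadoop jar giraph-ex.jar org.apache.giraph.GiraphRunner \
  -Dgiraph.zkList=127.0.0.1:2181 \
  -libjars giraph-core.jar \
  org.apache.giraph.examples.SSPV2 \
  -vif org.apache.giraph.io.formats.PseudoRandomVertexInputFormat2 \
  -eif org.apache.giraph.io.formats.PseudoRandomEdgeInputFormat2 \
  -of org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
  -op /user/cloudera/goutput/shortestpaths_rand_$NOW \
  -w 1 \
  -ca giraph.pseudoRandomInputFormat.aggregateVertices=1000 \
  -ca giraph.pseudoRandomInputFormat.edgesPerVertex=100
```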



Thanks for any hint which might help me use the RandomInputFormat.

Best wishes
Mirko





On Mon, Nov 4, 2013 at 7:38 PM, Mirko Kämpf <mirko.kaempf@cloudera.com> wrote:
That helped me a lot. Thanks.

Mirko


On Mon, Nov 4, 2013 at 7:33 PM, Claudio Martella <claudio.martella@gmail.com> wrote:
Yes, you'll have to make sure that the PseudoRandomEdgeInputFormat provides the right types.
The code for the Watts-Strogatz model is in the same package as the PseudoRandom... formats, but in trunk and not in 1.0.
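[Editor's note: one way to locate the Watts-Strogatz code Claudio mentions is to grep a checkout of Giraph trunk; the repository URL below is an assumption about the ASF git mirror as of late 2013, and the exact class names are not confirmed here, which is why a grep is used instead of naming them.]

```sh
# Hedged sketch: clone trunk (URL assumed) and search for the
# Watts-Strogatz generator classes next to the PseudoRandom formats.
git clone https://git-wip-us.apache.org/repos/asf/giraph.git
grep -ril "wattsstrogatz" --include="*.java" giraph/
```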


On Mon, Nov 4, 2013 at 12:14 PM, Mirko Kämpf <mirko.kaempf@cloudera.com> wrote:
Thanks, Claudio.
I conclude from your mail that I have to create my own PseudoRandomEdgeInputFormat and PseudoRandomVertexInputFormat with types that fit the algorithm I want to use. So I misunderstood the concept: not every InputFormat fits any given implemented algorithm. Is this right?

But what about the config parameters I have to provide for the PseudoRandom... InputFormat? And where is the code for the Watts-Strogatz model you mentioned in a previous post?

Best wishes
Mirko






--
   Claudio Martella
   claudio.martella@gmail.com






--
   Claudio Martella
   claudio.martella@gmail.com