From: Liannet Reyes
Date: Tue, 1 Apr 2014 18:27:37 +0200
Subject: Re: why this messages?
To: user@giraph.apache.org

Hi Nishant,

Have you looked at the jobtracker logs? (localhost:50030/jobtracker.jsp)
You will likely find the cause of the failure in the job task logs.

Make sure the tiny_graph file does not have empty lines at the end by mistake; that can cause this error.

Also, I once ran into the same "Loading data ... min free memory on worker" message when I tried to use more workers than mapred.tasktracker.map.tasks.maximum allows. I guess this is the normal behaviour, since it is the user's responsibility to guarantee that the number of workers is no greater than mapred.tasktracker.map.tasks.maximum - 1 (one map slot is taken by the master). Am I right?
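A minimal sketch of where that ceiling is configured, assuming a Hadoop 1.x setup like the one in this thread (the value 4 is only an illustration, not taken from the thread; with 4 map slots, a Giraph job could run at most 3 workers plus 1 master):

```xml
<!-- mapred-site.xml on each tasktracker (Hadoop 1.x) -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
```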
However, this is not your case, as you are setting w=1.

Regards,
Liannet


2014-04-01 13:57 GMT+02:00 nishant gandhi :

> My code:
>
> import java.io.IOException;
> import java.util.Iterator;
>
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.DoubleWritable;
> import org.apache.hadoop.io.FloatWritable;
> import org.apache.giraph.edge.Edge;
> import org.apache.giraph.graph.Vertex;
> import org.apache.giraph.graph.BasicComputation;
>
> public class InDegree extends
>     BasicComputation<LongWritable, DoubleWritable, FloatWritable, DoubleWritable> {
>
>     @Override
>     public void compute(
>             Vertex<LongWritable, DoubleWritable, FloatWritable> v,
>             Iterable<DoubleWritable> msg) throws IOException {
>
>         if (getSuperstep() == 0) {
>             for (Edge<LongWritable, FloatWritable> i : v.getEdges()) {
>                 sendMessage(i.getTargetVertexId(), new DoubleWritable(1));
>             }
>         } else {
>             long sum = 0;
>             for (Iterator<DoubleWritable> iterator = msg.iterator(); iterator.hasNext();) {
>                 iterator.next(); // the iterator must be advanced; without this call the loop never terminates
>                 sum++;
>             }
>             v.setValue(new DoubleWritable(sum));
>             v.voteToHalt();
>         }
>     }
> }
>
> How I am running it:
>
> hadoop jar
> /usr/local/giraph/giraph-examples/target/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-1.2.1-jar-with-dependencies.jar
> org.apache.giraph.GiraphRunner InDegree -vif
> org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat
> -vip /input/tiny_graph.txt -vof
> org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexOutputFormat
> -op /output/InDegree -w 1
>
> I am using the same classic example tiny_graph file.
>
>
> On Tue, Apr 1, 2014 at 3:17 PM, ghufran malik wrote:
>
>> Hi,
>>
>> 14/03/31 15:48:01 INFO job.JobProgressTracker: Data from 1 workers -
>> Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
>> loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB,
>> average 50.18MB
>>
>> 14/03/31 15:48:06 INFO job.JobProgressTracker: Data from 1 workers -
>> Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
>> loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB,
>> average 50.18MB
>>
>> 14/03/31 15:48:11 INFO job.JobProgressTracker: Data from 1 workers -
>> Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
>> loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB,
>> average 50.18MB
>>
>> I may be wrong, but I have received this output before, and it had
>> something to do with the format of my text file. Is your InputFormat class
>> splitting the line by the separator pattern [\t ]? If so, are you
>> separating the values in your .txt file with a space or with a tab?
>>
>> Ghufran
>>
>>
>> On Tue, Apr 1, 2014 at 6:02 AM, Agrta Rawat wrote:
>>
>>> Perhaps you have not specified an EdgeInputFormat and EdgeOutputFormat in your
>>> jar run command. And it is just a message, not an exception, as you can see that
>>> your task runs.
>>>
>>> Regards,
>>> Agrta Rawat
>>>
>>>
>>> On Mon, Mar 31, 2014 at 10:09 PM, nishant gandhi <
>>> nishantgandhi99@gmail.com> wrote:
>>>
>>>> Why does this kind of error come up? What could be wrong? Is it related to the
>>>> Hadoop configuration or the Giraph code?
>>>>
>>>>
>>>> 14/03/31 15:47:29 INFO utils.ConfigurationUtils: No edge input format
>>>> specified. Ensure your InputFormat does not require one.
>>>> 14/03/31 15:47:29 INFO utils.ConfigurationUtils: No edge output format
>>>> specified. Ensure your OutputFormat does not require one.
>>>> 14/03/31 15:47:30 INFO job.GiraphJob: run: Since checkpointing is
>>>> disabled (default), do not allow any task retries (setting
>>>> mapred.map.max.attempts = 0, old value = 4)
>>>> 14/03/31 15:47:31 INFO job.GiraphJob: run: Tracking URL:
>>>> http://localhost:50030/jobdetails.jsp?jobid=job_201403310811_0012
>>>> 14/03/31 15:47:56 INFO
>>>> job.HaltApplicationUtils$DefaultHaltInstructionsWriter:
>>>> writeHaltInstructions: To halt after next superstep execute:
>>>> 'bin/halt-application --zkServer localhost:22181 --zkNode
>>>> /_hadoopBsp/job_201403310811_0012/_haltComputation'
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client
>>>> environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client
>>>> environment:host.name=localhost
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client
>>>> environment:java.version=1.7.0_21
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client
>>>> environment:java.vendor=Oracle Corporation
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client
>>>> environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client
>>>> environment:java.class.path=/usr/local/hadoop/bin/../conf:/usr/lib/jvm/java-7-openjdk-amd64/lib/tools.jar:/usr/local/hadoop/bin/..:/usr/local/hadoop/bin/../hadoop-core-0.20.203.0.jar:/usr/local/hadoop/bin/../lib/aspectjrt-1.6.5.jar:/usr/local/hadoop/bin/../lib/aspectjtools-1.6.5.jar:/usr/local/hadoop/bin/../lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/bin/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/bin/../lib/commons-cli-1.2.jar:/usr/local/hadoop/bin/../lib/commons-codec-1.4.jar:/usr/local/hadoop/bin/../lib/commons-collections-3.2.1.jar:/usr/local/hadoop/bin/../lib/commons-configuration-1.6.jar:/usr/local/hadoop/bin/../lib/commons-daemon-1.0.1.jar:/usr/local/hadoop/bin/../lib/commons-digester-1.8.jar:/usr/local/hadoop/bin/../lib/commons-el-1.0.jar:/usr/local/hadoop/bin/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop/bin/../lib/commons-lang-2.4.jar:/usr/local/hadoop/bin/../lib/commons-logging-1.1.1.jar:/usr/local/hadoop/bin/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop/bin/../lib/commons-math-2.1.jar:/usr/local/hadoop/bin/../lib/commons-net-1.4.1.jar:/usr/local/hadoop/bin/../lib/core-3.1.1.jar:/usr/local/hadoop/bin/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop/bin/../lib/jackson-core-asl-1.0.1.jar:/usr/local/hadoop/bin/../lib/jackson-mapper-asl-1.0.1.jar:/usr/local/hadoop/bin/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop/bin/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop/bin/../lib/jets3t-0.6.1.jar:/usr/local/hadoop/bin/../lib/jetty-6.1.26.jar:/usr/local/hadoop/bin/../lib/jetty-util-6.1.26.jar:/usr/local/hadoop/bin/../lib/jsch-0.1.42.jar:/usr/local/hadoop/bin/../lib/junit-4.5.jar:/usr/local/hadoop/bin/../lib/kfs-0.2.2.jar:/usr/local/hadoop/bin/../lib/log4j-1.2.15.jar:/usr/local/hadoop/bin/../lib/mockito-all-1.8.5.jar:/usr/local/hadoop/bin/../lib/oro-2.0.8.jar:/usr/local/hadoop/bin/../lib/servlet-api-2.5-20081211.jar:/usr/local/hadoop/bin/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop/bin/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop/bin/../lib/xmlenc-0.52.jar:/usr/local/hadoop/bin/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop/bin/../lib/jsp-2.1/jsp-api-2.1.jar
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client
>>>> environment:java.library.path=/usr/local/hadoop/bin/../lib/native/Linux-amd64-64
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client
>>>> environment:java.io.tmpdir=/tmp
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client
>>>> environment:java.compiler=<NA>
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client
>>>> environment:os.arch=amd64
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client
>>>> environment:os.version=3.8.0-23-generic
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client
>>>> environment:user.name=hduser
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client
>>>> environment:user.home=/home/hduser
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client
>>>> environment:user.dir=/home/hduser
>>>> 14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Initiating client
>>>> connection, connectString=localhost:22181 sessionTimeout=60000
>>>> watcher=org.apache.giraph.job.JobProgressTracker@599a2875
>>>> 14/03/31 15:47:56 INFO mapred.JobClient: Running job:
>>>> job_201403310811_0012
>>>> 14/03/31 15:47:56 INFO zookeeper.ClientCnxn: Opening socket connection
>>>> to server localhost/127.0.0.1:22181.
Will not attempt to authenticate
>>>> using SASL (unknown error)
>>>> 14/03/31 15:47:56 INFO zookeeper.ClientCnxn: Socket connection
>>>> established to localhost/127.0.0.1:22181, initiating session
>>>> 14/03/31 15:47:56 INFO zookeeper.ClientCnxn: Session establishment
>>>> complete on server localhost/127.0.0.1:22181, sessionid =
>>>> 0x14518d346810002, negotiated timeout = 600000
>>>> 14/03/31 15:47:56 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
>>>> loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB,
>>>> average 50.18MB
>>>> 14/03/31 15:47:57 INFO mapred.JobClient: map 50% reduce 0%
>>>> 14/03/31 15:48:00 INFO mapred.JobClient: map 100% reduce 0%
>>>> 14/03/31 15:48:01 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
>>>> loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB,
>>>> average 50.18MB
>>>> 14/03/31 15:48:06 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
>>>> loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB,
>>>> average 50.18MB
>>>> 14/03/31 15:48:11 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
>>>> loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB,
>>>> average 50.18MB
>>>> 14/03/31 15:48:16 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 70.47MB, average 70.47MB
>>>> 14/03/31 15:48:21 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 70.47MB, average 70.47MB
>>>> 14/03/31 15:48:26 INFO
job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 70.47MB, average 70.47MB
>>>> 14/03/31 15:48:31 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 70.47MB, average 70.47MB
>>>> 14/03/31 15:48:36 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 70.29MB, average 70.29MB
>>>> 14/03/31 15:48:41 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 70.29MB, average 70.29MB
>>>> 14/03/31 15:48:46 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 70.29MB, average 70.29MB
>>>> 14/03/31 15:48:51 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 70.29MB, average 70.29MB
>>>> 14/03/31 15:48:56 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 69.44MB, average 69.44MB
>>>> 14/03/31 15:49:01 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 69.44MB, average 69.44MB
>>>> 14/03/31 15:49:06 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 69.44MB, average 69.44MB
>>>> 14/03/31
15:49:11 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 69.22MB, average 69.22MB
>>>> 14/03/31 15:49:16 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 69.22MB, average 69.22MB
>>>> 14/03/31 15:49:21 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 69.22MB, average 69.22MB
>>>> 14/03/31 15:49:26 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 69.22MB, average 69.22MB
>>>> 14/03/31 15:49:31 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 69.22MB, average 69.22MB
>>>> 14/03/31 15:49:36 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 69.22MB, average 69.22MB
>>>> 14/03/31 15:49:41 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 69.21MB, average 69.21MB
>>>> 14/03/31 15:49:46 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 69.21MB, average 69.21MB
>>>> 14/03/31 15:49:51 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 68.86MB, average 68.86MB
>>>>
14/03/31 15:49:56 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 68.86MB, average 68.86MB
>>>> 14/03/31 15:50:01 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 68.86MB, average 68.86MB
>>>> 14/03/31 15:50:06 INFO job.JobProgressTracker: Data from 1 workers -
>>>> Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>>>> computed; min free memory on worker 1 - 68.86MB, average 68.86MB
>>>> ^Z
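As a side note on the compute() method quoted above: a message-count loop written with an explicit Iterator only terminates if next() is called; if the iterator is never advanced, hasNext() stays true forever, which matches a job stuck at "Compute superstep 1: 0 out of 5 vertices computed". A minimal standalone sketch of that counting pattern, with plain java.util types standing in for Giraph's DoubleWritable (MessageCountSketch is a hypothetical name, not part of this thread):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Standalone sketch of the message-counting pattern from the quoted
// compute() method, using plain Double in place of DoubleWritable.
public class MessageCountSketch {

    // Counts the incoming messages. it.next() must be called on every
    // pass; without it, hasNext() never changes and the loop spins forever.
    static long countMessages(Iterable<Double> messages) {
        long sum = 0;
        for (Iterator<Double> it = messages.iterator(); it.hasNext(); ) {
            it.next();  // advance the iterator
            sum++;
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Double> msgs = Arrays.asList(1.0, 1.0, 1.0);
        System.out.println(countMessages(msgs));  // prints 3
    }
}
```

In Giraph code the same thing is usually written as an enhanced for loop (`for (DoubleWritable m : msg) sum++;`), which advances the iterator implicitly and cannot hang this way.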