giraph-user mailing list archives

From jerome richard <jeromerichard...@msn.com>
Subject RE: Scaling Problem
Date Fri, 26 Jul 2013 16:18:19 GMT



    "Can you paste your cluster information ?"
What kind of information do you need? How can I get this information?
    "What are your message types?"
The message type is just LongWritable. I don't use collections during the graph processing; I use collections only to load the input graph, and that part seems to work perfectly. Is it possible to avoid allocating primitive writables (like LongWritable) to improve performance and use less memory?
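For illustration only, and not the attached code: below is a minimal sketch of the usual way to avoid per-message allocation, assuming the newer BasicComputation-style Giraph API (older releases use a Vertex subclass with compute(Iterable<M>) instead). The class name and the min-propagation logic are placeholders in the spirit of a connexity test.

    import org.apache.giraph.graph.BasicComputation;
    import org.apache.giraph.graph.Vertex;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;

    // Hypothetical sketch (not the attached VertexComputation.java): propagate
    // the minimum vertex id while reusing a single LongWritable for outgoing
    // messages instead of allocating a new writable per message.
    public class MinLabelComputation extends
        BasicComputation<LongWritable, LongWritable, NullWritable, LongWritable> {

      // One reusable message object. Giraph serializes a message when it is
      // sent, so mutating the object afterwards is normally safe; verify this
      // for the message store of your Giraph version.
      private final LongWritable reusableMsg = new LongWritable();

      @Override
      public void compute(Vertex<LongWritable, LongWritable, NullWritable> vertex,
          Iterable<LongWritable> messages) {
        long min = getSuperstep() == 0
            ? vertex.getId().get()
            : vertex.getValue().get();
        for (LongWritable msg : messages) {
          // Read the primitive value only; keep no reference to the writable.
          min = Math.min(min, msg.get());
        }
        if (getSuperstep() == 0 || min < vertex.getValue().get()) {
          vertex.getValue().set(min);   // mutate the vertex value in place
          reusableMsg.set(min);
          sendMessageToAllEdges(vertex, reusableMsg);
        }
        vertex.voteToHalt();
      }
    }

How much garbage-collection pressure this removes depends on the message store, but it at least avoids one allocation per outgoing message.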
    "How you invoke the job?"
Here is the command typed in my terminal to start the job:

    hadoop jar hadoop/jars/test-connexity.jar \
        lifo.giraph.test.Main \
        /test-connexity \
        /test-connexity-output \
        10

The first Giraph argument is the input file, the second is the output file and the last is the number of workers. Please find attached the code of my Giraph application: Main.java configures and starts the job, VertexComputation.java computes the data, and the two last files define how to load the input graph and save the output graph.
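Since the attachment is not reproduced in this archive, the following is only a hypothetical sketch of what such a Main driver typically looks like against the Giraph 1.1 API; the computation and input/output format class names are placeholders rather than the attached ones, and older releases use setVertexClass rather than setComputationClass.

    import org.apache.giraph.conf.GiraphConfiguration;
    import org.apache.giraph.io.formats.GiraphFileInputFormat;
    import org.apache.giraph.job.GiraphJob;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Hypothetical driver sketch: args = input path, output path, worker count.
    public class Main {
      public static void main(String[] args) throws Exception {
        String input = args[0];
        String output = args[1];
        int workers = Integer.parseInt(args[2]);

        GiraphConfiguration conf = new GiraphConfiguration();
        // Placeholder class names standing in for the attached files.
        conf.setComputationClass(VertexComputation.class);
        conf.setVertexInputFormatClass(TestConnexityVertexInputFormat.class);
        conf.setVertexOutputFormatClass(TestConnexityVertexOutputFormat.class);
        // min workers, max workers, minimum percent of workers that must respond
        conf.setWorkerConfiguration(workers, workers, 100.0f);

        GiraphJob job = new GiraphJob(conf, "test-connexity");
        GiraphFileInputFormat.addVertexInputPath(conf, new Path(input));
        FileOutputFormat.setOutputPath(job.getInternalJob(), new Path(output));
        System.exit(job.run(true) ? 0 : 1);
      }
    }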
PS: I'm not a native English speaker, so I'm sorry if I make some language mistakes.
Thanks for your help.

Date: Fri, 26 Jul 2013 08:13:22 -0700
From: aching@apache.org
To: user@giraph.apache.org
Subject: Re: Scaling Problem


  
    
  
  
Hi guys,

At some point, we do need to help with a guide for conserving memory, but this is a generic Java problem. You can work around it by avoiding objects as much as possible by using primitives directly. If you need primitive collections, see fastutil, Trove, etc. Combiners also save a lot of memory for messages.
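As an illustration of the combiner point only (not code from this thread): a minimal sketch of a minimum-value combiner for LongWritable messages, written against the Giraph 1.0 Combiner API; newer releases use MessageCombiner instead, and the configuration setter used to register it differs between versions. For per-vertex primitive collections, fastutil types such as Long2LongOpenHashMap likewise avoid boxing every key and value.

    import org.apache.giraph.combiner.Combiner;
    import org.apache.hadoop.io.LongWritable;

    // Sketch of a min-combiner: only the smallest message per destination
    // vertex is kept, which can shrink message memory dramatically for
    // min-propagation style algorithms such as a connexity test.
    public class MinLongCombiner extends Combiner<LongWritable, LongWritable> {
      @Override
      public void combine(LongWritable vertexIndex, LongWritable originalMessage,
          LongWritable messageToCombine) {
        // Fold the incoming message into the one already stored for this vertex.
        if (messageToCombine.get() < originalMessage.get()) {
          originalMessage.set(messageToCombine.get());
        }
      }

      @Override
      public LongWritable createInitialMessage() {
        return new LongWritable(Long.MAX_VALUE);
      }
    }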

      

What are your message types?

Avery

On 7/26/13 6:53 AM, Puneet Jain wrote:

    
    
Can you paste your cluster information ? I am also struggling to make it work on 75M vertices and 100s of millions of edges.

On Fri, Jul 26, 2013 at 8:02 AM, jerome richard <jeromerichard111@msn.com> wrote:

          
            
Hi,

I encountered a critical scaling problem using Giraph. I made a very simple algorithm to test Giraph on large graphs: a connexity test. It works on relatively large graphs (3 072 441 nodes and 117 185 083 edges) but not on a very large graph (52 000 000 nodes and 2 000 000 000 edges).

In fact, during the processing of the biggest graph, the Giraph core seems to fail after superstep 14 (15 on some jobs). The input graph is 30 GB stored as text and the output is also stored as text. 9 workers are used to compute the graph.

Here is the stack trace (it is the same for the 9 workers):
    java.lang.IllegalStateException: run: Caught an unrecoverable exception exists: Failed to check /_hadoopBsp/job_201307260439_0006/_applicationAttemptsDir/0/_superstepDir/97/_addressesAndPartitions after 3 tries!
        at org.apache.giraph.graph.GraphMapper.run(GraphMapper.java:101)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Unknown Source)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
        at org.apache.hadoop.mapred.Child.main(Child.java:249)
    Caused by: java.lang.IllegalStateException: exists: Failed to check /_hadoopBsp/job_201307260439_0006/_applicationAttemptsDir/0/_superstepDir/97/_addressesAndPartitions after 3 tries!
        at org.apache.giraph.zk.ZooKeeperExt.exists(ZooKeeperExt.java:369)
        at org.apache.giraph.worker.BspServiceWorker.startSuperstep(BspServiceWorker.java:678)
        at org.apache.giraph.graph.GraphTaskManager.execute(GraphTaskManager.java:248)
        at org.apache.giraph.graph.GraphMapper.run(GraphMapper.java:91)
        ... 7 more
                    

                    
Could you help me to solve this problem? If you need the code of the program, I can put it here (the code is relatively tiny).

Thanks,
Jérôme.
--
--Puneet