hama-dev mailing list archives

From "Zhuang Kechen (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HAMA-640) Large graph computing cause out of memory
Date Thu, 13 Sep 2012 10:38:07 GMT
Zhuang Kechen created HAMA-640:

             Summary: Large graph computing cause out of memory
                 Key: HAMA-640
                 URL: https://issues.apache.org/jira/browse/HAMA-640
             Project: Hama
          Issue Type: Question
          Components: graph
    Affects Versions: 0.5.0
         Environment: hadoop-1.0.3   hama-0.5.0
            Reporter: Zhuang Kechen

When I tested PageRank on some small graphs in a distributed environment, everything ran fine.
When I uploaded a larger graph (758 MB) to HDFS, in exactly the same format, PageRank ran
correctly through supersteps 0, 1, and 2, but the job then failed with an out-of-memory error.
I didn't change the examples; the test used the hama-examples PageRank.
Could somebody please help? Thanks!
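For what it's worth, out-of-memory failures in BSP tasks are often addressed by raising the child
JVM heap. A minimal sketch of a hama-site.xml override, assuming the standard
`bsp.child.java.opts` property applies to this setup (the 2 GB value is only an illustration and
must be sized to the cluster's actual memory):

```xml
<!-- conf/hama-site.xml: raise the heap of each spawned BSP task JVM.
     Property name assumed from Hama's standard configuration; adjust -Xmx
     to fit available memory per groom server. -->
<configuration>
  <property>
    <name>bsp.child.java.opts</name>
    <value>-Xmx2048m</value>
  </property>
</configuration>
```

If the graph still does not fit in aggregate task memory, it may also help to increase the number of BSP tasks so each one holds a smaller partition of the vertices.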

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
