hama-dev mailing list archives

From "Thomas Jungblut (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HAMA-642) Make GraphRunner disk based
Date Tue, 16 Oct 2012 17:13:03 GMT

    [ https://issues.apache.org/jira/browse/HAMA-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13477173#comment-13477173 ]

Thomas Jungblut commented on HAMA-642:
--------------------------------------

I think I have found the bug. Updates made within a tree object don't get flushed back to disk:
JDBM never checks whether an object has been modified. So when an object is modified, we have to
remove and reinsert the entry ourselves. Which is pretty inefficient :/
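To illustrate the problem, here is a minimal, self-contained sketch (not using JDBM itself; the `SnapshotStore` class simulates JDBM's serialize-on-put semantics): mutating the copy returned by `get()` does not change the stored state until you explicitly `put()` it back.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch (NOT real JDBM): a store that, like JDBM, only persists
// the state an object had at put() time. Mutating the returned copy does
// not write back; you must re-put the object to persist the change.
public class WriteBackSketch {

  // A tiny "vertex" with one mutable field.
  static class Vertex {
    int value;
    Vertex(int value) { this.value = value; }
  }

  // Stand-in for a JDBM-backed map: stores a serialized snapshot,
  // and get() hands out a fresh deserialized copy.
  static class SnapshotStore {
    private final Map<String, Integer> snapshots = new HashMap<>();

    void put(String key, Vertex v) {
      snapshots.put(key, v.value); // "serialize" the current state
    }

    Vertex get(String key) {
      return new Vertex(snapshots.get(key)); // "deserialize" a copy
    }
  }

  public static void main(String[] args) {
    SnapshotStore store = new SnapshotStore();
    store.put("v1", new Vertex(1));

    Vertex v = store.get("v1");
    v.value = 42;                              // mutate the in-memory copy only
    System.out.println(store.get("v1").value); // still 1: not flushed back

    store.put("v1", v);                        // explicit re-put persists it
    System.out.println(store.get("v1").value); // now 42
  }
}
```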

I could fix the problem by making a buffer map:

{noformat}
 private void doSuperstep(Map<V, List<M>> messages,
      BSPPeer<Writable, Writable, Writable, Writable, GraphJobMessage> peer)
      throws IOException {

    Map<V, Vertex<V, E, M>> bufferMap = new HashMap<V, Vertex<V, E, M>>();

    int activeVertices = 0;
    Set<V> keySet = vertices.keySet();
    for (V key : keySet) {
      Vertex<V, E, M> vertex = vertices.get(key);
      // ...
      if (!vertex.isHalted()) {
        // ... compute() runs here ...
        if (!vertex.isHalted()) {
          activeVertices++;
        }
        // buffer the (possibly modified) vertex for re-insertion
        bufferMap.put(key, vertex);
      }
    }

    // write the buffered vertices back so JDBM picks up the modifications
    vertices.clear();
    vertices.putAll(bufferMap);
    iteration++;
{noformat}

So what do we do about this? Keeping a buffered map of all active vertices isn't scalable
either.

My small fix idea:
- create a JDBM store for every superstep, write the vertices into it each time, and switch the
map references afterwards. Apparently too slow? But scalable...
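The switch-the-stores idea could look roughly like this. Plain `HashMap`s stand in for the JDBM-backed stores, and `createStore()` is a hypothetical factory that would open a fresh on-disk store; this is a sketch of the idea, not Hama's actual code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the store-per-superstep idea: instead of mutating the current
// store in place, write every still-active vertex into a fresh store and
// swap references afterwards. HashMaps stand in for JDBM-backed maps.
public class SwapStoreSketch {

  static Map<String, Integer> createStore() {
    return new HashMap<>(); // would open a fresh JDBM record manager on disk
  }

  // One superstep: "compute" increments each vertex value; vertices that
  // reach the halt threshold are simply not carried into the next store.
  static Map<String, Integer> doSuperstep(Map<String, Integer> current, int haltAt) {
    Map<String, Integer> next = createStore();
    for (Map.Entry<String, Integer> e : current.entrySet()) {
      int updated = e.getValue() + 1;   // stand-in for the user's compute()
      if (updated < haltAt) {           // still active -> carry over
        next.put(e.getKey(), updated);
      }
    }
    current.clear();                    // would close/delete the old store
    return next;                        // caller swaps the reference
  }
}
```

Usage: `vertices = SwapStoreSketch.doSuperstep(vertices, haltThreshold);` — the old store is discarded wholesale, so no per-vertex remove/reinsert is needed, at the cost of rewriting every active vertex each superstep.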
                
> Make GraphRunner disk based
> ---------------------------
>
>                 Key: HAMA-642
>                 URL: https://issues.apache.org/jira/browse/HAMA-642
>             Project: Hama
>          Issue Type: Improvement
>          Components: graph
>    Affects Versions: 0.5.0
>            Reporter: Thomas Jungblut
>            Assignee: Thomas Jungblut
>         Attachments: HAMA-642_unix_1.patch, HAMA-642_unix_2.patch, HAMA-scale_1.patch,
>                      HAMA-scale_2.patch, HAMA-scale_3.patch, HAMA-scale_4.patch
>
>
> To improve scalability we can improve the graph runner to be disk based.
> Which basically means:
> - We have just a single Vertex instance that gets refilled.
> - We directly write vertices to disk after partitioning
> - In every superstep we iterate over the vertices on disk, fill the vertex instance, and
> call the user's compute function
> Problems:
> - State other than the vertex value can't be stored easily
> - How do we deal with random access after messages have arrived?
> So I think we should make the graph runner more hybrid, e.g. using the queues we have
> implemented in the messaging. The graph runner could then be configured to run completely
> on disk, in cached mode, or fully in memory.
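The configurable-mode idea from the description could be sketched as a storage strategy chosen by the runner at startup. The `Mode` names and the `VertexStorage` interface below are assumptions for illustration, not Hama's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a configurable vertex-storage backend: the graph runner would
// pick a backend from configuration. Only the in-memory backend is shown;
// CACHED and DISK would wrap a JDBM-backed map.
public class StorageModeSketch {

  enum Mode { IN_MEMORY, CACHED, DISK }

  interface VertexStorage {
    void put(String id, Integer value);
    Integer get(String id);
  }

  static class InMemoryStorage implements VertexStorage {
    private final Map<String, Integer> map = new HashMap<>();
    public void put(String id, Integer value) { map.put(id, value); }
    public Integer get(String id) { return map.get(id); }
  }

  static VertexStorage forMode(Mode mode) {
    switch (mode) {
      case IN_MEMORY:
        return new InMemoryStorage();
      default:
        // CACHED / DISK would return JDBM-backed implementations here
        return new InMemoryStorage();
    }
  }
}
```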

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
