giraph-dev mailing list archives

From "Eli Reisman (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (GIRAPH-328) Outgoing messages from current superstep should be grouped at the sender by owning worker, not by partition
Date Mon, 24 Sep 2012 18:05:08 GMT

    [ https://issues.apache.org/jira/browse/GIRAPH-328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13461953#comment-13461953 ]

Eli Reisman commented on GIRAPH-328:
------------------------------------

About NettyWorkerClient line 179: USE_WORKERINFO_ADDRESS is a flag value that the partitionId
input arg is checked against. I use it to let parts of the message-send plumbing signal that
their WorkerInfo should be used for the remote address rather than the partitionId, since (in
the patch's current form) they are not working with a partitionId and no longer know it at that
point. The only reason those parts call getInetSocketAddress() instead of just pulling the
address object out of the WorkerInfo directly is that getInetSocketAddress() loops to make sure
the address is resolved, which is nice.
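
Roughly, the pattern is something like this minimal sketch (the sentinel value, the stub class,
and the method names here are illustrative assumptions, not the actual NettyWorkerClient or
WorkerInfo code):

import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of the sentinel-flag pattern described above. The constant's
 * value, the WorkerInfoStub class, and the method names are illustrative
 * assumptions, not the actual NettyWorkerClient or WorkerInfo code.
 */
public class AddressResolutionSketch {
  /** Hypothetical sentinel partitionId meaning "use the caller's WorkerInfo". */
  public static final int USE_WORKERINFO_ADDRESS = -1;

  /** Stand-in for Giraph's WorkerInfo: just a hostname and port. */
  public static final class WorkerInfoStub {
    private final String hostname;
    private final int port;

    public WorkerInfoStub(String hostname, int port) {
      this.hostname = hostname;
      this.port = port;
    }
  }

  /** Partition-to-owner mapping that the real client maintains. */
  private final Map<Integer, WorkerInfoStub> partitionOwners =
      new HashMap<Integer, WorkerInfoStub>();

  /**
   * Pick the remote address for a send. Callers that no longer know a
   * partitionId pass the sentinel and supply their WorkerInfo directly.
   */
  public InetSocketAddress remoteAddressFor(int partitionId,
                                            WorkerInfoStub workerInfo) {
    WorkerInfoStub target = (partitionId == USE_WORKERINFO_ADDRESS)
        ? workerInfo
        : partitionOwners.get(partitionId);
    return resolve(target.hostname, target.port);
  }

  /** Loop a few times until the hostname resolves, as getInetSocketAddress() does. */
  private InetSocketAddress resolve(String hostname, int port) {
    InetSocketAddress address = new InetSocketAddress(hostname, port);
    for (int attempt = 0; address.isUnresolved() && attempt < 5; ++attempt) {
      address = new InetSocketAddress(hostname, port);
    }
    return address;
  }
}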

If I end up changing this patch to keep the partition mapping, I might want to start over
anyway; in that case there isn't much difference from the trunk version, so all of this might
go away.

                
> Outgoing messages from current superstep should be grouped at the sender by owning worker,
not by partition
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: GIRAPH-328
>                 URL: https://issues.apache.org/jira/browse/GIRAPH-328
>             Project: Giraph
>          Issue Type: Improvement
>          Components: bsp, graph
>    Affects Versions: 0.2.0
>            Reporter: Eli Reisman
>            Assignee: Eli Reisman
>            Priority: Minor
>             Fix For: 0.2.0
>
>         Attachments: GIRAPH-328-1.patch, GIRAPH-328-2.patch, GIRAPH-328-3.patch
>
>
> Currently, outgoing messages created by the Vertex#compute() cycle on each worker are
stored and grouped by the partitionId on the destination worker that the messages belong to.
This results in messages being duplicated on the wire, once per partition on a given receiving
worker that hosts destination vertices for those messages.
> By grouping the outgoing, current-superstep messages by destination worker, we can
split them into partitions at insertion into a MessageStore on the destination worker. What
we trade in some compute time while inserting at the receiver side, we gain in fine-grained
control over the real number of messages each worker caches outbound for any given worker
before flushing, and over how those flushed messages are aggregated for delivery. Potentially,
it allows for a great reduction in duplicate messages sent in situations like Vertex#sendMessageToAllEdges()
-- see GIRAPH-322, GIRAPH-314. You get the idea.
> This might be a poor idea, and it can certainly use some additional refinement, but it
passes mvn verify and may even run ;) It interoperates with the disk spill code, but not as
well as it could. Consider this a request for comment on the idea (and the approach) rather
than a finished product.
> Comments/ideas/help welcome! Thanks
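
For reference, here is a rough sketch of the grouping described above: the sender buckets
outgoing messages per destination worker, and the receiver splits each worker's bundle back
into partitions when inserting into its message store. All class, interface, and method names
below are hypothetical stand-ins for illustration, not the actual patch or Giraph code.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Rough sketch (not the patch itself) of grouping outgoing messages by
 * destination worker at the sender and splitting them into partitions at
 * insertion time on the receiver. Names here are hypothetical.
 */
public class WorkerGroupedMessageSketch<M> {
  /** Sender side: one outbound cache per destination worker id. */
  private final Map<Integer, Map<Long, List<M>>> cacheByWorker =
      new HashMap<Integer, Map<Long, List<M>>>();

  /** Buffer a message under its destination worker, not its partition. */
  public void addMessage(int destWorkerId, long destVertexId, M message) {
    Map<Long, List<M>> workerCache = cacheByWorker.get(destWorkerId);
    if (workerCache == null) {
      workerCache = new HashMap<Long, List<M>>();
      cacheByWorker.put(destWorkerId, workerCache);
    }
    List<M> messages = workerCache.get(destVertexId);
    if (messages == null) {
      messages = new ArrayList<M>();
      workerCache.put(destVertexId, messages);
    }
    messages.add(message);
  }

  /**
   * Receiver side: split one worker's bundle into partitions when inserting
   * into the local message store.
   */
  public Map<Integer, Map<Long, List<M>>> splitByPartition(
      Map<Long, List<M>> workerBundle, PartitionLookup lookup) {
    Map<Integer, Map<Long, List<M>>> byPartition =
        new HashMap<Integer, Map<Long, List<M>>>();
    for (Map.Entry<Long, List<M>> entry : workerBundle.entrySet()) {
      int partitionId = lookup.partitionOf(entry.getKey());
      Map<Long, List<M>> partition = byPartition.get(partitionId);
      if (partition == null) {
        partition = new HashMap<Long, List<M>>();
        byPartition.put(partitionId, partition);
      }
      partition.put(entry.getKey(), entry.getValue());
    }
    return byPartition;
  }

  /** Hypothetical vertex-id-to-partition lookup the receiver would use. */
  public interface PartitionLookup {
    int partitionOf(long vertexId);
  }
}

The trade-off is the per-vertex partition lookup at insertion time on the receiver, in exchange
for putting each message group on the wire once per destination worker instead of once per
partition.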

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
