giraph-user mailing list archives

From Claudio Martella <>
Subject Re: Using out of core messages
Date Thu, 24 Apr 2014 15:06:54 GMT
Answers are inline.

On Thu, Apr 24, 2014 at 4:21 PM, Pascal Jäger <> wrote:

>  Hi all,
>  I am struggling with the settings to use out of core messages.
> I have 3 nodes with 16 GB RAM each ( one master, two workers).
>  I ran into a java heap space OOM Error.
>  First question is: Where do I set the Options?
> Do I need to add them via the "-ca mapred.child…" option or by using
> "-Dmapred.child…"?
> I tried both, but nothing seems to work out.
> I run it on a Cloudera cluster, and when looking at the web frontend I
> see that it only uses 3 GB of my 16 GB of RAM.
> Are those even the right options?

You can use both; the relevant Hadoop parameter is mapred.child.java.opts,
which sets the JVM options (including the heap size) of the child tasks.
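For illustration, both styles can appear in a single GiraphRunner invocation. Everything below (jar name, computation class, input/output paths, heap size) is a placeholder for the sake of the example, not a detail taken from this thread:

```shell
# Hedged sketch of a GiraphRunner launch. -D sets a Hadoop property and,
# per GenericOptionsParser, must come right after the main class, before
# the computation class; -ca sets a Giraph custom argument.
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  -Dmapred.child.java.opts=-Xmx14g \
  org.apache.giraph.examples.SimpleShortestPathsComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
  -vip /user/pascal/input \
  -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
  -op /user/pascal/output \
  -w 2 \
  -ca giraph.useOutOfCoreMessages=true
```

With only 3 GB of the 16 GB in use, raising -Xmx in mapred.child.java.opts is the first thing to check, since that cap applies regardless of any Giraph-level option.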

>  giraph.maxMessagesInMemory - is it per worker? Or what exactly is
> counted here? And how does it correlate to giraph.messagesBufferSize?

It is per worker: it sets the maximum number of messages each worker
keeps in main memory. giraph.messagesBufferSize defines the size of the
buffer used to read and write messages to disk; you can probably keep
its default value.
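Assuming the option names above, the two knobs would be tuned as extra custom arguments on the job command line. This fragment is illustrative only; the values are made up, not recommendations from this thread:

```shell
# Hedged sketch: custom arguments appended to a GiraphRunner invocation.
# Both options act per worker; values below are placeholders.
  -ca giraph.useOutOfCoreMessages=true \
  -ca giraph.maxMessagesInMemory=4000000 \
  -ca giraph.messagesBufferSize=8192
```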

>  I am really lost right now. My graph has currently only 8000 nodes and
> 70000 edges.
> During one step I need to send more than 15 000 000 messages and this is
> when I get the OOM error.
>  I turned on the out-of-core messages feature without changing the
> above-mentioned options, and my computation really slowed down.
> I guess because it was writing 14 000 000 messages to disk.

Each worker is currently keeping 1M messages in memory (if you have
enabled out-of-core messages but have not changed maxMessagesInMemory).
In your case, that is roughly 1/8 of the messages a worker receives.
Once you are able to increase the heap and use all 16 GB of RAM on your
workers, you should be able to raise that parameter, depending on the
message size.
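The back-of-the-envelope numbers behind that estimate can be checked directly, using only figures stated in the thread (15M messages in the heavy superstep, 2 workers, 1M messages kept in memory per worker):

```shell
# Sanity check of the sizing estimate above, using figures from the thread.
total_messages=15000000   # messages sent in the heavy superstep
workers=2                 # worker nodes (the third node is the master)
max_in_memory=1000000     # messages each worker keeps in memory (default)

per_worker=$((total_messages / workers))               # 7500000 per worker
spilled=$((total_messages - workers * max_in_memory))  # 13000000 to disk
# 1000000 / 7500000 ≈ 1/7.5, i.e. roughly the "1/8" mentioned above,
# and ~13M spilled messages is close to Pascal's 14M guess.
echo "per worker: $per_worker, spilled to disk: $spilled"
```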

>  Hope you can help me.
>  Regards Pascal
Hope this helps.

   Claudio Martella
