giraph-user mailing list archives

From Alexander Asplund <>
Subject Re: Out of core execution has no effect on GC crash
Date Mon, 09 Sep 2013 22:50:48 GMT
Thanks for the reply.

I tried setting giraph.maxPartitionsInMemory to 1, but I'm still
getting OutOfMemoryError: GC overhead limit exceeded.

Are there any particular cases the OOC will not be able to handle, or
is it supposed to work in all cases? If the latter, it might be that I
have made some configuration error.
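For reference, this is roughly how I am submitting the job and passing
the out-of-core options (the jar name, computation class, formats, and
paths below are placeholders; the -ca flags are the relevant part):

```shell
# Placeholder jar/class/paths; only the two -ca options matter here.
hadoop jar giraph-with-dependencies.jar org.apache.giraph.GiraphRunner \
  my.app.MyComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
  -vip /input/graph \
  -op /output/result \
  -w 4 \
  -ca giraph.useOutOfCoreGraph=true \
  -ca giraph.maxPartitionsInMemory=1
```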

I do have one concern that might indicate I have done something wrong:
to allow OOC to activate without crashing, I had to modify the trunk
code. This was because Giraph relied on guava-12, and
DiskBackedPartitionStore used hasInt() - a method which does not exist
in guava-11, which hadoop 2 depends on. At runtime, guava-11 was being
loaded instead.
I suppose this problem might indicate I'm submitting the job using the
wrong binary. Currently I am including the giraph dependencies with the
jar and running it using hadoop jar.
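In case it helps with diagnosing the conflict, these are the checks I
have been running to see which guava actually ends up on the classpath
(the job jar name is a placeholder):

```shell
# List any guava jars that Hadoop itself puts on the classpath
hadoop classpath | tr ':' '\n' | grep -i guava

# See whether guava classes were bundled into the job jar
unzip -l giraph-with-dependencies.jar | grep -i guava
```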

On 9/7/13, Claudio Martella <> wrote:
> OOC is also used during the input superstep. Try decreasing the
> number of partitions kept in memory.
> On Sat, Sep 7, 2013 at 1:37 AM, Alexander Asplund
> <>wrote:
>> Hi,
>> I'm trying to process a graph that is about 3 times the size of
>> available memory. On the other hand, there is plenty of disk space. I
>> have enabled the giraph.useOutOfCoreGraph property, but it still
>> crashes with outOfMemoryError: GC limit exceeded when I try running my
>> job.
>> I'm wondering if the spilling is supposed to work during the input
>> step. If so, are there any additional steps that must be taken to
>> ensure it functions?
>> Regards,
>> Alexander Asplund
> --
>    Claudio Martella

Alexander Asplund
