giraph-user mailing list archives

From Alexander Asplund <alexaspl...@gmail.com>
Subject Re: Out of core execution has no effect on GC crash
Date Tue, 10 Sep 2013 00:33:27 GMT
A small note: I'm not seeing any partitions directory being formed
under _bsp, which is where I understand they should appear.
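
To check, I'm doing something like the following on the worker nodes.
This is only a sketch: the "giraph.partitionsDirectory" key and its
"_bsp/_partitions" default are my reading of GiraphConstants in trunk,
so treat both as assumptions.

    import java.io.File;

    // Lists whatever has been spilled to the local partitions directory.
    public class FindSpillDir {
        public static void main(String[] args) {
            // Default taken from my reading of GiraphConstants
            // ("giraph.partitionsDirectory" -> "_bsp/_partitions").
            String dir = args.length > 0 ? args[0] : "_bsp/_partitions";
            File d = new File(dir);
            System.out.println(d.getAbsolutePath() + " exists: " + d.exists());
            if (d.isDirectory()) {
                for (File f : d.listFiles()) {
                    System.out.println("  " + f.getName()
                        + " (" + f.length() + " bytes)");
                }
            }
        }
    }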

On 9/10/13, Alexander Asplund <alexasplund@gmail.com> wrote:
> Really appreciate the swift responses! Thanks again.
>
> I have not tried increasing the mapper heap and decreasing the max
> number of partitions at the same time. I first ran tests with more
> mapper heap available, but reverted the setting after it apparently
> caused other large-volume, non-Giraph jobs to crash nodes when
> reducers were also running.
>
> I'm curious why increasing mapper heap is a requirement. Shouldn't the
> OOC mode be able to work with the amount of heap that is available? Is
> there some agreement on the minimum amount of heap necessary for OOC
> to succeed, to guide the choice of Mapper heap amount?
>
> Either way, I will try increasing the mapper heap again as much as
> possible, and hopefully the job will run.
>
> On 9/9/13, Claudio Martella <claudio.martella@gmail.com> wrote:
>> did you extend the heap available to the mapper tasks? e.g. through
>> mapred.child.java.opts.
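>>
>> Something like this if you build the job programmatically (just a
>> sketch: -Xmx4g is an arbitrary example value, and you still need to
>> set your vertex/input/output classes as usual):
>>
>>     import org.apache.giraph.conf.GiraphConfiguration;
>>     import org.apache.giraph.job.GiraphJob;
>>
>>     public class SubmitWithBiggerHeap {
>>         public static void main(String[] args) throws Exception {
>>             GiraphConfiguration conf = new GiraphConfiguration();
>>             // Give each map task a larger heap; OOC still needs
>>             // headroom for resident partitions and in-flight messages.
>>             conf.set("mapred.child.java.opts", "-Xmx4g");
>>             // ... vertex class, input/output formats, workers ...
>>             GiraphJob job = new GiraphJob(conf, "my-giraph-job");
>>             System.exit(job.run(true) ? 0 : -1);
>>         }
>>     }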
>>
>>
>> On Tue, Sep 10, 2013 at 12:50 AM, Alexander Asplund
>> <alexasplund@gmail.com> wrote:
>>
>>> Thanks for the reply.
>>>
>>> I tried setting giraph.maxPartitionsInMemory to 1, but I'm still
>>> getting OOM: GC overhead limit exceeded.
>>>
>>> Are there any particular cases the OOC will not be able to handle, or
>>> is it supposed to work in all cases? If the latter, it might be that I
>>> have made some configuration error.
>>>
>>> I do have one concern that might indicate I have done something
>>> wrong: to allow OOC to activate without crashing, I had to modify
>>> the trunk code. This was because Giraph relied on guava-12, and
>>> DiskBackedPartitionStore used hashInt() - a method which does not
>>> exist in the guava-11 that hadoop 2 depends on. At runtime, guava-11
>>> was being used.
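>>>
>>> The change I made was essentially the following (a sketch from
>>> memory, not the exact diff; whether the actual code uses murmur3 is
>>> an assumption on my part):
>>>
>>>     import com.google.common.hash.HashCode;
>>>     import com.google.common.hash.Hashing;
>>>
>>>     public class HashIntCompat {
>>>         // Guava 12+ has HashFunction.hashInt(int); guava-11 (which
>>>         // hadoop 2 pulls in) does not, so route through a Hasher.
>>>         static HashCode hashInt(int value) {
>>>             // guava-12 form: Hashing.murmur3_32().hashInt(value);
>>>             return Hashing.murmur3_32().newHasher().putInt(value).hash();
>>>         }
>>>     }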
>>>
>>> I suppose this problem might indicate that I'm submitting the job
>>> using the wrong binary. Currently I am including the Giraph
>>> dependencies in the job jar and running it with hadoop jar.
>>>
>>> On 9/7/13, Claudio Martella <claudio.martella@gmail.com> wrote:
>>> > OOC is also used during the input superstep. Try decreasing the
>>> > number of partitions kept in memory.
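>>> >
>>> > i.e. something along these lines (a sketch; 4 is just an example
>>> > value, and the default of 10 is from memory, so double-check it):
>>> >
>>> >     import org.apache.giraph.conf.GiraphConfiguration;
>>> >
>>> >     public class OutOfCoreSettings {
>>> >         static GiraphConfiguration configure(GiraphConfiguration conf) {
>>> >             // Enable spilling partitions to disk (this also
>>> >             // covers the input superstep).
>>> >             conf.setBoolean("giraph.useOutOfCoreGraph", true);
>>> >             // Keep fewer partitions resident in memory.
>>> >             conf.setInt("giraph.maxPartitionsInMemory", 4);
>>> >             return conf;
>>> >         }
>>> >     }
>>> >
>>> > The equivalent -D flags on the command line should work too.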
>>> >
>>> >
>>> > On Sat, Sep 7, 2013 at 1:37 AM, Alexander Asplund
>>> > <alexasplund@gmail.com> wrote:
>>> >
>>> >> Hi,
>>> >>
>>> >> I'm trying to process a graph that is about 3 times the size of
>>> >> available memory. On the other hand, there is plenty of disk
>>> >> space. I have enabled the giraph.useOutOfCoreGraph property, but
>>> >> it still crashes with OutOfMemoryError: GC overhead limit
>>> >> exceeded when I try running my job.
>>> >>
>>> >> I'm wondering if the spilling is supposed to work during the
>>> >> input step. If so, are there any additional steps that must be
>>> >> taken to ensure it functions?
>>> >>
>>> >> Regards,
>>> >> Alexander Asplund
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> >    Claudio Martella
>>> >    claudio.martella@gmail.com
>>> >
>>>
>>>
>>> --
>>> Alexander Asplund
>>>
>>
>>
>>
>> --
>>    Claudio Martella
>>    claudio.martella@gmail.com
>>
>
>
> --
> Alexander Asplund
>


-- 
Alexander Asplund
