giraph-user mailing list archives

From Agrta Rawat <agrta.ra...@gmail.com>
Subject Re: Giraph Buffer Size
Date Wed, 16 Apr 2014 08:56:12 GMT
Yes.

Regards,
Agrta Rawat


On Wed, Apr 16, 2014 at 1:47 PM, Pavan Kumar A <pavanka@outlook.com> wrote:

> Are you using Java 7?
>
> ------------------------------
> Date: Wed, 16 Apr 2014 13:07:20 +0530
> Subject: Re: Giraph Buffer Size
>
> From: agrta.rawat@gmail.com
> To: user@giraph.apache.org
>
> Hi Pavan,
>
> For all the intermediate processing there would be a buffer (intermediate
> memory space) that stores data, messages, etc., and then the process
> continues further.
> Please correct me if I am wrong.
>
> I have set Xms and Xmx values properly.
>
> The problem is that the task runs for small datasets but as the input data
> size is increased, it fails.
>
> The error that I am getting in the sysout logs is:
>
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGBUS (0x7) at pc=0x00002aaaab404144, pid=10397, tid=1144650048
> #
> # JRE version: 6.0_25-b06
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.0-b11 mixed mode
> linux-amd64 compressed oops)
> # Problematic frame:
> # J  sun.nio.ch.SelectorImpl.processDeregisterQueue()V
> #
> # An error report file with more information is saved as:
> # /hadoopTaskTrackerLogsLocation/process_id/s_err_pid10397.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://java.sun.com/webapps/bugreport/crash.jsp
>
>
> Please suggest what should be done. Am I missing anything?
>
> Regards,
> Agrta Rawat
>
>
>
> On Wed, Apr 16, 2014 at 12:44 PM, Pavan Kumar A <pavanka@outlook.com> wrote:
>
>
> What do you mean by buffer size? Just as a note, please ensure that Xmx &
> Xms values are properly set for the mapper using mapred.child.java.opts
> or mapred.map.child.java.opts.
> Also, what does the error message show? Please use pastebin & post the link
> here.
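>
> A minimal sketch of what that could look like, assuming a Hadoop 1.x setup
> where the Giraph workers run inside map tasks; the 4g heap size below is
> only a placeholder, not a recommendation:
>
>   <!-- mapred-site.xml -->
>   <property>
>     <name>mapred.child.java.opts</name>
>     <value>-Xms4g -Xmx4g</value>
>   </property>
>
> The same setting can also be passed per job with
> -Dmapred.child.java.opts="-Xms4g -Xmx4g" on the hadoop jar command line.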
> ------------------------------
> Date: Wed, 16 Apr 2014 12:13:29 +0530
> Subject: Giraph Buffer Size
> From: agrta.rawat@gmail.com
> To: user@giraph.apache.org
>
>
> Hi All,
>
> I am trying to run a job in Giraph-1.0.0 on a Hadoop-1.0.0 cluster with 3
> nodes.
> Each node has 32 GB RAM.
>
> In superstep 8 of my algorithm, approximately 2M messages are being sent,
> where the size of each message is more than 20 KB. But the process gets
> stuck here and the task fails.
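>
> (Rough arithmetic, assuming the figures above: 2,000,000 messages x 20 KB
> each is roughly 40 GB of message payload in that single superstep, i.e. on
> the order of 13 GB per worker across 3 nodes, before any serialization or
> buffering overhead.)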
>
> In the sysout logs, it shows a Fatal Error.
>
> Is this error because the buffer is getting full?
> How can I increase the buffer size for a Giraph application?
>
> Please suggest.
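>
> A minimal sketch of one knob to try, assuming the out-of-core messaging
> options shipped with Giraph 1.0 are available in your build; the jar name,
> vertex class, and threshold value below are placeholders:
>
>   hadoop jar giraph-app.jar org.apache.giraph.GiraphRunner \
>     -Dgiraph.useOutOfCoreMessages=true \
>     -Dgiraph.maxMessagesInMemory=500000 \
>     my.app.MyVertexProgram ...
>
> With these set, messages beyond the in-memory threshold are spilled to
> local disk instead of being held on the heap, trading speed for memory.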
>
> Regards,
> Agrta Rawat
>
>
>
