ambari-dev mailing list archives

From "Dmitry Lysnichenko (JIRA)" <>
Subject [jira] [Commented] (AMBARI-4055) set core file size on hosts to get core dump when JVM crashes
Date Mon, 20 Jan 2014 17:48:19 GMT


Dmitry Lysnichenko commented on AMBARI-4055:


> set core file size on hosts to get core dump when JVM crashes
> -------------------------------------------------------------
>                 Key: AMBARI-4055
>                 URL:
>             Project: Ambari
>          Issue Type: Task
>          Components: agent
>            Reporter: Dmytro Shkvyra
>            Assignee: Dmytro Shkvyra
>             Fix For: 1.5.0
> We recently received a customer issue where a NameNode crash was caused by a native code error.
Because the default ulimit for core file size is zero, the customer could not get a core dump,
which made the issue very hard to debug.
> As more native code is added to improve system performance, we expect to see more
JVM crashes caused by errors in that native code before it eventually stabilizes.
> We would like to set the core file size to unlimited on hosts running the NameNode and DataNode,
or on any host where native code is invoked.
> Let's add this step through Ambari.
> By default on Linux the limit is zero, so a core file can't be created:
> $> ulimit -c
> 0
> The command to set the core file size to unlimited is:
> $> ulimit -c unlimited
> After this, we can double-check the limit:
> $> ulimit -c
> unlimited
> We can apply this setting immediately before starting the Hadoop service. The setting will take
effect because the Hadoop service is started in the same shell.
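The approach above can be sketched as a small wrapper script: raise the limit, then launch the daemon from the same shell so the JVM inherits it. The daemon path below is hypothetical; the real Ambari agent would invoke the actual service start command.

```shell
#!/bin/sh
# Raise the soft core-file size limit in the current shell. Any child
# process started from this shell (including the Hadoop JVM) inherits it.
ulimit -c unlimited

# Sanity check: print the effective limit before launching the service.
echo "core file size limit: $(ulimit -c)"

# Hypothetical start command -- substitute the real Hadoop daemon script.
# exec /usr/lib/hadoop/sbin/hadoop-daemon.sh start namenode
```

Using `exec` (rather than a plain invocation) replaces the wrapper shell with the daemon process while keeping the inherited limit, which avoids leaving an extra shell process behind.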

This message was sent by Atlassian JIRA
