ambari-dev mailing list archives

From "Dmytro Shkvyra (JIRA)" <j...@apache.org>
Subject [jira] [Created] (AMBARI-4055) set core file size on hosts to get core dump when JVM crashes
Date Thu, 12 Dec 2013 17:02:06 GMT
Dmytro Shkvyra created AMBARI-4055:
--------------------------------------

             Summary: set core file size on hosts to get core dump when JVM crashes
                 Key: AMBARI-4055
                 URL: https://issues.apache.org/jira/browse/AMBARI-4055
             Project: Ambari
          Issue Type: Task
          Components: agent
            Reporter: Dmytro Shkvyra
            Assignee: Dmytro Shkvyra
             Fix For: 1.5.0


We recently got a customer issue where a NameNode crash was caused by an error in native code. Because
the default ulimit for core file size is zero, the customer could not get a core dump, which
made the issue very hard to debug.
As more native code is added to improve system performance, we should expect to see more JVM
crashes caused by errors in that native code before it eventually stabilizes.
We would like to set the core file size to unlimited on hosts running the NameNode and DataNode,
or on any host where native code is invoked.
Let's add this step through Ambari.
By default on Linux the limit is zero, so no core file can be created:
$> ulimit -c
0
The command to set unlimited core file size is:
$> ulimit -c unlimited
After this, we can double-check the limit:
$> ulimit -c
unlimited
We can apply this setting immediately before starting a Hadoop service. The setting will take
effect because the Hadoop service is started in the same shell and inherits the limit.
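The steps above could be sketched as a small pre-start hook. This is only an illustrative sketch, not Ambari's actual agent code; the function name and the commented-out hadoop-daemon.sh invocation are assumptions for the example.

```shell
#!/bin/sh
# Illustrative pre-start hook (not Ambari's real agent code): raise the
# core file size limit in the current shell so that the Hadoop daemon
# started afterwards inherits it and can produce a core dump on a crash.
set_core_file_size_unlimited() {
    # Raising the soft limit to "unlimited" succeeds as long as the hard
    # limit allows it (the usual default on Linux).
    ulimit -c unlimited || return 1
    # Show the effective limit so the change is visible in the logs.
    echo "core file size: $(ulimit -c)"
}

set_core_file_size_unlimited
# A real wrapper would then start the service in the same shell, e.g.:
# exec /usr/lib/hadoop/bin/hadoop-daemon.sh start namenode
```

Because ulimit only affects the current shell and its children, the key point is that the service start command runs in the same shell (or is exec'd from it) rather than in a fresh login shell.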



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
