hadoop-common-user mailing list archives

From Allen Wittenauer <awittena...@linkedin.com>
Subject Re: Java->native .so->seg fault->core dump file?
Date Fri, 28 Jan 2011 17:39:42 GMT

On Jan 21, 2011, at 12:57 PM, Keith Wiley wrote:
> and I have this in my .bashrc (which I believe should be propagated to the slave nodes):
> 	ulimit -c unlimited

	.bashrc likely isn't executed at task startup, btw.  Also, you would need to have this in
whatever account is used to run the tasktracker...
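
	As a sketch of what that can look like (the conf path below assumes a
stock 0.20-style layout; adjust for your install), raise the limit in the
environment the tasktracker daemon is started from, since the child task
JVMs it forks inherit its limits:

	# in ${HADOOP_HOME}/conf/hadoop-env.sh on every slave node,
	# then restart the tasktracker so the new limit takes effect:
	ulimit -c unlimited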

> and in my native code I call getrlimit() and write the results, where I see:
> 	RLIMIT_CORE:  18446744073709551615     18446744073709551615
> 
> which indicates the "unlimited" setting, but I can't find any core dump files in the
> node's hadoop directories after the job runs.
> 
> Any ideas what I'm doing wrong?
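
	One quick cross-check on Linux is to read the limits of the running
child JVM itself rather than your own probe (the pid below is a placeholder;
find the task's java process with ps first):

	# "Max core file size" should read "unlimited" for the task JVM
	grep -i core /proc/<pid>/limits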

	Which operating system?  On Linux, what is the value of /proc/sys/kernel/core_pattern?
On Solaris, what is in /etc/coreadm.conf?
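
	On Linux, for example, you can check the pattern and, as root, point
cores at a writable directory (the path here is only an illustration):

	cat /proc/sys/kernel/core_pattern
	# a bare "core" means a file in the process's current working
	# directory; a leading | pipes the dump to a helper program instead
	mkdir -p /var/tmp/cores
	echo '/var/tmp/cores/core.%e.%p' > /proc/sys/kernel/core_pattern

	Note that a task's working directory is the job's local work dir, which
the tasktracker normally cleans up when the task exits, so a relative core
pattern can leave nothing behind to find.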

