hadoop-common-user mailing list archives

From "Ferdy Galema" <f.gal...@gmail.com>
Subject JVM core dumps unavailable because of temporary folders
Date Mon, 18 Feb 2008 14:42:25 GMT
Every once in a while some of our Tasks fail (not the Tracker, just the
Tasks), because the JVM (jre1.6.0_04) crashes with exit code 134. The logging
reports that it wrote a crash dump:

# An unexpected error has been detected by Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00002aaaaaeb5290, pid=25119, tid=1081145664
#
# Java VM: Java HotSpot(TM) 64-Bit Server VM (10.0-b19 mixed mode
linux-amd64)
# Problematic frame:
# V  [libjvm.so+0x2f5290]
#
# An error report file with more information is saved as:
#
/kalooga/filesystem/mapreduce/local/taskTracker/jobcache/job_200802151637_0006/task_200802151637_0006_m_000005_0/hs_err_pid25119.log
#
# If you would like to submit a bug report, please visit:
#  http://java.sun.com/webapps/bugreport/crash.jsp
#


The problem is that the dump does not exist at the specified location. My
bet is that Hadoop starts a new Task immediately after the failed one, which
causes the old jobcache directory to be deleted. Is it possible to keep this
cache instead?
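
For context, something along these lines is what I'm after (just a rough
sketch against the JobConf API as I understand it; the crash-file directory
is only an example path, not something we actually use):

    import org.apache.hadoop.mapred.JobConf;

    public class KeepFailedTaskFiles {
        public static void configure(JobConf conf) {
            // Ask the TaskTracker to preserve the working directory
            // (the jobcache entry) of failed task attempts instead of
            // cleaning it up, so hs_err_pid*.log files stay around.
            conf.setKeepFailedTaskFiles(true);

            // Or, alternatively, point HotSpot's crash report outside the
            // jobcache altogether. /tmp/hadoop-crash is a made-up directory;
            // %p expands to the pid of the crashing child JVM.
            conf.set("mapred.child.java.opts",
                     "-Xmx512m -XX:ErrorFile=/tmp/hadoop-crash/hs_err_pid%p.log");
        }
    }

Either approach would be fine for us; I mainly want to confirm whether
keeping the jobcache for failed attempts is supported.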
