hadoop-hdfs-user mailing list archives

From Akshay Aggarwal <akshay.pathfin...@gmail.com>
Subject Dump not copied to HDFS | Taking memory dumps of Hadoop tasks
Date Wed, 17 Aug 2016 07:11:27 GMT

I was following a blog post to copy the heap dump to HDFS in case the
container goes OOM -

But for some reason the dump is not getting pushed to HDFS, and I get the
following logs -


Log Type: stderr

Log Upload Time: Wed Aug 17 12:01:55 +0530 2016

Log Length: 833

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
SLF4J: Found binding in
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
./copy_dump.sh: 2: ./copy_dump.sh: Bad substitution

Log Type: stdout

Log Upload Time: Wed Aug 17 12:01:55 +0530 2016

Log Length: 272

java.lang.OutOfMemoryError: Java heap space
Dumping heap to ./heapdump.hprof ...
Heap dump file created [1906572521 bytes in 7.933 secs]
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="./copy_dump.sh"
#   Executing /bin/sh -c "./copy_dump.sh"...


copy_dump.sh looks like this -

hadoop fs -copyFromLocal heapdump.hprof
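For context, the "Bad substitution" on line 2 of the stderr log is the message /bin/sh (dash on many distros) prints when a script uses a bash-only parameter expansion, and the log shows the JVM invokes the hook via `Executing /bin/sh -c "./copy_dump.sh"`. A minimal POSIX-safe sketch of such a script is below; the `heapdumps` destination directory and the use of `$CONTAINER_ID` are my assumptions, not from the post:

```shell
#!/bin/sh
# Hedged sketch: copy the heap dump written by -XX:HeapDumpPath=./heapdump.hprof
# into a per-container HDFS location. Uses only POSIX expansions, so it runs
# under /bin/sh without "Bad substitution" errors.
set -eu

DUMP=./heapdump.hprof
# CONTAINER_ID is set by YARN inside the container; fall back to "unknown"
# (a hypothetical naming scheme, for illustration only).
DEST="hdfs:///user/fk-fdp-cdm/heapdumps/${CONTAINER_ID:-unknown}.hprof"

if [ -f "$DUMP" ]; then
    # -f overwrites any existing file at the destination
    hadoop fs -copyFromLocal -f "$DUMP" "$DEST"
    echo "copied $DUMP to $DEST"
else
    echo "no heap dump at $DUMP; nothing to copy"
fi
```

A quick local check is `sh -n copy_dump.sh`, which parses the script with the same shell the JVM will use and surfaces any bash-isms before the next OOM.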

And I've added the following params to my job -

        -files hdfs:///user/fk-fdp-cdm/scripts/copy_dump.sh#copy_dump.sh \
        -archives ${metadata_archive} \
        -D mapred.create.symlink=yes \
        -D mapreduce.reduce.java.opts='-Xmx2048m
-XX:HeapDumpPath=./heapdump.hprof -XX:OnOutOfMemoryError=./copy_dump.sh' \

Any pointers to what might be wrong here?

Akshay Aggarwal
