hadoop-mapreduce-user mailing list archives

From Siddharth Ubale <siddharth.ub...@syncoms.com>
Subject Container exited with a non-zero exit code 1 - Spark job on YARN
Date Wed, 20 Jan 2016 12:29:47 GMT
Hi,

I am running a Spark job on a YARN cluster.
The job is a Spark Streaming application that reads JSON from a Kafka topic,
inserts the JSON values into HBase tables via Phoenix, and then sends certain messages
to a websocket if the JSON satisfies a given criterion.

My cluster is a 3 node cluster with 24GB ram and 24 cores in total.

Now:
1. When I submit the job with 10 GB of memory, the application fails, saying there is insufficient memory
to run the job.
2. The job is submitted with 6 GB of memory. However, it does not always run successfully. Common issues
faced:
                a. Container exited with a non-zero exit code 1, and after multiple such
warnings the job finishes.
                b. The failed job reports that it was unable to find a file in HDFS
named something like _hadoop_conf_xxxxxx.zip
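For context, the submission looks roughly like the following; the jar name, class name, and exact memory/executor values here are placeholders, not the actual command used:

```shell
# Sketch of the spark-submit invocation on YARN (values are placeholders).
# --executor-memory is what was varied between the 10 GB and 6 GB attempts.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.StreamingJob \
  --driver-memory 2g \
  --executor-memory 6g \
  --num-executors 2 \
  streaming-job.jar
```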

Can someone please let me know why I am seeing the above two issues?

Thanks,
Siddharth Ubale,

