hadoop-user mailing list archives

From Mike Wenzel <mwen...@proheris.de>
Subject AW: Mapreduce Job fails if one Node is offline?
Date Mon, 24 Oct 2016 12:52:41 GMT
Hey Alex,

First of all, thanks for your reply.

>> DFS replication has nothing to do with YARN or MapReduce; it's HDFS. Replication defines
how many replicas exist in the cluster.
Okay, I got this. I mixed the wrong things up here.

>> When you kill the NM and you don't have yarn.nodemanager.recovery.enabled (https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/NodeManagerRestart.html)
set, the containers running on that node are lost or killed, but your job will likely
keep running and wait until that NM comes back.
As far as I understand it, NodeManager restart is about keeping containers until the NodeManager
is online again, but I don't see what this has to do with my problem.

My problem is:
If one of my nodes is offline, my MapReduce jobs no longer finish successfully.
I don't want to wait until it is online again; I want my jobs to run and finish on all my
other healthy nodes instead.

The behavior is the same after enabling and configuring NM restart.
The attempts' error messages are:
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 111
at
…
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 2
at
…
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 111
at
…
Task KILL is received. Killing attempt!

If my problem is not clear, please let me know. If I should post some logs, just tell me
which ones you're looking for.

Thanks in advance,
and best regards.
-- Mike

From: wget.null@gmail.com [mailto:wget.null@gmail.com]
Sent: Friday, October 21, 2016 11:53
To: Mike Wenzel <mwenzel@proheris.de>; user@hadoop.apache.org
Subject: RE: Mapreduce Job fails if one Node is offline?

Hey Mike,

DFS replication has nothing to do with YARN or MapReduce; it's HDFS. Replication defines
how many replicas exist in the cluster.
When you kill the NM and you don't have yarn.nodemanager.recovery.enabled (https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/NodeManagerRestart.html)
set, the containers running on that node are lost or killed, but your job will likely
keep running and wait until that NM comes back.

http://hortonworks.com/blog/resilience-of-yarn-applications-across-nodemanager-restarts/
http://www.cloudera.com/documentation/enterprise/5-4-x/topics/admin_ha_yarn_work_preserving_recovery.html
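
For reference, a minimal yarn-site.xml sketch of the work-preserving restart settings described in the NodeManagerRestart doc linked above; the recovery directory path below is just an illustrative placeholder, and per the doc the NM address should be pinned to a fixed port so recovered containers can reconnect:

<!-- yarn-site.xml (sketch): enable work-preserving NodeManager restart -->
<property>
  <name>yarn.nodemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.recovery.dir</name>
  <!-- placeholder: any local directory where the NM can persist its recovery state -->
  <value>/var/hadoop/yarn-nm-recovery</value>
</property>
<property>
  <!-- a fixed port instead of the ephemeral default, so containers can reattach after a restart -->
  <name>yarn.nodemanager.address</name>
  <value>0.0.0.0:45454</value>
</property>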

--alex

--
B: mapredit.blogspot.com

From: Mike Wenzel <mwenzel@proheris.de>
Sent: Friday, October 21, 2016 11:29 AM
To: user@hadoop.apache.org
Subject: Mapreduce Job fails if one Node is offline?

I have a small cluster for testing and learning Hadoop:

Node1 - NameNode + ResourceManager + JobHistoryServer
Node2 - SecondaryNameNode
Node3 - DataNode + NodeManager
Node4 - DataNode + NodeManager
Node5 - DataNode + NodeManager

My dfs.replication is set to 2.
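
For completeness, that setting corresponds to this hdfs-site.xml entry (a minimal sketch showing only this one property):

<!-- hdfs-site.xml (sketch): default replication factor for newly written files -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>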

When I kill the DataNode and NodeManager processes on Node5, I expect Hadoop to keep running
and finish my MapReduce jobs successfully.
In reality the job fails because it tries to transfer blocks to Node5, which is offline. Replication
is set to 2, so I expect it to notice that Node5 is offline and work with the other two nodes
only.

Can someone please explain to me how Hadoop should behave in this case?
If my expectation of Hadoop is correct and someone is willing to help me out, I can add logs
and configuration.

Best Regards,
Mike.
