hadoop-yarn-issues mailing list archives

From "mai shurong (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-3416) deadlock in a job between map and reduce cores allocation
Date Mon, 30 Mar 2015 06:03:53 GMT

    [ https://issues.apache.org/jira/browse/YARN-3416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386244#comment-14386244 ]

mai shurong commented on YARN-3416:
-----------------------------------

In YARN-1680, there are only 4 NodeManagers in the cluster, so it is possible for all 4 NodeManagers
to be in the blacklist. But in my case, there are more than 50 NodeManagers and over 1000 vcores
in the cluster, so it is highly unlikely that all NodeManagers in the cluster are in the blacklist.

> deadlock in a job between map and reduce cores allocation 
> ----------------------------------------------------------
>
>                 Key: YARN-3416
>                 URL: https://issues.apache.org/jira/browse/YARN-3416
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler
>    Affects Versions: 2.6.0
>            Reporter: mai shurong
>
> I submitted a big job, which has 500 maps and 350 reduces, to a queue (fairscheduler) with
> 300 max cores. When the big mapreduce job was running 100% of its maps, the 300 reduces had
> occupied all 300 max cores in the queue. Then a map failed and was retried, waiting for a
> core, while the 300 reduces were waiting for the failed map to finish. So a deadlock occurred.
> As a result, the job was blocked, and later jobs in the queue could not run because there were
> no available cores in the queue.
> I think there is a similar issue for the memory limit of a queue.
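The allocation arithmetic behind the reported deadlock can be sketched as follows. This is a minimal illustrative check, not Hadoop code: the function name `queue_deadlocked` and the assumption that every map and reduce task needs exactly one vcore are hypothetical simplifications of the scenario in the description.

```python
def queue_deadlocked(queue_max_cores, running_reduce_cores, pending_map_cores):
    """Sketch of the YARN-3416 scenario: a retried map is starved when
    running reduces hold every vcore the queue allows and nothing
    preempts them, while those reduces in turn wait for the map's output.
    Assumes one vcore per task (a simplification)."""
    free_cores = queue_max_cores - running_reduce_cores
    # Deadlock condition: a map is waiting for a core, but the queue's
    # entire core budget is held by reduces that cannot finish without it.
    return pending_map_cores > 0 and free_cores <= 0

# The reported case: 300 reduces hold all 300 queue cores while one
# failed-and-retried map waits for a single core.
print(queue_deadlocked(300, 300, 1))   # True  - circular wait
print(queue_deadlocked(300, 299, 1))   # False - one free core lets the map run
```

Under this view, anything that frees even one core for the map (reduce preemption, or delaying reduce start via the slow-start ratio) breaks the circular wait.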



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
