mesos-issues mailing list archives

From "Zhitao Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MESOS-8038) Launching GPU task sporadically fails.
Date Fri, 10 Aug 2018 17:49:00 GMT

    [ https://issues.apache.org/jira/browse/MESOS-8038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16576641#comment-16576641 ]

Zhitao Li commented on MESOS-8038:
----------------------------------

[~gilbert] I don't think we will use forever. My plan is to use a value like 10mins for this
flag after the backport, then observe whether the new timeout works.

[~bmahler] I agree that we are not really fixing the root cause here. I'll link the patches
to a new task MESOS-9148 and keep this one open instead.

> Launching GPU task sporadically fails.
> --------------------------------------
>
>                 Key: MESOS-8038
>                 URL: https://issues.apache.org/jira/browse/MESOS-8038
>             Project: Mesos
>          Issue Type: Bug
>          Components: allocation, containerization, gpu
>    Affects Versions: 1.4.0
>            Reporter: Sai Teja Ranuva
>            Assignee: Zhitao Li
>            Priority: Critical
>         Attachments: mesos-master.log, mesos-slave-with-issue-uber.txt, mesos-slave.INFO.log
>
>
> I was running a job which uses GPUs. It runs fine most of the time,
> but occasionally I see the following message in the mesos log:
> "Collect failed: Requested 1 but only 0 available"
> followed by the executor getting killed and the tasks getting lost. This happens even
> before the job starts. A little search in the code base points me to something related to
> GPU resources being the probable cause.
> There is no deterministic way to reproduce this; it happens occasionally.
> I have attached the slave log for the issue.
> Using 1.4.0 Mesos Master and 1.4.0 Mesos Slave.
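For context, here is a minimal C++ sketch of the kind of availability check that produces a
message shaped like the one quoted above. This is not the actual Mesos source; all names
here (Gpu, allocate) are hypothetical stand-ins. It only illustrates the failure mode: if
the set of free GPUs is momentarily empty, e.g. because a prior container's resources have
not been released yet (one plausible reading of this ticket), a request for 1 GPU fails
with "Requested 1 but only 0 available".

// Minimal sketch only -- hypothetical names, not the Mesos source.
// Models an allocator that hands out GPUs from a free set and fails
// when fewer are free than requested.
#include <cstddef>
#include <iostream>
#include <set>
#include <string>

struct Gpu
{
  unsigned int id;
  bool operator<(const Gpu& other) const { return id < other.id; }
};

// If `requested` exceeds the free set, fail with a message shaped like
// the one in the log; otherwise move GPUs from `available` into `out`.
bool allocate(
    std::set<Gpu>& available,
    size_t requested,
    std::set<Gpu>* out,
    std::string* error)
{
  if (requested > available.size()) {
    *error = "Requested " + std::to_string(requested) +
             " but only " + std::to_string(available.size()) + " available";
    return false;
  }

  for (size_t i = 0; i < requested; ++i) {
    out->insert(*available.begin());
    available.erase(available.begin());
  }

  return true;
}

int main()
{
  // Empty free set: e.g. GPUs still held by an earlier container whose
  // cleanup has not finished yet (assumption for illustration).
  std::set<Gpu> available;
  std::set<Gpu> allocated;
  std::string error;

  if (!allocate(available, 1, &allocated, &error)) {
    std::cout << "Collect failed: " << error << std::endl;
  }

  return 0;
}

Running this prints "Collect failed: Requested 1 but only 0 available", matching the log
line in the report; the intermittent nature of the bug would come from the free set being
empty only in a transient window.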



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
