mesos-issues mailing list archives

From "Niklas Quarfot Nielsen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MESOS-2346) Docker tasks exiting normally, but returning TASK_FAILED
Date Tue, 17 Feb 2015 00:42:12 GMT

    [ https://issues.apache.org/jira/browse/MESOS-2346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323493#comment-14323493 ]

Niklas Quarfot Nielsen commented on MESOS-2346:
-----------------------------------------------

[~tnachen] Is this something you can take a look at? If it is not a blocker, let's bump to 0.23.0.

> Docker tasks exiting normally, but returning TASK_FAILED
> --------------------------------------------------------
>
>                 Key: MESOS-2346
>                 URL: https://issues.apache.org/jira/browse/MESOS-2346
>             Project: Mesos
>          Issue Type: Bug
>          Components: docker
>    Affects Versions: 0.22.0
>            Reporter: Brenden Matthews
>            Priority: Blocker
>
> Docker tasks that exit normally return TASK_FAILED rather than TASK_FINISHED. The problem seems to occur only after `mesos-slave` has been running for some time; if the slave is restarted, it begins returning TASK_FINISHED correctly.
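> For context, a minimal, self-contained C++ sketch of the race visible in the log below; this is not the actual Mesos slave code, and all names and the fallback policy here are hypothetical. The container exits with status 0, but the slave reaps the executor before the executor's own TASK_FINISHED update is processed, so the slave falls back to reporting TASK_FAILED.
> {noformat}
> // Hedged sketch: hypothetical names; illustrates the observed race, not Mesos internals.
> #include <iostream>
> #include <optional>
>
> enum class TaskState { TASK_RUNNING, TASK_FINISHED, TASK_FAILED };
>
> // What terminal state does the slave report when it reaps the executor?
> TaskState terminalStateOnExecutorExit(int exitStatus,
>                                       std::optional<TaskState> lastUpdate) {
>   // If the executor's terminal update was already handled, keep it.
>   if (lastUpdate == TaskState::TASK_FINISHED) {
>     return TaskState::TASK_FINISHED;
>   }
>   // Hypothetical fallback: no terminal update seen yet, so the task is
>   // marked failed even though the container exited with status 0.
>   (void)exitStatus;
>   return TaskState::TASK_FAILED;
> }
>
> int main() {
>   // The log below shows exactly this ordering: the executor exits with
>   // status 0, the slave generates TASK_FAILED (from @0.0.0.0:0), and the
>   // executor's own TASK_RUNNING/TASK_FINISHED updates arrive afterwards.
>   TaskState s = terminalStateOnExecutorExit(0, std::nullopt);
>   std::cout << (s == TaskState::TASK_FINISHED ? "TASK_FINISHED" : "TASK_FAILED")
>             << std::endl;  // prints TASK_FAILED, matching the symptom
> }
> {noformat}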
> Sample slave log:
> {noformat}
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.483464
  798 slave.cpp:1138] Got assigned task ct:1423696932164:2:canary: for framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.483667
  798 slave.cpp:3854] Checkpointing FrameworkInfo to '/tmp/mesos/meta/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001/framework.info'
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.483894
  798 slave.cpp:3861] Checkpointing framework pid 'scheduler-f4679749-d7ad-4d8c-b610-f7043332d243@10.102.188.213:56385'
to '/tmp/mesos/meta/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001/framework.pid'
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.484426
  798 gc.cpp:84] Unscheduling '/tmp/mesos/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001'
from gc
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.484648
  797 gc.cpp:84] Unscheduling '/tmp/mesos/meta/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001'
from gc
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.484748
  797 slave.cpp:1253] Launching task ct:1423696932164:2:canary: for framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.485697
  797 slave.cpp:4297] Checkpointing ExecutorInfo to '/tmp/mesos/meta/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001/executors/ct:1423696932164:2:canary:/executor.info'
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.485999
  797 slave.cpp:3929] Launching executor ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
in work directory '/tmp/mesos/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001/executors/ct:1423696932164:2:canary:/runs/5395b133-d10d-4204-999e-4a38c03c55f5'
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.486212
  797 slave.cpp:4320] Checkpointing TaskInfo to '/tmp/mesos/meta/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001/executors/ct:1423696932164:2:canary:/runs/5395b133-d10d-4204-999e-4a38c03c55f5/tasks/ct:1423696932164:2:canary:/task.info'
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.509457
  797 slave.cpp:1376] Queuing task 'ct:1423696932164:2:canary:' for executor ct:1423696932164:2:canary:
of framework '20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.510926
  797 slave.cpp:574] Successfully attached file '/tmp/mesos/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001/executors/ct:1423696932164:2:canary:/runs/5395b133-d10d-4204-999e-4a38c03c55f5'
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.516738
  799 docker.cpp:581] Starting container '5395b133-d10d-4204-999e-4a38c03c55f5' for task 'ct:1423696932164:2:canary:'
(and executor 'ct:1423696932164:2:canary:') of framework '20150211-045421-1401302794-5050-714-0001'
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.516968
  799 docker.cpp:808] Running docker inspect mesosphere/test-suite:latest
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.748788
  800 docker.cpp:253] Docker pull mesosphere/test-suite completed
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.749068
  800 docker.cpp:462] Running docker run -d -c 102 -m 134217728 -e mesos_task_id=ct:1423696932164:2:canary:
-e CHRONOS_JOB_OWNER= -e CHRONOS_JOB_NAME=canary -e HOST=slave0.test-suite.msphere.co -e CHRONOS_RESOURCE_MEM=128.0
-e CHRONOS_RESOURCE_CPU=0.1 -e CHRONOS_RESOURCE_DISK=256.0 -e MESOS_SANDBOX=/mnt/mesos/sandbox
-v /opt/mesosphere:/opt/mesosphere:ro -v /tmp/mesos/slaves/20150211-045421-1401302794-5050-714-S0/docker/links/5395b133-d10d-4204-999e-4a38c03c55f5:/mnt/mesos/sandbox
--net bridge --entrypoint /bin/sh --name mesos-5395b133-d10d-4204-999e-4a38c03c55f5 mesosphere/test-suite
-c start ./tests/chronos-canary
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: W0211 23:22:13.949070
  799 containerizer.cpp:296] CommandInfo.grace_period flag is not set, using default value:
3secs
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.971562
  799 docker.cpp:275] Checkpointing pid 26855 to '/tmp/mesos/meta/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001/executors/ct:1423696932164:2:canary:/runs/5395b133-d10d-4204-999e-4a38c03c55f5/pids/forked.pid'
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.972506
  799 docker.cpp:674] Running logs() {
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: docker logs --follow
$1 &
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: pid=$!
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: docker wait $1 >/dev/null
2>&1
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: sleep 10
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: kill -TERM $pid >/dev/null
2>&1 &
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: }
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: logs mesos-5395b133-d10d-4204-999e-4a38c03c55f5
> Feb 11 23:22:13 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:13.995852
  796 slave.cpp:2898] Monitoring executor 'ct:1423696932164:2:canary:' of framework '20150211-045421-1401302794-5050-714-0001'
in container '5395b133-d10d-4204-999e-4a38c03c55f5'
> Feb 11 23:22:15 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:15.177346
  800 docker.cpp:568] Running docker inspect mesos-5395b133-d10d-4204-999e-4a38c03c55f5
> Feb 11 23:22:15 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:15.278367
  802 monitor.cpp:142] Failed to collect resource usage for container '5395b133-d10d-4204-999e-4a38c03c55f5'
for executor 'ct:1423696932164:2:canary:' of framework '20150211-045421-1401302794-5050-714-0001':
Container is not running
> Feb 11 23:22:16 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:16.249675
  795 slave.cpp:1920] Got registration for executor 'ct:1423696932164:2:canary:' of framework
20150211-045421-1401302794-5050-714-0001 from executor(1)@10.102.188.213:48220
> Feb 11 23:22:16 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:16.249784
  795 slave.cpp:2006] Checkpointing executor pid 'executor(1)@10.102.188.213:48220' to '/tmp/mesos/meta/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001/executors/ct:1423696932164:2:canary:/runs/5395b133-d10d-4204-999e-4a38c03c55f5/pids/libprocess.pid'
> Feb 11 23:22:16 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:16.250849
  798 docker.cpp:568] Running docker inspect mesos-5395b133-d10d-4204-999e-4a38c03c55f5
> Feb 11 23:22:16 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:16.250902
  795 slave.cpp:2039] Flushing queued task ct:1423696932164:2:canary: for executor 'ct:1423696932164:2:canary:'
of framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:16 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:16.491888
  795 docker.cpp:568] Running docker inspect mesos-5395b133-d10d-4204-999e-4a38c03c55f5
> Feb 11 23:22:16 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:16.615794
  796 monitor.cpp:142] Failed to collect resource usage for container '5395b133-d10d-4204-999e-4a38c03c55f5'
for executor 'ct:1423696932164:2:canary:' of framework '20150211-045421-1401302794-5050-714-0001':
Container is not running
> Feb 11 23:22:17 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:17.770376
  795 docker.cpp:568] Running docker inspect mesos-5395b133-d10d-4204-999e-4a38c03c55f5
> Feb 11 23:22:17 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:17.871429
  802 monitor.cpp:142] Failed to collect resource usage for container '5395b133-d10d-4204-999e-4a38c03c55f5'
for executor 'ct:1423696932164:2:canary:' of framework '20150211-045421-1401302794-5050-714-0001':
Container is not running
> Feb 11 23:22:19 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:19.033041
  797 docker.cpp:568] Running docker inspect mesos-5395b133-d10d-4204-999e-4a38c03c55f5
> Feb 11 23:22:19 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:19.133776
  795 monitor.cpp:142] Failed to collect resource usage for container '5395b133-d10d-4204-999e-4a38c03c55f5'
for executor 'ct:1423696932164:2:canary:' of framework '20150211-045421-1401302794-5050-714-0001':
Container is not running
> Feb 11 23:22:20 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:20.293743
  797 docker.cpp:568] Running docker inspect mesos-5395b133-d10d-4204-999e-4a38c03c55f5
> Feb 11 23:22:20 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:20.394327
  796 monitor.cpp:142] Failed to collect resource usage for container '5395b133-d10d-4204-999e-4a38c03c55f5'
for executor 'ct:1423696932164:2:canary:' of framework '20150211-045421-1401302794-5050-714-0001':
Container is not running
> Feb 11 23:22:21 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:21.394729
  798 docker.cpp:568] Running docker inspect mesos-5395b133-d10d-4204-999e-4a38c03c55f5
> Feb 11 23:22:21 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:21.517295
  801 monitor.cpp:142] Failed to collect resource usage for container '5395b133-d10d-4204-999e-4a38c03c55f5'
for executor 'ct:1423696932164:2:canary:' of framework '20150211-045421-1401302794-5050-714-0001':
Container is not running
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.517639
  796 docker.cpp:568] Running docker inspect mesos-5395b133-d10d-4204-999e-4a38c03c55f5
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.577785
  798 monitor.cpp:142] Failed to collect resource usage for container '5395b133-d10d-4204-999e-4a38c03c55f5'
for executor 'ct:1423696932164:2:canary:' of framework '20150211-045421-1401302794-5050-714-0001':
Container is not running
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.778309
  797 docker.cpp:1333] Executor for container '5395b133-d10d-4204-999e-4a38c03c55f5' has exited
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.778420
  797 docker.cpp:1159] Destroying container '5395b133-d10d-4204-999e-4a38c03c55f5'
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.778465
  797 docker.cpp:1248] Running docker stop on container '5395b133-d10d-4204-999e-4a38c03c55f5'
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.778513
  797 docker.cpp:502] Running docker stop -t 0 mesos-5395b133-d10d-4204-999e-4a38c03c55f5
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.878623
  800 slave.cpp:2956] Executor 'ct:1423696932164:2:canary:' of framework 20150211-045421-1401302794-5050-714-0001
exited with status 0
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.879515
  800 slave.cpp:2273] Handling status update TASK_FAILED (UUID: da8d7b10-f9f8-45e9-aaea-8765e1ae0244)
for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
from @0.0.0.0:0
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.879593
  800 slave.cpp:4237] Terminating task ct:1423696932164:2:canary:
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: W0211 23:22:22.879900
  802 docker.cpp:841] Ignoring updating unknown container: 5395b133-d10d-4204-999e-4a38c03c55f5
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.880166
  798 status_update_manager.cpp:317] Received status update TASK_FAILED (UUID: da8d7b10-f9f8-45e9-aaea-8765e1ae0244)
for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.880208
  798 status_update_manager.cpp:494] Creating StatusUpdate stream for task ct:1423696932164:2:canary:
of framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.880494
  798 status_update_manager.hpp:346] Checkpointing UPDATE for status update TASK_FAILED (UUID:
da8d7b10-f9f8-45e9-aaea-8765e1ae0244) for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.882587
  798 status_update_manager.cpp:371] Forwarding update TASK_FAILED (UUID: da8d7b10-f9f8-45e9-aaea-8765e1ae0244)
for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
to the slave
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.882726
  802 slave.cpp:2516] Forwarding the update TASK_FAILED (UUID: da8d7b10-f9f8-45e9-aaea-8765e1ae0244)
for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
to master@10.47.134.83:5050
> Feb 11 23:22:22 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:22.882962
  796 slave.cpp:2443] Status update manager successfully handled status update TASK_FAILED
(UUID: da8d7b10-f9f8-45e9-aaea-8765e1ae0244) for task ct:1423696932164:2:canary: of framework
20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:24 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:24.213045
  796 slave.cpp:2273] Handling status update TASK_RUNNING (UUID: 4fcf146f-fe13-474c-8d58-7a3616b2632f)
for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
from executor(1)@10.102.188.213:48220
> Feb 11 23:22:24 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:24.213256
  796 slave.cpp:2273] Handling status update TASK_FINISHED (UUID: a272c6ea-3b78-4515-90fe-e797b1a062db)
for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
from executor(1)@10.102.188.213:48220
> Feb 11 23:22:24 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:24.213551
  799 status_update_manager.cpp:317] Received status update TASK_RUNNING (UUID: 4fcf146f-fe13-474c-8d58-7a3616b2632f)
for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:24 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:24.213608
  799 status_update_manager.hpp:346] Checkpointing UPDATE for status update TASK_RUNNING (UUID:
4fcf146f-fe13-474c-8d58-7a3616b2632f) for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:24 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:24.215323
  799 status_update_manager.cpp:317] Received status update TASK_FINISHED (UUID: a272c6ea-3b78-4515-90fe-e797b1a062db)
for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:24 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:24.215348
  800 slave.cpp:2443] Status update manager successfully handled status update TASK_RUNNING
(UUID: 4fcf146f-fe13-474c-8d58-7a3616b2632f) for task ct:1423696932164:2:canary: of framework
20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:24 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:24.215386
  799 status_update_manager.hpp:346] Checkpointing UPDATE for status update TASK_FINISHED
(UUID: a272c6ea-3b78-4515-90fe-e797b1a062db) for task ct:1423696932164:2:canary: of framework
20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:24 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:24.215417
  800 slave.cpp:2449] Sending acknowledgement for status update TASK_RUNNING (UUID: 4fcf146f-fe13-474c-8d58-7a3616b2632f)
for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
to executor(1)@10.102.188.213:48220
> Feb 11 23:22:24 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:24.217686
  799 slave.cpp:2443] Status update manager successfully handled status update TASK_FINISHED
(UUID: a272c6ea-3b78-4515-90fe-e797b1a062db) for task ct:1423696932164:2:canary: of framework
20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:24 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:24.217746
  799 slave.cpp:2449] Sending acknowledgement for status update TASK_FINISHED (UUID: a272c6ea-3b78-4515-90fe-e797b1a062db)
for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
to executor(1)@10.102.188.213:48220
> Feb 11 23:22:26 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:26.608397
  803 poll_socket.cpp:93] Socket error while connecting
> Feb 11 23:22:26 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:26.608513
  803 process.cpp:1543] Failed to send, connect: Socket error while connecting
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.793500
  798 slave.cpp:2596] Received ping from slave-observer(1)@10.47.134.83:5050
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.793529
  797 status_update_manager.cpp:389] Received status update acknowledgement (UUID: da8d7b10-f9f8-45e9-aaea-8765e1ae0244)
for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.793663
  797 status_update_manager.hpp:346] Checkpointing ACK for status update TASK_FAILED (UUID:
da8d7b10-f9f8-45e9-aaea-8765e1ae0244) for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: W0211 23:22:27.795445
  797 status_update_manager.cpp:443] Acknowledged a terminal status update TASK_FAILED (UUID:
da8d7b10-f9f8-45e9-aaea-8765e1ae0244) for task ct:1423696932164:2:canary: of framework 20150211-045421-1401302794-5050-714-0001
but updates are still pending
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.795526
  797 status_update_manager.cpp:525] Cleaning up status update stream for task ct:1423696932164:2:canary:
of framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.795626
  797 slave.cpp:1860] Status update manager successfully handled status update acknowledgement
(UUID: da8d7b10-f9f8-45e9-aaea-8765e1ae0244) for task ct:1423696932164:2:canary: of framework
20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.795656
  797 slave.cpp:4276] Completing task ct:1423696932164:2:canary:
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.795703
  797 slave.cpp:3065] Cleaning up executor 'ct:1423696932164:2:canary:' of framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.795945
  799 gc.cpp:56] Scheduling '/tmp/mesos/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001/executors/ct:1423696932164:2:canary:/runs/5395b133-d10d-4204-999e-4a38c03c55f5'
for gc 6.99999078895704days in the future
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.795963
  797 slave.cpp:3144] Cleaning up framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.796028
  799 gc.cpp:56] Scheduling '/tmp/mesos/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001/executors/ct:1423696932164:2:canary:'
for gc 6.99999078828741days in the future
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.796077
  800 status_update_manager.cpp:279] Closing status update streams for framework 20150211-045421-1401302794-5050-714-0001
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.796077
  799 gc.cpp:56] Scheduling '/tmp/mesos/meta/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001/executors/ct:1423696932164:2:canary:/runs/5395b133-d10d-4204-999e-4a38c03c55f5'
for gc 6.99999078791704days in the future
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.796175
  799 gc.cpp:56] Scheduling '/tmp/mesos/meta/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001/executors/ct:1423696932164:2:canary:'
for gc 6.99999078757926days in the future
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.796210
  799 gc.cpp:56] Scheduling '/tmp/mesos/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001'
for gc 6.99999078613333days in the future
> Feb 11 23:22:27 ip-10-102-188-213.ec2.internal mesos-slave[793]: I0211 23:22:27.796237
  799 gc.cpp:56] Scheduling '/tmp/mesos/meta/slaves/20150211-045421-1401302794-5050-714-S0/frameworks/20150211-045421-1401302794-5050-714-0001'
for gc 6.99999078585481days in the future
> {noformat}



