mesos-issues mailing list archives

From "haosdent (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MESOS-6273) SlaveRecoveryTest/0.KillTaskWithHTTPExecutor is flaky
Date Thu, 29 Sep 2016 05:19:20 GMT

    [ https://issues.apache.org/jira/browse/MESOS-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15531824#comment-15531824 ] 

haosdent commented on MESOS-6273:
---------------------------------

Orz, so fast

> SlaveRecoveryTest/0.KillTaskWithHTTPExecutor is flaky
> -----------------------------------------------------
>
>                 Key: MESOS-6273
>                 URL: https://issues.apache.org/jira/browse/MESOS-6273
>             Project: Mesos
>          Issue Type: Bug
>            Reporter: haosdent
>            Assignee: haosdent
>              Labels: flaky-test
>
> From apache jenkins CI {{autotools,clang,--verbose --enable-libevent --enable-ssl,GLOG_v=1 MESOS_VERBOSE=1,ubuntu:14.04,(docker||Hadoop)&&(!ubuntu-us1)&&(!ubuntu-6) #2696}}
> {code}
> [ RUN      ] SlaveRecoveryTest/0.KillTaskWithHTTPExecutor
> I0927 20:22:42.802944 28664 cluster.cpp:158] Creating default 'local' authorizer
> I0927 20:22:42.805765 28664 leveldb.cpp:174] Opened db in 2.526505ms
> I0927 20:22:42.806769 28664 leveldb.cpp:181] Compacted db in 975309ns
> I0927 20:22:42.806833 28664 leveldb.cpp:196] Created db iterator in 44129ns
> I0927 20:22:42.806854 28664 leveldb.cpp:202] Seeked to beginning of db in 10263ns
> I0927 20:22:42.806869 28664 leveldb.cpp:271] Iterated through 0 keys in the db in 7411ns
> I0927 20:22:42.806915 28664 replica.cpp:776] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
> I0927 20:22:42.807858 28691 recover.cpp:451] Starting replica recovery
> I0927 20:22:42.808078 28691 recover.cpp:477] Replica is in EMPTY status
> I0927 20:22:42.809240 28695 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from __req_res__(5489)@172.17.0.1:46389
> I0927 20:22:42.809509 28694 recover.cpp:197] Received a recover response from a replica in EMPTY status
> I0927 20:22:42.810907 28694 recover.cpp:568] Updating replica status to STARTING
> I0927 20:22:42.811799 28694 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 642611ns
> I0927 20:22:42.811836 28694 replica.cpp:320] Persisted replica status to STARTING
> I0927 20:22:42.812047 28694 master.cpp:380] Master 94efab88-1953-4bb1-90e5-fff7ca94a463 (67ecb3fe2a7a) started on 172.17.0.1:46389
> I0927 20:22:42.812110 28696 recover.cpp:477] Replica is in STARTING status
> I0927 20:22:42.812077 28694 master.cpp:382] Flags at startup: --acls="" --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins" --allocation_interval="1secs" --allocator="HierarchicalDRF" --authenticate_agents="true" --authenticate_frameworks="true" --authenticate_http_frameworks="true" --authenticate_http_readonly="true" --authenticate_http_readwrite="true" --authenticators="crammd5" --authorizers="local" --credentials="/tmp/u4JjqY/credentials" --framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_framework_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --quiet="false" --recovery_agent_removal_limit="100%" --registry="replicated_log" --registry_fetch_timeout="1mins" --registry_gc_interval="15mins" --registry_max_agent_age="2weeks" --registry_max_agent_count="102400" --registry_store_timeout="100secs" --registry_strict="false" --root_submissions="true" --user_sorter="drf" --version="false" --webui_dir="/mesos/mesos-1.1.0/_inst/share/mesos/webui" --work_dir="/tmp/u4JjqY/master" --zk_session_timeout="10secs"
> I0927 20:22:42.812551 28694 master.cpp:432] Master only allowing authenticated frameworks to register
> I0927 20:22:42.812566 28694 master.cpp:446] Master only allowing authenticated agents to register
> I0927 20:22:42.812574 28694 master.cpp:459] Master only allowing authenticated HTTP frameworks to register
> I0927 20:22:42.812584 28694 credentials.hpp:37] Loading credentials for authentication from '/tmp/u4JjqY/credentials'
> I0927 20:22:42.812896 28694 master.cpp:504] Using default 'crammd5' authenticator
> I0927 20:22:42.813056 28694 http.cpp:883] Using default 'basic' HTTP authenticator for realm 'mesos-master-readonly'
> I0927 20:22:42.813194 28687 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from __req_res__(5490)@172.17.0.1:46389
> I0927 20:22:42.813211 28694 http.cpp:883] Using default 'basic' HTTP authenticator for realm 'mesos-master-readwrite'
> I0927 20:22:42.813489 28694 http.cpp:883] Using default 'basic' HTTP authenticator for realm 'mesos-master-scheduler'
> I0927 20:22:42.813503 28685 recover.cpp:197] Received a recover response from a replica in STARTING status
> I0927 20:22:42.814020 28685 recover.cpp:568] Updating replica status to VOTING
> I0927 20:22:42.814548 28694 master.cpp:584] Authorization enabled
> I0927 20:22:42.814710 28682 hierarchical.cpp:149] Initialized hierarchical allocator process
> I0927 20:22:42.814774 28682 whitelist_watcher.cpp:77] No whitelist given
> I0927 20:22:42.816146 28685 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.977879ms
> I0927 20:22:42.816187 28685 replica.cpp:320] Persisted replica status to VOTING
> I0927 20:22:42.816331 28685 recover.cpp:582] Successfully joined the Paxos group
> I0927 20:22:42.816543 28685 recover.cpp:466] Recover process terminated
> I0927 20:22:42.818918 28690 master.cpp:2013] Elected as the leading master!
> I0927 20:22:42.818948 28690 master.cpp:1560] Recovering from registrar
> I0927 20:22:42.819133 28684 registrar.cpp:329] Recovering registrar
> I0927 20:22:42.820485 28686 log.cpp:553] Attempting to start the writer
> I0927 20:22:42.821985 28687 replica.cpp:493] Replica received implicit promise request from __req_res__(5491)@172.17.0.1:46389 with proposal 1
> I0927 20:22:42.822475 28687 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 457812ns
> I0927 20:22:42.822509 28687 replica.cpp:342] Persisted promised to 1
> I0927 20:22:42.823482 28686 coordinator.cpp:238] Coordinator attempting to fill missing positions
> I0927 20:22:42.825855 28686 replica.cpp:388] Replica received explicit promise request from __req_res__(5492)@172.17.0.1:46389 for position 0 with proposal 2
> I0927 20:22:42.826539 28686 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 610611ns
> I0927 20:22:42.826581 28686 replica.cpp:708] Persisted action NOP at position 0
> I0927 20:22:42.828265 28687 replica.cpp:537] Replica received write request for position 0 from __req_res__(5493)@172.17.0.1:46389
> I0927 20:22:42.828363 28687 leveldb.cpp:436] Reading position from leveldb took 60107ns
> I0927 20:22:42.829083 28687 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 685510ns
> I0927 20:22:42.829118 28687 replica.cpp:708] Persisted action NOP at position 0
> I0927 20:22:42.830044 28687 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0
> I0927 20:22:42.830966 28687 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 909605ns
> I0927 20:22:42.830998 28687 replica.cpp:708] Persisted action NOP at position 0
> I0927 20:22:42.831933 28693 log.cpp:569] Writer started with ending position 0
> I0927 20:22:42.833644 28695 leveldb.cpp:436] Reading position from leveldb took 91245ns
> I0927 20:22:42.835265 28685 registrar.cpp:362] Successfully fetched the registry (0B) in 15.81696ms
> I0927 20:22:42.835427 28685 registrar.cpp:461] Applied 1 operations in 48122ns; attempting to update the registry
> I0927 20:22:42.836376 28692 log.cpp:577] Attempting to append 168 bytes to the log
> I0927 20:22:42.836663 28685 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1
> I0927 20:22:42.838640 28696 replica.cpp:537] Replica received write request for position 1 from __req_res__(5494)@172.17.0.1:46389
> I0927 20:22:42.839269 28696 leveldb.cpp:341] Persisting action (187 bytes) to leveldb took 580665ns
> I0927 20:22:42.839306 28696 replica.cpp:708] Persisted action APPEND at position 1
> I0927 20:22:42.840327 28682 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0
> I0927 20:22:42.840762 28682 leveldb.cpp:341] Persisting action (189 bytes) to leveldb took 414968ns
> I0927 20:22:42.840792 28682 replica.cpp:708] Persisted action APPEND at position 1
> I0927 20:22:42.842469 28682 registrar.cpp:506] Successfully updated the registry in 6.968832ms
> I0927 20:22:42.842615 28682 registrar.cpp:392] Successfully recovered registrar
> I0927 20:22:42.842895 28682 log.cpp:596] Attempting to truncate the log to 1
> I0927 20:22:42.843317 28682 master.cpp:1676] Recovered 0 agents from the registry (129B); allowing 10mins for agents to re-register
> I0927 20:22:42.843420 28682 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2
> I0927 20:22:42.843837 28682 hierarchical.cpp:176] Skipping recovery of hierarchical allocator: nothing to recover
> I0927 20:22:42.844843 28686 replica.cpp:537] Replica received write request for position 2 from __req_res__(5495)@172.17.0.1:46389
> I0927 20:22:42.845324 28686 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 448998ns
> I0927 20:22:42.845356 28686 replica.cpp:708] Persisted action TRUNCATE at position 2
> I0927 20:22:42.846235 28687 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0
> I0927 20:22:42.846657 28687 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 402893ns
> I0927 20:22:42.846714 28687 leveldb.cpp:399] Deleting ~1 keys from leveldb took 42108ns
> I0927 20:22:42.846738 28687 replica.cpp:708] Persisted action TRUNCATE at position 2
> I0927 20:22:42.850730 28664 containerizer.cpp:224] Using isolation: posix/cpu,posix/mem,filesystem/posix,network/cni
> W0927 20:22:42.851502 28664 backend.cpp:75] Failed to create 'aufs' backend: AufsBackend requires root privileges, but is running as user mesos
> W0927 20:22:42.851673 28664 backend.cpp:75] Failed to create 'bind' backend: BindBackend requires root privileges
> I0927 20:22:42.856077 28664 cluster.cpp:435] Creating default 'local' authorizer
> I0927 20:22:42.859477 28695 slave.cpp:208] Mesos agent started on @172.17.0.1:46389
> I0927 20:22:42.859531 28695 slave.cpp:209] Flags at startup: --acls="" --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/mesos/store/appc" --authenticate_http_readonly="true" --authenticate_http_readwrite="true" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --credential="/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_vFcjx8/credential" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_vFcjx8/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_command_executor="true" --http_credentials="/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_vFcjx8/http_credentials" --image_provisioner_backend="copy" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher="posix" --launcher_dir="/mesos/mesos-1.1.0/_build/src" --logbufsecs="0" --logging_level="INFO" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="10ms" 
> --resources="cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true" --runtime_dir="/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_vFcjx8" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3"
> I0927 20:22:42.860286 28695 credentials.hpp:86] Loading credential for authentication from '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_vFcjx8/credential'
> I0927 20:22:42.860481 28695 slave.cpp:346] Agent using credential for: test-principal
> I0927 20:22:42.860508 28695 credentials.hpp:37] Loading credentials for authentication from '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_vFcjx8/http_credentials'
> I0927 20:22:42.860698 28664 sched.cpp:226] Version: 1.1.0
> I0927 20:22:42.860808 28695 http.cpp:883] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readonly'
> I0927 20:22:42.860975 28695 http.cpp:883] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite'
> I0927 20:22:42.861469 28690 sched.cpp:330] New master detected at master@172.17.0.1:46389
> I0927 20:22:42.861549 28690 sched.cpp:396] Authenticating with master master@172.17.0.1:46389
> I0927 20:22:42.861570 28690 sched.cpp:403] Using default CRAM-MD5 authenticatee
> I0927 20:22:42.861873 28689 authenticatee.cpp:121] Creating new client SASL connection
> I0927 20:22:42.862231 28689 master.cpp:6583] Authenticating scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389
> I0927 20:22:42.862432 28689 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(820)@172.17.0.1:46389
> I0927 20:22:42.862515 28695 slave.cpp:533] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> I0927 20:22:42.862639 28695 slave.cpp:541] Agent attributes: [  ]
> I0927 20:22:42.862661 28695 slave.cpp:546] Agent hostname: 67ecb3fe2a7a
> I0927 20:22:42.862735 28689 authenticator.cpp:98] Creating new server SASL connection
> I0927 20:22:42.862967 28685 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5
> I0927 20:22:42.863034 28685 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5'
> I0927 20:22:42.863142 28685 authenticator.cpp:204] Received SASL authentication start
> I0927 20:22:42.863216 28685 authenticator.cpp:326] Authentication requires more steps
> I0927 20:22:42.864922 28683 authenticatee.cpp:259] Received SASL authentication step
> I0927 20:22:42.865025 28683 authenticator.cpp:232] Received SASL authentication step
> I0927 20:22:42.865056 28683 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '67ecb3fe2a7a' server FQDN: '67ecb3fe2a7a' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
> I0927 20:22:42.865070 28683 auxprop.cpp:181] Looking up auxiliary property '*userPassword'
> I0927 20:22:42.865113 28683 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> I0927 20:22:42.865135 28683 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '67ecb3fe2a7a' server FQDN: '67ecb3fe2a7a' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
> I0927 20:22:42.865146 28683 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I0927 20:22:42.865155 28683 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I0927 20:22:42.865157 28690 state.cpp:57] Recovering state from '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta'
> I0927 20:22:42.865172 28683 authenticator.cpp:318] Authentication success
> I0927 20:22:42.865327 28683 authenticatee.cpp:299] Authentication success
> I0927 20:22:42.865413 28688 master.cpp:6613] Successfully authenticated principal 'test-principal' at scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389
> I0927 20:22:42.865427 28695 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(820)@172.17.0.1:46389
> I0927 20:22:42.865450 28690 status_update_manager.cpp:203] Recovering status update manager
> I0927 20:22:42.865679 28697 containerizer.cpp:579] Recovering containerizer
> I0927 20:22:42.866066 28696 sched.cpp:502] Successfully authenticated with master master@172.17.0.1:46389
> I0927 20:22:42.866091 28696 sched.cpp:820] Sending SUBSCRIBE call to master@172.17.0.1:46389
> I0927 20:22:42.866418 28696 sched.cpp:853] Will retry registration in 1.860819469secs if necessary
> I0927 20:22:42.866598 28696 master.cpp:2587] Received SUBSCRIBE call for framework 'default' at scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389
> I0927 20:22:42.866623 28696 master.cpp:2049] Authorizing framework principal 'test-principal' to receive offers for role '*'
> I0927 20:22:42.867089 28696 master.cpp:2663] Subscribing framework default with checkpointing enabled and capabilities [  ]
> I0927 20:22:42.867385 28693 provisioner.cpp:253] Provisioner recovery complete
> I0927 20:22:42.867938 28693 hierarchical.cpp:272] Added framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:42.868013 28693 hierarchical.cpp:1691] No allocations performed
> I0927 20:22:42.868042 28693 hierarchical.cpp:1786] No inverse offers to send out!
> I0927 20:22:42.868101 28685 slave.cpp:5252] Finished recovery
> I0927 20:22:42.868288 28683 sched.cpp:743] Framework registered with 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:42.868345 28683 sched.cpp:757] Scheduler::registered took 35240ns
> I0927 20:22:42.868103 28693 hierarchical.cpp:1283] Performed allocation for 0 agents in 153664ns
> I0927 20:22:42.868726 28685 slave.cpp:5424] Querying resource estimator for oversubscribable resources
> I0927 20:22:42.869112 28688 slave.cpp:915] New master detected at master@172.17.0.1:46389
> I0927 20:22:42.869140 28688 slave.cpp:974] Authenticating with master master@172.17.0.1:46389
> I0927 20:22:42.869196 28688 slave.cpp:985] Using default CRAM-MD5 authenticatee
> I0927 20:22:42.869252 28691 status_update_manager.cpp:177] Pausing sending status updates
> I0927 20:22:42.869329 28688 slave.cpp:947] Detecting new master
> I0927 20:22:42.869397 28691 authenticatee.cpp:121] Creating new client SASL connection
> I0927 20:22:42.869660 28685 slave.cpp:5438] Received oversubscribable resources {} from the resource estimator
> I0927 20:22:42.869695 28691 master.cpp:6583] Authenticating agent@172.17.0.1:46389
> I0927 20:22:42.869792 28685 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(821)@172.17.0.1:46389
> I0927 20:22:42.870045 28690 authenticator.cpp:98] Creating new server SASL connection
> I0927 20:22:42.870249 28691 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5
> I0927 20:22:42.870290 28691 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5'
> I0927 20:22:42.870398 28691 authenticator.cpp:204] Received SASL authentication start
> I0927 20:22:42.870465 28691 authenticator.cpp:326] Authentication requires more steps
> I0927 20:22:42.870559 28691 authenticatee.cpp:259] Received SASL authentication step
> I0927 20:22:42.870666 28691 authenticator.cpp:232] Received SASL authentication step
> I0927 20:22:42.870703 28691 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '67ecb3fe2a7a' server FQDN: '67ecb3fe2a7a' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
> I0927 20:22:42.870728 28691 auxprop.cpp:181] Looking up auxiliary property '*userPassword'
> I0927 20:22:42.870775 28691 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> I0927 20:22:42.870812 28691 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '67ecb3fe2a7a' server FQDN: '67ecb3fe2a7a' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
> I0927 20:22:42.870895 28691 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I0927 20:22:42.870913 28691 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I0927 20:22:42.870935 28691 authenticator.cpp:318] Authentication success
> I0927 20:22:42.871033 28686 authenticatee.cpp:299] Authentication success
> I0927 20:22:42.871132 28691 master.cpp:6613] Successfully authenticated principal 'test-principal' at agent@172.17.0.1:46389
> I0927 20:22:42.871156 28686 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(821)@172.17.0.1:46389
> I0927 20:22:42.871502 28697 slave.cpp:1069] Successfully authenticated with master master@172.17.0.1:46389
> I0927 20:22:42.871675 28697 slave.cpp:1475] Will retry registration in 7.061481ms if necessary
> I0927 20:22:42.871968 28691 master.cpp:5040] Registering agent at agent@172.17.0.1:46389 (67ecb3fe2a7a) with id 94efab88-1953-4bb1-90e5-fff7ca94a463-S0
> I0927 20:22:42.872498 28697 registrar.cpp:461] Applied 1 operations in 78656ns; attempting to update the registry
> I0927 20:22:42.873193 28689 log.cpp:577] Attempting to append 337 bytes to the log
> I0927 20:22:42.873308 28697 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3
> I0927 20:22:42.874047 28691 replica.cpp:537] Replica received write request for position 3 from __req_res__(5496)@172.17.0.1:46389
> I0927 20:22:42.874275 28691 leveldb.cpp:341] Persisting action (356 bytes) to leveldb took 200651ns
> I0927 20:22:42.874305 28691 replica.cpp:708] Persisted action APPEND at position 3
> I0927 20:22:42.874833 28690 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0
> I0927 20:22:42.875566 28690 leveldb.cpp:341] Persisting action (358 bytes) to leveldb took 716161ns
> I0927 20:22:42.875592 28690 replica.cpp:708] Persisted action APPEND at position 3
> I0927 20:22:42.877126 28688 registrar.cpp:506] Successfully updated the registry in 4.549888ms
> I0927 20:22:42.877324 28683 log.cpp:596] Attempting to truncate the log to 3
> I0927 20:22:42.877461 28688 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4
> I0927 20:22:42.878018 28683 slave.cpp:4108] Received ping from slave-observer(380)@172.17.0.1:46389
> I0927 20:22:42.878095 28697 master.cpp:5111] Registered agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> I0927 20:22:42.878196 28687 replica.cpp:537] Replica received write request for position 4 from __req_res__(5497)@172.17.0.1:46389
> I0927 20:22:42.878196 28683 slave.cpp:1115] Registered with master master@172.17.0.1:46389; given agent ID 94efab88-1953-4bb1-90e5-fff7ca94a463-S0
> I0927 20:22:42.878222 28683 fetcher.cpp:86] Clearing fetcher cache
> I0927 20:22:42.878551 28684 status_update_manager.cpp:184] Resuming sending status updates
> I0927 20:22:42.878706 28683 slave.cpp:1138] Checkpointing SlaveInfo to '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/slave.info'
> I0927 20:22:42.878767 28687 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 554848ns
> I0927 20:22:42.878801 28687 replica.cpp:708] Persisted action TRUNCATE at position 4
> I0927 20:22:42.879078 28683 slave.cpp:1175] Forwarding total oversubscribed resources {}
> I0927 20:22:42.879233 28683 master.cpp:5508] Received update of agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a) with total oversubscribed resources {}
> I0927 20:22:42.879743 28689 hierarchical.cpp:482] Added agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 (67ecb3fe2a7a) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: {})
> I0927 20:22:42.881165 28689 hierarchical.cpp:1786] No inverse offers to send out!
> I0927 20:22:42.881237 28689 hierarchical.cpp:1306] Performed allocation for agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 in 1.457191ms
> I0927 20:22:42.881366 28689 hierarchical.cpp:552] Agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 (67ecb3fe2a7a) updated with oversubscribed resources {} (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000])
> I0927 20:22:42.881492 28689 hierarchical.cpp:1691] No allocations performed
> I0927 20:22:42.881521 28689 hierarchical.cpp:1786] No inverse offers to send out!
> I0927 20:22:42.881551 28687 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0
> I0927 20:22:42.881561 28689 hierarchical.cpp:1306] Performed allocation for agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 in 153899ns
> I0927 20:22:42.881750 28695 master.cpp:6412] Sending 1 offers to framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (default) at scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389
> I0927 20:22:42.882107 28687 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 542815ns
> I0927 20:22:42.882184 28687 leveldb.cpp:399] Deleting ~2 keys from leveldb took 61368ns
> I0927 20:22:42.882212 28687 replica.cpp:708] Persisted action TRUNCATE at position 4
> I0927 20:22:42.882455 28695 sched.cpp:917] Scheduler::resourceOffers took 124879ns
> I0927 20:22:42.884361 28687 master.cpp:3521] Processing ACCEPT call for offers: [ 94efab88-1953-4bb1-90e5-fff7ca94a463-O0 ] on agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a) for framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (default) at scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389
> I0927 20:22:42.884493 28687 master.cpp:3143] Authorizing framework principal 'test-principal' to launch task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0
> I0927 20:22:42.887362 28687 master.cpp:8159] Adding task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 (67ecb3fe2a7a)
> I0927 20:22:42.887794 28687 master.cpp:4121] Launching task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (default) at scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a)
> I0927 20:22:42.888658 28688 slave.cpp:1539] Got assigned task '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' for framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:42.888871 28688 slave.cpp:6061] Checkpointing FrameworkInfo to '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/framework.info'
> I0927 20:22:42.889307 28688 slave.cpp:6072] Checkpointing framework pid 'scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389' to '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/framework.pid'
> I0927 20:22:42.890205 28688 slave.cpp:1696] Launching task '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' for framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:42.891021 28688 paths.cpp:536] Trying to chown '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/executors/792c5e61-3ab6-488a-af6b-46aaa67e3cb0/runs/8e11209b-d0f2-499f-ab8c-49f74390827d' to user 'mesos'
> I0927 20:22:42.905624 28688 slave.cpp:6526] Checkpointing ExecutorInfo to '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/executors/792c5e61-3ab6-488a-af6b-46aaa67e3cb0/executor.info'
> I0927 20:22:42.906615 28688 slave.cpp:6141] Launching executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/executors/792c5e61-3ab6-488a-af6b-46aaa67e3cb0/runs/8e11209b-d0f2-499f-ab8c-49f74390827d'
> I0927 20:22:42.907464 28690 containerizer.cpp:929] Starting container 8e11209b-d0f2-499f-ab8c-49f74390827d for executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:42.911730 28688 slave.cpp:6549] Checkpointing TaskInfo to '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/executors/792c5e61-3ab6-488a-af6b-46aaa67e3cb0/runs/8e11209b-d0f2-499f-ab8c-49f74390827d/tasks/792c5e61-3ab6-488a-af6b-46aaa67e3cb0/task.info'
> I0927 20:22:42.912176 28691 containerizer.cpp:1383] Launching 'mesos-containerizer' with flags '--command="{"arguments":["mesos-executor","--launcher_dir=\/mesos\/mesos-1.1.0\/_build\/src"],"shell":false,"value":"\/mesos\/mesos-1.1.0\/_build\/src\/mesos-executor"}" --help="false" --pipe_read="23" --pipe_write="28" --pre_exec_commands="[]" --runtime_directory="/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_vFcjx8/containers/8e11209b-d0f2-499f-ab8c-49f74390827d" --unshare_namespace_mnt="false" --user="mesos" --working_directory="/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/executors/792c5e61-3ab6-488a-af6b-46aaa67e3cb0/runs/8e11209b-d0f2-499f-ab8c-49f74390827d"'
> I0927 20:22:42.912493 28688 slave.cpp:1982] Queued task '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' for executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:42.912595 28688 slave.cpp:868] Successfully attached file '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/executors/792c5e61-3ab6-488a-af6b-46aaa67e3cb0/runs/8e11209b-d0f2-499f-ab8c-49f74390827d'
> I0927 20:22:42.917382 28691 launcher.cpp:127] Forked child with pid '5553' for container '8e11209b-d0f2-499f-ab8c-49f74390827d'
> I0927 20:22:42.917712 28691 containerizer.cpp:1418] Checkpointing container's forked pid 5553 to '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/executors/792c5e61-3ab6-488a-af6b-46aaa67e3cb0/runs/8e11209b-d0f2-499f-ab8c-49f74390827d/pids/forked.pid'
> I0927 20:22:43.170495  5567 executor.cpp:189] Version: 1.1.0
> I0927 20:22:43.187121 28693 process.cpp:3336] Handling HTTP event for process 'agent' with path: '/agent/api/v1/executor'
> I0927 20:22:43.188115 28693 http.cpp:273] HTTP POST for /agent/api/v1/executor from 172.17.0.1:59439
> I0927 20:22:43.188429 28693 slave.cpp:3017] Received Subscribe request for HTTP executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.188537 28693 slave.cpp:3080] Creating a marker file for HTTP based executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (via HTTP) at path '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/executors/792c5e61-3ab6-488a-af6b-46aaa67e3cb0/runs/8e11209b-d0f2-499f-ab8c-49f74390827d/http.marker'
> I0927 20:22:43.192509 28693 slave.cpp:2186] Sending queued task '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' to executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (via HTTP)
> Received SUBSCRIBED event
> Subscribed executor on 67ecb3fe2a7a
> Received LAUNCH event
> Starting task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0
> /mesos/mesos-1.1.0/_build/src/mesos-containerizer launch --command="{"shell":true,"value":"sleep 1000"}" --help="false" --unshare_namespace_mnt="false"
> Forked command at 5598
> I0927 20:22:43.216197 28693 process.cpp:3336] Handling HTTP event for process 'agent' with path: '/agent/api/v1/executor'
> I0927 20:22:43.217299 28693 http.cpp:273] HTTP POST for /agent/api/v1/executor from 172.17.0.1:59440
> I0927 20:22:43.217643 28693 slave.cpp:3609] Handling status update TASK_RUNNING (UUID: 330c72a5-2fe2-420f-a7d5-69536ece1481) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.220669 28693 status_update_manager.cpp:323] Received status update TASK_RUNNING (UUID: 330c72a5-2fe2-420f-a7d5-69536ece1481) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.220953 28693 status_update_manager.cpp:500] Creating StatusUpdate stream for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.221915 28693 status_update_manager.cpp:832] Checkpointing UPDATE for status update TASK_RUNNING (UUID: 330c72a5-2fe2-420f-a7d5-69536ece1481) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.222390 28693 status_update_manager.cpp:377] Forwarding update TASK_RUNNING (UUID: 330c72a5-2fe2-420f-a7d5-69536ece1481) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 to the agent
> I0927 20:22:43.223430 28693 slave.cpp:4026] Forwarding the update TASK_RUNNING (UUID: 330c72a5-2fe2-420f-a7d5-69536ece1481) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 to master@172.17.0.1:46389
> I0927 20:22:43.223762 28693 slave.cpp:3920] Status update manager successfully handled status update TASK_RUNNING (UUID: 330c72a5-2fe2-420f-a7d5-69536ece1481) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.224381 28693 master.cpp:5638] Status update TASK_RUNNING (UUID: 330c72a5-2fe2-420f-a7d5-69536ece1481) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 from agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a)
> I0927 20:22:43.224946 28693 master.cpp:5700] Forwarding status update TASK_RUNNING (UUID: 330c72a5-2fe2-420f-a7d5-69536ece1481) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.225304 28693 master.cpp:7537] Updating the state of task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING)
> I0927 20:22:43.225636 28688 sched.cpp:1025] Scheduler::statusUpdate took 43175ns
> I0927 20:22:43.226263 28688 master.cpp:4756] Processing ACKNOWLEDGE call 330c72a5-2fe2-420f-a7d5-69536ece1481 for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (default) at scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389 on agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0
> I0927 20:22:43.226761 28688 status_update_manager.cpp:395] Received status update acknowledgement (UUID: 330c72a5-2fe2-420f-a7d5-69536ece1481) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> Received ACKNOWLEDGED event
> I0927 20:22:43.227613 28688 status_update_manager.cpp:832] Checkpointing ACK for status update TASK_RUNNING (UUID: 330c72a5-2fe2-420f-a7d5-69536ece1481) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.228097 28688 slave.cpp:2953] Status update manager successfully handled status update acknowledgement (UUID: 330c72a5-2fe2-420f-a7d5-69536ece1481) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.228987 28664 slave.cpp:787] Agent terminating
> I0927 20:22:43.229266 28696 master.cpp:1258] Agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a) disconnected
> I0927 20:22:43.229300 28696 master.cpp:2947] Disconnecting agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a)
> I0927 20:22:43.229368 28696 master.cpp:2966] Deactivating agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a)
> I0927 20:22:43.229540 28696 hierarchical.cpp:581] Agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 deactivated
> I0927 20:22:43.229789 28664 containerizer.cpp:224] Using isolation: posix/cpu,posix/mem,filesystem/posix,network/cni
> W0927 20:22:43.230619 28664 backend.cpp:75] Failed to create 'aufs' backend: AufsBackend requires root privileges, but is running as user mesos
> W0927 20:22:43.230805 28664 backend.cpp:75] Failed to create 'bind' backend: BindBackend requires root privileges
> I0927 20:22:43.245085 28664 cluster.cpp:435] Creating default 'local' authorizer
> I0927 20:22:43.248883 28691 slave.cpp:208] Mesos agent started on @172.17.0.1:46389
> I0927 20:22:43.248942 28691 slave.cpp:209] Flags at startup: --acls="" --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/mesos/store/appc" --authenticate_http_readonly="true" --authenticate_http_readwrite="true" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --credential="/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_vFcjx8/credential" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_vFcjx8/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_command_executor="true" --http_credentials="/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_vFcjx8/http_credentials" --image_provisioner_backend="copy" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher="posix" --launcher_dir="/mesos/mesos-1.1.0/_build/src" --logbufsecs="0" --logging_level="INFO" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="10ms" --resources="cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true" --runtime_dir="/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_vFcjx8" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3"
> I0927 20:22:43.249552 28691 credentials.hpp:86] Loading credential for authentication from '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_vFcjx8/credential'
> I0927 20:22:43.249742 28691 slave.cpp:346] Agent using credential for: test-principal
> I0927 20:22:43.249768 28691 credentials.hpp:37] Loading credentials for authentication from '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_vFcjx8/http_credentials'
> I0927 20:22:43.251060 28691 http.cpp:883] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readonly'
> I0927 20:22:43.251282 28691 http.cpp:883] Using default 'basic' HTTP authenticator for realm 'mesos-agent-readwrite'
> I0927 20:22:43.253619 28691 slave.cpp:533] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> I0927 20:22:43.253733 28691 slave.cpp:541] Agent attributes: [  ]
> I0927 20:22:43.253751 28691 slave.cpp:546] Agent hostname: 67ecb3fe2a7a
> E0927 20:22:43.255632  5596 executor.cpp:681] End-Of-File received from agent. The agent closed the event stream
> I0927 20:22:43.257987 28683 state.cpp:57] Recovering state from '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta'
> I0927 20:22:43.258085 28683 state.cpp:698] No committed checkpointed resources found at '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta/resources/resources.info'
> I0927 20:22:43.264153 28683 fetcher.cpp:86] Clearing fetcher cache
> I0927 20:22:43.264247 28683 slave.cpp:5340] Recovering framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.264328 28683 slave.cpp:6291] Recovering executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.265882 28691 status_update_manager.cpp:203] Recovering status update manager
> I0927 20:22:43.265915 28691 status_update_manager.cpp:211] Recovering executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.265996 28691 status_update_manager.cpp:500] Creating StatusUpdate stream for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.266665 28691 status_update_manager.cpp:808] Replaying status update stream for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0
> I0927 20:22:43.267539 28683 process.cpp:3336] Handling HTTP event for process 'agent' with path: '/agent/api/v1/executor'
> I0927 20:22:43.268721 28683 containerizer.cpp:579] Recovering containerizer
> I0927 20:22:43.268857 28683 containerizer.cpp:635] Recovering container 8e11209b-d0f2-499f-ab8c-49f74390827d for executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.269094 28694 http.cpp:273] HTTP POST for /agent/api/v1/executor from 172.17.0.1:59446
> I0927 20:22:43.269335 28694 slave.cpp:3017] Received Subscribe request for HTTP executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (via HTTP)
> I0927 20:22:43.269419 28694 slave.cpp:3080] Creating a marker file for HTTP based executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (via HTTP) at path '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/executors/792c5e61-3ab6-488a-af6b-46aaa67e3cb0/runs/8e11209b-d0f2-499f-ab8c-49f74390827d/http.marker'
> E0927 20:22:43.272727 28697 slave.cpp:2055] Failed to update resources for container 8e11209b-d0f2-499f-ab8c-49f74390827d of executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000, destroying container: Collect failed: Unknown container
> I0927 20:22:43.274351 28683 containerizer.cpp:1789] Destroying container 8e11209b-d0f2-499f-ab8c-49f74390827d
> Received SUBSCRIBED event
> Subscribed executor on 67ecb3fe2a7a
> I0927 20:22:43.286520 28690 provisioner.cpp:253] Provisioner recovery complete
> I0927 20:22:43.287446 28688 slave.cpp:5199] Waiting for executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (via HTTP) to subscribe
> I0927 20:22:43.355084 28697 containerizer.cpp:2101] Container 8e11209b-d0f2-499f-ab8c-49f74390827d has exited
> I0927 20:22:43.357300 28691 provisioner.cpp:438] Ignoring destroy request for unknown container 8e11209b-d0f2-499f-ab8c-49f74390827d
> I0927 20:22:43.358074 28693 slave.cpp:4517] Executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 terminated with signal Killed
> I0927 20:22:43.358239 28693 slave.cpp:3609] Handling status update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 from @0.0.0.0:0
> W0927 20:22:43.359275 28696 containerizer.cpp:1609] Ignoring update for unknown container 8e11209b-d0f2-499f-ab8c-49f74390827d
> I0927 20:22:43.359694 28696 status_update_manager.cpp:323] Received status update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.359776 28696 status_update_manager.cpp:832] Checkpointing UPDATE for status update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.360108 28696 status_update_manager.cpp:377] Forwarding update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 to the agent
> W0927 20:22:43.360538 28696 slave.cpp:3972] Dropping status update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 sent by status update manager because the agent is in RECOVERING state
> I0927 20:22:43.360622 28696 slave.cpp:3920] Status update manager successfully handled status update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:43.815976 28693 hierarchical.cpp:1691] No allocations performed
> I0927 20:22:43.816056 28693 hierarchical.cpp:1786] No inverse offers to send out!
> I0927 20:22:43.816140 28693 hierarchical.cpp:1283] Performed allocation for 1 agents in 453087ns
> I0927 20:22:44.817674 28693 hierarchical.cpp:1691] No allocations performed
> I0927 20:22:44.817749 28693 hierarchical.cpp:1786] No inverse offers to send out!
> I0927 20:22:44.817832 28693 hierarchical.cpp:1283] Performed allocation for 1 agents in 358279ns
> I0927 20:22:45.288569 28685 slave.cpp:3549] Cleaning up un-reregistered executors
> I0927 20:22:45.288712 28685 slave.cpp:5252] Finished recovery
> I0927 20:22:45.289381 28685 slave.cpp:5424] Querying resource estimator for oversubscribable resources
> I0927 20:22:45.289933 28685 slave.cpp:915] New master detected at master@172.17.0.1:46389
> I0927 20:22:45.289963 28685 slave.cpp:974] Authenticating with master master@172.17.0.1:46389
> I0927 20:22:45.290058 28685 slave.cpp:985] Using default CRAM-MD5 authenticatee
> I0927 20:22:45.290264 28685 slave.cpp:947] Detecting new master
> I0927 20:22:45.290370 28685 slave.cpp:5438] Received oversubscribable resources {} from the resource estimator
> I0927 20:22:45.290469 28685 status_update_manager.cpp:177] Pausing sending status updates
> I0927 20:22:45.290616 28685 authenticatee.cpp:121] Creating new client SASL connection
> I0927 20:22:45.291466 28689 master.cpp:6583] Authenticating agent@172.17.0.1:46389
> I0927 20:22:45.291662 28689 authenticator.cpp:414] Starting authentication session for crammd5-authenticatee(822)@172.17.0.1:46389
> I0927 20:22:45.292009 28689 authenticator.cpp:98] Creating new server SASL connection
> I0927 20:22:45.292351 28685 authenticatee.cpp:213] Received SASL authentication mechanisms: CRAM-MD5
> I0927 20:22:45.292383 28685 authenticatee.cpp:239] Attempting to authenticate with mechanism 'CRAM-MD5'
> I0927 20:22:45.292477 28685 authenticator.cpp:204] Received SASL authentication start
> I0927 20:22:45.292537 28685 authenticator.cpp:326] Authentication requires more steps
> I0927 20:22:45.292613 28685 authenticatee.cpp:259] Received SASL authentication step
> I0927 20:22:45.292700 28685 authenticator.cpp:232] Received SASL authentication step
> I0927 20:22:45.292726 28685 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '67ecb3fe2a7a' server FQDN: '67ecb3fe2a7a' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
> I0927 20:22:45.292742 28685 auxprop.cpp:181] Looking up auxiliary property '*userPassword'
> I0927 20:22:45.292784 28685 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> I0927 20:22:45.292804 28685 auxprop.cpp:109] Request to lookup properties for user: 'test-principal' realm: '67ecb3fe2a7a' server FQDN: '67ecb3fe2a7a' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
> I0927 20:22:45.292825 28685 auxprop.cpp:131] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I0927 20:22:45.292839 28685 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I0927 20:22:45.292857 28685 authenticator.cpp:318] Authentication success
> I0927 20:22:45.293015 28685 authenticatee.cpp:299] Authentication success
> I0927 20:22:45.293102 28685 master.cpp:6613] Successfully authenticated principal 'test-principal' at agent@172.17.0.1:46389
> I0927 20:22:45.293179 28685 authenticator.cpp:432] Authentication session cleanup for crammd5-authenticatee(822)@172.17.0.1:46389
> I0927 20:22:45.293514 28685 slave.cpp:1069] Successfully authenticated with master master@172.17.0.1:46389
> I0927 20:22:45.295336 28693 master.cpp:5188] Re-registering agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a)
> I0927 20:22:45.296119 28686 hierarchical.cpp:569] Agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 reactivated
> I0927 20:22:45.296526 28693 master.cpp:5447] Sending updated checkpointed resources {} to agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a)
> I0927 20:22:45.296876 28685 slave.cpp:1475] Will retry registration in 14.905875ms if necessary
> I0927 20:22:45.298169 28685 slave.cpp:1217] Re-registered with master master@172.17.0.1:46389
> I0927 20:22:45.298787 28695 master.cpp:4648] Telling agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a) to kill task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (default) at scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389
> I0927 20:22:45.299315 28685 slave.cpp:1253] Forwarding total oversubscribed resources {}
> I0927 20:22:45.299370 28695 status_update_manager.cpp:184] Resuming sending status updates
> W0927 20:22:45.299718 28695 status_update_manager.cpp:191] Resending status update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.299923 28695 status_update_manager.cpp:377] Forwarding update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 to the agent
> I0927 20:22:45.300459 28685 slave.cpp:2675] Updating framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 pid to scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389
> I0927 20:22:45.300631 28695 master.cpp:5508] Received update of agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a) with total oversubscribed resources {}
> I0927 20:22:45.301206 28695 hierarchical.cpp:552] Agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 (67ecb3fe2a7a) updated with oversubscribed resources {} (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000])
> I0927 20:22:45.301605 28695 hierarchical.cpp:1691] No allocations performed
> I0927 20:22:45.301796 28695 hierarchical.cpp:1786] No inverse offers to send out!
> I0927 20:22:45.302009 28695 hierarchical.cpp:1306] Performed allocation for agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 in 557103ns
> I0927 20:22:45.302228 28685 slave.cpp:2691] Checkpointing framework pid 'scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389' to '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/framework.pid'
> I0927 20:22:45.303030 28690 status_update_manager.cpp:184] Resuming sending status updates
> W0927 20:22:45.303205 28690 status_update_manager.cpp:191] Resending status update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.303395 28690 status_update_manager.cpp:377] Forwarding update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 to the agent
> I0927 20:22:45.303838 28685 slave.cpp:2738] Ignoring new checkpointed resources identical to the current version: {}
> I0927 20:22:45.304049 28685 slave.cpp:2283] Asked to kill task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> W0927 20:22:45.304231 28685 slave.cpp:2415] Ignoring kill task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 because the executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (via HTTP) is terminated
> I0927 20:22:45.305016 28685 slave.cpp:4026] Forwarding the update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 to master@172.17.0.1:46389
> I0927 20:22:45.305748 28694 master.cpp:5638] Status update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 from agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a)
> I0927 20:22:45.305966 28694 master.cpp:5700] Forwarding status update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.306283 28694 master.cpp:7537] Updating the state of task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (latest state: TASK_LOST, status update state: TASK_LOST)
> I0927 20:22:45.307447 28694 sched.cpp:1025] Scheduler::statusUpdate took 117824ns
> I0927 20:22:45.308940 28694 hierarchical.cpp:1015] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: {}) on agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 from framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.309276 28689 master.cpp:4756] Processing ACKNOWLEDGE call 26de5785-3f4b-4228-b0fc-aea09af7e579 for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (default) at scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389 on agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0
> I0927 20:22:45.310035 28689 master.cpp:7633] Removing task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 on agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a)
> I0927 20:22:45.310741 28685 slave.cpp:4026] Forwarding the update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 to master@172.17.0.1:46389
> I0927 20:22:45.311413 28685 master.cpp:5638] Status update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 from agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a)
> I0927 20:22:45.311591 28685 master.cpp:5700] Forwarding status update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> ../../src/tests/slave_recovery_tests.cpp:2111: Failure
> Value of: status.get().state()
>   Actual: TASK_LOST
> Expected: TASK_KILLED
> W0927 20:22:45.311871 28685 master.cpp:5668] Could not lookup task for status update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 from agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a)
> I0927 20:22:45.312106 28691 hierarchical.cpp:1786] No inverse offers to send out!
> I0927 20:22:45.312192 28696 status_update_manager.cpp:395] Received status update acknowledgement (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.313132 28696 status_update_manager.cpp:832] Checkpointing ACK for status update TASK_LOST (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.313293 28696 status_update_manager.cpp:531] Cleaning up status update stream for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.313741 28685 master.cpp:6412] Sending 1 offers to framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (default) at scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389
> I0927 20:22:45.313907 28692 slave.cpp:2953] Status update manager successfully handled status update acknowledgement (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.313992 28692 slave.cpp:6505] Completing task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0
> I0927 20:22:45.314055 28692 slave.cpp:4621] Cleaning up executor '792c5e61-3ab6-488a-af6b-46aaa67e3cb0' of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (via HTTP)
> I0927 20:22:45.312522 28684 sched.cpp:1025] Scheduler::statusUpdate took 67463ns
> I0927 20:22:45.314970 28685 gc.cpp:55] Scheduling '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/executors/792c5e61-3ab6-488a-af6b-46aaa67e3cb0/runs/8e11209b-d0f2-499f-ab8c-49f74390827d' for gc 6.9999964136563days in the future
> I0927 20:22:45.315309 28685 gc.cpp:55] Scheduling '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/executors/792c5e61-3ab6-488a-af6b-46aaa67e3cb0' for gc 6.9999964136563days in the future
> I0927 20:22:45.315666 28686 master.cpp:4756] Processing ACKNOWLEDGE call 26de5785-3f4b-4228-b0fc-aea09af7e579 for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (default) at scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389 on agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0
> I0927 20:22:45.315434 28684 sched.cpp:917] Scheduler::resourceOffers took 133729ns
> I0927 20:22:45.316000 28685 gc.cpp:55] Scheduling '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/executors/792c5e61-3ab6-488a-af6b-46aaa67e3cb0/runs/8e11209b-d0f2-499f-ab8c-49f74390827d' for gc 6.9999964136563days in the future
> I0927 20:22:45.316169 28692 slave.cpp:4709] Cleaning up framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.316195 28685 gc.cpp:55] Scheduling '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000/executors/792c5e61-3ab6-488a-af6b-46aaa67e3cb0' for gc 6.9999964136563days in the future
> I0927 20:22:45.316905 28687 gc.cpp:55] Scheduling '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000' for gc 6.9999964136563days in the future
> I0927 20:22:45.317044 28687 gc.cpp:55] Scheduling '/tmp/SlaveRecoveryTest_0_KillTaskWithHTTPExecutor_SaHNt3/meta/slaves/94efab88-1953-4bb1-90e5-fff7ca94a463-S0/frameworks/94efab88-1953-4bb1-90e5-fff7ca94a463-0000' for gc 6.9999964136563days in the future
> I0927 20:22:45.316807 28692 status_update_manager.cpp:285] Closing status update streams for framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.317258 28692 status_update_manager.cpp:395] Received status update acknowledgement (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> E0927 20:22:45.317466 28692 slave.cpp:2946] Failed to handle status update acknowledgement (UUID: 26de5785-3f4b-4228-b0fc-aea09af7e579) for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000: Cannot find the status update stream for task 792c5e61-3ab6-488a-af6b-46aaa67e3cb0 of framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.317672 28691 hierarchical.cpp:1283] Performed allocation for 1 agents in 7.11671ms
> I0927 20:22:45.318480 28664 sched.cpp:1995] Asked to stop the driver
> I0927 20:22:45.318631 28687 sched.cpp:1187] Stopping framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.319967 28690 master.cpp:6988] Processing TEARDOWN call for framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (default) at scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389
> I0927 20:22:45.320019 28690 master.cpp:7000] Removing framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 (default) at scheduler-f6558647-3275-4c83-bc7d-8aa40594fe51@172.17.0.1:46389
> I0927 20:22:45.320956 28682 hierarchical.cpp:383] Deactivated framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.321924 28684 slave.cpp:2506] Asked to shut down framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000 by master@172.17.0.1:46389
> W0927 20:22:45.321964 28684 slave.cpp:2521] Cannot shut down unknown framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.322552 28682 hierarchical.cpp:1015] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: {}) on agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 from framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.323096 28682 hierarchical.cpp:334] Removed framework 94efab88-1953-4bb1-90e5-fff7ca94a463-0000
> I0927 20:22:45.326933 28682 slave.cpp:787] Agent terminating
> I0927 20:22:45.327174 28682 master.cpp:1258] Agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a) disconnected
> I0927 20:22:45.327203 28682 master.cpp:2947] Disconnecting agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a)
> I0927 20:22:45.327251 28682 master.cpp:2966] Deactivating agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 at agent@172.17.0.1:46389 (67ecb3fe2a7a)
> I0927 20:22:45.327394 28682 hierarchical.cpp:581] Agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0 deactivated
> I0927 20:22:45.351438 28664 master.cpp:1097] Master terminating
> I0927 20:22:45.352370 28695 hierarchical.cpp:514] Removed agent 94efab88-1953-4bb1-90e5-fff7ca94a463-S0
> [  FAILED  ] SlaveRecoveryTest/0.KillTaskWithHTTPExecutor, where TypeParam = mesos::internal::slave::MesosContainerizer (2618 ms)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)