mesos-issues mailing list archives

From "James Peach (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MESOS-5308) ROOT_XFS_QuotaTest.NoCheckpointRecovery failed.
Date Sat, 30 Apr 2016 18:15:12 GMT

    [ https://issues.apache.org/jira/browse/MESOS-5308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15265404#comment-15265404 ] 

James Peach commented on MESOS-5308:
------------------------------------

Earlier I ran 1000 iterations with no errors. This time I got a different failure after 114 iterations:

{code}
I0430 11:10:54.666662 11591 exec.cpp:150] Version: 0.29.0
I0430 11:10:54.678015 11620 exec.cpp:225] Executor registered on agent ccb4b4e9-46f5-48ec-bc72-ae5504a67357-S0
Registered executor on fedora-23
Starting task ea4ece2b-fb5a-40a2-bfcd-ff79a31566d8
sh -c 'dd if=/dev/zero of=file bs=1048576 count=1; sleep 1000'
Forked command at 11627
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.000696071 s, 1.5 GB/s
I0430 11:10:54.691599 11620 exec.cpp:399] Executor asked to shutdown
Shutting down
Sending SIGTERM to process tree at pid 11627
Sent SIGTERM to the following process trees:
[
-+- 11627 sh -c dd if=/dev/zero of=file bs=1048576 count=1; sleep 1000
 \--- 11629 sleep 1000
]
/opt/home/src/mesos.git/src/tests/containerizer/xfs_quota_tests.cpp:575: Failure
Failed to wait 15secs for _recover
*** Aborted at 1462039869 (unix time) try "date -d @1462039869" if you are using GNU date ***
PC: @          0x184da1e testing::UnitTest::AddTestPartResult()
*** SIGSEGV (@0x0) received by PID 4417 (TID 0x7f7794df78c0) from PID 0; stack trace: ***
    @     0x7f778d1e49f0 (unknown)
    @          0x184da1e testing::UnitTest::AddTestPartResult()
    @          0x1842197 testing::internal::AssertHelper::operator=()
    @          0x17a0570 mesos::internal::tests::ROOT_XFS_QuotaTest_NoCheckpointRecovery_Test::TestBody()
    @          0x186bbcc testing::internal::HandleSehExceptionsInMethodIfSupported<>()
    @          0x1866bd2 testing::internal::HandleExceptionsInMethodIfSupported<>()
    @          0x184761c testing::Test::Run()
    @          0x1847dd4 testing::TestInfo::Run()
    @          0x1848425 testing::TestCase::Run()
    @          0x184ef63 testing::internal::UnitTestImpl::RunAllTests()
    @          0x186c893 testing::internal::HandleSehExceptionsInMethodIfSupported<>()
    @          0x1867748 testing::internal::HandleExceptionsInMethodIfSupported<>()
    @          0x184dc3f testing::UnitTest::Run()
    @           0xf640f1 RUN_ALL_TESTS()
    @           0xf63ce9 main
    @     0x7f778bcfc580 __libc_start_main
    @           0xa35039 _start
Segmentation fault
{code}
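For reference, a minimal sketch of the kind of stress loop used for these iterations (assuming a Mesos build tree with the standard {{mesos-tests.sh}} runner; the {{ROOT_}} tests must run as root, and {{--gtest_filter}}, {{--gtest_repeat}}, and {{--gtest_break_on_failure}} are standard googletest flags, not anything Mesos-specific):

{code}
# Repeat the flaky test up to 1000 times and stop at the first failure.
# Assumes ./bin/mesos-tests.sh exists in the build directory.
sudo ./bin/mesos-tests.sh \
  --gtest_filter='ROOT_XFS_QuotaTest.NoCheckpointRecovery' \
  --gtest_repeat=1000 \
  --gtest_break_on_failure
{code}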

> ROOT_XFS_QuotaTest.NoCheckpointRecovery failed.
> -----------------------------------------------
>
>                 Key: MESOS-5308
>                 URL: https://issues.apache.org/jira/browse/MESOS-5308
>             Project: Mesos
>          Issue Type: Bug
>          Components: isolation
>         Environment: Fedora 23 with/without SSL
>            Reporter: Gilbert Song
>              Labels: isolation
>
> Here is the log:
> {code}
> [01:07:51] :	 [Step 10/10] [ RUN      ] ROOT_XFS_QuotaTest.NoCheckpointRecovery
> [01:07:51] :	 [Step 10/10] meta-data=/dev/loop0             isize=512    agcount=2, agsize=5120 blks
> [01:07:51] :	 [Step 10/10]          =                       sectsz=512   attr=2, projid32bit=1
> [01:07:51] :	 [Step 10/10]          =                       crc=1        finobt=1, sparse=0
> [01:07:51] :	 [Step 10/10] data     =                       bsize=4096   blocks=10240, imaxpct=25
> [01:07:51] :	 [Step 10/10]          =                       sunit=0      swidth=0 blks
> [01:07:51] :	 [Step 10/10] naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> [01:07:51] :	 [Step 10/10] log      =internal log           bsize=4096   blocks=855, version=2
> [01:07:51] :	 [Step 10/10]          =                       sectsz=512   sunit=0 blks, lazy-count=1
> [01:07:51] :	 [Step 10/10] realtime =none                   extsz=4096   blocks=0, rtextents=0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.690585 17604 cluster.cpp:149] Creating default 'local' authorizer
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.706126 17604 leveldb.cpp:174] Opened db in 15.452988ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.707135 17604 leveldb.cpp:181] Compacted db in 984939ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.707154 17604 leveldb.cpp:196] Created db iterator in 4159ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.707159 17604 leveldb.cpp:202] Seeked to beginning of db in 517ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.707165 17604 leveldb.cpp:271] Iterated through 0 keys in the db in 305ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.707176 17604 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.707320 17621 recover.cpp:447] Starting replica recovery
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.707381 17621 recover.cpp:473] Replica is in EMPTY status
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.707638 17619 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (17889)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.707732 17624 recover.cpp:193] Received a recover response from a replica in EMPTY status
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.707885 17624 recover.cpp:564] Updating replica status to STARTING
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.708389 17618 master.cpp:382] Master 0c1e0a50-1212-4104-a148-661131b79f27 (ip-172-30-2-13.ec2.internal.mesosphere.io) started on 172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.708406 17618 master.cpp:384] Flags at startup: --acls="" --allocation_interval="1secs" --allocator="HierarchicalDRF" --authenticate="true" --authenticate_http="true" --authenticate_http_frameworks="true" --authenticate_slaves="true" --authenticators="crammd5" --authorizers="local" --credentials="/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_ZsRNg9/mnt/credentials" --framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_framework_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --max_slave_ping_timeouts="5" --quiet="false" --recovery_slave_removal_limit="100%" --registry="replicated_log" --registry_fetch_timeout="1mins" --registry_store_timeout="100secs" --registry_strict="true" --root_submissions="true" --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins" --user_sorter="drf" --version="false" --webui_dir="/usr/local/share/mesos/webui" --work_dir="/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_ZsRNg9/mnt/master" --zk_session_timeout="10secs"
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.708510 17618 master.cpp:433] Master only allowing authenticated frameworks to register
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.708518 17618 master.cpp:439] Master only allowing authenticated agents to register
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.708521 17618 master.cpp:445] Master only allowing authenticated HTTP frameworks to register
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.708526 17618 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_ZsRNg9/mnt/credentials'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.708634 17618 master.cpp:489] Using default 'crammd5' authenticator
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.708674 17618 master.cpp:560] Using default 'basic' HTTP authenticator
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.708730 17618 master.cpp:640] Using default 'basic' HTTP framework authenticator
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.708771 17618 master.cpp:687] Authorization enabled
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.708856 17625 whitelist_watcher.cpp:77] No whitelist given
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.708884 17622 hierarchical.cpp:142] Initialized hierarchical allocator process
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.709393 17619 master.cpp:1938] The newly elected leader is master@172.30.2.13:37618 with id 0c1e0a50-1212-4104-a148-661131b79f27
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.709408 17619 master.cpp:1951] Elected as the leading master!
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.709414 17619 master.cpp:1638] Recovering from registrar
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.709465 17622 registrar.cpp:331] Recovering registrar
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.715296 17620 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 7.347619ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.715314 17620 replica.cpp:320] Persisted replica status to STARTING
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.715364 17620 recover.cpp:473] Replica is in STARTING status
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.715648 17619 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (17892)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.715733 17625 recover.cpp:193] Received a recover response from a replica in STARTING status
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.715867 17622 recover.cpp:564] Updating replica status to VOTING
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.718924 17619 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 2.95254ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.718940 17619 replica.cpp:320] Persisted replica status to VOTING
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.718974 17619 recover.cpp:578] Successfully joined the Paxos group
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.719036 17619 recover.cpp:462] Recover process terminated
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.719184 17620 log.cpp:524] Attempting to start the writer
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.719581 17621 replica.cpp:493] Replica received implicit promise request from (17893)@172.30.2.13:37618 with proposal 1
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.722290 17621 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 2.689374ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.722304 17621 replica.cpp:342] Persisted promised to 1
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.722470 17621 coordinator.cpp:238] Coordinator attempting to fill missing positions
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.722905 17624 replica.cpp:388] Replica received explicit promise request from (17894)@172.30.2.13:37618 for position 0 with proposal 2
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.727253 17624 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 4.328169ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.727272 17624 replica.cpp:712] Persisted action at 0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.727738 17624 replica.cpp:537] Replica received write request for position 0 from (17895)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.727771 17624 leveldb.cpp:436] Reading position from leveldb took 11229ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.734385 17624 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 6.599431ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.734401 17624 replica.cpp:712] Persisted action at 0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.734623 17619 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.735486 17619 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 841324ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.735499 17619 replica.cpp:712] Persisted action at 0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.735504 17619 replica.cpp:697] Replica learned NOP action at position 0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.735741 17619 log.cpp:540] Writer started with ending position 0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.736054 17621 leveldb.cpp:436] Reading position from leveldb took 10354ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.736268 17623 registrar.cpp:364] Successfully fetched the registry (0B) in 26.778112ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.736304 17623 registrar.cpp:463] Applied 1 operations in 2398ns; attempting to update the 'registry'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.736476 17619 log.cpp:548] Attempting to append 231 bytes to the log
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.736520 17619 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.736809 17625 replica.cpp:537] Replica received write request for position 1 from (17896)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.741016 17625 leveldb.cpp:341] Persisting action (250 bytes) to leveldb took 4.185902ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.741034 17625 replica.cpp:712] Persisted action at 1
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.741235 17620 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.750710 17620 leveldb.cpp:341] Persisting action (252 bytes) to leveldb took 9.456873ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.750726 17620 replica.cpp:712] Persisted action at 1
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.750732 17620 replica.cpp:697] Replica learned APPEND action at position 1
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.750936 17621 registrar.cpp:508] Successfully updated the 'registry' in 14.617344ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.750998 17621 registrar.cpp:394] Successfully recovered registrar
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.751021 17625 log.cpp:567] Attempting to truncate the log to 1
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.751085 17621 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.751178 17622 master.cpp:1746] Recovered 0 agents from the Registry (192B) ; allowing 10mins for agents to re-register
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.751255 17618 hierarchical.cpp:169] Skipping recovery of hierarchical allocator: nothing to recover
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.751405 17619 replica.cpp:537] Replica received write request for position 2 from (17897)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.753563 17619 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 2.139436ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.753579 17619 replica.cpp:712] Persisted action at 2
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.753764 17621 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.757881 17621 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 4.0951ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.757912 17621 leveldb.cpp:399] Deleting ~1 keys from leveldb took 13880ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.757922 17621 replica.cpp:712] Persisted action at 2
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.757927 17621 replica.cpp:697] Replica learned TRUNCATE action at position 2
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.762030 17604 containerizer.cpp:178] Using isolation: xfs/disk,filesystem/posix,network/cni
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.767380 17604 linux_launcher.cpp:101] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.767974 17604 disk.cpp:160] Allocating XFS project IDs from the range {[5000,10001)}
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.768280 17604 cluster.cpp:398] Creating default 'local' authorizer
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.768646 17619 slave.cpp:204] Agent started on 462)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.768668 17619 slave.cpp:205] Flags at startup: --acls="" --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/mesos/store/appc" --authenticate_http="true" --authenticatee="crammd5" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --credential="/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_FrIFIF/credential" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_FrIFIF/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_command_executor="false" --http_credentials="/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_FrIFIF/http_credentials" --image_provisioner_backend="copy" --initialize_driver_logging="true" --isolation="xfs/disk" --launcher_dir="/mnt/teamcity/work/4240ba9ddd0997c3/build/src" --logbufsecs="0" --logging_level="INFO" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="10ms" --resources="cpus:2;mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_ZsRNg9/mnt" --xfs_project_range="[5000-10000]"
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.768904 17619 credentials.hpp:86] Loading credential for authentication from '/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_FrIFIF/credential'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.768995 17619 slave.cpp:342] Agent using credential for: test-principal
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769007 17619 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_FrIFIF/http_credentials'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769088 17619 slave.cpp:394] Using default 'basic' HTTP authenticator
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769178 17619 resources.cpp:572] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000]
> [01:07:51]W:	 [Step 10/10] Trying semicolon-delimited string format instead
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769258 17604 sched.cpp:224] Version: 0.29.0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769304 17619 slave.cpp:593] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769338 17619 slave.cpp:601] Agent attributes: [  ]
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769348 17619 slave.cpp:606] Agent hostname: ip-172-30-2-13.ec2.internal.mesosphere.io
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769395 17622 sched.cpp:328] New master detected at master@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769440 17622 sched.cpp:384] Authenticating with master master@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769450 17622 sched.cpp:391] Using default CRAM-MD5 authenticatee
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769523 17624 authenticatee.cpp:121] Creating new client SASL connection
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769604 17622 master.cpp:5826] Authenticating scheduler-e7469c06-9d69-4f31-be71-3eb5067ed5dc@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769662 17619 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(933)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769757 17620 state.cpp:57] Recovering state from '/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_ZsRNg9/mnt/meta'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769772 17623 authenticator.cpp:98] Creating new server SASL connection
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769856 17625 status_update_manager.cpp:200] Recovering status update manager
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769860 17620 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769896 17620 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769960 17625 containerizer.cpp:444] Recovering containerizer
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.769973 17620 authenticator.cpp:203] Received SASL authentication start
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770011 17620 authenticator.cpp:325] Authentication requires more steps
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770045 17620 authenticatee.cpp:258] Received SASL authentication step
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770103 17620 authenticator.cpp:231] Received SASL authentication step
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770124 17620 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-13.ec2.internal' server FQDN: 'ip-172-30-2-13.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770131 17620 auxprop.cpp:179] Looking up auxiliary property '*userPassword'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770139 17620 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770146 17620 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-13.ec2.internal' server FQDN: 'ip-172-30-2-13.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770170 17620 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770176 17620 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770185 17620 authenticator.cpp:317] Authentication success
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770233 17620 authenticatee.cpp:298] Authentication success
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770248 17622 master.cpp:5856] Successfully authenticated principal 'test-principal' at scheduler-e7469c06-9d69-4f31-be71-3eb5067ed5dc@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770287 17624 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(933)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770324 17620 sched.cpp:474] Successfully authenticated with master master@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770339 17620 sched.cpp:783] Sending SUBSCRIBE call to master@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770411 17620 sched.cpp:816] Will retry registration in 1.035293071secs if necessary
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770467 17618 master.cpp:2488] Received SUBSCRIBE call for framework 'default' at scheduler-e7469c06-9d69-4f31-be71-3eb5067ed5dc@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770495 17618 master.cpp:1977] Authorizing framework principal 'test-principal' to receive offers for role '*'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770602 17619 master.cpp:2564] Subscribing framework default with checkpointing disabled and capabilities [  ]
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770726 17623 hierarchical.cpp:264] Added framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770745 17623 hierarchical.cpp:1488] No resources available to allocate!
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770750 17623 hierarchical.cpp:1583] No inverse offers to send out!
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770757 17623 hierarchical.cpp:1139] Performed allocation for 0 agents in 18743ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770802 17618 sched.cpp:710] Framework registered with 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.770831 17618 sched.cpp:724] Scheduler::registered took 12456ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771111 17624 provisioner.cpp:245] Provisioner recovery complete
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771221 17621 slave.cpp:4826] Finished recovery
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771435 17621 slave.cpp:4998] Querying resource estimator for oversubscribable resources
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771519 17618 slave.cpp:953] New master detected at master@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771540 17618 slave.cpp:1016] Authenticating with master master@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771548 17618 slave.cpp:1021] Using default CRAM-MD5 authenticatee
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771555 17619 status_update_manager.cpp:174] Pausing sending status updates
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771580 17619 authenticatee.cpp:121] Creating new client SASL connection
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771584 17618 slave.cpp:989] Detecting new master
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771634 17618 slave.cpp:5012] Received oversubscribable resources  from the resource estimator
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771682 17624 master.cpp:5826] Authenticating slave(462)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771733 17624 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(934)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771790 17622 authenticator.cpp:98] Creating new server SASL connection
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771841 17622 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771857 17622 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771917 17618 authenticator.cpp:203] Received SASL authentication start
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771942 17618 authenticator.cpp:325] Authentication requires more steps
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.771975 17618 authenticatee.cpp:258] Received SASL authentication step
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772008 17618 authenticator.cpp:231] Received SASL authentication step
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772022 17618 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-13.ec2.internal' server FQDN: 'ip-172-30-2-13.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772032 17618 auxprop.cpp:179] Looking up auxiliary property '*userPassword'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772044 17618 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772058 17618 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-13.ec2.internal' server FQDN: 'ip-172-30-2-13.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772063 17618 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772066 17618 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772075 17618 authenticator.cpp:317] Authentication success
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772110 17618 authenticatee.cpp:298] Authentication success
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772131 17623 master.cpp:5856] Successfully authenticated principal 'test-principal' at slave(462)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772153 17622 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(934)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772172 17618 slave.cpp:1086] Successfully authenticated with master master@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772215 17618 slave.cpp:1482] Will retry registration in 18.896273ms if necessary
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772279 17620 master.cpp:4537] Registering agent at slave(462)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io) with id 0c1e0a50-1212-4104-a148-661131b79f27-S0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772383 17623 registrar.cpp:463] Applied 1 operations in 10533ns; attempting to update the 'registry'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772634 17619 log.cpp:548] Attempting to append 429 bytes to the log
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772681 17619 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.772938 17622 replica.cpp:537] Replica received write request for position 3 from (17914)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.773877 17622 leveldb.cpp:341] Persisting action (448 bytes) to leveldb took 920612ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.773891 17622 replica.cpp:712] Persisted action at 3
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.774106 17624 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.779655 17624 leveldb.cpp:341] Persisting action (450 bytes) to leveldb took 5.526797ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.779672 17624 replica.cpp:712] Persisted action at 3
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.779677 17624 replica.cpp:697] Replica learned APPEND action at position 3
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.779930 17622 registrar.cpp:508] Successfully updated the 'registry' in 7.527936ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.779989 17625 log.cpp:567] Attempting to truncate the log to 3
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780073 17620 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780122 17619 slave.cpp:3701] Received ping from slave-observer(418)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780119 17623 master.cpp:4605] Registered agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 at slave(462)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780180 17621 hierarchical.cpp:473] Added agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 (ip-172-30-2-13.ec2.internal.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: )
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780237 17619 slave.cpp:1130] Registered with master master@172.30.2.13:37618; given agent ID 0c1e0a50-1212-4104-a148-661131b79f27-S0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780249 17619 fetcher.cpp:81] Clearing fetcher cache
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780308 17624 status_update_manager.cpp:181] Resuming sending status updates
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780347 17621 hierarchical.cpp:1583] No inverse offers to send out!
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780369 17621 hierarchical.cpp:1162] Performed allocation for agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 in 160652ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780410 17623 replica.cpp:537] Replica received write request for position 4 from (17915)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780426 17624 master.cpp:5655] Sending 1 offers to framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 (default) at scheduler-e7469c06-9d69-4f31-be71-3eb5067ed5dc@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780438 17619 slave.cpp:1153] Checkpointing SlaveInfo to '/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_ZsRNg9/mnt/meta/slaves/0c1e0a50-1212-4104-a148-661131b79f27-S0/slave.info'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780601 17622 sched.cpp:880] Scheduler::resourceOffers took 56231ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780614 17619 slave.cpp:1190] Forwarding total oversubscribed resources 
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780725 17604 resources.cpp:572] Parsing resources as JSON failed: cpus:1;mem:128;disk:1
> [01:07:51]W:	 [Step 10/10] Trying semicolon-delimited string format instead
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780879 17620 master.cpp:4949] Received update of agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 at slave(462)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io) with total oversubscribed resources 
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.780998 17622 hierarchical.cpp:531] Agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 (ip-172-30-2-13.ec2.internal.mesosphere.io) updated with oversubscribed resources  (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000])
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.781011 17620 master.cpp:3435] Processing ACCEPT call for offers: [ 0c1e0a50-1212-4104-a148-661131b79f27-O0 ] on agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 at slave(462)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io) for framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 (default) at scheduler-e7469c06-9d69-4f31-be71-3eb5067ed5dc@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.781044 17620 master.cpp:3038] Authorizing framework principal 'test-principal' to launch task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee as user 'root'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.781045 17622 hierarchical.cpp:1488] No resources available to allocate!
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.781054 17622 hierarchical.cpp:1583] No inverse offers to send out!
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.781062 17622 hierarchical.cpp:1162] Performed allocation for agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 in 30983ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.781361 17624 master.hpp:177] Adding task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee with resources cpus(*):1; mem(*):128; disk(*):1 on agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 (ip-172-30-2-13.ec2.internal.mesosphere.io)
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.781393 17624 master.cpp:3920] Launching task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 (default) at scheduler-e7469c06-9d69-4f31-be71-3eb5067ed5dc@172.30.2.13:37618 with resources cpus(*):1; mem(*):128; disk(*):1 on agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 at slave(462)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io)
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.781481 17619 slave.cpp:1522] Got assigned task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee for framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.781509 17618 hierarchical.cpp:891] Recovered cpus(*):1; mem(*):896; disk(*):1023; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):1; mem(*):128; disk(*):1) on agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 from framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.781533 17618 hierarchical.cpp:928] Framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 filtered agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 for 5secs
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.781556 17619 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32
> [01:07:51]W:	 [Step 10/10] Trying semicolon-delimited string format instead
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.781659 17619 slave.cpp:1641] Launching task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee for framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.781690 17619 resources.cpp:572] Parsing resources as JSON failed: cpus:0.1;mem:32
> [01:07:51]W:	 [Step 10/10] Trying semicolon-delimited string format instead
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.781963 17619 paths.cpp:528] Trying to chown '/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_ZsRNg9/mnt/slaves/0c1e0a50-1212-4104-a148-661131b79f27-S0/frameworks/0c1e0a50-1212-4104-a148-661131b79f27-0000/executors/5f149c02-0e8a-493a-8e5a-b2dde68fc8ee/runs/9c39f928-232f-4231-8b69-4d896b5364c8' to user 'root'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.783887 17623 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 3.295567ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.783923 17623 replica.cpp:712] Persisted action at 4
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.784929 17618 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.787096 17619 slave.cpp:5646] Launching executor 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_ZsRNg9/mnt/slaves/0c1e0a50-1212-4104-a148-661131b79f27-S0/frameworks/0c1e0a50-1212-4104-a148-661131b79f27-0000/executors/5f149c02-0e8a-493a-8e5a-b2dde68fc8ee/runs/9c39f928-232f-4231-8b69-4d896b5364c8'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.787257 17619 slave.cpp:1867] Queuing task '5f149c02-0e8a-493a-8e5a-b2dde68fc8ee' for executor '5f149c02-0e8a-493a-8e5a-b2dde68fc8ee' of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.787279 17624 containerizer.cpp:703] Starting container '9c39f928-232f-4231-8b69-4d896b5364c8' for executor '5f149c02-0e8a-493a-8e5a-b2dde68fc8ee' of framework '0c1e0a50-1212-4104-a148-661131b79f27-0000'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.787293 17619 slave.cpp:906] Successfully attached file '/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_ZsRNg9/mnt/slaves/0c1e0a50-1212-4104-a148-661131b79f27-S0/frameworks/0c1e0a50-1212-4104-a148-661131b79f27-0000/executors/5f149c02-0e8a-493a-8e5a-b2dde68fc8ee/runs/9c39f928-232f-4231-8b69-4d896b5364c8'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.787724 17620 disk.cpp:269] Assigned project 5000 to '/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_ZsRNg9/mnt/slaves/0c1e0a50-1212-4104-a148-661131b79f27-S0/frameworks/0c1e0a50-1212-4104-a148-661131b79f27-0000/executors/5f149c02-0e8a-493a-8e5a-b2dde68fc8ee/runs/9c39f928-232f-4231-8b69-4d896b5364c8'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.787789 17620 disk.cpp:328] Set quota on container 9c39f928-232f-4231-8b69-4d896b5364c8 for project 5000 to 1MB
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.788480 17622 linux_launcher.cpp:281] Cloning child process with flags = 
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.789700 17618 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 4.647496ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.789855 17618 leveldb.cpp:399] Deleting ~2 keys from leveldb took 82560ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.789888 17618 replica.cpp:712] Persisted action at 4
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.789921 17618 replica.cpp:697] Replica learned TRUNCATE action at position 4
> [01:07:51]W:	 [Step 10/10] WARNING: Logging before InitGoogleLogging() is written to STDERR
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.858196 32097 process.cpp:999] libprocess is initialized on 172.30.2.13:44946 with 8 worker threads
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.858451 32097 logging.cpp:195] Logging to STDERR
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.859042 32097 exec.cpp:150] Version: 0.29.0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.859999 32129 exec.cpp:200] Executor started at: executor(1)@172.30.2.13:44946 with pid 32097
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.860466 17625 slave.cpp:2862] Got registration for executor '5f149c02-0e8a-493a-8e5a-b2dde68fc8ee' of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 from executor(1)@172.30.2.13:44946
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.860869 32131 exec.cpp:225] Executor registered on agent 0c1e0a50-1212-4104-a148-661131b79f27-S0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.861157 17624 slave.cpp:2032] Sending queued task '5f149c02-0e8a-493a-8e5a-b2dde68fc8ee' to executor '5f149c02-0e8a-493a-8e5a-b2dde68fc8ee' of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 at executor(1)@172.30.2.13:44946
> [01:07:51] :	 [Step 10/10] Registered executor on ip-172-30-2-13.ec2.internal.mesosphere.io
> [01:07:51] :	 [Step 10/10] Starting task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee
> [01:07:51] :	 [Step 10/10] Forked command at 32133
> [01:07:51] :	 [Step 10/10] sh -c 'dd if=/dev/zero of=file bs=1048576 count=1; sleep 1000'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.861464 32131 exec.cpp:237] Executor::registered took 59651ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.861522 32131 exec.cpp:312] Executor asked to run task '5f149c02-0e8a-493a-8e5a-b2dde68fc8ee'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.861559 32131 exec.cpp:321] Executor::launchTask took 24117ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.862833 32124 exec.cpp:535] Executor sending status update TASK_RUNNING (UUID: ea8c32a1-7091-43c7-8eb0-37a064bf4dce) for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.863124 17618 slave.cpp:3221] Handling status update TASK_RUNNING (UUID: ea8c32a1-7091-43c7-8eb0-37a064bf4dce) for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 from executor(1)@172.30.2.13:44946
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.863618 17623 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: ea8c32a1-7091-43c7-8eb0-37a064bf4dce) for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.863642 17623 status_update_manager.cpp:497] Creating StatusUpdate stream for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.863752 17623 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: ea8c32a1-7091-43c7-8eb0-37a064bf4dce) for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 to the agent
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.863827 17624 slave.cpp:3619] Forwarding the update TASK_RUNNING (UUID: ea8c32a1-7091-43c7-8eb0-37a064bf4dce) for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 to master@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.863910 17624 slave.cpp:3513] Status update manager successfully handled status update TASK_RUNNING (UUID: ea8c32a1-7091-43c7-8eb0-37a064bf4dce) for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.863932 17624 slave.cpp:3529] Sending acknowledgement for status update TASK_RUNNING (UUID: ea8c32a1-7091-43c7-8eb0-37a064bf4dce) for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 to executor(1)@172.30.2.13:44946
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.863986 17618 master.cpp:5094] Status update TASK_RUNNING (UUID: ea8c32a1-7091-43c7-8eb0-37a064bf4dce) for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 from agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 at slave(462)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io)
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.864015 17618 master.cpp:5142] Forwarding status update TASK_RUNNING (UUID: ea8c32a1-7091-43c7-8eb0-37a064bf4dce) for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.864087 17618 master.cpp:6750] Updating the state of task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 (latest state: TASK_RUNNING, status update state: TASK_RUNNING)
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.864119 32129 exec.cpp:358] Executor received status update acknowledgement ea8c32a1-7091-43c7-8eb0-37a064bf4dce for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.864205 17619 sched.cpp:988] Scheduler::statusUpdate took 49253ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.864375 17621 master.cpp:4249] Processing ACKNOWLEDGE call ea8c32a1-7091-43c7-8eb0-37a064bf4dce for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 (default) at scheduler-e7469c06-9d69-4f31-be71-3eb5067ed5dc@172.30.2.13:37618 on agent 0c1e0a50-1212-4104-a148-661131b79f27-S0
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.864567 17624 status_update_manager.cpp:392] Received status update acknowledgement (UUID: ea8c32a1-7091-43c7-8eb0-37a064bf4dce) for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.864650 17624 slave.cpp:2631] Status update manager successfully handled status update acknowledgement (UUID: ea8c32a1-7091-43c7-8eb0-37a064bf4dce) for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] 1+0 records in
> [01:07:51]W:	 [Step 10/10] 1+0 records out
> [01:07:51]W:	 [Step 10/10] 1048576 bytes (1.0 MB) copied, 0.00128219 s, 818 MB/s
> [01:07:51] :	 [Step 10/10] ../../src/tests/containerizer/xfs_quota_tests.cpp:559: Failure
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865185 17604 slave.cpp:825] Agent terminating
> [01:07:51] :	 [Step 10/10] Value of: usage1->executors(0).statistics().disk_used_bytes()
> [01:07:51] :	 [Step 10/10]   Actual: 196608
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865208 17604 slave.cpp:2252] Asked to shut down framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 by @0.0.0.0:0
> [01:07:51] :	 [Step 10/10] Expected: Megabytes(1)
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865226 17604 slave.cpp:2277] Shutting down framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51] :	 [Step 10/10] Which is: 8-byte object <00-00 10-00 00-00 00-00>
> [01:07:51] :	 [Step 10/10] Shutting down
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865242 17604 slave.cpp:4451] Shutting down executor '5f149c02-0e8a-493a-8e5a-b2dde68fc8ee' of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 at executor(1)@172.30.2.13:44946
> [01:07:51] :	 [Step 10/10] Sending SIGTERM to process tree at pid 32133
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865454 17622 master.cpp:1342] Agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 at slave(462)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io) disconnected
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865478 17622 master.cpp:2848] Disconnecting agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 at slave(462)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io)
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865506 17622 master.cpp:2867] Deactivating agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 at slave(462)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io)
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865561 17622 master.cpp:1361] Removing framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 (default) at scheduler-e7469c06-9d69-4f31-be71-3eb5067ed5dc@172.30.2.13:37618 from disconnected agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 at slave(462)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io) because the framework is not checkpointing
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865573 17624 hierarchical.cpp:560] Agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 deactivated
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865579 17622 master.cpp:6382] Removing framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 (default) at scheduler-e7469c06-9d69-4f31-be71-3eb5067ed5dc@172.30.2.13:37618 from agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 at slave(462)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io)
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865609 17622 master.cpp:6750] Updating the state of task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 (latest state: TASK_LOST, status update state: TASK_LOST)
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865739 17623 hierarchical.cpp:891] Recovered cpus(*):1; mem(*):128; disk(*):1 (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 from framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865741 17622 master.cpp:6816] Removing task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee with resources cpus(*):1; mem(*):128; disk(*):1 of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 on agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 at slave(462)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io)
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865818 17622 master.cpp:5137] Sending status update TASK_LOST for task 5f149c02-0e8a-493a-8e5a-b2dde68fc8ee of framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 'Slave ip-172-30-2-13.ec2.internal.mesosphere.io disconnected'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.865900 17622 sched.cpp:988] Scheduler::statusUpdate took 9337ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.866092 32130 exec.cpp:399] Executor asked to shutdown
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.866142 32130 exec.cpp:414] Executor::shutdown took 4913ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.866170 32126 exec.cpp:91] Scheduling shutdown of the executor in 5secs
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.866215 17604 containerizer.cpp:178] Using isolation: xfs/disk,filesystem/posix,network/cni
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.871803 17604 linux_launcher.cpp:101] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.872342 17604 disk.cpp:160] Allocating XFS project IDs from the range {[5000,10001)}
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.872653 17604 cluster.cpp:398] Creating default 'local' authorizer
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.872992 17623 slave.cpp:204] Agent started on 463)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.873003 17623 slave.cpp:205] Flags at startup: --acls="" --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/mesos/store/appc" --authenticate_http="true" --authenticatee="crammd5" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --credential="/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_FrIFIF/credential" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_FrIFIF/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_command_executor="false" --http_credentials="/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_FrIFIF/http_credentials" --image_provisioner_backend="copy" --initialize_driver_logging="true" --isolation="xfs/disk" --launcher_dir="/mnt/teamcity/work/4240ba9ddd0997c3/build/src" --logbufsecs="0" --logging_level="INFO" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="10ms" --resources="cpus:2;mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true" --sandbox_directory="/mnt/mesos/sandbox" 
--strict="true" --switch_user="true" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_ZsRNg9/mnt" --xfs_project_range="[5000-10000]"
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.873185 17623 credentials.hpp:86] Loading credential for authentication from '/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_FrIFIF/credential'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.873286 17623 slave.cpp:342] Agent using credential for: test-principal
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.873313 17623 credentials.hpp:37] Loading credentials for authentication from '/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_FrIFIF/http_credentials'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.873421 17623 slave.cpp:394] Using default 'basic' HTTP authenticator
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.873504 17623 resources.cpp:572] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000]
> [01:07:51]W:	 [Step 10/10] Trying semicolon-delimited string format instead
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.873646 17623 slave.cpp:593] Agent resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.873682 17623 slave.cpp:601] Agent attributes: [  ]
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.873692 17623 slave.cpp:606] Agent hostname: ip-172-30-2-13.ec2.internal.mesosphere.io
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.874004 17623 state.cpp:57] Recovering state from '/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_ZsRNg9/mnt/meta'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.874035 17623 state.cpp:697] No checkpointed resources found at '/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_ZsRNg9/mnt/meta/resources/resources.info'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.874418 17622 fetcher.cpp:81] Clearing fetcher cache
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.874527 17623 status_update_manager.cpp:200] Recovering status update manager
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.874644 17623 containerizer.cpp:444] Recovering containerizer
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.875880 17619 provisioner.cpp:245] Provisioner recovery complete
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.875941 17623 containerizer.cpp:612] Removing orphan container 9c39f928-232f-4231-8b69-4d896b5364c8
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.877531 17623 slave.cpp:4826] Finished recovery
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.877585 17621 cgroups.cpp:2676] Freezing cgroup /sys/fs/cgroup/freezer/mesos/9c39f928-232f-4231-8b69-4d896b5364c8
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.877769 17623 slave.cpp:4998] Querying resource estimator for oversubscribable resources
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.877876 17618 slave.cpp:5012] Received oversubscribable resources  from the resource estimator
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.877924 17618 slave.cpp:953] New master detected at master@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.877941 17623 status_update_manager.cpp:174] Pausing sending status updates
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.877948 17618 slave.cpp:1016] Authenticating with master master@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.877962 17618 slave.cpp:1021] Using default CRAM-MD5 authenticatee
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878005 17618 slave.cpp:989] Detecting new master
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878048 17620 authenticatee.cpp:121] Creating new client SASL connection
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878130 17620 master.cpp:5826] Authenticating slave(463)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878211 17624 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(935)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878295 17620 authenticator.cpp:98] Creating new server SASL connection
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878356 17620 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878372 17620 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878437 17618 authenticator.cpp:203] Received SASL authentication start
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878465 17618 authenticator.cpp:325] Authentication requires more steps
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878500 17618 authenticatee.cpp:258] Received SASL authentication step
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878572 17624 authenticator.cpp:231] Received SASL authentication step
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878598 17624 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-13.ec2.internal' server FQDN: 'ip-172-30-2-13.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878610 17624 auxprop.cpp:179] Looking up auxiliary property '*userPassword'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878623 17624 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878636 17624 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-13.ec2.internal' server FQDN: 'ip-172-30-2-13.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878646 17624 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878655 17624 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878670 17624 authenticator.cpp:317] Authentication success
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878733 17618 authenticatee.cpp:298] Authentication success
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878742 17621 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(935)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878760 17624 master.cpp:5856] Successfully authenticated principal 'test-principal' at slave(463)@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878811 17618 slave.cpp:1086] Successfully authenticated with master master@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878871 17618 slave.cpp:1482] Will retry registration in 16.575202ms if necessary
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.878937 17624 master.cpp:4697] Re-registering agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 at slave(462)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io)
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.879001 17624 master.cpp:4887] Sending updated checkpointed resources  to agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 at slave(463)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io)
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.879073 17618 slave.cpp:1232] Re-registered with master master@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.879091 17624 hierarchical.cpp:548] Agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 reactivated
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.879101 17618 slave.cpp:1268] Forwarding total oversubscribed resources 
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.879122 17624 status_update_manager.cpp:181] Resuming sending status updates
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.879173 17624 master.cpp:4949] Received update of agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 at slave(463)@172.30.2.13:37618 (ip-172-30-2-13.ec2.internal.mesosphere.io) with total oversubscribed resources 
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.879295 17618 slave.cpp:2560] Updated checkpointed resources from  to 
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.879308 17624 hierarchical.cpp:531] Agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 (ip-172-30-2-13.ec2.internal.mesosphere.io) updated with oversubscribed resources  (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: )
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.879515 17624 hierarchical.cpp:1583] No inverse offers to send out!
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.879545 17624 hierarchical.cpp:1162] Performed allocation for agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 in 208496ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.879608 17621 master.cpp:5655] Sending 1 offers to framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 (default) at scheduler-e7469c06-9d69-4f31-be71-3eb5067ed5dc@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.879716 17625 sched.cpp:880] Scheduler::resourceOffers took 7646ns
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.879845 17622 cgroups.cpp:1409] Successfully froze cgroup /sys/fs/cgroup/freezer/mesos/9c39f928-232f-4231-8b69-4d896b5364c8 after 2.232064ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.882483 17619 cgroups.cpp:2694] Thawing cgroup /sys/fs/cgroup/freezer/mesos/9c39f928-232f-4231-8b69-4d896b5364c8
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.884838 17621 cgroups.cpp:1438] Successfully thawed cgroup /sys/fs/cgroup/freezer/mesos/9c39f928-232f-4231-8b69-4d896b5364c8 after 2.328832ms
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.967190 17621 disk.cpp:379] Removing project ID 5000 from '/mnt/teamcity/temp/buildTmp/ROOT_XFS_QuotaTest_NoCheckpointRecovery_ZsRNg9/mnt/slaves/0c1e0a50-1212-4104-a148-661131b79f27-S0/frameworks/0c1e0a50-1212-4104-a148-661131b79f27-0000/executors/5f149c02-0e8a-493a-8e5a-b2dde68fc8ee/runs/9c39f928-232f-4231-8b69-4d896b5364c8'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.968063 17604 sched.cpp:1911] Asked to stop the driver
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.968127 17621 sched.cpp:1150] Stopping framework '0c1e0a50-1212-4104-a148-661131b79f27-0000'
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.968206 17624 master.cpp:6223] Processing TEARDOWN call for framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 (default) at scheduler-e7469c06-9d69-4f31-be71-3eb5067ed5dc@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.968228 17624 master.cpp:6235] Removing framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 (default) at scheduler-e7469c06-9d69-4f31-be71-3eb5067ed5dc@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.968334 17621 hierarchical.cpp:375] Deactivated framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.968348 17620 slave.cpp:2252] Asked to shut down framework 0c1e0a50-1212-4104-a148-661131b79f27-0000 by master@172.30.2.13:37618
> [01:07:51]W:	 [Step 10/10] W0429 01:07:51.968370 17620 slave.cpp:2267] Cannot shut down unknown framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.968472 17621 hierarchical.cpp:891] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on agent 0c1e0a50-1212-4104-a148-661131b79f27-S0 from framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.968538 17621 hierarchical.cpp:326] Removed framework 0c1e0a50-1212-4104-a148-661131b79f27-0000
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.968807 17618 slave.cpp:825] Agent terminating
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.970367 17604 master.cpp:1189] Master terminating
> [01:07:51]W:	 [Step 10/10] I0429 01:07:51.970533 17622 hierarchical.cpp:505] Removed agent 0c1e0a50-1212-4104-a148-661131b79f27-S0
> [01:07:52] :	 [Step 10/10] [  FAILED  ] ROOT_XFS_QuotaTest.NoCheckpointRecovery (962 ms)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)