Date: Mon, 29 Feb 2016 19:36:18 +0000 (UTC)
From: "Artem Harutyunyan (JIRA)"
To: issues@mesos.apache.org
Reply-To: dev@mesos.apache.org
Subject: [jira] [Updated] (MESOS-4756) DockerContainerizerTest.ROOT_DOCKER_DockerInspectDiscard is flaky on CentOS 6

     [ https://issues.apache.org/jira/browse/MESOS-4756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Artem Harutyunyan updated MESOS-4756:
-------------------------------------
    Assignee: Jan Schlicht

> DockerContainerizerTest.ROOT_DOCKER_DockerInspectDiscard is flaky on CentOS 6
> -----------------------------------------------------------------------------
>
> Key: MESOS-4756
> URL: https://issues.apache.org/jira/browse/MESOS-4756
> Project: Mesos
> Issue Type: Bug
> Components: tests
> Affects Versions: 0.27
> Environment: Centos6 (AWS) + GCC 4.9
> Reporter: Joseph Wu
> Assignee: Jan Schlicht
> Labels: mesosphere, tests
>
> {code}
> [ RUN      ] DockerContainerizerTest.ROOT_DOCKER_DockerInspectDiscard
> I0224 17:50:26.577450 17755 leveldb.cpp:174] Opened db in 6.715352ms
> I0224 17:50:26.579607 17755 leveldb.cpp:181] Compacted db in 2.128954ms
> I0224 17:50:26.579648 17755 leveldb.cpp:196] Created db iterator in 16927ns
> I0224 17:50:26.579661 17755 leveldb.cpp:202] Seeked to beginning of db in 1408ns
> I0224 17:50:26.579669 17755 leveldb.cpp:271] Iterated through 0 keys in the db in 343ns
> I0224 17:50:26.579721 17755 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
> I0224 17:50:26.580185 17776 recover.cpp:447] Starting replica recovery
> I0224 17:50:26.580382 17776 recover.cpp:473] Replica is in EMPTY status
> I0224 17:50:26.581264 17770 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (14098)@172.30.2.121:33050
> I0224 17:50:26.581771 17772 recover.cpp:193] Received a recover response from a replica in EMPTY status
> I0224 17:50:26.582188 17771 recover.cpp:564] Updating replica status to STARTING
> I0224 17:50:26.583030 17772 master.cpp:376] Master 00a3ac12-9e76-48f5-92fa-48770b82035d (ip-172-30-2-121.mesosphere.io) started on 172.30.2.121:33050
> I0224 17:50:26.583051 17772 master.cpp:378] Flags at startup: --acls="" --allocation_interval="1secs" --allocator="HierarchicalDRF" --authenticate="true" --authenticate_http="true" --authenticate_slaves="true" --authenticators="crammd5" --authorizers="local" --credentials="/tmp/jSZ9of/credentials" --framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --max_slave_ping_timeouts="5" --quiet="false" --recovery_slave_removal_limit="100%" --registry="replicated_log" --registry_fetch_timeout="1mins" --registry_store_timeout="100secs" --registry_strict="true" --root_submissions="true" --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins" --user_sorter="drf" --version="false" --webui_dir="/usr/local/share/mesos/webui" --work_dir="/tmp/jSZ9of/master" --zk_session_timeout="10secs"
> I0224 17:50:26.583328 17772 master.cpp:423] Master only allowing authenticated frameworks to register
> I0224 17:50:26.583336 17772 master.cpp:428] Master only allowing authenticated slaves to register
> I0224 17:50:26.583343 17772 credentials.hpp:35] Loading credentials for authentication from '/tmp/jSZ9of/credentials'
> I0224 17:50:26.583901 17772 master.cpp:468] Using default 'crammd5' authenticator
> I0224 17:50:26.584022 17772 master.cpp:537] Using default 'basic' HTTP authenticator
> I0224 17:50:26.584141 17772 master.cpp:571] Authorization enabled
> I0224 17:50:26.584234 17770 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.955608ms
> I0224 17:50:26.584264 17770 replica.cpp:320] Persisted replica status to STARTING
> I0224 17:50:26.584285 17771 hierarchical.cpp:144] Initialized hierarchical allocator process
> I0224 17:50:26.584295 17773 whitelist_watcher.cpp:77] No whitelist given
> I0224 17:50:26.584463 17775 recover.cpp:473] Replica is in STARTING status
> I0224 17:50:26.585260 17771 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (14100)@172.30.2.121:33050
> I0224 17:50:26.585553 17777 recover.cpp:193] Received a recover response from a replica in STARTING status
> I0224 17:50:26.586042 17773 recover.cpp:564] Updating replica status to VOTING
> I0224 17:50:26.586091 17770 master.cpp:1712] The newly elected leader is master@172.30.2.121:33050 with id 00a3ac12-9e76-48f5-92fa-48770b82035d
> I0224 17:50:26.586122 17770 master.cpp:1725] Elected as the leading master!
> I0224 17:50:26.586146 17770 master.cpp:1470] Recovering from registrar
> I0224 17:50:26.586294 17773 registrar.cpp:307] Recovering registrar
> I0224 17:50:26.588148 17776 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.89126ms
> I0224 17:50:26.588171 17776 replica.cpp:320] Persisted replica status to VOTING
> I0224 17:50:26.588260 17772 recover.cpp:578] Successfully joined the Paxos group
> I0224 17:50:26.588440 17772 recover.cpp:462] Recover process terminated
> I0224 17:50:26.588770 17773 log.cpp:659] Attempting to start the writer
> I0224 17:50:26.589782 17770 replica.cpp:493] Replica received implicit promise request from (14101)@172.30.2.121:33050 with proposal 1
> I0224 17:50:26.591498 17770 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.690976ms
> I0224 17:50:26.591519 17770 replica.cpp:342] Persisted promised to 1
> I0224 17:50:26.592025 17773 coordinator.cpp:238] Coordinator attempting to fill missing positions
> I0224 17:50:26.593091 17776 replica.cpp:388] Replica received explicit promise request from (14102)@172.30.2.121:33050 for position 0 with proposal 2
> I0224 17:50:26.594964 17776 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 1.842643ms
> I0224 17:50:26.594988 17776 replica.cpp:712] Persisted action at 0
> I0224 17:50:26.595870 17775 replica.cpp:537] Replica received write request for position 0 from (14103)@172.30.2.121:33050
> I0224 17:50:26.595914 17775 leveldb.cpp:436] Reading position from leveldb took 21421ns
> I0224 17:50:26.597745 17775 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 1.799795ms
> I0224 17:50:26.597767 17775 replica.cpp:712] Persisted action at 0
> I0224 17:50:26.598217 17772 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0
> I0224 17:50:26.600085 17772 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.835672ms
> I0224 17:50:26.600106 17772 replica.cpp:712] Persisted action at 0
> I0224 17:50:26.600121 17772 replica.cpp:697] Replica learned NOP action at position 0
> I0224 17:50:26.600606 17773 log.cpp:675] Writer started with ending position 0
> I0224 17:50:26.601505 17774 leveldb.cpp:436] Reading position from leveldb took 25926ns
> I0224 17:50:26.602330 17774 registrar.cpp:340] Successfully fetched the registry (0B) in 15.977984ms
> I0224 17:50:26.602429 17774 registrar.cpp:439] Applied 1 operations in 22557ns; attempting to update the 'registry'
> I0224 17:50:26.602996 17772 log.cpp:683] Attempting to append 210 bytes to the log
> I0224 17:50:26.603103 17777 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1
> I0224 17:50:26.603703 17771 replica.cpp:537] Replica received write request for position 1 from (14104)@172.30.2.121:33050
> I0224 17:50:26.605535 17771 leveldb.cpp:341] Persisting action (229 bytes) to leveldb took 1.798462ms
> I0224 17:50:26.605557 17771 replica.cpp:712] Persisted action at 1
> I0224 17:50:26.606109 17770 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0
> I0224 17:50:26.607885 17770 leveldb.cpp:341] Persisting action (231 bytes) to leveldb took 1.746759ms
> I0224 17:50:26.607908 17770 replica.cpp:712] Persisted action at 1
> I0224 17:50:26.607921 17770 replica.cpp:697] Replica learned APPEND action at position 1
> I0224 17:50:26.608687 17772 registrar.cpp:484] Successfully updated the 'registry' in 6.201856ms
> I0224 17:50:26.608826 17772 registrar.cpp:370] Successfully recovered registrar
> I0224 17:50:26.608857 17775 log.cpp:702] Attempting to truncate the log to 1
> I0224 17:50:26.608968 17773 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2
> I0224 17:50:26.609362 17776 master.cpp:1522] Recovered 0 slaves from the Registry (171B) ; allowing 10mins for slaves to re-register
> I0224 17:50:26.609390 17773 hierarchical.cpp:171] Skipping recovery of hierarchical allocator: nothing to recover
> I0224 17:50:26.609741 17776 replica.cpp:537] Replica received write request for position 2 from (14105)@172.30.2.121:33050
> I0224 17:50:26.611588 17776 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.819377ms
> I0224 17:50:26.611610 17776 replica.cpp:712] Persisted action at 2
> I0224 17:50:26.612218 17777 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0
> I0224 17:50:26.614109 17777 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 1.866511ms
> I0224 17:50:26.614161 17777 leveldb.cpp:399] Deleting ~1 keys from leveldb took 28883ns
> I0224 17:50:26.614176 17777 replica.cpp:712] Persisted action at 2
> I0224 17:50:26.614190 17777 replica.cpp:697] Replica learned TRUNCATE action at position 2
> I0224 17:50:26.623827 17770 slave.cpp:193] Slave started on 414)@172.30.2.121:33050
> I0224 17:50:26.623848 17770 slave.cpp:194] Flags at startup: --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/mesos/store/appc" --authenticatee="crammd5" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --credential="/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/credential" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname_lookup="true" --image_provisioner_backend="copy" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher_dir="/mnt/teamcity/work/4240ba9ddd0997c3/build/src" --logbufsecs="0" --logging_level="INFO" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="10ms" --resources="cpus:2;mem:1024;disk:1024;ports:[31000-32000]" --revocable_cpu_low_priority="true" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv"
> I0224 17:50:26.624110 17770 credentials.hpp:83] Loading credential for authentication from '/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/credential'
> I0224 17:50:26.624255 17770 slave.cpp:324] Slave using credential for: test-principal
> I0224 17:50:26.624414 17770 resources.cpp:576] Parsing resources as JSON failed: cpus:2;mem:1024;disk:1024;ports:[31000-32000]
> Trying semicolon-delimited string format instead
> I0224 17:50:26.624729 17770 slave.cpp:464] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> I0224 17:50:26.624783 17770 slave.cpp:472] Slave attributes: [ ]
> I0224 17:50:26.624791 17770 slave.cpp:477] Slave hostname: ip-172-30-2-121.mesosphere.io
> I0224 17:50:26.625807 17775 state.cpp:58] Recovering state from '/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/meta'
> I0224 17:50:26.625898 17755 sched.cpp:222] Version: 0.28.0
> I0224 17:50:26.626061 17770 status_update_manager.cpp:200] Recovering status update manager
> I0224 17:50:26.626246 17773 docker.cpp:726] Recovering Docker containers
> I0224 17:50:26.626441 17774 sched.cpp:326] New master detected at master@172.30.2.121:33050
> I0224 17:50:26.626442 17776 slave.cpp:4565] Finished recovery
> I0224 17:50:26.626569 17774 sched.cpp:382] Authenticating with master master@172.30.2.121:33050
> I0224 17:50:26.626585 17774 sched.cpp:389] Using default CRAM-MD5 authenticatee
> I0224 17:50:26.626734 17776 slave.cpp:4737] Querying resource estimator for oversubscribable resources
> I0224 17:50:26.626807 17777 authenticatee.cpp:121] Creating new client SASL connection
> I0224 17:50:26.626953 17770 status_update_manager.cpp:174] Pausing sending status updates
> I0224 17:50:26.627019 17775 master.cpp:5526] Authenticating scheduler-8f5548ac-e23b-4155-a9c1-9913e79c349b@172.30.2.121:33050
> I0224 17:50:26.627018 17772 slave.cpp:796] New master detected at master@172.30.2.121:33050
> I0224 17:50:26.627100 17772 slave.cpp:859] Authenticating with master master@172.30.2.121:33050
> I0224 17:50:26.627115 17772 slave.cpp:864] Using default CRAM-MD5 authenticatee
> I0224 17:50:26.627128 17777 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(864)@172.30.2.121:33050
> I0224 17:50:26.627259 17775 authenticatee.cpp:121] Creating new client SASL connection
> I0224 17:50:26.627264 17772 slave.cpp:832] Detecting new master
> I0224 17:50:26.627315 17771 authenticator.cpp:98] Creating new server SASL connection
> I0224 17:50:26.627368 17772 slave.cpp:4751] Received oversubscribable resources from the resource estimator
> I0224 17:50:26.627434 17777 master.cpp:5526] Authenticating slave(414)@172.30.2.121:33050
> I0224 17:50:26.627445 17774 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5
> I0224 17:50:26.627465 17774 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5'
> I0224 17:50:26.627538 17770 authenticator.cpp:413] Starting authentication session for crammd5_authenticatee(865)@172.30.2.121:33050
> I0224 17:50:26.627662 17776 authenticator.cpp:203] Received SASL authentication start
> I0224 17:50:26.627717 17775 authenticator.cpp:98] Creating new server SASL connection
> I0224 17:50:26.627730 17776 authenticator.cpp:325] Authentication requires more steps
> I0224 17:50:26.627851 17774 authenticatee.cpp:212] Received SASL authentication mechanisms: CRAM-MD5
> I0224 17:50:26.627853 17776 authenticatee.cpp:258] Received SASL authentication step
> I0224 17:50:26.627876 17774 authenticatee.cpp:238] Attempting to authenticate with mechanism 'CRAM-MD5'
> I0224 17:50:26.627986 17774 authenticator.cpp:203] Received SASL authentication start
> I0224 17:50:26.628008 17773 authenticator.cpp:231] Received SASL authentication step
> I0224 17:50:26.628033 17774 authenticator.cpp:325] Authentication requires more steps
> I0224 17:50:26.628039 17773 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-121' server FQDN: 'ip-172-30-2-121' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
> I0224 17:50:26.628052 17773 auxprop.cpp:179] Looking up auxiliary property '*userPassword'
> I0224 17:50:26.628080 17773 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> I0224 17:50:26.628103 17773 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-121' server FQDN: 'ip-172-30-2-121' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
> I0224 17:50:26.628118 17773 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I0224 17:50:26.628129 17773 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I0224 17:50:26.628128 17774 authenticatee.cpp:258] Received SASL authentication step
> I0224 17:50:26.628144 17773 authenticator.cpp:317] Authentication success
> I0224 17:50:26.628227 17770 authenticatee.cpp:298] Authentication success
> I0224 17:50:26.628288 17774 master.cpp:5556] Successfully authenticated principal 'test-principal' at scheduler-8f5548ac-e23b-4155-a9c1-9913e79c349b@172.30.2.121:33050
> I0224 17:50:26.628319 17776 authenticator.cpp:231] Received SASL authentication step
> I0224 17:50:26.628348 17776 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-121' server FQDN: 'ip-172-30-2-121' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
> I0224 17:50:26.628347 17770 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(864)@172.30.2.121:33050
> I0224 17:50:26.628368 17776 auxprop.cpp:179] Looking up auxiliary property '*userPassword'
> I0224 17:50:26.628545 17775 sched.cpp:471] Successfully authenticated with master master@172.30.2.121:33050
> I0224 17:50:26.628561 17775 sched.cpp:776] Sending SUBSCRIBE call to master@172.30.2.121:33050
> I0224 17:50:26.628559 17776 auxprop.cpp:179] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> I0224 17:50:26.628589 17776 auxprop.cpp:107] Request to lookup properties for user: 'test-principal' realm: 'ip-172-30-2-121' server FQDN: 'ip-172-30-2-121' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
> I0224 17:50:26.628598 17776 auxprop.cpp:129] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I0224 17:50:26.628603 17776 auxprop.cpp:129] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I0224 17:50:26.628614 17776 authenticator.cpp:317] Authentication success
> I0224 17:50:26.628679 17773 authenticatee.cpp:298] Authentication success
> I0224 17:50:26.628698 17775 sched.cpp:809] Will retry registration in 1.192267517secs if necessary
> I0224 17:50:26.628798 17770 master.cpp:2280] Received SUBSCRIBE call for framework 'default' at scheduler-8f5548ac-e23b-4155-a9c1-9913e79c349b@172.30.2.121:33050
> I0224 17:50:26.628798 17776 authenticator.cpp:431] Authentication session cleanup for crammd5_authenticatee(865)@172.30.2.121:33050
> I0224 17:50:26.628859 17770 master.cpp:1751] Authorizing framework principal 'test-principal' to receive offers for role '*'
> I0224 17:50:26.629012 17770 master.cpp:5556] Successfully authenticated principal 'test-principal' at slave(414)@172.30.2.121:33050
> I0224 17:50:26.629134 17771 slave.cpp:927] Successfully authenticated with master master@172.30.2.121:33050
> I0224 17:50:26.629161 17777 master.cpp:2351] Subscribing framework default with checkpointing disabled and capabilities [ ]
> I0224 17:50:26.629262 17771 slave.cpp:1321] Will retry registration in 15.916328ms if necessary
> I0224 17:50:26.629547 17771 hierarchical.cpp:265] Added framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> I0224 17:50:26.629621 17771 hierarchical.cpp:1434] No resources available to allocate!
> I0224 17:50:26.629642 17775 sched.cpp:703] Framework registered with 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> I0224 17:50:26.629658 17771 hierarchical.cpp:1529] No inverse offers to send out!
> I0224 17:50:26.629686 17771 hierarchical.cpp:1127] Performed allocation for 0 slaves in 115429ns
> I0224 17:50:26.629695 17777 master.cpp:4240] Registering slave at slave(414)@172.30.2.121:33050 (ip-172-30-2-121.mesosphere.io) with id 00a3ac12-9e76-48f5-92fa-48770b82035d-S0
> I0224 17:50:26.629711 17775 sched.cpp:717] Scheduler::registered took 48960ns
> I0224 17:50:26.630197 17770 registrar.cpp:439] Applied 1 operations in 57113ns; attempting to update the 'registry'
> I0224 17:50:26.630928 17772 log.cpp:683] Attempting to append 396 bytes to the log
> I0224 17:50:26.631023 17777 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3
> I0224 17:50:26.631654 17774 replica.cpp:537] Replica received write request for position 3 from (14112)@172.30.2.121:33050
> I0224 17:50:26.634012 17774 leveldb.cpp:341] Persisting action (415 bytes) to leveldb took 2.329586ms
> I0224 17:50:26.634033 17774 replica.cpp:712] Persisted action at 3
> I0224 17:50:26.634613 17775 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0
> I0224 17:50:26.636519 17775 leveldb.cpp:341] Persisting action (417 bytes) to leveldb took 1.868667ms
> I0224 17:50:26.636541 17775 replica.cpp:712] Persisted action at 3
> I0224 17:50:26.636555 17775 replica.cpp:697] Replica learned APPEND action at position 3
> I0224 17:50:26.637675 17771 registrar.cpp:484] Successfully updated the 'registry' in 7.406848ms
> I0224 17:50:26.637861 17772 log.cpp:702] Attempting to truncate the log to 3
> I0224 17:50:26.637969 17770 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4
> I0224 17:50:26.638437 17770 slave.cpp:3482] Received ping from slave-observer(379)@172.30.2.121:33050
> I0224 17:50:26.638533 17775 master.cpp:4308] Registered slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 at slave(414)@172.30.2.121:33050 (ip-172-30-2-121.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> I0224 17:50:26.638612 17770 slave.cpp:971] Registered with master master@172.30.2.121:33050; given slave ID 00a3ac12-9e76-48f5-92fa-48770b82035d-S0
> I0224 17:50:26.638635 17770 fetcher.cpp:81] Clearing fetcher cache
> I0224 17:50:26.638658 17777 replica.cpp:537] Replica received write request for position 4 from (14113)@172.30.2.121:33050
> I0224 17:50:26.638705 17776 hierarchical.cpp:473] Added slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 (ip-172-30-2-121.mesosphere.io) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (allocated: )
> I0224 17:50:26.638756 17772 status_update_manager.cpp:181] Resuming sending status updates
> I0224 17:50:26.638934 17770 slave.cpp:994] Checkpointing SlaveInfo to '/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/meta/slaves/00a3ac12-9e76-48f5-92fa-48770b82035d-S0/slave.info'
> I0224 17:50:26.639209 17770 slave.cpp:1030] Forwarding total oversubscribed resources
> I0224 17:50:26.639331 17776 hierarchical.cpp:1529] No inverse offers to send out!
> I0224 17:50:26.639363 17776 hierarchical.cpp:1147] Performed allocation for slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 in 621054ns
> I0224 17:50:26.639367 17770 master.cpp:4649] Received update of slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 at slave(414)@172.30.2.121:33050 (ip-172-30-2-121.mesosphere.io) with total oversubscribed resources
> I0224 17:50:26.639735 17770 master.cpp:5355] Sending 1 offers to framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at scheduler-8f5548ac-e23b-4155-a9c1-9913e79c349b@172.30.2.121:33050
> I0224 17:50:26.639755 17771 hierarchical.cpp:531] Slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 (ip-172-30-2-121.mesosphere.io) updated with oversubscribed resources (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000])
> I0224 17:50:26.639873 17771 hierarchical.cpp:1434] No resources available to allocate!
> I0224 17:50:26.639907 17771 hierarchical.cpp:1529] No inverse offers to send out!
> I0224 17:50:26.639925 17771 hierarchical.cpp:1147] Performed allocation for slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 in 139782ns
> I0224 17:50:26.640146 17775 sched.cpp:873] Scheduler::resourceOffers took 95935ns
> I0224 17:50:26.640867 17777 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 2.180791ms
> I0224 17:50:26.640898 17777 replica.cpp:712] Persisted action at 4
> I0224 17:50:26.641491 17770 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0
> I0224 17:50:26.641758 17772 master.cpp:3138] Processing ACCEPT call for offers: [ 00a3ac12-9e76-48f5-92fa-48770b82035d-O0 ] on slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 at slave(414)@172.30.2.121:33050 (ip-172-30-2-121.mesosphere.io) for framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at scheduler-8f5548ac-e23b-4155-a9c1-9913e79c349b@172.30.2.121:33050
> I0224 17:50:26.641793 17772 master.cpp:2825] Authorizing framework principal 'test-principal' to launch task 1 as user 'root'
> W0224 17:50:26.643143 17773 validation.cpp:404] Executor e1 for task 1 uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases.
> W0224 17:50:26.643172 17773 validation.cpp:416] Executor e1 for task 1 uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases.
> I0224 17:50:26.643299 17770 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 1.777824ms
> I0224 17:50:26.643353 17770 leveldb.cpp:399] Deleting ~2 keys from leveldb took 31165ns
> I0224 17:50:26.643368 17770 replica.cpp:712] Persisted action at 4
> I0224 17:50:26.643383 17770 replica.cpp:697] Replica learned TRUNCATE action at position 4
> I0224 17:50:26.643484 17773 master.hpp:176] Adding task 1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 (ip-172-30-2-121.mesosphere.io)
> I0224 17:50:26.643645 17773 master.cpp:3623] Launching task 1 of framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at scheduler-8f5548ac-e23b-4155-a9c1-9913e79c349b@172.30.2.121:33050 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 at slave(414)@172.30.2.121:33050 (ip-172-30-2-121.mesosphere.io)
> I0224 17:50:26.643995 17772 slave.cpp:1361] Got assigned task 1 for framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> I0224 17:50:26.644445 17772 slave.cpp:1480] Launching task 1 for framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> I0224 17:50:26.644870 17772 paths.cpp:474] Trying to chown '/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/slaves/00a3ac12-9e76-48f5-92fa-48770b82035d-S0/frameworks/00a3ac12-9e76-48f5-92fa-48770b82035d-0000/executors/e1/runs/fe24672f-728c-4771-9b82-9f9270edfbff' to user 'root'
> I0224 17:50:26.650173 17772 slave.cpp:5367] Launching executor e1 of framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 with resources in work directory '/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/slaves/00a3ac12-9e76-48f5-92fa-48770b82035d-S0/frameworks/00a3ac12-9e76-48f5-92fa-48770b82035d-0000/executors/e1/runs/fe24672f-728c-4771-9b82-9f9270edfbff'
> I0224 17:50:26.650780 17772 slave.cpp:1698] Queuing task '1' for executor 'e1' of framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> I0224 17:50:26.652282 17772 slave.cpp:749] Successfully attached file '/tmp/DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_nZvyhv/slaves/00a3ac12-9e76-48f5-92fa-48770b82035d-S0/frameworks/00a3ac12-9e76-48f5-92fa-48770b82035d-0000/executors/e1/runs/fe24672f-728c-4771-9b82-9f9270edfbff'
> I0224 17:50:26.654928 17776 docker.cpp:1028] Starting container 'fe24672f-728c-4771-9b82-9f9270edfbff' for executor 'e1' and framework '00a3ac12-9e76-48f5-92fa-48770b82035d-0000'
> I0224 17:50:26.655567 17776 docker.cpp:1053] Running docker -H unix:///var/run/docker.sock inspect tnachen/test-executor:latest
> I0224 17:50:26.771446 17770 docker.cpp:1124] Running docker -H unix:///var/run/docker.sock pull tnachen/test-executor:latest
> I0224 17:50:27.585420 17774 hierarchical.cpp:1434] No resources available to allocate!
> I0224 17:50:27.585474 17774 hierarchical.cpp:1529] No inverse offers to send out!
> I0224 17:50:27.585505 17774 hierarchical.cpp:1127] Performed allocation for 1 slaves in 272928ns
> I0224 17:50:28.585777 17772 hierarchical.cpp:1434] No resources available to allocate!
> I0224 17:50:28.585814 17772 hierarchical.cpp:1529] No inverse offers to send out!
> I0224 17:50:28.585839 17772 hierarchical.cpp:1127] Performed allocation for 1 slaves in 166918ns
> I0224 17:50:29.586848 17777 hierarchical.cpp:1434] No resources available to allocate!
> I0224 17:50:29.586910 17777 hierarchical.cpp:1529] No inverse offers to send out!
> I0224 17:50:29.586940 17777 hierarchical.cpp:1127] Performed allocation for 1 slaves in 287445ns
> I0224 17:50:30.587625 17775 hierarchical.cpp:1434] No resources available to allocate!
> I0224 17:50:30.587679 17775 hierarchical.cpp:1529] No inverse offers to send out!
> I0224 17:50:30.587707 17775 hierarchical.cpp:1127] Performed allocation f= or 1 slaves in 207639ns > I0224 17:50:31.588436 17775 hierarchical.cpp:1434] No resources available= to allocate! > I0224 17:50:31.588474 17775 hierarchical.cpp:1529] No inverse offers to s= end out! > I0224 17:50:31.588495 17775 hierarchical.cpp:1127] Performed allocation f= or 1 slaves in 160471ns > I0224 17:50:32.589864 17770 hierarchical.cpp:1434] No resources available= to allocate! > I0224 17:50:32.589932 17770 hierarchical.cpp:1529] No inverse offers to s= end out! > I0224 17:50:32.589963 17770 hierarchical.cpp:1127] Performed allocation f= or 1 slaves in 331646ns > I0224 17:50:33.591181 17775 hierarchical.cpp:1434] No resources available= to allocate! > I0224 17:50:33.591239 17775 hierarchical.cpp:1529] No inverse offers to s= end out! > I0224 17:50:33.591266 17775 hierarchical.cpp:1127] Performed allocation f= or 1 slaves in 199094ns > I0224 17:50:34.592267 17774 hierarchical.cpp:1434] No resources available= to allocate! > I0224 17:50:34.592463 17774 hierarchical.cpp:1529] No inverse offers to s= end out! > I0224 17:50:34.592488 17774 hierarchical.cpp:1127] Performed allocation f= or 1 slaves in 327690ns > I0224 17:50:35.593335 17771 hierarchical.cpp:1434] No resources available= to allocate! > I0224 17:50:35.593374 17771 hierarchical.cpp:1529] No inverse offers to s= end out! > I0224 17:50:35.593397 17771 hierarchical.cpp:1127] Performed allocation f= or 1 slaves in 166519ns > I0224 17:50:36.594398 17770 hierarchical.cpp:1434] No resources available= to allocate! > I0224 17:50:36.594434 17770 hierarchical.cpp:1529] No inverse offers to s= end out! > I0224 17:50:36.594456 17770 hierarchical.cpp:1127] Performed allocation f= or 1 slaves in 154878ns > I0224 17:50:37.595175 17776 hierarchical.cpp:1434] No resources available= to allocate! > I0224 17:50:37.595211 17776 hierarchical.cpp:1529] No inverse offers to s= end out! 
> I0224 17:50:37.595265 17776 hierarchical.cpp:1127] Performed allocation for 1 slaves in 194236ns
> I0224 17:50:38.596570 17775 hierarchical.cpp:1434] No resources available to allocate!
> I0224 17:50:38.596637 17775 hierarchical.cpp:1529] No inverse offers to send out!
> I0224 17:50:38.596668 17775 hierarchical.cpp:1127] Performed allocation for 1 slaves in 294733ns
> I0224 17:50:39.596954 17771 hierarchical.cpp:1434] No resources available to allocate!
> I0224 17:50:39.596999 17771 hierarchical.cpp:1529] No inverse offers to send out!
> I0224 17:50:39.597024 17771 hierarchical.cpp:1127] Performed allocation for 1 slaves in 180045ns
> I0224 17:50:40.597641 17772 hierarchical.cpp:1434] No resources available to allocate!
> I0224 17:50:40.597705 17772 hierarchical.cpp:1529] No inverse offers to send out!
> I0224 17:50:40.597735 17772 hierarchical.cpp:1127] Performed allocation for 1 slaves in 284738ns
> I0224 17:50:41.598578 17774 hierarchical.cpp:1434] No resources available to allocate!
> I0224 17:50:41.598615 17774 hierarchical.cpp:1529] No inverse offers to send out!
> I0224 17:50:41.598639 17774 hierarchical.cpp:1127] Performed allocation for 1 slaves in 165034ns
> I0224 17:50:41.627979 17775 slave.cpp:4737] Querying resource estimator for oversubscribable resources
> I0224 17:50:41.628226 17771 slave.cpp:4751] Received oversubscribable resources from the resource estimator
> I0224 17:50:41.639770 17773 slave.cpp:3482] Received ping from slave-observer(379)@172.30.2.121:33050
> ../../src/tests/containerizer/docker_containerizer_tests.cpp:3589: Failure
> Failed to wait 15secs for statusFailed
> ../../src/tests/containerizer/docker_containerizer_tests.cpp:3576: Failure
> I0224 17:50:41.653586 17770 master.cpp:1213] Framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at scheduler-8f5548ac-e23b-4155-a9c1-9913e79c349b@172.30.2.121:33050 disconnected
> Actual function call count doesn't match EXPECT_CALL(sched, statusUpdate(&driver, _))...
> Expected: to be called once
> I0224 17:50:41.653626 17770 master.cpp:2576] Disconnecting framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at scheduler-8f5548ac-e23b-4155-a9c1-9913e79c349b@172.30.2.121:33050
> Actual: never called - unsatisfied and active
> I0224 17:50:41.653662 17770 master.cpp:2600] Deactivating framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at scheduler-8f5548ac-e23b-4155-a9c1-9913e79c349b@172.30.2.121:33050
> I0224 17:50:41.653772 17770 master.cpp:1237] Giving framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at scheduler-8f5548ac-e23b-4155-a9c1-9913e79c349b@172.30.2.121:33050 0ns to failover
> I0224 17:50:41.653803 17776 hierarchical.cpp:375] Deactivated framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> ../../src/tests/containerizer/docker_containerizer_tests.cpp:3515: Failure
> Actual function call count doesn't match EXPECT_CALL(*mockDocker, inspect(_, _))...
> Expected: to be called once
> Actual: never called - unsatisfied and active
> ../../src/tests/containerizer/docker_containerizer_tests.cpp:3520: Failure
> I0224 17:50:41.655174 17755 docker.cpp:885] Running docker -H unix:///var/run/docker.sock ps -a
> Actual function call count doesn't match EXPECT_CALL(*mockDocker, run(_, _, _, _, _, _, _, _, _))...
> Expected: to be called once
> I0224 17:50:41.655172 17772 master.cpp:5207] Framework failover timeout, removing framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at scheduler-8f5548ac-e23b-4155-a9c1-9913e79c349b@172.30.2.121:33050
> Actual: never called - unsatisfied and active
> I0224 17:50:41.655208 17772 master.cpp:5938] Removing framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (default) at scheduler-8f5548ac-e23b-4155-a9c1-9913e79c349b@172.30.2.121:33050
> I0224 17:50:41.655330 17774 slave.cpp:2079] Asked to shut down framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 by master@172.30.2.121:33050
> I0224 17:50:41.657733 17774 slave.cpp:2104] Shutting down framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> I0224 17:50:41.657977 17774 slave.cpp:4198] Shutting down executor 'e1' of framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> W0224 17:50:41.658028 17774 slave.hpp:665] Unable to send event to executor 'e1' of framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000: unknown connection type
> I0224 17:50:41.658149 17772 master.cpp:6450] Updating the state of task 1 of framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 (latest state: TASK_KILLED, status update state: TASK_KILLED)
> I0224 17:50:41.659183 17771 hierarchical.cpp:892] Recovered cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000], allocated: ) on slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 from framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> I0224 17:50:41.659298 17772 master.cpp:6516] Removing task 1 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] of framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 on slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 at slave(414)@172.30.2.121:33050 (ip-172-30-2-121.mesosphere.io)
> I0224 17:50:41.659651 17772 master.cpp:6545] Removing executor 'e1' with resources of framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000 on slave 00a3ac12-9e76-48f5-92fa-48770b82035d-S0 at slave(414)@172.30.2.121:33050 (ip-172-30-2-121.mesosphere.io)
> I0224 17:50:41.660135 17775 hierarchical.cpp:326] Removed framework 00a3ac12-9e76-48f5-92fa-48770b82035d-0000
> F0224 17:50:41.660135 17755 logging.cpp:64] RAW: Pure virtual method called
>     @     0x7fa35bb62318  google::LogMessage::Fail()
>     @     0x7fa35bb678ae  google::RawLog__()
>     @     0x7fa35acc04ed  __cxa_pure_virtual
>     @           0xa91676  mesos::internal::tests::Cluster::Slaves::shutdown()
>     @           0xa914e2  mesos::internal::tests::Cluster::Slaves::~Slaves()
>     @           0x9df8a2  mesos::internal::tests::Cluster::~Cluster()
>     @           0x9df925  mesos::internal::tests::MesosTest::~MesosTest()
>     @          0x15552f1  mesos::internal::tests::DockerContainerizerTest::~DockerContainerizerTest()
>     @          0x155fa83  mesos::internal::tests::DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_Test::~DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_Test()
>     @          0x155fab2  mesos::internal::tests::DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_Test::~DockerContainerizerTest_ROOT_DOCKER_DockerInspectDiscard_Test()
>     @          0x16eb6d6  testing::Test::DeleteSelf_()
>     @          0x16f5686  testing::internal::HandleSehExceptionsInMethodIfSupported<>()
>     @          0x16f06ea  testing::internal::HandleExceptionsInMethodIfSupported<>()
>     @          0x16d2210  testing::TestInfo::Run()
>     @          0x16d27fe  testing::TestCase::Run()
>     @          0x16d914d  testing::internal::UnitTestImpl::RunAllTests()
>     @          0x16f6315  testing::internal::HandleSehExceptionsInMethodIfSupported<>()
>     @          0x16f1240  testing::internal::HandleExceptionsInMethodIfSupported<>()
>     @          0x16d7e7d  testing::UnitTest::Run()
>     @           0xe3a04e  RUN_ALL_TESTS()
>     @           0xe39c64  main
>     @     0x7fa3562bed5d  __libc_start_main
>     @           0x9bf259  (unknown)
> {code}

--
This message was sent by Atlassian JIRA (v6.3.4#6332)