mesos-issues mailing list archives

From "Armand Grillet (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MESOS-8335) ProvisionerDockerTest fails on Debian 9 and CentOS 6.
Date Tue, 02 Jan 2018 10:49:00 GMT

    [ https://issues.apache.org/jira/browse/MESOS-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307867#comment-16307867 ]

Armand Grillet commented on MESOS-8335:
---------------------------------------

Since 7.47.0, the curl tool enables HTTP/2 by default for HTTPS connections. We can indeed
see in the CentOS 6 logs with curl 7.57.0 that the registry answers with {{HTTP/2 200}}.

libcurl has used HTTP/1.1 for clear-text HTTP servers since 7.47.0 and we have not seen
failures there, so we can fix the issue by forcing curl to use HTTP/1.1 everywhere via the
option {{CURL_HTTP_VERSION_1_1}}.
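
As a sketch (assuming a curl new enough to know the flag), the command-line counterpart of
libcurl's {{CURL_HTTP_VERSION_1_1}} is {{--http1.1}}:

```shell
# The --http1.1 flag maps to libcurl's CURL_HTTP_VERSION_1_1. A request
# forced down to HTTP/1.1 would look like:
#   curl --http1.1 -sI https://registry-1.docker.io/v2/
# (registry URL taken from the log below; network access assumed there).
# Here we only confirm that the installed curl accepts the flag:
curl --http1.1 --version | head -n 1
```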

Adding a {{.curlrc}} containing {{http1.1}} in {{/root}} indeed fixes the test on CentOS 6 with
Docker version 1.7.1, build 786b29d and curl 7.57.0 (x86_64-redhat-linux-gnu) libcurl/7.57.0
OpenSSL/1.0.1e zlib/1.2.3 c-ares/1.13.0 libssh2/1.8.0 nghttp2/1.6.0.
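
A minimal sketch of that workaround, written against a throwaway {{$HOME}} so it does not
clobber a real {{.curlrc}} (on the CI box the file went into {{/root}}):

```shell
# Write the one-line .curlrc that downgrades the curl tool to HTTP/1.1.
# curl reads ~/.curlrc at startup, so every subsequent invocation (e.g. by
# the Mesos registry puller) negotiates HTTP/1.1 only.
export HOME="$(mktemp -d)"
echo 'http1.1' > "$HOME/.curlrc"
cat "$HOME/.curlrc"
```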

> ProvisionerDockerTest fails on Debian 9  and CentOS 6.
> ------------------------------------------------------
>
>                 Key: MESOS-8335
>                 URL: https://issues.apache.org/jira/browse/MESOS-8335
>             Project: Mesos
>          Issue Type: Bug
>          Components: test
>            Reporter: Armand Grillet
>            Assignee: Armand Grillet
>         Attachments: centos-6-curl-7.19.7.txt, centos-6-curl-7.57.txt
>
>
> Version of Docker used: Docker version 17.11.0-ce, build 1caf76c
> Version of Curl used: curl 7.52.1 (x86_64-pc-linux-gnu) libcurl/7.52.1 OpenSSL/1.0.2l
zlib/1.2.8 libidn2/0.16 libpsl/0.17.0 (+libidn2/0.16) libssh2/1.7.0 nghttp2/1.18.1 librtmp/2.3
> Error:
> {code}
> [ RUN      ] ImageAlpine/ProvisionerDockerTest.ROOT_INTERNET_CURL_SimpleCommand/2
> I1215 00:09:28.694677 19343 cluster.cpp:172] Creating default 'local' authorizer
> I1215 00:09:28.697144 30867 master.cpp:456] Master 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3
(ip-172-16-10-160.ec2.internal) started on 127.0.1.1:35029
> I1215 00:09:28.697163 30867 master.cpp:458] Flags at startup: --acls="" --agent_ping_timeout="15secs"
--agent_reregister_timeout="10mins" --allocation_interval="1secs" --allocator="HierarchicalDRF"
--authenticate_agents="true" --authenticate_frameworks="true" --authenticate_http_frameworks="true"
--authenticate_http_readonly="true" --authenticate_http_readwrite="true" --authenticators="crammd5"
--authorizers="local" --credentials="/tmp/4RYdF1/credentials" --filter_gpu_resources="true"
--framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic"
--http_framework_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true"
--logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5" --max_completed_frameworks="50"
--max_completed_tasks_per_framework="1000" --max_unreachable_tasks_per_framework="1000" --port="5050"
--quiet="false" --recovery_agent_removal_limit="100%" --registry="in_memory" --registry_fetch_timeout="1mins"
--registry_gc_interval="15mins" --registry_max_agent_age="2weeks" --registry_max_agent_count="102400"
--registry_store_timeout="100secs" --registry_strict="false" --root_submissions="true" --user_sorter="drf"
--version="false" --webui_dir="/usr/local/share/mesos/webui" --work_dir="/tmp/4RYdF1/master"
--zk_session_timeout="10secs"
> I1215 00:09:28.697413 30867 master.cpp:507] Master only allowing authenticated frameworks
to register
> I1215 00:09:28.697422 30867 master.cpp:513] Master only allowing authenticated agents
to register
> I1215 00:09:28.697427 30867 master.cpp:519] Master only allowing authenticated HTTP frameworks
to register
> I1215 00:09:28.697433 30867 credentials.hpp:37] Loading credentials for authentication
from '/tmp/4RYdF1/credentials'
> I1215 00:09:28.697654 30867 master.cpp:563] Using default 'crammd5' authenticator
> I1215 00:09:28.697806 30867 http.cpp:1045] Creating default 'basic' HTTP authenticator
for realm 'mesos-master-readonly'
> I1215 00:09:28.697962 30867 http.cpp:1045] Creating default 'basic' HTTP authenticator
for realm 'mesos-master-readwrite'
> I1215 00:09:28.698076 30867 http.cpp:1045] Creating default 'basic' HTTP authenticator
for realm 'mesos-master-scheduler'
> I1215 00:09:28.698194 30867 master.cpp:642] Authorization enabled
> I1215 00:09:28.698468 30864 hierarchical.cpp:175] Initialized hierarchical allocator
process
> I1215 00:09:28.698563 30864 whitelist_watcher.cpp:77] No whitelist given
> I1215 00:09:28.701695 30871 master.cpp:2209] Elected as the leading master!
> I1215 00:09:28.701723 30871 master.cpp:1689] Recovering from registrar
> I1215 00:09:28.701859 30869 registrar.cpp:347] Recovering registrar
> I1215 00:09:28.702401 30869 registrar.cpp:391] Successfully fetched the registry (0B)
in 507904ns
> I1215 00:09:28.702495 30869 registrar.cpp:495] Applied 1 operations in 28977ns; attempting
to update the registry
> I1215 00:09:28.702997 30869 registrar.cpp:552] Successfully updated the registry in 464896ns
> I1215 00:09:28.703086 30869 registrar.cpp:424] Successfully recovered registrar
> I1215 00:09:28.703640 30865 master.cpp:1802] Recovered 0 agents from the registry (167B);
allowing 10mins for agents to re-register
> I1215 00:09:28.703661 30869 hierarchical.cpp:213] Skipping recovery of hierarchical allocator:
nothing to recover
> W1215 00:09:28.706816 19343 process.cpp:2756] Attempted to spawn already running process
files@127.0.1.1:35029
> I1215 00:09:28.707818 19343 containerizer.cpp:304] Using isolation { environment_secret,
volume/sandbox_path, volume/host_path, docker/runtime, filesystem/linux, network/cni, volume/image
}
> I1215 00:09:28.712363 19343 linux_launcher.cpp:146] Using /sys/fs/cgroup/freezer as the
freezer hierarchy for the Linux launcher
> sh: 1: hadoop: not found
> I1215 00:09:28.740589 19343 fetcher.cpp:69] Skipping URI fetcher plugin 'hadoop' as it
could not be created: Failed to create HDFS client: Hadoop client is not available, exit status:
32512
> I1215 00:09:28.740788 19343 registry_puller.cpp:129] Creating registry puller with docker
registry 'https://registry-1.docker.io'
> I1215 00:09:28.742266 19343 provisioner.cpp:299] Using default backend 'overlay'
> I1215 00:09:28.745649 19343 cluster.cpp:460] Creating default 'local' authorizer
> W1215 00:09:28.748108 19343 process.cpp:2756] Attempted to spawn already running process
version@127.0.1.1:35029
> I1215 00:09:28.748811 30866 slave.cpp:258] Mesos agent started on (1275)@127.0.1.1:35029
> I1215 00:09:28.748911 19343 sched.cpp:232] Version: 1.5.0
> I1215 00:09:28.748826 30866 slave.cpp:259] Flags at startup: --acls="" --appc_simple_discovery_uri_prefix="http://"
--appc_store_dir="/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_UEhrKs/store/appc"
--authenticate_http_readonly="true" --authenticate_http_readwrite="true" --authenticatee="crammd5"
--authentication_backoff_factor="1secs" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false"
--cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false"
--cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --credential="/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_UEhrKs/credential"
--default_role="*" --disallow_sharing_agent_pid_namespace="false" --disk_watch_interval="1mins"
--docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io"
--docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns"
--docker_store_dir="/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_UEhrKs/store/docker"
--docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --enforce_container_disk_quota="false"
--executor_registration_timeout="10mins" --executor_reregistration_timeout="2secs" --executor_shutdown_grace_period="5secs"
--fetcher_cache_dir="/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_UEhrKs/fetch"
--fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1"
--hadoop_home="" --help="false" --hostname_lookup="true" --http_command_executor="false" --http_credentials="/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_UEhrKs/http_credentials"
--http_heartbeat_interval="30secs" --image_providers="docker" --initialize_driver_logging="true"
--isolation="docker/runtime,filesystem/linux" --launcher="linux" --launcher_dir="/home/admin/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-debian-9/mesos/build/src"
--logbufsecs="0" --logging_level="INFO" --max_completed_executors_per_framework="150" --oversubscribed_resources_interval="15secs"
--perf_duration="10secs" --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns"
--quiet="false" --reconfiguration_policy="equal" --recover="reconnect" --recovery_timeout="15mins"
--registration_backoff_factor="10ms" --resources="cpus:2;gpus:0;mem:1024;disk:1024;ports:[31000-32000]"
--revocable_cpu_low_priority="true" --runtime_dir="/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_UEhrKs"
--sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_enable_support="true"
--systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_rkqPcK"
--zk_session_timeout="10secs"
> I1215 00:09:28.749200 30866 credentials.hpp:86] Loading credential for authentication
from '/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_UEhrKs/credential'
> I1215 00:09:28.749317 30866 slave.cpp:291] Agent using credential for: test-principal
> I1215 00:09:28.749331 30866 credentials.hpp:37] Loading credentials for authentication
from '/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_UEhrKs/http_credentials'
> I1215 00:09:28.749513 30866 http.cpp:1045] Creating default 'basic' HTTP authenticator
for realm 'mesos-agent-readonly'
> I1215 00:09:28.749656 30866 http.cpp:1045] Creating default 'basic' HTTP authenticator
for realm 'mesos-agent-readwrite'
> I1215 00:09:28.749974 30871 sched.cpp:336] New master detected at master@127.0.1.1:35029
> I1215 00:09:28.750061 30871 sched.cpp:396] Authenticating with master master@127.0.1.1:35029
> I1215 00:09:28.750073 30871 sched.cpp:403] Using default CRAM-MD5 authenticatee
> I1215 00:09:28.750438 30871 authenticatee.cpp:121] Creating new client SASL connection
> I1215 00:09:28.750818 30871 master.cpp:8934] Authenticating scheduler-05695f64-15af-479c-8d9d-9e06cd5bc5fc@127.0.1.1:35029
> I1215 00:09:28.751013 30871 authenticator.cpp:414] Starting authentication session for
crammd5-authenticatee(2146)@127.0.1.1:35029
> I1215 00:09:28.750947 30866 slave.cpp:590] Agent resources: [{"name":"cpus","scalar":{"value":2.0},"type":"SCALAR"},{"name":"mem","scalar":{"value":1024.0},"type":"SCALAR"},{"name":"disk","scalar":{"value":1024.0},"type":"SCALAR"},{"name":"ports","ranges":{"range":[{"begin":31000,"end":32000}]},"type":"RANGES"}]
> I1215 00:09:28.751157 30866 slave.cpp:598] Agent attributes: [  ]
> I1215 00:09:28.751169 30866 slave.cpp:607] Agent hostname: ip-172-16-10-160.ec2.internal
> I1215 00:09:28.751365 30871 authenticator.cpp:98] Creating new server SASL connection
> I1215 00:09:28.751643 30864 authenticatee.cpp:213] Received SASL authentication mechanisms:
CRAM-MD5
> I1215 00:09:28.751675 30864 authenticatee.cpp:239] Attempting to authenticate with mechanism
'CRAM-MD5'
> I1215 00:09:28.751824 30864 authenticator.cpp:204] Received SASL authentication start
> I1215 00:09:28.751868 30864 authenticator.cpp:326] Authentication requires more steps
> I1215 00:09:28.752003 30864 authenticatee.cpp:259] Received SASL authentication step
> I1215 00:09:28.752105 30864 authenticator.cpp:232] Received SASL authentication step
> I1215 00:09:28.752131 30864 auxprop.cpp:109] Request to lookup properties for user: 'test-principal'
realm: 'ip-172-16-10-160.ec2.internal' server FQDN: 'ip-172-16-10-160.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH:
false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
> I1215 00:09:28.752140 30864 auxprop.cpp:181] Looking up auxiliary property '*userPassword'
> I1215 00:09:28.752182 30864 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> I1215 00:09:28.752203 30864 auxprop.cpp:109] Request to lookup properties for user: 'test-principal'
realm: 'ip-172-16-10-160.ec2.internal' server FQDN: 'ip-172-16-10-160.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH:
false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
> I1215 00:09:28.752210 30864 auxprop.cpp:131] Skipping auxiliary property '*userPassword'
since SASL_AUXPROP_AUTHZID == true
> I1215 00:09:28.752216 30864 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5'
since SASL_AUXPROP_AUTHZID == true
> I1215 00:09:28.752228 30864 authenticator.cpp:318] Authentication success
> I1215 00:09:28.752631 30867 authenticatee.cpp:299] Authentication success
> I1215 00:09:28.752724 30867 master.cpp:8964] Successfully authenticated principal 'test-principal'
at scheduler-05695f64-15af-479c-8d9d-9e06cd5bc5fc@127.0.1.1:35029
> I1215 00:09:28.752779 30864 authenticator.cpp:432] Authentication session cleanup for
crammd5-authenticatee(2146)@127.0.1.1:35029
> I1215 00:09:28.752990 30871 task_status_update_manager.cpp:181] Pausing sending task
status updates
> I1215 00:09:28.753233 30867 sched.cpp:502] Successfully authenticated with master master@127.0.1.1:35029
> I1215 00:09:28.753252 30867 sched.cpp:824] Sending SUBSCRIBE call to master@127.0.1.1:35029
> I1215 00:09:28.753360 30867 sched.cpp:857] Will retry registration in 1.328167601secs
if necessary
> I1215 00:09:28.753563 30865 master.cpp:2957] Received SUBSCRIBE call for framework 'default'
at scheduler-05695f64-15af-479c-8d9d-9e06cd5bc5fc@127.0.1.1:35029
> I1215 00:09:28.753674 30865 master.cpp:2274] Authorizing framework principal 'test-principal'
to receive offers for roles '{ * }'
> I1215 00:09:28.754196 30869 master.cpp:3037] Subscribing framework default with checkpointing
disabled and capabilities [ MULTI_ROLE, RESERVATION_REFINEMENT ]
> I1215 00:09:28.754448 30869 master.cpp:9155] Adding framework 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000
(default) at scheduler-05695f64-15af-479c-8d9d-9e06cd5bc5fc@127.0.1.1:35029 with roles { 
} suppressed
> I1215 00:09:28.755012 30864 hierarchical.cpp:297] Added framework 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000
> I1215 00:09:28.755228 30864 hierarchical.cpp:1517] Performed allocation for 0 agents
in 68100ns
> I1215 00:09:28.755270 30865 sched.cpp:751] Framework registered with 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000
> I1215 00:09:28.755308 30865 sched.cpp:765] Scheduler::registered took 21349ns
> I1215 00:09:28.755578 30867 state.cpp:66] Recovering state from '/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_rkqPcK/meta'
> I1215 00:09:28.755852 30867 task_status_update_manager.cpp:207] Recovering task status
update manager
> I1215 00:09:28.756047 30865 containerizer.cpp:674] Recovering containerizer
> I1215 00:09:28.758538 30864 metadata_manager.cpp:254] No images to load from disk. Docker
provisioner image storage path '/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_UEhrKs/store/docker/storedImages'
does not exist
> I1215 00:09:28.759016 30865 provisioner.cpp:495] Provisioner recovery complete
> I1215 00:09:28.759356 30867 slave.cpp:6742] Finished recovery
> I1215 00:09:28.759928 30865 task_status_update_manager.cpp:181] Pausing sending task
status updates
> I1215 00:09:28.759934 30867 slave.cpp:1033] New master detected at master@127.0.1.1:35029
> I1215 00:09:28.760025 30867 slave.cpp:1088] Detecting new master
> I1215 00:09:28.770395 30871 slave.cpp:1115] Authenticating with master master@127.0.1.1:35029
> I1215 00:09:28.770465 30871 slave.cpp:1124] Using default CRAM-MD5 authenticatee
> I1215 00:09:28.770689 30864 authenticatee.cpp:121] Creating new client SASL connection
> I1215 00:09:28.770879 30869 master.cpp:8934] Authenticating slave(1275)@127.0.1.1:35029
> I1215 00:09:28.770988 30865 authenticator.cpp:414] Starting authentication session for
crammd5-authenticatee(2147)@127.0.1.1:35029
> I1215 00:09:28.771196 30867 authenticator.cpp:98] Creating new server SASL connection
> I1215 00:09:28.771359 30868 authenticatee.cpp:213] Received SASL authentication mechanisms:
CRAM-MD5
> I1215 00:09:28.771385 30868 authenticatee.cpp:239] Attempting to authenticate with mechanism
'CRAM-MD5'
> I1215 00:09:28.771489 30870 authenticator.cpp:204] Received SASL authentication start
> I1215 00:09:28.771546 30870 authenticator.cpp:326] Authentication requires more steps
> I1215 00:09:28.771641 30870 authenticatee.cpp:259] Received SASL authentication step
> I1215 00:09:28.771731 30870 authenticator.cpp:232] Received SASL authentication step
> I1215 00:09:28.771756 30870 auxprop.cpp:109] Request to lookup properties for user: 'test-principal'
realm: 'ip-172-16-10-160.ec2.internal' server FQDN: 'ip-172-16-10-160.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH:
false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
> I1215 00:09:28.771765 30870 auxprop.cpp:181] Looking up auxiliary property '*userPassword'
> I1215 00:09:28.771796 30870 auxprop.cpp:181] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> I1215 00:09:28.771824 30870 auxprop.cpp:109] Request to lookup properties for user: 'test-principal'
realm: 'ip-172-16-10-160.ec2.internal' server FQDN: 'ip-172-16-10-160.ec2.internal' SASL_AUXPROP_VERIFY_AGAINST_HASH:
false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
> I1215 00:09:28.771841 30870 auxprop.cpp:131] Skipping auxiliary property '*userPassword'
since SASL_AUXPROP_AUTHZID == true
> I1215 00:09:28.771845 30870 auxprop.cpp:131] Skipping auxiliary property '*cmusaslsecretCRAM-MD5'
since SASL_AUXPROP_AUTHZID == true
> I1215 00:09:28.771857 30870 authenticator.cpp:318] Authentication success
> I1215 00:09:28.771950 30864 authenticatee.cpp:299] Authentication success
> I1215 00:09:28.772038 30864 master.cpp:8964] Successfully authenticated principal 'test-principal'
at slave(1275)@127.0.1.1:35029
> I1215 00:09:28.772056 30870 authenticator.cpp:432] Authentication session cleanup for
crammd5-authenticatee(2147)@127.0.1.1:35029
> I1215 00:09:28.772331 30871 slave.cpp:1207] Successfully authenticated with master master@127.0.1.1:35029
> I1215 00:09:28.772502 30871 slave.cpp:1686] Will retry registration in 9.305442ms if
necessary
> I1215 00:09:28.772632 30868 master.cpp:6077] Received register agent message from slave(1275)@127.0.1.1:35029
(ip-172-16-10-160.ec2.internal)
> I1215 00:09:28.772701 30868 master.cpp:3866] Authorizing agent with principal 'test-principal'
> I1215 00:09:28.773026 30864 master.cpp:6147] Authorized registration of agent at slave(1275)@127.0.1.1:35029
(ip-172-16-10-160.ec2.internal)
> I1215 00:09:28.773099 30864 master.cpp:6240] Registering agent at slave(1275)@127.0.1.1:35029
(ip-172-16-10-160.ec2.internal) with id 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0
> I1215 00:09:28.773488 30866 registrar.cpp:495] Applied 1 operations in 68587ns; attempting
to update the registry
> I1215 00:09:28.774104 30866 registrar.cpp:552] Successfully updated the registry in 566016ns
> I1215 00:09:28.774257 30868 master.cpp:6288] Admitted agent 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0
at slave(1275)@127.0.1.1:35029 (ip-172-16-10-160.ec2.internal)
> I1215 00:09:28.774807 30868 master.cpp:6337] Registered agent 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0
at slave(1275)@127.0.1.1:35029 (ip-172-16-10-160.ec2.internal) with cpus:2; mem:1024; disk:1024;
ports:[31000-32000]
> I1215 00:09:28.774876 30865 slave.cpp:1253] Registered with master master@127.0.1.1:35029;
given agent ID 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0
> I1215 00:09:28.775096 30865 slave.cpp:1273] Checkpointing SlaveInfo to '/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_rkqPcK/meta/slaves/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0/slave.info'
> I1215 00:09:28.775248 30868 hierarchical.cpp:574] Added agent 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0
(ip-172-16-10-160.ec2.internal) with cpus:2; mem:1024; disk:1024; ports:[31000-32000] (allocated:
{})
> I1215 00:09:28.775449 30867 task_status_update_manager.cpp:188] Resuming sending task
status updates
> I1215 00:09:28.775477 30865 slave.cpp:1330] Forwarding agent update {"offer_operations":{},"resource_version_uuid":{"value":"kVb6A6JqReqCV1FJLKMFkg=="},"slave_id":{"value":"75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0"},"update_oversubscribed_resources":true}
> I1215 00:09:28.775669 30865 master.cpp:7266] Received update of agent 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0
at slave(1275)@127.0.1.1:35029 (ip-172-16-10-160.ec2.internal) with total oversubscribed resources
{}
> I1215 00:09:28.775833 30865 master.cpp:7366] Ignoring update on agent 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0
at slave(1275)@127.0.1.1:35029 (ip-172-16-10-160.ec2.internal) as it reports no changes
> I1215 00:09:28.776401 30868 hierarchical.cpp:1517] Performed allocation for 1 agents
in 829678ns
> I1215 00:09:28.776715 30867 master.cpp:8764] Sending 1 offers to framework 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000
(default) at scheduler-05695f64-15af-479c-8d9d-9e06cd5bc5fc@127.0.1.1:35029
> I1215 00:09:28.777099 30865 sched.cpp:921] Scheduler::resourceOffers took 82115ns
> I1215 00:09:28.778460 30871 master.cpp:10680] Removing offer 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-O0
> I1215 00:09:28.778581 30871 master.cpp:4273] Processing ACCEPT call for offers: [ 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-O0
] on agent 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0 at slave(1275)@127.0.1.1:35029 (ip-172-16-10-160.ec2.internal)
for framework 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000 (default) at scheduler-05695f64-15af-479c-8d9d-9e06cd5bc5fc@127.0.1.1:35029
> I1215 00:09:28.778666 30871 master.cpp:3593] Authorizing framework principal 'test-principal'
to launch task 4a7f7e34-faba-4df3-a3c6-4d210c257eba
> I1215 00:09:28.780021 30866 master.cpp:11435] Adding task 4a7f7e34-faba-4df3-a3c6-4d210c257eba
with resources cpus(allocated: *):1; mem(allocated: *):128 on agent 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0
at slave(1275)@127.0.1.1:35029 (ip-172-16-10-160.ec2.internal)
> I1215 00:09:28.780251 30866 master.cpp:5024] Launching task 4a7f7e34-faba-4df3-a3c6-4d210c257eba
of framework 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000 (default) at scheduler-05695f64-15af-479c-8d9d-9e06cd5bc5fc@127.0.1.1:35029
with resources [{"allocation_info":{"role":"*"},"name":"cpus","scalar":{"value":1.0},"type":"SCALAR"},{"allocation_info":{"role":"*"},"name":"mem","scalar":{"value":128.0},"type":"SCALAR"}]
on agent 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0 at slave(1275)@127.0.1.1:35029 (ip-172-16-10-160.ec2.internal)
> I1215 00:09:28.781033 30864 slave.cpp:1809] Got assigned task '4a7f7e34-faba-4df3-a3c6-4d210c257eba'
for framework 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000
> I1215 00:09:28.781435 30866 hierarchical.cpp:1192] Recovered cpus(allocated: *):1; mem(allocated:
*):896; disk(allocated: *):1024; ports(allocated: *):[31000-32000] (total: cpus:2; mem:1024;
disk:1024; ports:[31000-32000], allocated: cpus(allocated: *):1; mem(allocated: *):128) on
agent 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0 from framework 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000
> I1215 00:09:28.781487 30866 hierarchical.cpp:1238] Framework 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000
filtered agent 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0 for 5secs
> I1215 00:09:28.781637 30864 slave.cpp:2079] Authorizing task '4a7f7e34-faba-4df3-a3c6-4d210c257eba'
for framework 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000
> I1215 00:09:28.781687 30864 slave.cpp:7865] Authorizing framework principal 'test-principal'
to launch task 4a7f7e34-faba-4df3-a3c6-4d210c257eba
> I1215 00:09:28.782510 30865 slave.cpp:2472] Launching task '4a7f7e34-faba-4df3-a3c6-4d210c257eba'
for framework 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000
> I1215 00:09:28.782785 30865 paths.cpp:756] Trying to chown '/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_rkqPcK/slaves/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0/frameworks/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000/executors/4a7f7e34-faba-4df3-a3c6-4d210c257eba/runs/521ab4af-b838-4963-a40a-38b3b4456aaf'
to user 'root'
> I1215 00:09:28.782986 30865 slave.cpp:8339] Launching executor '4a7f7e34-faba-4df3-a3c6-4d210c257eba'
of framework 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000 with resources [{"allocation_info":{"role":"*"},"name":"cpus","scalar":{"value":0.1},"type":"SCALAR"},{"allocation_info":{"role":"*"},"name":"mem","scalar":{"value":32.0},"type":"SCALAR"}]
in work directory '/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_rkqPcK/slaves/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0/frameworks/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000/executors/4a7f7e34-faba-4df3-a3c6-4d210c257eba/runs/521ab4af-b838-4963-a40a-38b3b4456aaf'
> I1215 00:09:28.783558 30865 slave.cpp:3024] Launching container 521ab4af-b838-4963-a40a-38b3b4456aaf
for executor '4a7f7e34-faba-4df3-a3c6-4d210c257eba' of framework 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000
> I1215 00:09:28.784023 30865 slave.cpp:2558] Queued task '4a7f7e34-faba-4df3-a3c6-4d210c257eba'
for executor '4a7f7e34-faba-4df3-a3c6-4d210c257eba' of framework 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000
> I1215 00:09:28.784113 30865 slave.cpp:984] Successfully attached '/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_rkqPcK/slaves/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0/frameworks/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000/executors/4a7f7e34-faba-4df3-a3c6-4d210c257eba/runs/521ab4af-b838-4963-a40a-38b3b4456aaf'
to virtual path '/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_rkqPcK/slaves/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0/frameworks/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000/executors/4a7f7e34-faba-4df3-a3c6-4d210c257eba/runs/latest'
> I1215 00:09:28.784157 30865 slave.cpp:984] Successfully attached '/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_rkqPcK/slaves/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0/frameworks/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000/executors/4a7f7e34-faba-4df3-a3c6-4d210c257eba/runs/521ab4af-b838-4963-a40a-38b3b4456aaf'
to virtual path '/frameworks/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000/executors/4a7f7e34-faba-4df3-a3c6-4d210c257eba/runs/latest'
> I1215 00:09:28.784191 30865 slave.cpp:984] Successfully attached '/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_rkqPcK/slaves/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0/frameworks/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000/executors/4a7f7e34-faba-4df3-a3c6-4d210c257eba/runs/521ab4af-b838-4963-a40a-38b3b4456aaf'
to virtual path '/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_rkqPcK/slaves/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-S0/frameworks/75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000/executors/4a7f7e34-faba-4df3-a3c6-4d210c257eba/runs/521ab4af-b838-4963-a40a-38b3b4456aaf'
> I1215 00:09:28.784395 30870 containerizer.cpp:1201] Starting container 521ab4af-b838-4963-a40a-38b3b4456aaf
> I1215 00:09:28.785315 30868 metadata_manager.cpp:179] Looking for image 'quay.io/coreos/alpine-sh'
> I1215 00:09:28.785763 30866 registry_puller.cpp:287] Pulling image 'quay.io/coreos/alpine-sh'
from 'docker-manifest://quay.iocoreos/alpine-sh?latest#https' to '/tmp/ImageAlpine_ProvisionerDockerTest_ROOT_INTERNET_CURL_SimpleCommand_2_UEhrKs/store/docker/staging/exm8SC'
> E1215 00:09:28.942898 30866 slave.cpp:5645] Container '521ab4af-b838-4963-a40a-38b3b4456aaf'
for executor '4a7f7e34-faba-4df3-a3c6-4d210c257eba' of framework 75b48a47-7b6b-4e60-82d3-dfdc0cf8bff3-0000
failed to start: Failed to decode HTTP responses: Decoding failed
> {code}
> The same issue happens on CentOS 6 with Docker version 1.7.1, build 786b29d and curl
7.57.0 (x86_64-redhat-linux-gnu) libcurl/7.57.0 OpenSSL/1.0.1e zlib/1.2.3 c-ares/1.13.0 libssh2/1.8.0
nghttp2/1.6.0.
> The issue does *not* happen on CentOS 6 with Docker version 1.7.1, build 786b29d and
curl 7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.57.0 OpenSSL/1.0.1e zlib/1.2.3 c-ares/1.13.0
libssh2/1.8.0 nghttp2/1.6.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
