mesos-issues mailing list archives

From "wangqun (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (MESOS-5148) Supporting Container Images in Mesos Containerizer doesn't work by using marathon api
Date Sat, 09 Apr 2016 02:15:25 GMT

    [ https://issues.apache.org/jira/browse/MESOS-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233293#comment-15233293 ]

wangqun edited comment on MESOS-5148 at 4/9/16 2:14 AM:
--------------------------------------------------------

@Tim Anderegg Thank you for pointing out the mistake. I have modified the mesos.json above as you suggested.
$ sudo vim mesos.json
{
  "container": {
    "type": "MESOS",
    "docker": {
      "image": "library/redis"
    }
  },
  "id": "ubuntumesos",
  "instances": 1,
  "cpus": 0.5,
  "mem": 512,
  "uris": [],
  "cmd": "ping 8.8.8.8"
}
I tested it again with the command "sudo docker run -ti --net=host redis redis-cli", and it still cannot connect successfully. I want to know whether my test method is wrong; I don't know how to verify that the container was created successfully. I only ran "sudo docker run -ti --net=host redis redis-cli" because that is what https://github.com/apache/mesos/blob/master/docs/container-image.md#test-it-out suggests. Since I am using the Mesos containerizer, perhaps I should not be checking through the Docker client with that command at all. I have pasted the master log and slave log; please check them.
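For example, would something like the following be the right way to confirm that the Mesos containerizer launched the container? (This is just my guess, assuming the agent's default HTTP port 5051, my --work_dir=/tmp/mesos/slave, and that the "latest" run symlink and Marathon's "tasksRunning" field behave the way I expect.)
$ curl http://10.0.0.5:5051/state                        # agent state (or /state.json on older releases); the executor should be listed under the framework
$ sudo curl http://localhost:8080/v2/apps/ubuntumesos    # Marathon app status; "tasksRunning" should be 1
$ sudo cat /tmp/mesos/slave/slaves/*/frameworks/*/executors/ubuntumesos.*/runs/latest/stdout   # sandbox output of the task command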
Thanks.



> Supporting Container Images in Mesos Containerizer doesn't work by using marathon api
> -------------------------------------------------------------------------------------
>
>                 Key: MESOS-5148
>                 URL: https://issues.apache.org/jira/browse/MESOS-5148
>             Project: Mesos
>          Issue Type: Bug
>            Reporter: wangqun
>
> Hi
>     I use the Marathon API to create tasks to test Supporting Container Images in the Mesos Containerizer.
> My steps are the following:
> 1) Run the mesos-master process on the master node.
> sudo /usr/sbin/mesos-master --zk=zk://10.0.0.4:2181/mesos --port=5050 --log_dir=/var/log/mesos
--cluster=mesosbay --hostname=10.0.0.4 --ip=10.0.0.4 --quorum=1 --work_dir=/var/lib/mesos
> 2) Run the mesos-slave process on the slave node.
> sudo /usr/sbin/mesos-slave --master=zk://10.0.0.4:2181/mesos --log_dir=/var/log/mesos
--containerizers=docker,mesos --executor_registration_timeout=5mins --hostname=10.0.0.5 --ip=10.0.0.5
--isolation=docker/runtime,filesystem/linux --work_dir=/tmp/mesos/slave --image_providers=docker
--executor_environment_variables="{}"
> 3) Create a JSON file specifying the container to be managed by Mesos.
> sudo touch mesos.json
> sudo vim mesos.json
> {
>   "container": {
>     "type": "MESOS",
>     "docker": {
>       "image": "library/redis"
>     }
>   },
>   "id": "ubuntumesos",
>   "instances": 1,
>   "cpus": 0.5,
>   "mem": 512,
>   "uris": [],
>   "cmd": "ping 8.8.8.8"
> }
> 4) sudo curl -X POST -H "Content-Type: application/json" localhost:8080/v2/apps -d@mesos.json
> 5) sudo curl http://localhost:8080/v2/tasks
> {"tasks":[{"id":"ubuntumesos.fc1879be-fc9f-11e5-81e0-024294de4967","host":"10.0.0.5","ipAddresses":[],"ports":[31597],"startedAt":"2016-04-07T09:06:24.900Z","stagedAt":"2016-04-07T09:06:16.611Z","version":"2016-04-07T09:06:14.354Z","slaveId":"058fb5a7-9273-4bfa-83bb-8cb091621e19-S1","appId":"/ubuntumesos","servicePorts":[10000]}]}
> 6) sudo docker run -ti --net=host redis redis-cli  
> Could not connect to Redis at 127.0.0.1:6379: Connection refused
> not connected> 
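> (For reference, the task state can also be checked from the Mesos master, which should report it as TASK_RUNNING; this uses the same /master/state endpoint that shows up in the HTTP GET lines in the master log below.)
> sudo curl http://10.0.0.4:5050/master/state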
> 7) This is the slave log:
> I0409 01:43:48.774868 3492 slave.cpp:3886] Executor 'ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce'
of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000 exited with status 0
> I0409 01:43:48.781307 3492 slave.cpp:3990] Cleaning up executor 'ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce'
of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000 at executor(1)@10.0.0.5:60134
> I0409 01:43:48.808364 3492 slave.cpp:4078] Cleaning up framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
> I0409 01:43:48.811336 3493 gc.cpp:55] Scheduling '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce/runs/24d0872d-1ba1-4384-be11-a20c82893ea4'
for gc 6.99999070953778days in the future
> I0409 01:43:48.817401 3493 gc.cpp:55] Scheduling '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce'
for gc 6.99999065992889days in the future
> I0409 01:43:48.823158 3493 gc.cpp:55] Scheduling '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce/runs/24d0872d-1ba1-4384-be11-a20c82893ea4'
for gc 6.99999065273185days in the future
> I0409 01:43:48.826216 3491 status_update_manager.cpp:282] Closing status update streams
for framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
> I0409 01:43:48.835602 3493 gc.cpp:55] Scheduling '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce'
for gc 6.99999064716444days in the future
> I0409 01:43:48.838580 3493 gc.cpp:55] Scheduling '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000'
for gc 6.99999041064889days in the future
> I0409 01:43:48.844699 3493 gc.cpp:55] Scheduling '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000'
for gc 6.9999902654163days in the future
> I0409 01:44:01.623440 3494 slave.cpp:4374] Current disk usage 27.10%. Max allowed age:
4.403153217546436days
> I0409 01:44:32.339310 3494 slave.cpp:1361] Got assigned task ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce
for framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
> I0409 01:44:32.451300 3489 gc.cpp:83] Unscheduling '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000'
from gc
> I0409 01:44:32.459689 3491 gc.cpp:83] Unscheduling '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000'
from gc
> I0409 01:44:32.465939 3494 slave.cpp:1480] Launching task ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce
for framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
> I0409 01:44:32.508301 3494 paths.cpp:528] Trying to chown '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000/executors/ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce/runs/5d230e57-25be-4105-8725-fbbbb65e15ff'
to user 'root'
> I0409 01:44:33.795454 3494 slave.cpp:5367] Launching executor ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce
of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000 with resources cpus:0.1; mem:32 in
work directory '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000/executors/ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce/runs/5d230e57-25be-4105-8725-fbbbb65e15ff'
> I0409 01:44:33.915488 3495 docker.cpp:1014] Skipping non-docker container
> I0409 01:44:33.980628 3491 containerizer.cpp:666] Starting container '5d230e57-25be-4105-8725-fbbbb65e15ff'
for executor 'ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce' of framework 'ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000'
> I0409 01:44:34.027020 3494 slave.cpp:1698] Queuing task 'ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce'
for executor 'ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce' of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
> I0409 01:44:34.292232 3492 linux_launcher.cpp:304] Cloning child process with flags =
CLONE_NEWNS
> I0409 01:44:34.453189 3492 containerizer.cpp:1118] Checkpointing executor's forked pid
3982 to '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000/executors/ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce/runs/5d230e57-25be-4105-8725-fbbbb65e15ff/pids/forked.pid'
> I0409 01:44:38.632611 3492 slave.cpp:2643] Got registration for executor 'ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce'
of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000 from executor(1)@10.0.0.5:39977
> I0409 01:44:38.883911 3493 slave.cpp:1863] Sending queued task 'ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce'
to executor 'ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce' of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
at executor(1)@10.0.0.5:39977
> I0409 01:44:39.327751 3492 slave.cpp:3002] Handling status update TASK_RUNNING (UUID:
2df8c4b4-8aa1-472f-ae0b-a07d092022bc) for task ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce
of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000 from executor(1)@10.0.0.5:39977
> I0409 01:44:39.451637 3494 status_update_manager.cpp:320] Received status update TASK_RUNNING
(UUID: 2df8c4b4-8aa1-472f-ae0b-a07d092022bc) for task ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce
of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
> I0409 01:44:39.480607 3494 status_update_manager.cpp:824] Checkpointing UPDATE for status
update TASK_RUNNING (UUID: 2df8c4b4-8aa1-472f-ae0b-a07d092022bc) for task ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce
of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
> I0409 01:44:39.562551 3493 slave.cpp:3400] Forwarding the update TASK_RUNNING (UUID:
2df8c4b4-8aa1-472f-ae0b-a07d092022bc) for task ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce
of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000 to master@10.0.0.4:5050
> I0409 01:44:39.594686 3493 slave.cpp:3310] Sending acknowledgement for status update
TASK_RUNNING (UUID: 2df8c4b4-8aa1-472f-ae0b-a07d092022bc) for task ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce
of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000 to executor(1)@10.0.0.5:39977
> I0409 01:44:39.966917 3490 status_update_manager.cpp:392] Received status update acknowledgement
(UUID: 2df8c4b4-8aa1-472f-ae0b-a07d092022bc) for task ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce
of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
> I0409 01:44:39.977138 3490 status_update_manager.cpp:824] Checkpointing ACK for status
update TASK_RUNNING (UUID: 2df8c4b4-8aa1-472f-ae0b-a07d092022bc) for task ubuntumesos.9ab04999-fdf4-11e5-8b4b-0242e2dedfce
of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
> 8) This is the master log:
> I0409 01:50:42.840116 4175 master.cpp:377] Flags at startup: --allocation_interval="1secs"
--allocator="HierarchicalDRF" --authenticate="false" --authenticate_http="false" --authenticate_slaves="false"
--authenticators="crammd5" --authorizers="local" --cluster="mesosbay" --framework_sorter="drf"
--help="false" --hostname="10.0.0.4" --hostname_lookup="true" --http_authenticators="basic"
--initialize_driver_logging="true" --ip="10.0.0.4" --log_auto_initialize="true" --log_dir="/var/log/mesos"
--logbufsecs="0" --logging_level="INFO" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000"
--max_slave_ping_timeouts="5" --port="5050" --quiet="false" --quorum="1" --recovery_slave_removal_limit="100%"
--registry="replicated_log" --registry_fetch_timeout="1mins" --registry_store_timeout="20secs"
--registry_strict="false" --root_submissions="true" --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins"
--user_sorter="drf" --version="false" --webui_dir="/usr/share/mesos/webui" --work_dir="/var/lib/mesos"
--zk="zk://10.0.0.4:2181/mesos" --zk_session_timeout="10secs"
> I0409 01:50:43.047547 4175 master.cpp:424] Master allowing unauthenticated frameworks
to register
> I0409 01:50:43.099207 4175 master.cpp:429] Master allowing unauthenticated slaves to
register
> I0409 01:50:43.082841 4188 network.hpp:413] ZooKeeper group memberships changed
> I0409 01:50:43.122980 4175 master.cpp:467] Using default 'crammd5' authenticator
> W0409 01:50:43.133535 4175 authenticator.cpp:511] No credentials provided, authentication
requests will be refused
> I0409 01:50:43.138221 4185 group.cpp:700] Trying to get '/mesos/log_replicas/0000000004'
in ZooKeeper
> I0409 01:50:43.201158 4175 authenticator.cpp:518] Initializing server SASL
> I0409 01:50:43.282641 4187 network.hpp:461] ZooKeeper group PIDs:
> { log-replica(1)@10.0.0.4:5050 }
> I0409 01:50:43.669838 4175 master.cpp:1650] Successfully attached file '/var/log/mesos/mesos-master.INFO'
> I0409 01:50:43.697309 4182 contender.cpp:147] Joining the ZK group
> I0409 01:50:43.850167 4183 detector.cpp:152] Detected a new leader: (id='5')
> I0409 01:50:43.890305 4188 contender.cpp:263] New candidate (id='5') has entered the
contest for leadership
> I0409 01:50:43.905459 4185 group.cpp:700] Trying to get '/mesos/json.info_0000000005'
in ZooKeeper
> I0409 01:50:44.114951 4184 detector.cpp:479] A new leading master (UPID=master@10.0.0.4:5050)
is detected
> I0409 01:50:44.143424 4186 master.cpp:1711] The newly elected leader is master@10.0.0.4:5050
with id f0b801f1-4126-4410-ac5b-1cee081936c2
> I0409 01:50:44.148952 4186 master.cpp:1724] Elected as the leading master!
> I0409 01:50:44.163781 4186 master.cpp:1469] Recovering from registrar
> I0409 01:50:44.209345 4187 registrar.cpp:307] Recovering registrar
> I0409 01:50:44.361701 4183 log.cpp:659] Attempting to start the writer
> I0409 01:50:44.513617 4186 replica.cpp:493] Replica received implicit promise request
from (6)@10.0.0.4:5050 with proposal 5
> I0409 01:50:44.558959 4186 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb
took 31.97148ms
> I0409 01:50:44.570137 4186 replica.cpp:342] Persisted promised to 5
> I0409 01:50:44.628942 4183 coordinator.cpp:238] Coordinator attempting to fill missing
positions
> I0409 01:50:44.650364 4187 log.cpp:675] Writer started with ending position 18
> I0409 01:50:44.738010 4188 leveldb.cpp:436] Reading position from leveldb took 13.148665ms
> I0409 01:50:44.748862 4188 leveldb.cpp:436] Reading position from leveldb took 2.432192ms
> I0409 01:50:44.862632 4188 registrar.cpp:340] Successfully fetched the registry (279B)
in 633.18016ms
> I0409 01:50:44.879431 4188 registrar.cpp:439] Applied 1 operations in 7.347417ms; attempting
to update the 'registry'
> I0409 01:50:44.981389 4184 log.cpp:683] Attempting to append 318 bytes to the log
> I0409 01:50:44.997843 4182 coordinator.cpp:348] Coordinator attempting to write APPEND
action at position 19
> I0409 01:50:45.074020 4185 replica.cpp:537] Replica received write request for position
19 from (7)@10.0.0.4:5050
> I0409 01:50:45.122946 4185 leveldb.cpp:341] Persisting action (337 bytes) to leveldb
took 36.842289ms
> I0409 01:50:45.131013 4185 replica.cpp:712] Persisted action at 19
> I0409 01:50:45.176854 4187 replica.cpp:691] Replica received learned notice for position
19 from @0.0.0.0:0
> I0409 01:50:45.207540 4187 leveldb.cpp:341] Persisting action (339 bytes) to leveldb
took 21.688351ms
> I0409 01:50:45.215061 4187 replica.cpp:712] Persisted action at 19
> I0409 01:50:45.221374 4187 replica.cpp:697] Replica learned APPEND action at position
19
> I0409 01:50:45.277045 4185 registrar.cpp:484] Successfully updated the 'registry' in
385.18784ms
> I0409 01:50:45.292476 4185 registrar.cpp:370] Successfully recovered registrar
> I0409 01:50:45.317539 4188 log.cpp:702] Attempting to truncate the log to 19
> I0409 01:50:45.325106 4186 coordinator.cpp:348] Coordinator attempting to write TRUNCATE
action at position 20
> I0409 01:50:45.360450 4187 replica.cpp:537] Replica received write request for position
20 from (8)@10.0.0.4:5050
> I0409 01:50:45.381896 4189 master.cpp:1521] Recovered 1 slaves from the Registry (279B)
; allowing 10mins for slaves to re-register
> I0409 01:50:45.419257 4187 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took
45.81257ms
> I0409 01:50:45.425475 4187 replica.cpp:712] Persisted action at 20
> I0409 01:50:45.452914 4183 replica.cpp:691] Replica received learned notice for position
20 from @0.0.0.0:0
> I0409 01:50:45.482556 4183 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took
16.159979ms
> I0409 01:50:45.501960 4183 leveldb.cpp:399] Deleting ~2 keys from leveldb took 10.368738ms
> I0409 01:50:45.509778 4183 replica.cpp:712] Persisted action at 20
> I0409 01:50:45.515420 4183 replica.cpp:697] Replica learned TRUNCATE action at position
20
> I0409 01:50:45.853322 4185 master.cpp:2231] Received SUBSCRIBE call for framework 'marathon'
at scheduler-517826ff-682a-4407-ae14-3cf4047a538b@10.0.0.4:38884
> I0409 01:50:45.869339 4185 master.cpp:2302] Subscribing framework marathon with checkpointing
enabled and capabilities [ ]
> I0409 01:50:59.431000 4187 coordinator.cpp:348] Coordinator attempting to write APPEND
action at position 21
> I0409 01:50:59.568622 4188 replica.cpp:537] Replica received write request for position
21 from (10)@10.0.0.4:5050
> I0409 01:50:59.698292 4188 leveldb.cpp:341] Persisting action (337 bytes) to leveldb
took 112.074178ms
> I0409 01:50:59.741580 4188 replica.cpp:712] Persisted action at 21
> I0409 01:50:59.871381 4185 replica.cpp:691] Replica received learned notice for position
21 from @0.0.0.0:0
> I0409 01:51:00.010634 4185 leveldb.cpp:341] Persisting action (339 bytes) to leveldb
took 56.018515ms
> I0409 01:51:00.034754 4185 replica.cpp:712] Persisted action at 21
> I0409 01:51:00.041738 4185 replica.cpp:697] Replica learned APPEND action at position
21
> I0409 01:51:00.026186 4188 master.cpp:4432] Ignoring re-register slave message from slave
da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0 at slave(1)@10.0.0.5:5051 (10.0.0.5) as readmission
is already in progress
> I0409 01:51:00.169466 4187 registrar.cpp:484] Successfully updated the 'registry' in
1.021276928secs
> I0409 01:51:00.246542 4183 log.cpp:702] Attempting to truncate the log to 21
> I0409 01:51:00.258146 4184 coordinator.cpp:348] Coordinator attempting to write TRUNCATE
action at position 22
> I0409 01:51:00.327055 4189 replica.cpp:537] Replica received write request for position
22 from (11)@10.0.0.4:5050
> I0409 01:51:00.441076 4189 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took
105.698508ms
> I0409 01:51:00.450001 4189 replica.cpp:712] Persisted action at 22
> I0409 01:51:00.509459 4183 replica.cpp:691] Replica received learned notice for position
22 from @0.0.0.0:0
> I0409 01:51:00.582427 4183 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took
46.515156ms
> I0409 01:51:00.653038 4183 leveldb.cpp:399] Deleting ~2 keys from leveldb took 39.978138ms
> I0409 01:51:00.704844 4183 replica.cpp:712] Persisted action at 22
> I0409 01:51:00.721366 4183 replica.cpp:697] Replica learned TRUNCATE action at position
22
> I0409 01:51:00.809880 4182 master.cpp:4521] Re-registered slave da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0
at slave(1)@10.0.0.5:5051 (10.0.0.5) with cpus:1; mem:1001; disk:3811; ports:[31000-32000]
> I0409 01:51:00.778683 4183 hierarchical.cpp:473] Added slave da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0
(10.0.0.5) with cpus:1; mem:1001; disk:3811; ports:[31000-32000] (allocated: )
> I0409 01:51:00.835283 4182 master.cpp:4556] Sending updated checkpointed resources to
slave da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0 at slave(1)@10.0.0.5:5051 (10.0.0.5)
> I0409 01:51:00.949656 4182 master.cpp:4618] Received update of slave da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0
at slave(1)@10.0.0.5:5051 (10.0.0.5) with total oversubscribed resources
> I0409 01:51:01.011227 4183 hierarchical.cpp:531] Slave da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0
(10.0.0.5) updated with oversubscribed resources (total: cpus:1; mem:1001; disk:3811; ports:[31000-32000],
allocated: cpus:1; mem:1001; disk:3811; ports:[31000-32000])
> I0409 01:51:01.029819 4187 master.cpp:5324] Sending 1 offers to framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
(marathon) at scheduler-517826ff-682a-4407-ae14-3cf4047a538b@10.0.0.4:38884
> -S0 at slave(1)@10.0.0.5:5051 (10.0.0.5)
> I0409 01:51:10.095077 4185 master.cpp:5324] Sending 1 offers to framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
(marathon) at scheduler-517826ff-682a-4407-ae14-3cf4047a538b@10.0.0.4:38884
> I0409 01:51:10.141659 4182 master.cpp:3641] Processing DECLINE call for offers: [ f0b801f1-4126-4410-ac5b-1cee081936c2-O2
] for framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000 (marathon) at scheduler-517826ff-682a-4407-ae14-3cf4047a538b@10.0.0.4:38884
> I0409 01:51:14.905616 4183 master.cpp:4763] Status update TASK_RUNNING (UUID: ce192af2-4dbd-491d-b82f-f7f81de0ed72)
for task ubuntumesos.8701b95a-fdf5-11e5-8b4b-0242e2dedfce of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
from slave da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0 at slave(1)@10.0.0.5:5051 (10.0.0.5)
> I0409 01:51:14.912132 4183 master.cpp:4811] Forwarding status update TASK_RUNNING (UUID:
ce192af2-4dbd-491d-b82f-f7f81de0ed72) for task ubuntumesos.8701b95a-fdf5-11e5-8b4b-0242e2dedfce
of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
> I0409 01:51:14.937263 4183 master.cpp:6421] Updating the state of task ubuntumesos.8701b95a-fdf5-11e5-8b4b-0242e2dedfce
of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000 (latest state: TASK_RUNNING, status
update state: TASK_RUNNING)
> I0409 01:51:15.498894 4185 master.cpp:3918] Processing ACKNOWLEDGE call ce192af2-4dbd-491d-b82f-f7f81de0ed72
for task ubuntumesos.8701b95a-fdf5-11e5-8b4b-0242e2dedfce of framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
(marathon) at scheduler-517826ff-682a-4407-ae14-3cf4047a538b@10.0.0.4:38884 on slave da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0
> I0409 01:52:01.360699 4189 http.cpp:312] HTTP GET for /master/state from 172.24.4.1:60882
with User-Agent='python-requests/2.9.1'
> I0409 01:53:03.336822 4184 http.cpp:312] HTTP GET for /master/state from 172.24.4.1:33222
with User-Agent='python-requests/2.9.1'
> I0409 01:53:10.936683 4184 master.cpp:5324] Sending 1 offers to framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000
(marathon) at scheduler-517826ff-682a-4407-ae14-3cf4047a538b@10.0.0.4:38884
> I0409 01:53:10.970932 4183 master.cpp:3641] Processing DECLINE call for offers: [ f0b801f1-4126-4410-ac5b-1cee081936c2-O3
] for framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-0000 (marathon) at scheduler-517826ff-682a-4407-ae14-3cf4047a538b@10.0.0.4:38884
> I0409 01:54:05.342922 4188 http.cpp:312] HTTP GET for /master/state from 172.24.4.1:33796
with User-Agent='python-requests/2.9.1'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
