aurora-dev mailing list archives

From John Sirois <j...@conductant.com>
Subject Re: ExecutorConfig data format
Date Thu, 14 Jan 2016 13:29:28 GMT
--
John Sirois
303-512-3301
On Jan 14, 2016 6:16 AM, "Riccardo Poggi" <riccardo.poggi@cern.ch> wrote:
>
> Thanks John,
>
> very nice way to inspect!
>
> If I understood correctly
>
>     >>> jobs[0].json_dumps()
>
> will dump the Job Struct. Is that what actually goes into
> ExecutorConfig.data? The full output, not just a subset of it?

Not sure.  One of us needs to dig into the code to find out ;).
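
In the meantime, a quick way to at least eyeball the client-side dump is to
round-trip it through the stdlib json module in the same repl session (a
minimal sketch, assuming the `jobs[0]` from the recipe below; it only
reformats the string and does not prove the scheduler stores the identical
document in ExecutorConfig.data):

    >>> import json
    >>> raw = jobs[0].json_dumps()  # single-line JSON string
    >>> print(json.dumps(json.loads(raw), indent=2, sort_keys=True))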

>
> Cheers,
>   Riccardo
>
>
>
> On 01/13/2016 07:03 PM, John Sirois wrote:
>>
>> On Wed, Jan 13, 2016 at 7:09 AM, Riccardo Poggi <riccardo.poggi@cern.ch>
>> wrote:
>>
>>> Hello,
>>>
>>> Does anybody know what the ExecutorConfig.data json, in the thrift API, is
>>> supposed to look like?
>>>
>> I'm not sure if there is an easier way to do this - I could only find
>> `aurora` client commands that ingest the json format, not emit it.
>> You can do this to inspect the json format of any aurora job you have the
>> config file for.  From an aurora clone's top dir:
>>
>> $ ./pants -q repl src/main/python/apache/aurora/config
>> warning: tag '0.3.0' is really 'rel/0.3.0' here
>>
>> Python 2.7.10 (default, Aug 24 2015, 12:12:42)
>> [GCC 5.2.0] on linux2
>> Type "help", "copyright", "credits" or "license" for more information.
>> (InteractiveConsole)
>> >>> from apache.aurora.config.schema.base import *
>> >>> execfile('examples/jobs/hello_world.aurora')
>> >>> jobs[0].json_dumps()
>>
>> '{"environment": "prod", "health_check_config": {"initial_interval_secs":
>> 15.0, "endpoint": "/health", "health_checker": {"http":
>> {"expected_response_code": 0, "endpoint": "/health", "expected_response":
>> "ok"}}, "expected_response_code": 0, "expected_response": "ok",
>> "max_consecutive_failures": 0, "timeout_secs": 1.0, "interval_secs":
10.0},
>> "cluster": "devcluster", "name": "hello", "service": true,
"update_config":
>> {"restart_threshold": 60, "wait_for_batch_completion": false,
"batch_size":
>> 1, "watch_secs": 45, "rollback_on_failure": true,
"max_per_shard_failures":
>> 0, "max_total_failures": 0}, "max_task_failures": 1,
>> "cron_collision_policy": "KILL_EXISTING", "enable_hooks": false,
>> "instances": 1, "task": {"processes": [{"daemon": false, "name": "hello",
>> "ephemeral": false, "max_failures": 1, "min_duration": 5, "cmdline": "\\n
>>   while true; do\\n      echo hello world\\n      sleep 10\\n
done\\n  ",
>> "final": false}], "name": "hello", "finalization_wait": 30,
"max_failures":
>> 1, "max_concurrency": 0, "resources": {"disk": 134217728, "ram":
134217728,
>> "cpu": 1.0}, "constraints": [{"order": ["hello"]}]}, "production": false,
>> "role": "www-data", "lifecycle": {"http": {"graceful_shutdown_endpoint":
>> "/quitquitquit", "port": "health", "shutdown_endpoint":
>> "/abortabortabort"}}, "priority": 0}'
>> The "./pants -q repl src/main/python/apache/aurora/config" bit you'd
always
>> run as-is as well as the "from apache.aurora.config.schema.base import *"
>> when in the repl - these a repl with all the job config modeling code.
The
>> "execfile(...)" and the index in the array of "jobs" will vary for your
use
>> case.
>>
>> Hopefully I'm missing an aurora client command to do the dump, but if not,
>> it seems to me a natural feature to pair with things like "aurora job
>> inspect --read-json..."
>>
>>
>>> Cheers,
>>>     Riccardo
>>>
>
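
For anyone adapting the recipe quoted above: only the `execfile(...)` path and
the index into `jobs` change per use case. A minimal sketch (the .aurora path
here is a placeholder, not a real example file):

    $ ./pants -q repl src/main/python/apache/aurora/config
    >>> from apache.aurora.config.schema.base import *
    >>> execfile('path/to/your_job.aurora')  # point this at your own config
    >>> len(jobs)                            # how many jobs the file defines
    >>> jobs[0].json_dumps()                 # pick the index of the job you want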
