aurora-dev mailing list archives

From Mark Chu-Carroll <mchucarr...@apache.org>
Subject Re: Build failed in Jenkins: Aurora #432
Date Tue, 15 Jul 2014 18:41:25 GMT
All tests pass locally, so I'm not sure why they failed in the CB, but I'm
going to revert while I debug.

  -Mark


On Tue, Jul 15, 2014 at 11:31 AM, Apache Jenkins Server <
jenkins@builds.apache.org> wrote:

> See <https://builds.apache.org/job/Aurora/432/changes>
>
> Changes:
>
> [mchucarroll] Improve aurora "job diff" command.
>
> ------------------------------------------
> [...truncated 3615 lines...]
>
> src/test/python/apache/aurora/client/cli/test_sla.py:31:
> TestGetTaskUpCountCommand.test_get_task_up_count_no_duration PASSED
> src/test/python/apache/aurora/client/cli/test_sla.py:46:
> TestGetTaskUpCountCommand.test_get_task_up_count_with_durations PASSED
> src/test/python/apache/aurora/client/cli/test_sla.py:68:
> TestGetJobUptimeCommand.test_get_job_uptime_no_percentile PASSED
> src/test/python/apache/aurora/client/cli/test_sla.py:87:
> TestGetJobUptimeCommand.test_get_job_uptime_with_percentiles PASSED
> src/test/python/apache/aurora/client/cli/test_sla.py:99:
> TestGetJobUptimeCommand.test_invalid_percentile usage: tmpwdK93h sla
> get-job-uptime [-h] [--percentiles PERCENTILES]
>                                     [--verbose-logging]
>                                     [--logging-level LOGGING_LEVEL]
>                                     [--reveal-errors]
>                                     [--skip-hooks hook,hook,...]
>                                     CLUSTER/ROLE/ENV/NAME
> tmpwdK93h sla get-job-uptime: error: argument --percentiles: Invalid
> percentile value:100. Must be between 0.0 and 100.0 exclusive.
> PASSED
>
> =========================== 5 passed in 0.69 seconds
> ===========================
> ============================= test session starts
> ==============================
> platform linux2 -- Python 2.7.3 -- py-1.4.21 -- pytest-2.5.2 --
> /usr/bin/python2.7
> plugins: cov, timeout
> collecting ... collected 13 items
>
> src/test/python/apache/thermos/core/test_runner_integration.py:49:
> TestRunnerBasic.test_runner_state_success PASSED
> src/test/python/apache/thermos/core/test_runner_integration.py:52:
> TestRunnerBasic.test_runner_header_populated PASSED
> src/test/python/apache/thermos/core/test_runner_integration.py:61:
> TestRunnerBasic.test_runner_has_allocated_name_ports PASSED
> src/test/python/apache/thermos/core/test_runner_integration.py:66:
> TestRunnerBasic.test_runner_has_expected_processes PASSED
> src/test/python/apache/thermos/core/test_runner_integration.py:74:
> TestRunnerBasic.test_runner_processes_have_expected_output PASSED
> src/test/python/apache/thermos/core/test_runner_integration.py:83:
> TestRunnerBasic.test_runner_processes_have_monotonically_increasing_timestamps
> PASSED
> src/test/python/apache/thermos/core/test_runner_integration.py <-
> ../../../../../tmp/tmpD1ZvyY/apache/thermos/testing/runner.py:209:
> TestRunnerBasic.test_runner_state_reconstruction PASSEDFailed to kill
> runner: [Errno 3] No such process
>
> src/test/python/apache/thermos/core/test_runner_integration.py:103:
> TestConcurrencyBasic.test_runner_state_success PASSED
> src/test/python/apache/thermos/core/test_runner_integration.py:107:
> TestConcurrencyBasic.test_runner_processes_separated_temporally_due_to_concurrency_limit
> PASSED
> src/test/python/apache/thermos/core/test_runner_integration.py <-
> ../../../../../tmp/tmpD1ZvyY/apache/thermos/testing/runner.py:209:
> TestConcurrencyBasic.test_runner_state_reconstruction PASSEDFailed to kill
> runner: [Errno 3] No such process
>
> src/test/python/apache/thermos/core/test_runner_integration.py:160:
> TestRunnerEnvironment.test_runner_state_success PASSED
> src/test/python/apache/thermos/core/test_runner_integration.py:163:
> TestRunnerEnvironment.test_runner_processes_have_expected_output PASSED
> src/test/python/apache/thermos/core/test_runner_integration.py <-
> ../../../../../tmp/tmpD1ZvyY/apache/thermos/testing/runner.py:209:
> TestRunnerEnvironment.test_runner_state_reconstruction PASSEDFailed to kill
> runner: [Errno 3] No such process
>
>
> ========================== 13 passed in 11.63 seconds
> ==========================
> ============================= test session starts
> ==============================
> platform linux2 -- Python 2.7.3 -- py-1.4.21 -- pytest-2.5.2 --
> /usr/bin/python2.7
> plugins: cov, timeout
> collecting ... collected 4 items
>
> src/test/python/apache/aurora/executor/test_executor_vars.py:20:
> test_release_from_tag PASSED
> src/test/python/apache/aurora/executor/test_executor_vars.py:33:
> test_extract_pexinfo PASSED
> src/test/python/apache/aurora/executor/test_executor_vars.py:44: test_init
> PASSED
> src/test/python/apache/aurora/executor/test_executor_vars.py:52:
> test_sample PASSED
>
> =========================== 4 passed in 0.17 seconds
> ===========================
> ============================= test session starts
> ==============================
> platform linux2 -- Python 2.7.3 -- py-1.4.21 -- pytest-2.5.2 --
> /usr/bin/python2.7
> plugins: cov, timeout
> collecting ... collected 12 items
>
> src/test/python/apache/aurora/client/hooks/test_hooked_api.py:32:
> test_api_methods_exist[cancel_update] PASSED
> src/test/python/apache/aurora/client/hooks/test_hooked_api.py:32:
> test_api_methods_exist[create_job] PASSED
> src/test/python/apache/aurora/client/hooks/test_hooked_api.py:32:
> test_api_methods_exist[kill_job] PASSED
> src/test/python/apache/aurora/client/hooks/test_hooked_api.py:32:
> test_api_methods_exist[restart] PASSED
> src/test/python/apache/aurora/client/hooks/test_hooked_api.py:32:
> test_api_methods_exist[start_cronjob] PASSED
> src/test/python/apache/aurora/client/hooks/test_hooked_api.py:32:
> test_api_methods_exist[update_job] PASSED
> src/test/python/apache/aurora/client/hooks/test_hooked_api.py:39:
> test_api_methods_params[cancel_update] PASSED
> src/test/python/apache/aurora/client/hooks/test_hooked_api.py:39:
> test_api_methods_params[create_job] PASSED
> src/test/python/apache/aurora/client/hooks/test_hooked_api.py:39:
> test_api_methods_params[kill_job] PASSED
> src/test/python/apache/aurora/client/hooks/test_hooked_api.py:39:
> test_api_methods_params[restart] PASSED
> src/test/python/apache/aurora/client/hooks/test_hooked_api.py:39:
> test_api_methods_params[start_cronjob] PASSED
> src/test/python/apache/aurora/client/hooks/test_hooked_api.py:39:
> test_api_methods_params[update_job] PASSED
>
> ========================== 12 passed in 0.51 seconds
> ===========================
> ============================= test session starts
> ==============================
> platform linux2 -- Python 2.7.3 -- py-1.4.21 -- pytest-2.5.2 --
> /usr/bin/python2.7
> plugins: cov, timeout
> collecting ... collected 4 items
>
> src/test/python/apache/aurora/client/api/test_job_monitor.py:88:
> JobMonitorTest.test_empty_job_succeeds PASSED
> src/test/python/apache/aurora/client/api/test_job_monitor.py:77:
> JobMonitorTest.test_wait_until_state PASSED
> src/test/python/apache/aurora/client/api/test_job_monitor.py:104:
> JobMonitorTest.test_wait_until_timeout PASSED
> src/test/python/apache/aurora/client/api/test_job_monitor.py:95:
> JobMonitorTest.test_wait_with_instances PASSED
>
> =========================== 4 passed in 1.08 seconds
> ===========================
> ============================= test session starts
> ==============================
> platform linux2 -- Python 2.7.3 -- py-1.4.21 -- pytest-2.5.2 --
> /usr/bin/python2.7
> plugins: cov, timeout
> collecting ... collected 3 items
>
> src/test/python/apache/aurora/client/cli/test_task_run.py:91:
> TestRunCommand.test_successful_run slavehost:  hello
> slavehost:  hello
> slavehost:  hello
> PASSED
> src/test/python/apache/aurora/client/cli/test_task_run.py:95:
> TestRunCommand.test_successful_run_with_instances slavehost:  hello
> slavehost:  hello
> slavehost:  hello
> PASSED
> src/test/python/apache/aurora/client/cli/test_task_run.py:190:
> TestSshCommand.test_successful_ssh PASSED
>
> =========================== 3 passed in 0.62 seconds
> ===========================
> ============================= test session starts
> ==============================
> platform linux2 -- Python 2.7.3 -- py-1.4.21 -- pytest-2.5.2 --
> /usr/bin/python2.7
> plugins: cov, timeout
> collecting ... collected 10 items
>
> src/test/python/apache/aurora/client/test_config.py:69:
> test_get_config_announces PASSED
> src/test/python/apache/aurora/client/test_config.py:80:
> test_get_config_select PASSED
> src/test/python/apache/aurora/client/test_config.py:98: test_include PASSED
> src/test/python/apache/aurora/client/test_config.py:119:
> test_environment_names PASSED
> src/test/python/apache/aurora/client/test_config.py:136:
> test_inject_default_environment
> Job did not specify environment, auto-populating to "devel".
>
> PASSED
> src/test/python/apache/aurora/client/test_config.py:151:
> test_dedicated_portmap
> Announcer specified primary port as 'http' but no processes have bound
> that port.
> If you would like to utilize this port, you should listen on
> {{thermos.ports[http]}}
> from some Process bound to your task.
>
>
> Announcer specified primary port as 'http' but no processes have bound
> that port.
> If you would like to utilize this port, you should listen on
> {{thermos.ports[http]}}
> from some Process bound to your task.
>
>
> Announcer specified primary port as 'http' but no processes have bound
> that port.
> If you would like to utilize this port, you should listen on
> {{thermos.ports[http]}}
> from some Process bound to your task.
>
> PASSED
> src/test/python/apache/aurora/client/test_config.py:174:
> test_update_config_passes_with_default_values PASSED
> src/test/python/apache/aurora/client/test_config.py:183:
> test_update_config_passes_with_min_requirement_values PASSED
> src/test/python/apache/aurora/client/test_config.py:194:
> test_update_config_fails_insufficient_watch_secs_less_than_target
> CRITICAL:root:
> You have specified an insufficiently short watch period (10 seconds) in
> your update configuration.
> Your update will always succeed. In order for the updater to detect health
> check failures,
> UpdateConfig.watch_secs must be greater than 15 seconds to account for an
> initial
> health check interval (15 seconds) plus 0 consecutive failures at a check
> interval of 10 seconds.
>
> PASSED
> src/test/python/apache/aurora/client/test_config.py:205:
> test_update_config_fails_insufficient_watch_secs_equal_to_target
> CRITICAL:root:
> You have specified an insufficiently short watch period (25 seconds) in
> your update configuration.
> Your update will always succeed. In order for the updater to detect health
> check failures,
> UpdateConfig.watch_secs must be greater than 25 seconds to account for an
> initial
> health check interval (15 seconds) plus 1 consecutive failures at a check
> interval of 10 seconds.
>
> PASSED
>
> ========================== 10 passed in 0.44 seconds
> ===========================
> ============================= test session starts
> ==============================
> platform linux2 -- Python 2.7.3 -- py-1.4.21 -- pytest-2.5.2 --
> /usr/bin/python2.7
> plugins: cov, timeout
> collecting ... collected 3 items
>
> src/test/python/apache/thermos/core/test_failing_runner.py:47:
> TestFailingRunner.test_runner_state_success PASSED
> src/test/python/apache/thermos/core/test_failing_runner.py:50:
> TestFailingRunner.test_runner_processes_have_expected_runs PASSED
> src/test/python/apache/thermos/core/test_failing_runner.py <-
> ../../../../../tmp/tmpdsuu9S/apache/thermos/testing/runner.py:209:
> TestFailingRunner.test_runner_state_reconstruction PASSEDFailed to kill
> runner: [Errno 3] No such process
>
>
> =========================== 3 passed in 6.13 seconds
> ===========================
> ============================= test session starts
> ==============================
> platform linux2 -- Python 2.7.3 -- py-1.4.21 -- pytest-2.5.2 --
> /usr/bin/python2.7
> plugins: cov, timeout
> collecting ... collected 0 items
>
> ===============================  in 0.17 seconds
> ===============================
> ============================= test session starts
> ==============================
> platform linux2 -- Python 2.7.3 -- py-1.4.21 -- pytest-2.5.2 --
> /usr/bin/python2.7
> plugins: cov, timeout
> collecting ... collected 58 items
>
> src/test/python/apache/aurora/client/cli/test_cancel_update.py:96:
> TestClientCancelUpdateCommand.test_cancel_update_api_level PASSED
> src/test/python/apache/aurora/client/cli/test_cancel_update.py:72:
> TestClientCancelUpdateCommand.test_simple_successful_cancel_update PASSED
> src/test/python/apache/aurora/client/cli/test_create.py:124:
> TestClientCreateCommand.test_create_job_delayed PASSED
> src/test/python/apache/aurora/client/cli/test_create.py:148:
> TestClientCreateCommand.test_create_job_failed Error executing command:
> Error reported by scheduler; see log for details
> PASSED
> src/test/python/apache/aurora/client/cli/test_create.py:171:
> TestClientCreateCommand.test_create_job_failed_invalid_config Error
> executing command: Error loading configuration: invalid syntax (tmp9gwGfz,
> line 9)
> PASSED
> src/test/python/apache/aurora/client/cli/test_create.py:190:
> TestClientCreateCommand.test_interrupt log(ERROR): Command interrupted by
> user
> log(ERROR): Command interrupted by user
> log(ERROR): Command interrupted by user
> log(ERROR): Command interrupted by user
> log(ERROR): Command interrupted by user
> log(ERROR): Command interrupted by user
> PASSED
> src/test/python/apache/aurora/client/cli/test_create.py:89:
> TestClientCreateCommand.test_simple_successful_create_job PASSED
> src/test/python/apache/aurora/client/cli/test_create.py:208:
> TestClientCreateCommand.test_unknown_error log(ERROR): Unknown error: Argh
> log(ERROR): Unknown error: Argh
> log(ERROR): Unknown error: Argh
> log(ERROR): Unknown error: Argh
> log(ERROR): Unknown error: Argh
> log(ERROR): Unknown error: Argh
> log(ERROR): Unknown error: Argh
> log(ERROR): Unknown error: Argh
> PASSED
> src/test/python/apache/aurora/client/cli/test_diff.py:419:
> TestDiffCommand.test_diff_invalid_config Error executing command: Error
> loading configuration: invalid syntax (tmpR7gdva, line 9)
> PASSED
> src/test/python/apache/aurora/client/cli/test_diff.py:445:
> TestDiffCommand.test_diff_server_error Error executing command: Could not
> find job to diff against
> PASSED
> src/test/python/apache/aurora/client/cli/test_diff.py:193:
> TestDiffCommand.test_success_diffs_metadata ['Task diffs found in instance
> 0', u"\tField 'metadata' is '[{u'key': u'a', u'value': u'1'}, {u'key':
> u'b', u'value': u'2'}, {u'key': u'instance', u'value': u'0'}]' local, but
> '[{u'key': u'a', u'value': u'1'}, {u'key': u'b', u'value': u'2'}, {u'key':
> u'instance', u'value': u'2'}]' remote", 'Task diffs found in instance 1',
> u"\tField 'metadata' is '[{u'key': u'a', u'value': u'3'}, {u'key': u'b',
> u'value': u'2'}, {u'key': u'instance', u'value': u'1'}]' local, but
> '[{u'key': u'a', u'value': u'1'}, {u'key': u'b', u'value': u'2'}, {u'key':
> u'instance', u'value': u'0'}]' remote", 'Task diffs found in instance 2',
> u"\tField 'metadata' is '[{u'key': u'a', u'value': u'1'}, {u'key': u'b',
> u'value': u'2'}, {u'key': u'instance', u'value': u'2'}]' local, but
> '[{u'key': u'a', u'value': u'1'}, {u'key': u'b', u'value': u'2'}, {u'key':
> u'instance', u'value': u'1'}]' remote", '3 total diff(s) found']
> FAILED
>
> =================================== FAILURES
> ===================================
> _________________ TestDiffCommand.test_success_diffs_metadata
> __________________
>
> self = <test_diff.TestDiffCommand testMethod=test_success_diffs_metadata>
>
>     def test_success_diffs_metadata(self):
>       one = [{"name": "serv", "metadata": [Metadata(key="a", value="1"),
>           Metadata(key="b", value="2"), Metadata(key="instance",
> value="0")]},
>           {"name": "serv", "metadata": [Metadata(key="a", value="1"),
>               Metadata(key="b", value="2"), Metadata(key="instance",
> value="1")]},
>           {"name": "serv", "metadata": [Metadata(key="a", value="1"),
>               Metadata(key="b", value="2"), Metadata(key="instance",
> value="2")]}]
>
>       two = [{"name": "serv", "metadata": [Metadata(key="b", value="2"),
>           Metadata(key="a", value="1"), Metadata(key="instance",
> value="0")]},
>           {"name": "serv", "metadata": [Metadata(key="instance",
> value="1"),
>           Metadata(key="a", value="3"), Metadata(key="b", value="2")]},
>           {"name": "serv", "metadata": [Metadata(key="a", value="1"),
>           Metadata(key="instance", value="2"), Metadata(key="b",
> value="2")]}]
>
>       result = self._test_successful_diff_generic(one, two)
>       assert result == EXIT_OK
>       print(MOCK_OUT)
> >     assert MOCK_OUT == ['Task diffs found in instance 1',
>           (u"\tField 'metadata' is '[{u'key': u'a', u'value': u'3'},
> {u'key': u'b', u'value': u'2'}, "
>                "{u'key': u'instance', u'value': u'1'}]' local, but
> '[{u'key': u'a', u'value': u'1'}, "
>               "{u'key': u'b', u'value': u'2'}, {u'key': u'instance',
> u'value': u'1'}]' remote"),
>           '1 total diff(s) found']
> E     AssertionError: assert ['Task diffs ... remote", ...] == ['Task
> diffs f...iff(s) found']
> E       At index 0 diff: 'Task diffs found in instance 0' != 'Task diffs
> found in instance 1'
> E       Left contains more items, first extra item: "   Field 'metadata'
> is '[{u'key': u'a', u'value': u'3'}, {u'key': u'b', u'value': u'2'},
> {u'key': u'instance', u'value'..., but '[{u'key': u'a', u'value': u'1'},
> {u'key': u'b', u'value': u'2'}, {u'key': u'instance', u'value': u'0'}]'
> remote"
>
> src/test/python/apache/aurora/client/cli/test_diff.py:211: AssertionError
> !!!!!!!!!!!!!!!!!!!! Interrupted: stopping after 1 failures
> !!!!!!!!!!!!!!!!!!!!
> ===================== 1 failed, 10 passed in 1.30 seconds
> ======================
> Build operating on top level addresses:
> set([BuildFileAddress(src/test/python/BUILD, all)])
> src.test.python.apache.aurora.client.api.job_monitor
>      .....   SUCCESS
> src.test.python.apache.aurora.client.api.updater
>      .....   SUCCESS
> src.test.python.apache.aurora.client.cli.config
>       .....   SUCCESS
> src.test.python.apache.aurora.client.cli.job
>      .....   FAILURE
> src.test.python.apache.aurora.client.cli.sla
>      .....   SUCCESS
> src.test.python.apache.aurora.client.cli.task
>       .....   SUCCESS
> src.test.python.apache.aurora.client.config
>       .....   SUCCESS
> src.test.python.apache.aurora.client.hooks.hooked_api
>       .....   SUCCESS
> src.test.python.apache.aurora.client.hooks.non_hooked_api
>       .....   SUCCESS
> src.test.python.apache.aurora.common.test_http_signaler
>       .....   SUCCESS
> src.test.python.apache.aurora.executor.common.task_info
>       .....   SUCCESS
> src.test.python.apache.aurora.executor.executor_vars
>      .....   SUCCESS
> src.test.python.apache.thermos.bin.test_thermos
>       .....   SUCCESS
> src.test.python.apache.thermos.common.test_pathspec
>       .....   SUCCESS
> src.test.python.apache.thermos.core.test_failing_runner
>       .....   SUCCESS
> src.test.python.apache.thermos.core.test_runner_integration
>       .....   SUCCESS
> src.test.python.apache.thermos.monitoring.test_disk
>       .....   SUCCESS
> Build step 'Execute shell' marked build as failure
>
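The failed assertion above suggests the diff logic compared tasks positionally, so when the scheduler returned the remote tasks in a different order, every instance was flagged instead of just instance 1 (the only one whose metadata actually differs, a=3 vs a=1). A minimal sketch of an order-insensitive comparison, keying tasks by their instance id and comparing metadata as sets; the helper names (`metadata_set`, `instance_of`, `diff_by_instance`) are illustrative, not from the Aurora codebase:

```python
def metadata_set(task):
    """Order-independent view of a task's metadata list."""
    return frozenset((m["key"], m["value"]) for m in task["metadata"])

def instance_of(task):
    # Assumes each task carries its instance id in metadata under key
    # "instance", as in the test fixtures in the log above.
    return next(m["value"] for m in task["metadata"] if m["key"] == "instance")

def diff_by_instance(local_tasks, remote_tasks):
    """Report instance ids whose metadata differs, ignoring list ordering."""
    remote_by_instance = {instance_of(t): t for t in remote_tasks}
    diffs = []
    for task in local_tasks:
        remote = remote_by_instance.get(instance_of(task))
        if remote is None or metadata_set(task) != metadata_set(remote):
            diffs.append(instance_of(task))
    return diffs

# Data shaped like the test fixtures: only instance 1 differs (a=3 vs a=1).
local = [
    {"metadata": [{"key": "a", "value": "1"}, {"key": "b", "value": "2"},
                  {"key": "instance", "value": "0"}]},
    {"metadata": [{"key": "a", "value": "1"}, {"key": "b", "value": "2"},
                  {"key": "instance", "value": "1"}]},
    {"metadata": [{"key": "a", "value": "1"}, {"key": "b", "value": "2"},
                  {"key": "instance", "value": "2"}]},
]
remote = [
    {"metadata": [{"key": "b", "value": "2"}, {"key": "a", "value": "1"},
                  {"key": "instance", "value": "0"}]},
    {"metadata": [{"key": "instance", "value": "1"}, {"key": "a", "value": "3"},
                  {"key": "b", "value": "2"}]},
    {"metadata": [{"key": "a", "value": "1"}, {"key": "instance", "value": "2"},
                  {"key": "b", "value": "2"}]},
]
print(diff_by_instance(local, remote))  # ['1']
```

With this keying, shuffling either list (or reordering the metadata entries within a task) no longer produces spurious diffs, which matches the single-diff result the test expected.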
