hadoop-hdfs-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Work logged] (HDDS-1424) Support multi-container robot test execution
Date Tue, 07 May 2019 15:57:00 GMT

     [ https://issues.apache.org/jira/browse/HDDS-1424?focusedWorklogId=238616&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-238616 ]

ASF GitHub Bot logged work on HDDS-1424:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 07/May/19 15:56
            Start Date: 07/May/19 15:56
    Worklog Time Spent: 10m 
      Work Description: hadoop-yetus commented on issue #726: HDDS-1424. Support multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-490140291
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 30 | Docker mode activated. |
   | -1 | patch | 14 | https://github.com/apache/hadoop/pull/726 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-726/7/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/726 |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-726/7/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 238616)
    Time Spent: 3h 20m  (was: 3h 10m)

> Support multi-container robot test execution
> --------------------------------------------
>
>                 Key: HDDS-1424
>                 URL: https://issues.apache.org/jira/browse/HDDS-1424
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> The ./smoketest folder in the distribution package contains robotframework-based test scripts to test the main behaviour of Ozone.
> The tests have two layers:
> 1. robot test definitions to execute commands and assert the results (on a given host machine)
> 2. ./smoketest/test.sh, which starts/stops the docker-compose based environments and executes the selected robot tests inside the right hosts
> The second one (test.sh) has some serious limitations:
> 1. all the tests are executed inside the same container (om):
> https://github.com/apache/hadoop/blob/5f951ea2e39ae4dfe554942baeec05849cd7d3c2/hadoop-ozone/dist/src/main/smoketest/test.sh#L89
> Some of the tests (ozonesecure-mr, ozonefs) may require the flexibility to execute different robot tests in different containers.
> 2. The definition of the global test set is complex and hard to understand.
> The current code is:
> {code}
>    TESTS=("basic")
>    execute_tests ozone "${TESTS[@]}"
>    TESTS=("auditparser")
>    execute_tests ozone "${TESTS[@]}"
>    TESTS=("ozonefs")
>    execute_tests ozonefs "${TESTS[@]}"
>    TESTS=("basic")
>    execute_tests ozone-hdfs "${TESTS[@]}"
>    TESTS=("s3")
>    execute_tests ozones3 "${TESTS[@]}"
>    TESTS=("security")
>    execute_tests ozonesecure .
> {code} 
> For example, for ozonesecure the TESTS variable is not used at all, and the use of bash arrays requires additional complexity in the execute_tests function.
> I propose here a very lightweight refactor. Instead of including both the test definitions and the helper methods in test.sh, I would separate them.
> Let's put a test.sh in each of the compose directories. The separated test.sh can include common methods from a main shell script. For example:
> {code}
> source "$COMPOSE_DIR/../testlib.sh"
> start_docker_env
> execute_robot_test scm basic/basic.robot
> execute_robot_test scm s3
> stop_docker_env
> generate_report
> {code}
> This is a cleaner and more flexible definition. It's easy to execute just this test, as it's saved to the compose/ozones3 directory.
> Another example, where multiple containers are used to execute tests:
> {code}
> source "$COMPOSE_DIR/../testlib.sh"
> start_docker_env
> execute_robot_test scm ozonefs/ozonefs.robot
> export OZONE_HOME=/opt/ozone
> execute_robot_test hadoop32 ozonefs/hadoopo3fs.robot
> execute_robot_test hadoop31 ozonefs/hadoopo3fs.robot
> stop_docker_env
> generate_report
> {code}
> With this separation, the definition of the helper methods (e.g. execute_robot_test or stop_docker_env) would also be simplified.
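
The proposed split implies a small shared library of helpers sourced by each compose directory's test.sh. A minimal sketch of what such a testlib.sh could look like: the function names (start_docker_env, execute_robot_test, stop_docker_env, generate_report) come from the issue text, but the bodies, the container path /opt/hadoop/smoketest, and the result directory layout are illustrative assumptions, not the actual Ozone implementation.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the shared testlib.sh described in the proposal.
# Assumes docker-compose and the robot/rebot CLIs are available; paths and
# flags are illustrative guesses.

start_docker_env() {
  # Bring up the docker-compose cluster defined in the calling compose dir.
  docker-compose -f "$COMPOSE_DIR/docker-compose.yaml" up -d
}

execute_robot_test() {
  # $1: name of the container to run the test in (e.g. scm, hadoop32)
  # $2: robot test file or directory, relative to the smoketest dir
  local container="$1"
  local test="$2"
  docker-compose -f "$COMPOSE_DIR/docker-compose.yaml" \
    exec -T "$container" robot "/opt/hadoop/smoketest/$test"
}

stop_docker_env() {
  # Tear down the cluster started by start_docker_env.
  docker-compose -f "$COMPOSE_DIR/docker-compose.yaml" down
}

generate_report() {
  # Combine per-test robot output.xml files into one report (assumed layout).
  rebot --outputdir "$COMPOSE_DIR/result" "$COMPOSE_DIR"/result/output*.xml
}
```

Each per-compose test.sh would then just source this file and call the helpers in sequence, as in the two examples above, so adding a new environment means adding a new compose directory with its own short test.sh rather than editing a central script.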



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

