hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDDS-352) Separate install and testing phases in acceptance tests.
Date Fri, 14 Sep 2018 17:31:00 GMT

    [ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615119#comment-16615119 ]

Hadoop QA commented on HDDS-352:

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at https://builds.apache.org/job/PreCommit-HDDS-Build/1070/console in case
of problems.

> Separate install and testing phases in acceptance tests.
> --------------------------------------------------------
>                 Key: HDDS-352
>                 URL: https://issues.apache.org/jira/browse/HDDS-352
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Elek, Marton
>            Priority: Major
>              Labels: test
>         Attachments: HDDS-352-ozone-0.2.001.patch
> In the current acceptance tests (hadoop-ozone/acceptance-test) the robot files contain two kinds of commands:
> 1) starting and stopping clusters
> 2) testing the basic behaviour with client calls
> It would be great to separate these two functions and include only the testing part in the robot files.
> 1. Ideally the tests could be executed in any environment. After a Kubernetes install I would like to run a smoke test. It may be a different environment, but I would like to execute most of the tests (check the ozone CLI, the REST API, etc.).
> 2. There could be multiple ozone environments (standalone ozone cluster, hdfs + ozone cluster, etc.). We need to test all of them with all the tests.
> 3. With this approach we can collect the docker-compose files in just one place (the hadoop-dist project). After a docker-compose up there should be a way to execute the tests against an existing cluster. Something like this:
> {code}
> docker run -it --network=composenetwork -v $(pwd)/acceptance-test:/opt/acceptance-test -e SCM_URL=http://scm:9876 apache/hadoop-runner start-all-tests.sh
> {code}
> 4. It also means that we need to execute the tests from a separate container instance. We need a configuration parameter to define the cluster topology. Ideally it could be just one environment variable with the URL of the SCM; the SCM could then be used to discover all of the required components and to download the configuration files.
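> A minimal sketch of what such a single-parameter entrypoint could look like (the SCM_URL variable name is taken from the example above; the default value and the host/port parsing are illustrative assumptions):

```shell
#!/usr/bin/env bash
# Hypothetical test-container entrypoint: everything else would be
# discovered from the SCM, so SCM_URL is the only required parameter.
# Default shown only for illustration.
SCM_URL="${SCM_URL:-http://scm:9876}"

# Derive host and port for the later discovery/config-download steps.
scm_hostport="${SCM_URL#*://}"
scm_host="${scm_hostport%%:*}"
scm_port="${scm_hostport##*:}"
echo "testing cluster via SCM ${scm_host}:${scm_port}"
```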
> 5. Until now we have used the log output of the docker-compose clusters for readiness probes. These should be converted to poll the JMX endpoints and check whether the cluster is up and running. If we need the log files for additional testing we can create multiple implementations for different types of environments (docker-compose/kubernetes) and include the right set of functions based on an external parameter.
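> The readiness probe could be built from a generic poll helper; the JMX URL in the usage line is only an illustration, not a confirmed endpoint:

```shell
# Generic poll helper: retry a command until it succeeds or attempts run out.
wait_until() {
  local attempts="$1"; shift
  until "$@" >/dev/null 2>&1; do
    attempts=$((attempts - 1))
    [ "$attempts" -gt 0 ] || return 1
    sleep 1
  done
}

# Example (hypothetical endpoint): block until the SCM JMX servlet answers.
# wait_until 30 curl -s -f http://scm:9876/jmx
```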
> 6. We still need a generic script under the ozone acceptance-test project to run all the tests (start the docker-compose clusters, execute the tests in a separate container, stop the cluster).
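> Taken together, that generic script could be as small as this sketch (the compose file layout, the network name, and the image/entrypoint are assumptions carried over from the docker run example above):

```shell
# Hypothetical top-level runner: one cluster topology per compose directory.
run_acceptance_suite() {
  local compose_dir="$1"
  docker-compose -f "$compose_dir/docker-compose.yaml" up -d || return 1
  docker run --rm --network=composenetwork \
    -v "$(pwd)/acceptance-test:/opt/acceptance-test" \
    -e SCM_URL=http://scm:9876 \
    apache/hadoop-runner start-all-tests.sh
  local result=$?
  # Always stop the cluster, but report the test result, not the teardown's.
  docker-compose -f "$compose_dir/docker-compose.yaml" down
  return "$result"
}
```

The trap-free structure keeps it readable: the compose cluster is always torn down, and the exit code of the test container is what the caller sees.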

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
