hadoop-hdfs-issues mailing list archives

From "Bharat Viswanadham (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDDS-352) Separate install and testing phases in acceptance tests.
Date Tue, 18 Sep 2018 00:26:00 GMT

    [ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618315#comment-16618315 ]

Bharat Viswanadham commented on HDDS-352:

Thank you [~jnp] for the review and [~elek] for the patch.

I have committed this to the trunk.

I will push it to the ozone-0.2 branch shortly.


[~elek] For any additional changes, as discussed, we can file new Jiras.

> Separate install and testing phases in acceptance tests.
> --------------------------------------------------------
>                 Key: HDDS-352
>                 URL: https://issues.apache.org/jira/browse/HDDS-352
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>              Labels: test
>         Attachments: HDDS-352-ozone-0.2.001.patch, HDDS-352-ozone-0.2.002.patch, HDDS-352-ozone-0.2.003.patch,
HDDS-352-ozone-0.2.004.patch, HDDS-352-ozone-0.2.005.patch, HDDS-352.00.patch, TestRun.rtf
> In the current acceptance tests (hadoop-ozone/acceptance-test) the robot files contain two kinds of commands:
> 1) starting and stopping clusters
> 2) testing the basic behaviour with client calls
> It would be great to separate these two functions and include only the testing part in the robot files.
> 1. Ideally the tests could be executed in any environment. After a Kubernetes install I would like to do a smoke test. It could be a different environment, but I would like to execute most of the tests (check ozone cli, rest api, etc.)
> 2. There could be multiple ozone environments (standalone ozone cluster, hdfs + ozone cluster, etc.). We need to test all of them with all the tests.
> 3. With this approach we can collect the docker-compose files just in one place (hadoop-dist
project). After a docker-compose up there should be a way to execute the tests with an existing
cluster. Something like this:
> {code}
> docker run -it -v "$(pwd)/acceptance-test:/opt/acceptance-test" -e SCM_URL=http://scm:9876 --network=composenetwork apache/hadoop-runner start-all-tests.sh
> {code}
> 4. It also means that we need to execute the tests from a separate container instance. We need a configuration parameter to define the cluster topology. Ideally it could be just one environment variable with the URL of the SCM, and the SCM could be used to discover all of the required components and to download the configuration files from there.
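> The single-variable bootstrap in point 4 could be sketched roughly like this (the /conf servlet is the standard Hadoop HTTP endpoint; the function name, target directory, and file name are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Sketch: bootstrap a test container from one SCM_URL variable.
# Hadoop HTTP servers expose their effective configuration at /conf,
# so the client configuration can be downloaded instead of baked in.

fetch_cluster_conf() {
  local scm_url=$1 conf_dir=$2
  mkdir -p "$conf_dir"
  # Destination path and file name are assumptions.
  curl -sf "$scm_url/conf?format=xml" -o "$conf_dir/ozone-site.xml"
}

# Example, matching the SCM_URL used in the docker run command above:
# fetch_cluster_conf "http://scm:9876" /opt/hadoop/etc/hadoop
```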
> 5. Until now we have used the log output of the docker-compose clusters for readiness probes. These should be converted to poll the JMX endpoints and check whether the cluster is up and running. If we need the log files for additional testing, we can create multiple implementations for different types of environments (docker-compose/kubernetes) and include the right set of functions based on an external parameter.
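> A JMX-based readiness probe along the lines of point 5 could look like this (the /jmx servlet is the standard Hadoop metrics endpoint; the retry count, interval, and SCM address are assumptions):

```shell
#!/usr/bin/env bash
# Sketch: poll a JMX endpoint instead of grepping docker-compose logs.

wait_for_jmx() {
  local url=$1 retries=${2:-30} i=0
  until curl -sf "$url" > /dev/null; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
      echo "not ready after $retries attempts: $url" >&2
      return 1
    fi
    sleep 2
  done
  echo "up: $url"
}

# Example with the SCM address used above (hostname/port assumed):
# wait_for_jmx "http://scm:9876/jmx"
```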
> 6. We still need a generic script under the ozone acceptance-test project to run all the tests (start the docker-compose clusters, execute the tests in a different container, stop the cluster).
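> The generic wrapper script in point 6 might be sketched like this (the compose file path, network name, and image are assumptions taken from the examples above; DRY_RUN defaults to only printing the commands):

```shell
#!/usr/bin/env bash
# Sketch: start the cluster, run the tests from a separate container,
# stop the cluster. DRY_RUN=1 (the default here) only prints commands.
set -eu

COMPOSE_FILE=${COMPOSE_FILE:-hadoop-dist/target/compose/ozone/docker-compose.yaml}
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

start_cluster() { run docker-compose -f "$COMPOSE_FILE" up -d; }
run_tests() {
  run docker run --rm --network=composenetwork \
    -v "$PWD/acceptance-test:/opt/acceptance-test" \
    apache/hadoop-runner start-all-tests.sh
}
stop_cluster() { run docker-compose -f "$COMPOSE_FILE" down; }

start_cluster
run_tests
stop_cluster
```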

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
