From hdfs-issues-return-262616-archive-asf-public=cust-asf.ponee.io@hadoop.apache.org Fri May 10 10:16:03 2019
Date: Fri, 10 May 2019 10:16:00 +0000 (UTC)
From: "ASF GitHub Bot (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Work logged] (HDDS-1458) Create a maven profile to run fault injection tests

[ https://issues.apache.org/jira/browse/HDDS-1458?focusedWorklogId=240168&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-240168 ]

ASF GitHub Bot logged work on HDDS-1458:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 10/May/19 10:15
            Start
Date: 10/May/19 10:15
    Worklog Time Spent: 10m
      Work Description: elek commented on issue #800: HDDS-1458. Create a maven profile to run fault injection tests
URL: https://github.com/apache/hadoop/pull/800#issuecomment-491238146

   Sure, this patch demonstrates how the existing tests can be executed as part of the maven build process. I think we can agree that the same pattern can be used to execute any other type of test.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 240168)
    Time Spent: 40m  (was: 0.5h)

> Create a maven profile to run fault injection tests
> ---------------------------------------------------
>
>                 Key: HDDS-1458
>                 URL: https://issues.apache.org/jira/browse/HDDS-1458
>             Project: Hadoop Distributed Data Store
>          Issue Type: Test
>            Reporter: Eric Yang
>            Assignee: Eric Yang
>            Priority: Major
>              Labels: pull-request-available
>           Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, HDDS-1458.003.patch, HDDS-1458.004.patch
>
>           Time Spent: 40m
>   Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade. It would be nice to have the ability to start docker compose, exercise the blockade test cases against Ozone docker containers, and generate reports. These are optional integration tests to catch race conditions and fault tolerance defects.
> We can introduce a profile with id: it (short for integration tests). This will launch docker compose via maven-exec-plugin and run blockade to simulate container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
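For context, a profile like the one described in the issue (id `it`, launching docker compose via exec-maven-plugin and then running blockade) could be sketched roughly as below. This is only an illustrative sketch, not the actual patch: the plugin executions, executables, and test paths are assumptions, and a real build would also need to bind the tests to the failsafe/verify lifecycle.

```xml
<!-- Hypothetical sketch of the "it" profile; executions and paths are
     illustrative assumptions, not taken from the HDDS-1458 patch. -->
<profile>
  <id>it</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <executions>
          <!-- Bring up the Ozone docker-compose cluster before the tests -->
          <execution>
            <id>start-cluster</id>
            <phase>pre-integration-test</phase>
            <goals><goal>exec</goal></goals>
            <configuration>
              <executable>docker-compose</executable>
              <arguments>
                <argument>up</argument>
                <argument>-d</argument>
              </arguments>
            </configuration>
          </execution>
          <!-- Run the blockade-based fault injection tests -->
          <execution>
            <id>run-blockade-tests</id>
            <phase>integration-test</phase>
            <goals><goal>exec</goal></goals>
            <configuration>
              <executable>python</executable>
              <arguments>
                <argument>-m</argument>
                <argument>pytest</argument>
                <argument>blockade/</argument>
              </arguments>
            </configuration>
          </execution>
          <!-- Tear the cluster down afterwards -->
          <execution>
            <id>stop-cluster</id>
            <phase>post-integration-test</phase>
            <goals><goal>exec</goal></goals>
            <configuration>
              <executable>docker-compose</executable>
              <arguments>
                <argument>down</argument>
              </arguments>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```

With a sketch like this, `mvn clean verify -Pit` would activate the profile and run the fault injection steps around the integration-test phase, while a plain `mvn clean verify` would skip them, matching the "optional integration tests" intent stated in the issue.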