Subject: Re: making a hadoop-common test run if a property is set
From: Colin McCabe
To: common-dev@hadoop.apache.org
Date: Tue, 18 Dec 2012 01:11:30 -0800

On Tue, Dec 18, 2012 at 1:05 AM, Colin McCabe wrote:
> On Mon, Dec 17, 2012 at 11:03 AM, Steve Loughran wrote:
>> On 17 December 2012 16:06, Tom White wrote:
>>
>>> There are some tests, like the S3 tests, whose names end with "Test"
>>> (e.g. Jets3tNativeS3FileSystemContractTest) - unlike normal tests,
>>> whose names start with "Test". Only those that start with "Test" are
>>> run automatically (see the surefire configuration in
>>> hadoop-project/pom.xml). You have to run the others manually with
>>> "mvn test -Dtest=...".
>>>
>>> The mechanism that Colin describes is probably better though, since
>>> the environment-specific tests can be run as part of a full test run
>>> by Jenkins if configured appropriately.
>>
>> I'd like that - though one problem with the current system is that you
>> need to get the s3 (and soon: openstack) credentials into
>> src/test/resources/core-site.xml, which isn't the right approach. If
>> we could get them into properties files, things would be easier.
>> That's overkill for adding a few more openstack tests - but I would
>> like to make it easier to run those and the rackspace ones without
>> sticking my secrets into an XML file under SCM.
>
> I think the way to go is to have one XML file include another:
>
> <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
>   <xi:include href="secrets.xml"/>
>
>   <property>
>     <name>boring.config.1</name>
>     <value>boring-value</value>
>   </property>
>   ... etc, etc...
> </configuration>
>
> That way, you can keep the boring configuration under version control,
> and still have your password sitting in a small, separate,
> non-version-controlled XML file.
>
> We use this trick a bunch with the HA configuration stuff -- 99% of
> the configuration is the same between the Active and Standby
> NameNodes, but you can't give them the same dfs.ha.namenode.id or
> dfs.name.dir. Includes help a lot here.
>
>> another tactic could be to have specific test projects: test-s3,
>> test-openstack, test-... which contain nothing but test cases. You'd
>> set jenkins up on those test projects too - the reason for having the
>> separate names is to make it blatantly clear which tests you've not
>> run
>
> I dunno. Every time a project puts unit or system tests into a
> separate project, the developers never run them. I've seen it happen
> enough times that I think I can call it an anti-pattern by now. I
> like having tests alongside the code -- to the maximum extent that is
> possible.

Just to be clear, I'm not referring to any Hadoop-related project here,
just certain other open source (and not) ones I've worked on.

System/unit tests belong with the rest of the code; otherwise they get
stale real fast. It sometimes makes sense for integration tests to live
in a separate repo, since by their nature they're usually talking to
stuff that lives in multiple repos.

best,
Colin

>
> cheers,
> Colin
>
>>
>>> Tom
>>>
>>> On Mon, Dec 17, 2012 at 10:06 AM, Steve Loughran wrote:
>>> > thanks, I'll have a look. I've always wanted to add the notion of
>>> > skipped to test runs - all the way through to the XML and generated
>>> > reports, but you'd have to do a new junit runner for this and tweak
>>> > the reporting code.
>>> > Which, if it involved going near maven source, is not something I
>>> > am prepared to do.
>>> >
>>> > On 14 December 2012 18:57, Colin McCabe wrote:
>>> >
>>> >> One approach we've taken in the past is making the junit test
>>> >> skip itself when some precondition is not true. Then, we often
>>> >> create a property which people can use to cause the skipped tests
>>> >> to become a hard error.
>>> >>
>>> >> For example, all the tests that rely on libhadoop start with
>>> >> these lines:
>>> >>
>>> >> > @Test
>>> >> > public void myTest() {
>>> >> >   Assume.assumeTrue(NativeCodeLoader.isNativeCodeLoaded());
>>> >> >   ...
>>> >> > }
>>> >>
>>> >> This causes them to be silently skipped when libhadoop.so is not
>>> >> available or loaded (perhaps because it hasn't been built).
>>> >>
>>> >> However, if you want to cause this to be a hard error, you simply
>>> >> run:
>>> >> > mvn test -Drequire.test.libhadoop
>>> >>
>>> >> See TestHdfsNativeCodeLoader.java to see how this is implemented.
>>> >>
>>> >> The main idea is that your Jenkins build slaves use all the
>>> >> -Drequire lines, but people running tests locally are not
>>> >> inconvenienced by the need to build libhadoop.so in every case.
>>> >> This is especially good because libhadoop.so isn't known to build
>>> >> on certain platforms like AIX, etc. It seems to be a good
>>> >> tradeoff so far. I imagine that s3 could do something similar.
>>> >>
>>> >> cheers,
>>> >> Colin
>>> >>
>>> >>
>>> >> On Fri, Dec 14, 2012 at 9:56 AM, Steve Loughran wrote:
>>> >> > The swiftfs tests need only run if there's a target filesystem;
>>> >> > copying the s3/s3n tests, something like:
>>> >> >
>>> >> > <property>
>>> >> >   <name>test.fs.swift.name</name>
>>> >> >   <value>swift://your-object-store-here/</value>
>>> >> > </property>
>>> >> >
>>> >> > How does one actually go about making junit tests optional in
>>> >> > mvn-land?
>>> >> > Should the probe/skip logic be in the code - which can make
>>> >> > people think the test passed when it didn't actually run? Or
>>> >> > can I turn it on/off in maven?
>>> >> >
>>> >> > -steve
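[The skip-or-require pattern discussed in this thread can be sketched in
plain Java, with no JUnit dependency. This is a minimal illustration of
the idea only: the PreconditionGate class and its shouldRun helper are
hypothetical names, not the actual TestHdfsNativeCodeLoader code.]

```java
// Sketch of the pattern: a test precondition is normally a silent skip
// (as Assume.assumeTrue() would do), but setting a "require" system
// property such as -Drequire.test.libhadoop turns the skip into a hard
// error, which is what a Jenkins build slave would want.
public class PreconditionGate {
    /**
     * Returns true if the test should run, false if it should be
     * silently skipped. Throws if the precondition is unmet but the
     * given "require" property was set on the JVM command line.
     */
    public static boolean shouldRun(boolean preconditionMet,
                                    String requireProperty) {
        if (preconditionMet) {
            return true;
        }
        if (System.getProperty(requireProperty) != null) {
            throw new AssertionError(requireProperty
                + " was set, but the test precondition is not met");
        }
        return false;  // silent skip, like Assume.assumeTrue()
    }

    public static void main(String[] args) {
        // Precondition met: the test runs regardless of the property.
        System.out.println(
            shouldRun(true, "require.test.libhadoop"));
        // Precondition unmet, property unset: the test is skipped.
        System.out.println(
            shouldRun(false, "require.test.libhadoop"));
    }
}
```

[In a real JUnit test the first branch corresponds to letting the test
body proceed and the skip branch to Assume.assumeTrue() failing; only
the hard-error branch is extra logic.]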