From: "Konstantin Boudnik (JIRA)"
To: common-issues@hadoop.apache.org
Subject: [jira] Commented: (HADOOP-6332) Large-scale Automated Test Framework
Date: Wed, 19 May 2010 17:56:02 -0400 (EDT)

    [ https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12869371#action_12869371 ]

Konstantin Boudnik commented on HADOOP-6332:
--------------------------------------------

I'm not saying it is _impossible_ to do as a separate project. Packaging isn't an issue: in fact, the current approach will publish the instrumented artifacts separately too. Now, to weave aspects one doesn't need to have the source code available at build time: compiled aspects should be sufficient.

However, keeping the framework out of Hadoop's source tree poses a twofold problem:
- all the visible changes to the build system will be the same, plus a lot of stuff from {{src/test/aop/build/aop.xml}} will have to be brought into the Common, HDFS, and MR builds anyway.
- at framework development time we'll need a source-code dependency on Hadoop's subprojects to make sure the aspects are binding correctly, etc.

These are the disadvantages, and I really don't see any advantage of the separation besides reducing the number of source files under {{src/test/system}}. Also, please keep in mind that this test framework is Hadoop-specific, so it seems logical to keep them together.
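To make the binary-weaving point above concrete, here is a minimal, illustrative sketch of an aspect that instruments a Hadoop class. The aspect, its package, and the jar names in the comments are hypothetical, not the actual HADOOP-6332 aspects; the point is only that, once compiled into an aspect library, it can be woven into Hadoop's class files without any Hadoop source present.

{code}
// FileSystemCallCounterAspect.aj -- hypothetical, illustration only (not part of HADOOP-6332).
// Once compiled with ajc into an aspect library, it can be woven into already-built
// Hadoop jars with binary weaving, e.g. (jar names illustrative):
//   ajc -inpath hadoop-common.jar -aspectpath test-aspects.jar -outjar hadoop-common-instrumented.jar
// i.e. only class files, not Hadoop sources, are needed at weave time.
package org.apache.hadoop.test.system;  // hypothetical package

public aspect FileSystemCallCounterAspect {

    // Counts how many times any FileSystem.create(..) overload is invoked in the
    // instrumented deployment; the counter and its accessor are invented for illustration.
    private static long createCalls = 0;

    before() : call(* org.apache.hadoop.fs.FileSystem.create(..)) {
        createCalls++;
    }

    // Test code running against the same instrumented deployment could query this.
    public static long getCreateCallCount() {
        return createCalls;
    }
}
{code}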
> Large-scale Automated Test Framework
> ------------------------------------
>
>                 Key: HADOOP-6332
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6332
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: test
>    Affects Versions: 0.21.0
>            Reporter: Arun C Murthy
>            Assignee: Konstantin Boudnik
>             Fix For: 0.22.0
>
>         Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated test framework. This jira is meant to be a master jira to track relevant work.
> ----
> The proposal is a junit-based, large-scale test framework which would run against _real_ clusters. There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, large-scale hadoop clusters, e.g. utilities to deploy clusters, start & stop them, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in the system, e.g. daemons such as the namenode and jobtracker should expose their data structures for query/manipulation. Tests would be much more relevant if we could, e.g., query for specific states of the jobtracker, the scheduler, etc. Clearly these APIs should _not_ be part of production clusters - hence the proposal is to use aspectj to weave these new APIs into debug deployments.
> ----
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> real unit tests using mock objects (e.g. HDFS-669 & MAPREDUCE-1050).
> # src/test/integration -> current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
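As an illustrative addendum to item #1 of the quoted proposal: a hedged sketch of the shape a junit-based system test against a real cluster could take. {{ClusterControl}} and every other name below are hypothetical stand-ins declared inline so the sketch compiles; they are not the actual HADOOP-6332 API.

{code:java}
package org.apache.hadoop.test.system;  // hypothetical package

import static org.junit.Assert.assertTrue;

import org.junit.Assume;
import org.junit.Test;

public class TestDataNodeRestartSketch {

  /** Hypothetical control/inspection handle for a real, already-deployed cluster. */
  interface ClusterControl {
    void stopDaemon(String daemon, int index) throws Exception;
    void startDaemon(String daemon, int index) throws Exception;
    boolean isDaemonLive(String daemon, int index) throws Exception;
  }

  // Left unassigned here: supplying a concrete implementation that attaches to a real
  // deployment is exactly what the proposed framework would provide.
  private ClusterControl cluster;

  @Test
  public void restartedDataNodeRejoins() throws Exception {
    Assume.assumeTrue(cluster != null);   // skip unless a real cluster is attached
    cluster.stopDaemon("datanode", 3);    // bring one datanode down ...
    cluster.startDaemon("datanode", 3);   // ... and back up
    assertTrue(cluster.isDaemonLive("datanode", 3));  // query the cluster's state
  }
}
{code}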